
Learnings from 5 years of tech startup code audits

CTO @ Truss | Former VP of Engineering and Head of Security @ FiscalNote | ex-PKC co-founder | princeton tiger '11 | writes on engineering, management, and security.

While I was at PKC, our team did upwards of twenty code audits, many of them for startups that were just around their Series A or B (that was usually when they had cash and realized that it’d be good to take a deeper look at their security, after the do-or-die focus on product-market fit).

It was fascinating work – we dove deep on a great cross-section of stacks and architectures, across a wide variety of domains. We found all sorts of security issues, ranging from catastrophic to just plain interesting. And we also had a chance to chat with senior engineering leadership and CTOs more generally about the engineering and product challenges they were facing as they were just starting to scale.

It’s also been fascinating to see which of those startups have done well and which have faded, now that some of those audits are 7-8 years behind us.

I want to share some of the more surprising things I’ve internalized from these observations, roughly ordered from most general to most security-specific.

  1. You don’t need hundreds of engineers to build a great product. I wrote a longer piece about this, but essentially, even though the startups we audited were at roughly similar stages, their engineering team sizes varied a lot. Surprisingly, sometimes the most impressive products with the broadest scope of features were built by the smaller teams. And it was these same “small but mighty” teams that, years later, are crushing their markets.
  2. Simple Outperformed Smart. As a self-admitted elitist, it pains me to say this, but it’s true: the startups we audited that are now doing the best usually had an almost brazenly ‘Keep It Simple’ approach to engineering. Cleverness for cleverness’s sake was abhorred. On the flip side, the companies where we were like “woah, these folks are smart as hell” for the most part kind of faded. Generally, the major foot-gun (which I talk about more in a previous post on foot-guns) that got a lot of places in trouble was the premature move to microservices, architectures that relied on distributed computing, and messaging-heavy designs.
  3. Our highest-impact findings would always come within the first and last few hours of the audit. If you think about it, this makes sense: in the first few hours, you find the lowest-hanging fruit – things that stick out like a sore thumb just from grepping the code and testing some basic functionality. During the last few hours, you’ve finally built up full context on the new codebase, and things begin to click.
  4. Writing secure software has gotten remarkably easier in the last 10 years. I don’t have statistically sound evidence to back this up, but it seems like code written before around 2012 tended to have a lot more vulnerabilities per SLOC than code written after 2012 (we started auditing in 2014). Maybe it was the Web 2.0 frameworks, or increased security awareness amongst devs. Whatever it was, I think this means that security really has improved on a fundamental basis in terms of the tools and defaults software engineers now have available.
  5. All the really bad security vulnerabilities were obvious. In probably a fifth of the code audits we did, we’d find The Big One – a vulnerability so bad that we’d call up our clients and tell them to fix it immediately. I can’t remember a single case where that vulnerability was very clever. In fact, that’s part of what made the worst vulnerabilities bad — we were worried primarily because they’d be easy to find and exploit. “Discoverability” has been a component of impact analysis for a while, so this isn’t new. But I do think that discoverability should be much more heavily weighted. Discoverability is everything when it comes to actual exposure. Hackers are lazy and look for the lowest-hanging fruit. They won’t care about finagling even a very severe heap-spray vulnerability if they can reset a user’s password because the reset token was in the response (as Uber found out circa 2016 – there’s a small sketch of this class of bug after the list). The counterargument is that heavily weighting discoverability perpetuates “Security by Obscurity,” since it relies so heavily on guessing what an attacker can or should know. But again, personal experience strongly suggests that in practice, discoverability is a great predictor of actual exploitation.
  6. Secure-by-default features in frameworks and infrastructure massively improved security. I wrote a longer piece about this too, but essentially, things like React escaping all HTML by default to avoid cross-site scripting, and serverless stacks taking configuration of the operating system and web server out of developers’ hands, dramatically improved the security of the companies that used them (there’s a small React sketch after the list). Compare this to our PHP audits, which were riddled with XSS. These newer stacks/frameworks are not impenetrable, but their attackable surface area is smaller in precisely the places that make a massive difference in practice.
  7. Monorepos are easier to audit. Speaking from the perspective of security researcher ergonomics, it was easier to audit a monorepo than a series of services split up into different code bases. There was no need to write wrapper scripts around the various tools we had. It was easier to determine if a given piece of code was used elsewhere. And best of all, there was no need to worry about a common library version being different on another repo.
  8. You could easily spend an entire audit going down the rabbit trail of vulnerable dependency libraries. It’s incredibly hard to tell if a given vulnerability in a dependency is exploitable. We as an industry are definitely underinvesting in securing foundational libraries, which is why things like Log4j were so impactful. Node and npm were absolutely terrifying in this regard—the dependency chains were just not auditable. It was a huge boon when GitHub released dependabot because we could for the most part just tell our clients to upgrade things in priority order.
  9. Never deserialize untrusted data. This happened the most in PHP, because for some reason PHP developers love to serialize/deserialize objects instead of using JSON, but almost every case we saw where a server deserialized and parsed a client-supplied object led to a horrible exploit. For those of you who aren’t familiar, Portswigger has a good breakdown of what can go wrong (incidentally, focused on PHP. Coincidence?). In short, the common thread in all deserialization vulnerabilities is that giving a user the ability to manipulate an object that is subsequently used by the server is an extremely powerful capability with a wide surface area. It’s conceptually similar to both prototype pollution and user-generated HTML templates. The fix? It’s far better to have the user send a JSON object (it has so few possible data types) and to manually construct the server-side object from its fields (see the sketch after this list). It’s slightly more work, but well worth it!
  10. Business logic flaws were rare, but when we found one they tended to be epically bad. Think about it — flaws in business logic are guaranteed to affect the business. An interesting corollary is that even if your protocol is built to provide provably-secure properties, human error in the form of bad business logic is surprisingly common (you need look no further than the series of absolutely devastating exploits that take advantage of badly written smart contracts).
  11. Custom fuzzing was surprisingly effective. A couple of years into our code auditing, I started requiring all our code audits to include building custom fuzzers to test product APIs, authentication, and so on. This is somewhat commonly done, and I stole the idea from Thomas Ptacek, who alludes to it in his Hiring Post. Before we did this, I actually thought it was a waste of time – I figured it was an example of misapplied engineering, and that audit hours were better spent reading code and trying out various hypotheses. But it turns out fuzzing was surprisingly effective and efficient in terms of hours spent, especially on the larger codebases.
  12. Acquisitions complicated security quite a bit. There were more code patterns to review, more AWS accounts to look at, more variety in SDLC tooling. And of course, usually the acquisition meant an entirely new language and/or framework with its own patterns in use.
  13. There was always at least one closet security enthusiast amongst the software engineers. It was always surprising who it was, and they almost never knew it was them! As security skillsets get more software-skewed, there’s huge arbitrage here if these folks can be reliably identified.
  14. Quick turnarounds on fixing vulnerabilities usually correlated with general engineering operational excellence. The best cases were clients who asked us to just give them a constant feed of anything we found, and they’d fix it right away.
  15. Almost no one got JWTs and webhooks right on the first try. With webhooks, people almost always forgot to authenticate incoming requests (or the service they were using didn’t allow for authentication…which was pretty messed up!). This class of problem led Josh, one of our researchers, to ask a series of questions that turned into a DEF CON/Black Hat talk. JWT is notoriously hard to get right, even if you’re using a library, and there were a lot of implementations that failed to properly expire tokens on logout, incorrectly checked the JWT for authenticity, or simply trusted it by default (there’s a sketch of both after the list).
  16. There’s still a lot of MD5 in use out there, but it’s mostly false positives. It turns out MD5 is used for a lot of things besides serving as an insufficiently collision-resistant password hash. For example, because it’s so fast, it’s often used in automated testing to quickly generate a whole lot of pseudo-random GUIDs (a small example after the list). In these cases, the insecure properties of MD5 don’t matter, despite what your static analysis tool may be screaming at you.
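
To make point 5 concrete, here is a minimal sketch of the “reset token in the response” class of bug. It’s illustrative only – Express-style TypeScript with invented routes and invented `saveResetToken`/`emailResetLink` helpers, not the actual Uber flaw:

```typescript
import * as crypto from "crypto";
import express from "express";

// Hypothetical persistence/mailer stubs, here only so the sketch is self-contained.
async function saveResetToken(email: string, token: string): Promise<void> {}
async function emailResetLink(email: string, token: string): Promise<void> {}

const app = express();
app.use(express.json());

// VULNERABLE: the reset token is handed back to whoever made the request,
// so anyone who knows a victim's email can immediately take over the account.
app.post("/password-reset", async (req, res) => {
  const token = crypto.randomBytes(32).toString("hex");
  await saveResetToken(req.body.email, token);
  res.json({ ok: true, resetToken: token }); // <-- the bug: token in the response
});

// FIXED: the token only ever travels out-of-band (via email), never in the response.
app.post("/password-reset-fixed", async (req, res) => {
  const token = crypto.randomBytes(32).toString("hex");
  await saveResetToken(req.body.email, token);
  await emailResetLink(req.body.email, token);
  res.json({ ok: true }); // same response whether or not the account exists
});

app.listen(3000);
```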
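
For point 6, the secure-by-default behavior I mean with React is simply that interpolated JSX values are escaped as text, so you have to opt in explicitly (via dangerouslySetInnerHTML) before classic XSS becomes possible again. A small illustrative TSX sketch:

```tsx
import * as React from "react";

// Untrusted input, e.g. a comment body fetched from an API.
const userInput = '<img src=x onerror="alert(1)">';

// Safe by default: React escapes interpolated values, so the browser renders
// the payload as inert text rather than executing it.
export function Comment() {
  return <p>{userInput}</p>;
}

// The escape hatch is deliberately awkward: you must opt in to raw HTML,
// which is exactly where XSS creeps back in if the value isn't sanitized first.
export function UnsafeComment() {
  return <p dangerouslySetInnerHTML={{ __html: userInput }} />;
}
```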
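
For point 9, the fix described above looks roughly like this in practice: accept plain JSON, then explicitly validate and copy only the fields you expect, rather than letting a generic deserializer rehydrate whatever object graph the client sent. A hedged TypeScript sketch (the request shape and field names are made up):

```typescript
// The explicit, boring shape the server is willing to work with.
interface CreateOrderRequest {
  productId: string;
  quantity: number;
}

// Manually construct the server-side object from the JSON fields.
// Unknown fields are dropped; wrong types are rejected outright.
function parseCreateOrder(rawBody: string): CreateOrderRequest {
  const data: unknown = JSON.parse(rawBody);
  if (typeof data !== "object" || data === null) {
    throw new Error("expected a JSON object");
  }
  const body = data as Record<string, unknown>;
  if (typeof body.productId !== "string") {
    throw new Error("productId must be a string");
  }
  if (typeof body.quantity !== "number" || !Number.isInteger(body.quantity) || body.quantity < 1) {
    throw new Error("quantity must be a positive integer");
  }
  return { productId: body.productId, quantity: body.quantity };
}
```

A schema-validation library (zod, for example) gets you the same effect more declaratively; the important part is that the server, not the client, decides what the object can contain.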
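
For point 15, “authenticating a webhook” usually boils down to verifying a shared-secret HMAC over the raw request body, and the JWT mistakes were mostly about verification options and revocation rather than the happy path. A minimal TypeScript sketch – the signature format, secret handling, and denylist store are assumptions for illustration, not any particular provider’s scheme:

```typescript
import * as crypto from "crypto";
import * as jwt from "jsonwebtoken";

// Webhooks: recompute an HMAC over the *raw* body with the shared secret and
// compare it to the caller's signature in constant time. Unsigned or badly
// signed requests never reach business logic.
function verifyWebhookSignature(rawBody: Buffer, signatureHex: string, secret: string): boolean {
  const expected = crypto.createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  return received.length === expected.length && crypto.timingSafeEqual(received, expected);
}

// JWTs: pin the accepted algorithm, let the library enforce expiry, and keep a
// server-side denylist so that logout actually revokes the token.
const revokedTokenIds = new Set<string>(); // hypothetical store populated on logout

function verifyAccessToken(token: string, secret: string): jwt.JwtPayload {
  const payload = jwt.verify(token, secret, { algorithms: ["HS256"] }) as jwt.JwtPayload;
  if (payload.jti && revokedTokenIds.has(payload.jti)) {
    throw new Error("token has been revoked");
  }
  return payload;
}
```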
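
And for point 16, this is the sort of benign MD5 usage that sets off static analyzers: nothing security-relevant depends on the hash, it’s just a cheap way to mint GUID-shaped identifiers for test fixtures. An illustrative sketch:

```typescript
import * as crypto from "crypto";

// Benign use of MD5: fabricate stable, GUID-shaped IDs for test data.
// Collision resistance is irrelevant here, whatever the linter says.
function fakeGuidForTests(seed: string): string {
  const hex = crypto.createHash("md5").update(seed).digest("hex"); // 32 hex chars
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}

// fakeGuidForTests("user-42") always yields the same GUID-shaped string.
```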

I’m curious if you’ve seen any of these yourself, or noticed others I haven’t! Or drop me a note if you disagree!

18 Comments

  1. Ersun

    “MD5 is used for a lot of other things … because it’s so fast” — I thought the same thing until I benchmarked it. SHA-1 is faster than MD5 on modern hardware now because of SHA specific CPU instructions (at least in my testing), so there probably isn’t any good reason to use it.

    • Ken

      Oh that’s pretty interesting, I didn’t know that SHA-1 is now faster! I figured MD5, being older, would be faster, but you’re right. Come to think of it, the NIST competitions, especially in the 90s and 00s, specifically favored algorithms that were super-fast in hardware (hmm – I wonder why!). So you have this interesting trend of crypto algorithms becoming not only more resistant to attack, but also more performant – which is quite impressive if you think about it!

  2. Curtis Jones

    sudo-random?

    • Eric

      Very punny!

    • Ken

      heh – good catch, freudian slip on my part, I suppose! I fixed it!

  3. Dino

    Great article, can you elaborate a little more on point 13? What is the gain if the security enthusiast can be identified?

    • Saur

      He means you can exploit near-expert talent for line-engineer prices, at least until the engineer realizes what they’re worth.

    • Alan H

      I imagined he meant that these engineers are worth hiring over similarly qualified engineers without the security knowledge, but I’m curious too.

    • Ken

      Hey! Thanks for the question. I mostly meant point 13 just as an observation – what could be gained from the observation is an exercise I leave to the reader (and it looks like a couple people have proposed some ideas) 🙂

      Maybe some extra color I’ll add is that at PKC, we eventually concluded that the best security researchers came from software engineering backgrounds with a solid grounding in computer science principles, rather than from IT admin or more traditional security paths (at the time). If you want to know more about that, I cannot recommend enough Dino Dai Zovi’s 2019 Black Hat keynote: “Every Security Team is a Software Team Now” https://youtu.be/8armE3Wz0jk.

  4. Don

    Great article but I admit I was distracted by the headline and its usage of the non-word “learnings.” How about replacing it with an actual word or phrase like findings, discoveries, lessons, determinations, things learned, or knowledge gained?

  5. lwj

    Hi Ken,

    I am the editor of InfoQ China which focuses on software development. We
    like this article and plan to translate it to Chinese.

    Before we translate it into Chinese and publish it on our website, I
    want to ask for your permission first. This translated version is
    provided for informational purposes only, and will not be used for any
    commercial purpose.

    In exchange, we will put the English title and link at the end of
    Chinese article. If our readers want to read more about this, he/she can
    click back to your website.

    Thanks a lot, hope to get your help. Any more questions, please let me
    know.

    • Ken

      Sure you can translate it and post!

  6. Limbo

    Great insights! Would you mind elaborating on the custom fuzzing part? What would a fuzzer do to API authentication, for example?

    • Ken

      Hey! Sure thing – I probably should write a blog post on this too…I’ll add it to my list. Basically, we’d find a way to dump out all valid routes (e.g., in Rails you can literally just run `rails routes`). Then, depending on what we were dealing with, we’d create some sort of DSL to fill in the various id slots of the routes, and maybe generate some differently content-typed bodies. This is where authentication would come in – we’d run each route with and without authentication, or with malformed authentication. The main thing we’d look for is unexpected response codes – 500s were always very interesting.
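
      A rough sketch of that loop, if it helps (TypeScript; the base URL, routes, and tokens below are placeholders, not from a real engagement):

      ```typescript
      // Minimal API fuzzing loop: enumerate route templates, substitute interesting
      // IDs, and replay each request with valid, missing, and malformed credentials,
      // flagging anything with an unexpected status code (especially 500s).
      const baseUrl = "https://staging.example.com"; // placeholder target
      const routeTemplates = ["/api/users/:id", "/api/orders/:id/items"]; // e.g. dumped via `rails routes`
      const idCandidates = ["1", "999999", "0", "-1", "abc", "%00"];
      const authVariants: Record<string, string | undefined> = {
        valid: "Bearer PLACEHOLDER_VALID_TOKEN",
        missing: undefined,
        malformed: "Bearer not-a-real-token",
      };

      async function fuzz(): Promise<void> {
        for (const template of routeTemplates) {
          for (const id of idCandidates) {
            const path = template.replace(":id", encodeURIComponent(id));
            for (const [label, auth] of Object.entries(authVariants)) {
              const res = await fetch(baseUrl + path, {
                headers: auth ? { Authorization: auth } : {},
              });
              // 500s suggest unhandled input; 200s on the missing/malformed
              // variants suggest broken or absent authentication.
              if (res.status >= 500 || (label !== "valid" && res.status === 200)) {
                console.log(`${label} ${res.status} ${path}`);
              }
            }
          }
        }
      }

      fuzz().catch(console.error);
      ```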

  7. Aaron King

    Thanks for this, great read.

  8. thanh

    Hi Ken, thanks for a great article. You have saved all of us tons of time! You mentioned quickly about “badly written smart contracts”. I’d like to ask if you have any good resource to deep-dive into this topic.

    And if you have any opinions on the security of smart contract, I am all ears.

    Thanks again for writing this article.

    Thanks

    Thanh

    • Ken

      Hey Thanh! Glad you liked it! I actually don’t have many good resources on smart contract security – though it’s something I’ve been meaning to look into for a while. My observation in this article is mostly that of an astonished outside observer of the news I’ve been seeing about epic smart contract heists, and from the little I’ve read in greater detail, like here (https://www.bloomberg.com/news/features/2022-05-19/crypto-platform-hack-rocks-blockchain-community), these hacks have basically boiled down to a business logic error in the smart contract. Slightly more complicated than a bank allowing you to withdraw negative amounts (aka a deposit!).
