It’s been a while since I’ve written about AI. The last time, the catalyst was a divergence between my own thinking and the mainstream tech zeitgeist. Things have gotten divergent again, so it’s time to share some new thoughts.
But first, a few quick data points on my AI usage so you know where I’m coming from:
I read a comment on HN that sparked this article: GPT is kind of like DevOps from the early 2000s.
Here’s the hot take: I don’t see the primary value of GPT being in its ability to help me develop novel use cases or features – at least not right now.
The primary value is that it MASSIVELY lowers the barrier of entry to machine learning features for startups.
What’s my line of reasoning? Well, here are some surprising things about how we use it:
My startup Truss (gettruss.io) released a few LLM-heavy features in the last six months, and the narrative around LLMs that I read on Hacker News is now starting to diverge from my reality, so I thought I’d share some of the more “surprising” lessons after churning through just north of 500 million tokens, by my estimate.
Some details first:
– we’re using the OpenAI models; see the Q&A at the bottom if you want my opinion of the others
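For readers curious how a "just north of 500 million tokens" estimate comes together, here's a minimal back-of-the-envelope sketch. It assumes the common rule of thumb of roughly 4 characters per token for English text; the exact numbers would come from the `usage` field the OpenAI API returns, or from a real tokenizer. The `estimate_tokens` helper and the log strings are hypothetical, not from the post.

```python
# Heuristic: English text averages roughly 4 characters per token.
# This varies by model and language; treat results as order-of-magnitude only.
CHARS_PER_TOKEN = 4

def estimate_tokens(texts):
    """Rough token estimate for a list of prompt/completion strings."""
    return sum(len(t) for t in texts) // CHARS_PER_TOKEN

# Example: 500M tokens corresponds to roughly 2 billion characters
# of prompts plus completions flowing through the API.
logs = ["Summarize this contract for the customer...",
        "Sure, here is a short summary of the key terms..."]
print(estimate_tokens(logs))
```

In practice you'd sum the `prompt_tokens` and `completion_tokens` your API responses report rather than re-tokenizing, but the heuristic is handy for sizing logs you only have as raw text.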
A note before we begin: I’m arguing that technology ROI discussions are broken, not that ROI as a decision-making tool is broken. A solid understanding of how to calculate and use ROI is an essential skill for any tech executive, and when done right, it’s a powerful decision-making tool. This post is about how technology discussions that exclusively look at ROI often result in a one-eyed analysis that lacks depth.
Technical leaders need a wider range of tools for communicating the value of technology, and especially of technology innovation. That is not a trivial task, and the point of this post is that exclusive reliance on the most commonly used tool for the job, Return on Investment (ROI), leads to broken discussions.
“In the brain of all brilliant minds resides in the corner a fool.”
– Aristotle
Writing about “Best Practices” can get boring, so I thought I’d take a break this week, and write about some bad engineering practices that I’ve found the absolute hardest to undo once done. Real foot-guns, you could say.
Each of these is a bit controversial in its own way, and that’s how it should be; I’d welcome any counter-views. The prose in this post is a bit more irreverent than usual. In most cases I’m poking fun at myself (both past and present!), since I’ve been guilty of every one of these foot-guns, and frankly, I still struggle with a lot of them. Hopefully this post will generate some “motivation through transparency” 🙂
Engineering Foot-gun #1—Writing clever code instead of clear code
It’s because optimizing is fun. https://xkcd.com/1691
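To make the foot-gun concrete, here's a hedged sketch of the same small task written both ways. The `mean_of_positives` helpers are hypothetical, not from the post; the point is that both versions compute the same thing, but only one is easy to audit at 2 a.m.

```python
from functools import reduce

# "Clever": three ideas fused into one dense expression.
# Fun to write, miserable to debug or modify later.
def mean_of_positives_clever(xs):
    return (lambda s, n: s / n if n else 0.0)(
        *reduce(lambda acc, x: (acc[0] + x, acc[1] + 1) if x > 0 else acc,
                xs, (0.0, 0))
    )

# Clear: each step is named and independently checkable.
def mean_of_positives_clear(xs):
    positives = [x for x in xs if x > 0]
    if not positives:
        return 0.0
    return sum(positives) / len(positives)

print(mean_of_positives_clever([1, -2, 3]))  # 2.0
print(mean_of_positives_clear([1, -2, 3]))   # 2.0
```

The clever version even passes the same tests, which is exactly why this foot-gun is so hard to undo: nothing forces you to rewrite code that works.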
I’ve been thinking recently about how to discover and hire great engineers in the hottest job market in decades. One of the biggest hurdles to hiring good engineers, and especially experienced engineers, is that they’re so. unbelievably. expensive.
Just take a look at some of the total compensation packages on levels.fyi:
Data is for GOOG (other companies are similar). Courtesy of the data at Levels.fyi (https://www.levels.fyi/charts.html). This data was eyeballed quickly into a spreadsheet; check the source for actuals.
There’s always a reason to rebuild your app. Always. But once you’ve been through a few rebuilds, you realize that talk of rebuilds, like talk of tax reform or anarchy, is just a tad dangerous: you never know what you’re getting into if you actually convince yourself to go through with it.