I read a comment on HN that sparked this article: GPT is kind of like DevOps from the early 2000s.
Here’s the hot take: I don’t see the primary value of GPT being in its ability to help me develop novel use cases or features – at least not right now.
The primary value is that it MASSIVELY lowers the barrier of entry to machine learning features for startups.
What’s my line of reasoning? Well, here are some surprising things about how we use it:
(But first, a caveat: there are a LOT of ways to use GPT in products – I don’t think we’ve even discovered the most valuable ones yet. The more novel ways to use GPT, though, like assistants and agents, we just haven’t found very useful in practice, at least in B2B SaaS. They’re still too unreliable and wonky. Not yet, anyway!)
In our product, we primarily use GPT for these 4 cases, ranked in decreasing order of value:
Classification. Given a block of text, what type is it, from this list?
Data extraction. Here’s a JSON schema and a block of text, fill out the JSON schema.
Long-summary. Write an email that summarizes this block of text.
Short-summary. Give me 2-3 words to describe this block of text, so I can use it as a header (think of the ChatGPT summary shown for each conversation in the sidebar).
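To make the top two use cases concrete, here’s a minimal sketch of what those prompts look like. The label set, the extraction schema, and the helper names are all hypothetical, invented for illustration; the idea is just that each "ML feature" is a few lines of prompt construction, with the resulting messages sent to the chat completions API.

```python
import json

# Hypothetical label set and schema -- purely illustrative.
LABELS = ["bug_report", "feature_request", "billing_question", "other"]

EXTRACTION_SCHEMA = {
    "type": "object",
    "properties": {
        "customer_name": {"type": "string"},
        "product": {"type": "string"},
        "urgency": {"type": "string", "enum": ["low", "medium", "high"]},
    },
}

def classification_messages(text: str) -> list[dict]:
    """Build a chat prompt asking the model to pick one label from a fixed list."""
    return [
        {"role": "system",
         "content": "Classify the user's text. Reply with exactly one label from: "
                    + ", ".join(LABELS)},
        {"role": "user", "content": text},
    ]

def extraction_messages(text: str) -> list[dict]:
    """Build a chat prompt asking the model to fill out a JSON schema from free text."""
    return [
        {"role": "system",
         "content": "Fill out this JSON schema from the user's text. "
                    "Reply with JSON only.\nSchema:\n"
                    + json.dumps(EXTRACTION_SCHEMA)},
        {"role": "user", "content": text},
    ]

# Either message list would then be sent with something like:
#   client.chat.completions.create(model=..., messages=msgs, temperature=0)
```

That’s the whole "model": no labeled dataset, no training pipeline, no serving infrastructure. Swapping the label list or the schema is a one-line change, which is exactly what makes experimenting cheap.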
Notice something interesting? The top 2 use-cases are things traditional ML can do really well.
So why are we using GPT and not traditional ML?
My previous company did a lot of traditional ML, and my main takeaway was that producing anything valuable was incredibly expensive. That made it hard to experiment, and hard to maintain those features.
But now, I can literally spend a few minutes writing a prompt.
So why is it the new Heroku?
What is (was) special about Heroku? It’s a very, very expensive infrastructure platform (relative to rolling it yourself) that promises (and delivers) on the value proposition: no devops needed.
You can be a normal engineer and have a very scalable, very stable, very robust app (complete with logging, restarts, alerts, patches, high availability, zero trust, secret management, etc, etc) without needing to know any devops.
And the expense is OK, because it scales directly with usage: pay-as-you-go.
This is exactly what OpenAI has given developers with GPT: a very, very expensive way to do ML features without needing an ML team. It’s actually not even that expensive compared to the value – but it is expensive compared to cheaper locally run LLMs, or to a traditional ML model once it’s been trained.
There’s one final overlooked aspect to this
Even if the personnel cost of hiring ML engineers weren’t prohibitive for startups, traditional ML is impossible to do without large amounts of training data. That’s a huge moat.
Startups have a bootstrapping problem when it comes to the training data for ML features.
But GPT is zero-shot: you describe the task in the prompt, and it performs it without a single training example. That is huge. It means the barrier to entry for ML features is now effectively zero.
Conclusion
I like thinking about OpenAI this way because it also explains why Google is having such a problem capturing this space.
Google fundamentally doesn’t have the problem that GPT (the API) primarily solves. It has gobs of money. It has gobs of ML expertise. It doesn’t want to build something that chips away at that moat.
Of course, GPT does a lot more than substitute in for traditional ML operations. But that’s where I’ve seen the value in practice right now, and boy is it valuable! It’ll be fascinating to see where all this goes.