No, things aren't moving too fast
Summary
Don’t trust the clammy scam artists who tell you AI is moving too fast for you to think straight about this technology. LLMs may well be cool, but most gains are coming from fine-tuning and post-training. Feel free to slow down and make well-considered moves for yourself and your business.
If you’re a knowledge worker, you probably think that generative AI is developing at a frenetic pace. Apparently, mathy math programs are getting so good that we may have to hand them the keys to our cities soon. Godlike AI is right around the corner, and if we aren’t careful, it’ll turn us into paperclips as Nick Bostrom’s doomer prophecy foretold.
I believed that story. It’s a lie. The models aren’t getting exponentially better. The jump we saw from GPT 3.5 to GPT 4 didn’t carry over to GPT 5. Most of the gains the labs are advertising have come not from scaling but from good old post-training, where they fine-tune a model for a specific use case, e.g. coding (GPT Codex) or image generation (Nano Banana). Cal Newport wrote about these potential scaling limits in a New Yorker piece last year.
Now, I know what you’re thinking. “Sumeet, that article is from August 2025! Things have changed a lot since then. Things are changing really fast!”
Yep. To my original point. You’re not alone in thinking this way.
2025 is too far back, isn't it?
I’ve recently read three books about AI that everyone must read.
More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity - by Adam Becker
The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want - by Emily Bender and Alex Hanna
Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI - by Karen Hao
Each time I picked up one of these books, though, I hesitated because they were all from 2025. Was I reading an outdated perspective? Weren’t things “moving very fast”?
Of course, I gave myself a metaphorical kicking and reminded myself that all these books are from 2025, not 2005. So, yeah, they are current enough, and no, things aren’t “moving very fast”.
Three incredible books about AI that you must read
A manufactured storyline
"Moving very fast" is, of course, a story a few billionaires have built up to keep people guessing. AI labs want you to believe, through their constant drip of hype and influencer marketing, that the models are getting better faster than you can track them. The moves are highly coordinated.
OpenAI and Anthropic are almost in lockstep on what they’re releasing. If OpenAI updates Codex, Anthropic updates Claude Code. Each release comes with a marketing push. AI will be all over whichever feed you consume.
Companies that build LLM wrappers, like Cursor, will then follow suit and tell you how awesome their product is as a result of the new updates. Recently, Cursor wrote about how they built a browser from scratch, only for developers to tear down their claims. The thing doesn’t even compile. But who cares about the details?
Then you hear other lab affiliates (yes, Stripe, I’m looking at you) talk up how they’re writing 90%, even 100%, of their code using AI. Yeah, no kidding. 100%. Could they be lying? I don’t know. Where did Greg Brockman come from? What about Daniela Amodei? Which incubator did Airbnb come out of? Was Sam Altman the president of that incubator (cough, Y Combinator) at the time? I don’t know, is it likely that all these people are good mates? Oh, and I don’t even want to talk about how all this tech originated at Google. That’s a whole other story.
If you follow the money trail, you’ll develop a healthy scepticism for the hype. If you muck around with these tools yourself, you’ll realise that these language models still can’t write properly. Ask them to write in sentence case instead of Title Case, and the models still don’t get it. Yes, they can do a few cool things, but if anyone tells you these word-guessing machines are doing end-to-end jobs, you should be suspicious.
When we mistrust everything
A strong belief that "things are moving too fast" leads you to dismiss any reporting that’s more than a week old, especially reporting that runs counter to the hype. At the very time the AI boosters were claiming end-to-end automation, the Remote Labour Index study showed that the best-performing model achieves an automation rate of just 4.17%. Yes, Opus 4.6 fails roughly 96% of the time. Of course, the publishers of this report may be dodgy characters too, but for the time being, it throws massive shade on the labs’ claims.
Meanwhile, the labs remain pretty sketchy about their claims and expect you to treat their blog posts as research. If you’ve convinced yourself that you can’t trust anyone else, then of course you’ll believe that “things are moving too fast.” Curl into a ball and believe the labs already!
The other pernicious effect of "things are moving too fast" is that well-meaning corporate and political leaders, removed from the details, feel they have no choice but to keep investigating and experimenting. Tech has always been about building on well-established foundations, because tech runs large businesses, governments and whatnot. But today, we’re happy to keep spinning the GenAI roulette wheel. Your entire business is in the service of stochastic parrots.
Slow down and ignore the grift
The key is to slow down and recognise that scaling may no longer be working. Most gains are coming from post-training, and those gains aren’t exponential. Much of the social media hype may never translate into reality, and most of it follows a narrative served up by the labs.
Most importantly, these labs are horribly unprofitable with no path to profitability. They have an incentive to maintain an illusion of progress. The average human being shouldn't fall for their cheap parlour tricks. Oh, and if AI is moving too fast, then don’t give in to FOMO — you can get on that train at a station of your choosing.