The trouble with AI overconfidence

Summary

As the circular funding economy of AI becomes more transparent, innovation slows, and costs rise, it may help us all to temper our AI enthusiasm. There are many indications that the current state of generative AI is neither as essential nor as transformative as the frontier AI companies would have us believe. If anything, we could make the case that generative AI is a net negative for most stakeholders.

It’s my job to be an enterprise AI booster. Call me a charlatan if you will, but a man needs to earn his living. As I’ve conceded before, generative AI has its uses, and I use it every day, especially for the work I’d rather not do. But it’s OK to go back on a stated position, and I’m walking back my earlier claim that generative AI is like an intern. Interns learn and get better. Generative AI doesn’t. My view of interns was uncharitable, and I was overenthusiastic when I wrote those words.

Over the last few months, my enthusiasm for AI has waned. Yes, it makes a lot of work easier.

  • I no longer spend hours on the thumbnails or banner images for my blog posts.

  • Prototyping concepts and automating workflows is much faster these days. 

  • Editing my writing and summarising boring documents is simpler.

  • Outside work, image editing has seen a few nifty improvements.

And there’s a lot more. You don’t need me to enumerate what AI can and cannot do. I even spend my own money on AI tools. But none of this changes my life, at work or outside it, in a meaningful way. Occupational hazards aside, if I stopped using AI, I wouldn’t miss much. Here’s where the mainstream AI narrative induces cognitive dissonance for me.

Never say never

While your mileage may vary with AI, I sense a fatalistic inevitability about the AI narrative. Here’s a statement I hear all the time.

“We’ll never go back to the old way of working.”

How can we be so sure about this shift? Remote work was a profound shift in the way we work, demonstrating over two years that location had little to do with productivity or collaboration. Many people, yours truly included, predicted that “we’ll never go back to the old way of working”. And yet, haven’t we seen return-to-office (RTO) mandates everywhere? If the battle scars of wrong predictions have taught me anything, it’s this – we can’t be overconfident about the future of work. Yet I sense overconfidence in how AI will shape the future of work.

AI overconfidence is a house of cards

There’s nothing wrong with exploiting a technology that’s available to us. There’s also nothing bad about being curious or enthusiastic about it and finding novel uses for it. All that’s great, but we can’t ignore some truths about the business of generative AI.

  • None of the frontier AI companies is profitable. OpenAI and Anthropic are the only two players that count, and they burn obscene amounts of cash, energy and data centre capacity. xAI, Google’s Gemini, and the rest are also-rans. I encourage you to read Ed Zitron’s 18,000-word analysis, if only as an academic exercise, to grasp just how unprofitable these businesses are.

  • Much of the AI optimism rests on subsidised access to LLMs. Even then, companies like Perplexity and Replit aren’t profitable yet. Nor is Anysphere, the maker of Cursor – the poster child of vibe-coding – despite recently hitting $500m in ARR. You’d imagine the typical SaaS playbook of building a user base and then pulling the profit lever would work here, but if the frontier AI companies are losing billions, what profit lever are we talking about?

Cory Doctorow coined the term “enshittification” – the process by which internet platforms and services start out great for users and degrade as users lock themselves in and companies chase profitability. If the frontier AI companies, in their quest for profitability, jack up prices in a classic bait-and-switch, many products will lose their super-cheap access to LLMs. That bait-and-switch manoeuvre is already in play, by the way.

  • AI fatalism rests on the popularity and coverage of AI services. The popularity of Perplexity, Replit, Cursor and other AI-native products has led to a slew of other AI innovations. Products that never needed AI have pivoted to become AI products, because otherwise they can’t raise funding. SaaS has given itself a shot in the arm, because now you can (theoretically) sell an AI version of your product for an extra fee. And technology services companies that were already under siege from the hyperscalers have cobbled together some AI gobbledygook to generate FOMO and attract new clients. After all, “we’ll never go back to the old way”, right? But when access to LLMs becomes expensive, many of these products and services could become unviable or enshittified.

AI isn’t getting cheaper; it’s only getting more expensive. And while tomorrow could be nothing like yesterday, I find the breathless overconfidence a bit tiring. AI’s economic fragility and our overconfidence about it aren’t just business risks; they cause active harm to many stakeholders.

We’re betting on the success of a handful of AI companies. If they fail, we may all fail.


When I wear my AI booster hat, I’ll probably talk your ear off singing hosannas about AI. But today’s not the day. As of 31 October 2025, I’m starting to believe that generative AI, in particular, may be a net negative. 

  • It’s bad for capitalism because of the amount of capital pouring into single-purpose data centres that we can’t use for anything else. That capital could go towards something that solves a real problem.

  • It’s bad for the environment because LLMs need those power-hungry, land- and water-hungry data centres.

  • It’s bad for workers because executives who don’t understand real work use it as an excuse for layoffs.

  • It’s bad for creatives because it mooches off their work – the Studio Ghibli-style image fad, for example.

  • It’s bad for enterprises because, if you believe MIT’s study, most of these AI rollouts are failing.

  • It’s bad for experts because we often end up using AI in a one-step-forward, two-steps-back pattern, trying to describe in imprecise English what we could have completed with a few controls or a few lines of precise code (see the sketch after this list).

  • It’s bad for rookies because it’s devouring their jobs, but it doesn’t learn the way a rookie eventually would.

  • It’s bad for the knowledge work industry because eventually we’ll have a gap at the mid-level: nothing between the really senior, highly skilled people who still have jobs and the intern-level AI.
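
To make the experts’ one-step-forward pattern concrete, here’s a minimal, hypothetical sketch in Python. The task, the folder name and the function are invented for illustration: a bulk file rename that takes a few lines of precise, deterministic code, versus coaxing the same result out of a chatbot through rounds of imprecise English.

```python
# Hypothetical task: prefix every .txt file in a folder with today's date.
# A few lines of precise code - deterministic, same result on every run.
from datetime import date
from pathlib import Path

def prefix_with_date(folder: str) -> None:
    """Rename each .txt file in `folder` to 'YYYY-MM-DD_<original name>'."""
    today = date.today().isoformat()  # e.g. "2025-10-31"
    for path in Path(folder).glob("*.txt"):
        path.rename(path.with_name(f"{today}_{path.name}"))

# The prompt-driven alternative is a negotiation in imprecise English:
# "add dates to my files" -> wrong format -> "use ISO dates" -> wrong
# files renamed -> "only the .txt ones" -> then verify every output.
if __name__ == "__main__":
    prefix_with_date("./notes")  # assumes a './notes' folder exists
```

The point isn’t that AI can’t do the rename; it’s that the precise version is often shorter than the conversation about it.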

I end this month and this week tired. Tired of perpetual betas. Tired of industry bosses who can’t stop beating the drum about AI. I’m tired of experimenting with every input and output, trying to coax determinism out of probabilistic software. I’m sick to the core of hearing that the people who struggle with AI “aren’t doing it right”.

A new month will roll in on Monday, and I’ll probably shrug off my pessimism and be a bit more optimistic about how “AI will change everything”. Meanwhile, if you’re the ever-optimistic AI nerd, I challenge you to watch one episode of the “fully AI-powered” Mahabharat by JioHotstar. If you remain optimistic after that ordeal, I’ll doff my hat to you. Go on then. Give it a go!
