[Image: AI visualization concept. Photo by Steve Johnson on Unsplash]

In November 2022, ChatGPT launched. We were promised everything would change. AGI was around the corner. Knowledge work would be automated within months. Lawyers, programmers, writers—all obsolete. The future, we were told, had arrived overnight.

It’s now November 2025. What actually happened?

Let’s start with what works. Code assistance is genuinely transformative. Developers who use AI tools like GitHub Copilot report significant productivity gains on routine tasks. Writing boilerplate, debugging errors, understanding unfamiliar codebases—these are legitimate use cases where AI delivers measurable value. Not replacing programmers, but making them more efficient.

Summarization works. Throw a long document at Claude or GPT-4 and ask for key points—you’ll get a useful summary most of the time. Research assistance works. AI can synthesize information from multiple sources faster than manual reading. These are narrow but real capabilities that justify investment for specific workflows.
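For readers who want to see what that workflow actually looks like, here is a minimal sketch using the Anthropic Python SDK. The model name, prompt wording, helper function, and input file are illustrative placeholders, not a recommendation; the OpenAI API follows much the same pattern.

```python
# Minimal sketch of "throw a long document at the model, ask for key points".
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()  # reads the API key from the environment

def summarize(document: str) -> str:
    """Ask the model for the key points of a long document."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # placeholder; use whatever model is current
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Summarize the key points of this document:\n\n" + document,
        }],
    )
    return response.content[0].text

if __name__ == "__main__":
    with open("quarterly_report.txt") as f:  # hypothetical input file
        print(summarize(f.read()))
```

That is the entire integration: a prompt, a call, a response. The narrowness of the task is exactly why it works reliably.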

Customer service chatbots have improved. They still frustrate users, but the baseline competence is higher. More queries get resolved without human escalation. The cost savings are real even if the user experience remains imperfect.

Now let’s talk about what doesn’t work. Self-driving cars remain perpetually five years away. MIT’s Iceberg Index study of workforce exposure found that AI could technically take over work accounting for 11.7% of wage value in the economy—but technical capability doesn’t equal practical deployment. According to the same research, actual AI adoption concentrates in just 2.2% of wage value. The gap between “could automate” and “has automated” remains enormous.

AI agents were supposed to autonomously complete complex tasks by now. They mostly don’t. Give an AI agent a multi-step workflow and watch it fail at edge cases, lose context, and require human intervention. The demos look impressive; the production deployments remain limited.

Creative AI remains a tool rather than a replacement. DALL-E and Midjourney generate impressive images, but professional creative work requires iteration, context, and judgment that AI doesn’t provide. Writers using AI as first-draft generators find the editing work substantial. The promise of AI replacing creative professionals has not materialized.

The enterprise AI market is particularly revealing. According to industry surveys, most organizations have launched AI pilots but few have achieved production scale. The gap between “experimenting with AI” and “transformed by AI” is where most companies exist. Proofs of concept don’t translate to ROI.

What’s actually happening is more mundane than either utopians or doomers predicted. AI is becoming infrastructure—useful, incremental, occasionally impressive, but not revolutionary. It’s augmenting existing workflows rather than replacing them. The productivity gains are real but modest. The job displacement is happening but slowly.

The MIT Iceberg Index suggests this will change. The technical capability for broader automation exists; deployment lags capability. But the timing of that deployment depends on factors beyond technology: firm strategies, worker adaptation, policy choices. The future remains contingent rather than inevitable.

Three years in, the honest assessment is: AI is useful. It’s not magic. The hype exceeded the reality because hype always exceeds reality. What remains is a genuinely powerful technology that’s changing some jobs, eliminating fewer, and creating others. The revolution was televised, and it turned out to be more evolutionary than revolutionary.