
AI Fatigue Is Real. But It’s Not What You Think

There’s a narrative building right now that’s getting a lot of attention: AI is draining us. Engineers are shipping more code than ever and feeling worse than ever. The term “AI fatigue” is making the rounds, and the takes are piling up.

A software engineer writes in Business Insider that last quarter was his most productive and most exhausting. Steve Yegge, who literally wrote the book on vibe coding, tells The Pragmatic Engineer that he naps during the day and caps real AI-augmented work at three hours. Startup founders hit a wall at 2pm. One of the most widely shared posts this month warns that AI has a "vampiric effect" on the people who use it most.

Here’s what nobody seems to notice: the people reporting the most fatigue aren’t the skeptics. They’re the true believers.

The engineers stuck at level one on Yegge's adoption scale, the ones ignoring AI entirely, feel fine. A bit anxious, maybe, but not drained. It's the ones at levels five, six, and seven, the ones who've gone all-in, running multiple agents, orchestrating complex workflows, shipping at speeds they never imagined, who come home cooked.

That pattern should tell us something. And I think what it tells us is that “AI fatigue” is the wrong diagnosis entirely.

You Don’t Have a Fatigue Problem. You Have a Training Problem.

Think about the first time you ever deadlifted. Not a particularly heavy weight. Just the movement itself. You woke up the next morning and your entire body felt like it had been disassembled and put back together wrong. Your legs were sore. Your back was sore. Muscles you didn’t know existed made themselves known in the most unpleasant way possible.

If someone had measured your output that day, it would have looked terrible. You could barely sit down without wincing. You might have reasonably concluded that deadlifting is unsustainable, that the human body isn’t built for it, that the cost outweighs the benefit.

But of course, six months later you’re lifting twice the weight and feeling fine afterward. Your body built new pathways. It adapted. The movement that once required every ounce of conscious effort became automatic. The soreness didn’t mean you were broken. It meant you were building something new.

This is exactly what’s happening with AI-augmented work.

The Cognitive Load Nobody Talks About

When you write code the traditional way, your brain is running a well-worn program. You’ve done it thousands of times. You know the keystrokes, the patterns, the debugging rhythms. It’s like driving your daily commute: technically complex, but so practiced that you can do it while thinking about dinner.

AI-augmented work is a fundamentally different cognitive task. You’re not writing code anymore. You’re directing, evaluating, deciding, context-switching between multiple agents, reviewing output you didn’t write, holding architectural intent in your head while an AI makes implementation choices you need to validate in real time.

That’s not the same job done faster. It’s a different job entirely. And your brain hasn’t built the efficient pathways for it yet.

Every decision is still conscious. Every review takes active effort. You’re monitoring quality, maintaining context across parallel workstreams, making judgment calls about AI output constantly. That’s why three hours of this work can leave you more drained than eight hours of traditional coding. It’s the cognitive equivalent of your first week at the gym.

The Adoption Curve Is Really an Exhaustion Curve

Yegge’s eight-level framework for AI adoption maps almost perfectly onto an exhaustion curve, though I don’t think that was his intention.

At levels one and two, you’re barely using AI. Autocomplete here, a question there. Not much cognitive load. Not much fatigue.

At levels three through six, you’re in the deep end. You’ve given the agent more autonomy, you’re reviewing less line-by-line and more holistically, you’re running multiple agents, and you’re constantly navigating a workflow that didn’t exist 18 months ago. This is where the drain lives. This is the heavy deadlift.

At levels seven and eight, something interesting starts to happen. You’ve built orchestration systems. The AI works more autonomously. You’ve learned what to trust and what to check. You describe outcomes and walk away. Matt Shumer describes exactly this: telling AI what to build, leaving for four hours, and coming back to finished work. The adaptation is starting to take hold.

The exhaustion isn't evenly distributed. It peaks in the middle, right where most early adopters are sitting right now. And that's why the fatigue feels universal: the people talking about AI the most are disproportionately the ones in the hardest part of the curve.

Nobody Wrote Articles About “Driving Fatigue”

Remember learning to drive? The first time you merged onto a highway, you probably gripped the steering wheel like your life depended on it (which, to be fair, it did). You came home from a 30-minute drive completely wiped. Your brain had been running at maximum capacity: checking mirrors, managing speed, anticipating other drivers, processing road signs, all simultaneously and all consciously.

Now you drive an hour while half-listening to a podcast and eating a sandwich. The task hasn’t changed. You changed. Your brain built efficient neural pathways for driving, compressing what used to require full conscious attention into background processes.

Nobody wrote think pieces about “driving fatigue” as an existential crisis. Nobody suggested that cars have a “vampiric effect” on their operators. We understood, intuitively, that the exhaustion was temporary. It was the cost of learning something new.

That’s the part the current discourse is missing. “AI fatigue” is being treated as a permanent condition, a fundamental feature of the technology, when it’s actually a transition cost. It’s training soreness, not chronic illness.

Why This Matters More Than Comfort

This distinction isn’t just semantic. How you diagnose the problem determines what you do about it.

If AI fatigue is a permanent feature of the technology, then Yegge’s three-hour cap is the ceiling forever. Companies should plan for engineers who can only be productive for a fraction of the day. The “vampiric effect” is the price of admission, and we just have to live with it.

But if it’s training soreness, then the playbook is completely different. You manage the load. You build gradually. You don’t skip the gym because you’re sore. And critically, you don’t assume today’s exhaustion level is tomorrow’s.

The engineers who push through this phase, who build the cognitive pathways for directing AI work, reviewing at the right altitude, and maintaining architectural intent across parallel workstreams, will eventually do this as naturally as driving. The three-hour wall will move to five, then seven. Not because they’re working harder, but because the work stops being effortful in the same way.

Meanwhile, the engineers who read about “AI fatigue” and decide to stay at level two, comfortable, familiar, not drained, will find themselves in a much worse position.

Not because they failed to keep up with a trend, but because they never started the training that everyone else already got through.

The Real Risk: Confusing Soreness With Injury

I want to be clear about something. There’s a difference between training soreness and actual injury, and it applies here too.

If you’re “vibe coding” for 14 hours a day, sleeping four hours, and running on adrenaline because the novelty is intoxicating, that’s not training. That’s overtraining. And just like in the gym, overtraining doesn’t build anything. It breaks you down.

Yegge’s three-hour observation is valuable not as a permanent ceiling, but as a signal about current recovery needs. When you’re early in training, you need more rest between sessions. As you adapt, you can handle more volume. The people who burn out aren’t the ones doing three focused hours of AI-augmented work. They’re the ones who can’t stop because the feedback loop is too compelling, which is exactly the slot-machine dynamic I’ve written about before.

The answer isn’t to avoid the gym. It’s to train smart: intense sessions, real recovery, gradual progression.

A Prediction Nobody Else Is Making

Here’s what I think happens over the next 12 to 18 months.

The “AI fatigue” narrative will peak sometime this year. There will be more articles, more hand-wringing, probably a few high-profile engineers who publicly “take a break from AI tools.” It will feel like a meaningful backlash.

Then it will quietly fade. Not because people stopped using AI, but because the early adopters finished adapting. The three-hour wall will feel like a distant memory for people who’ve been doing this for a year and a half. They’ll direct AI workflows the way they once wrote for-loops: without thinking about it.

And the gap between those who pushed through the soreness and those who didn’t will be enormous. Not because AI skills are rare, but because the adaptation itself, the ability to think in terms of direction, evaluation, and orchestration rather than line-by-line implementation, will have become second nature for one group and completely foreign to the other.

The worst response to training soreness has always been the same: stop going to the gym.

What This Means for Leaders

If you’re running an engineering team right now, understand what you’re actually looking at. Your most productive engineers are also your most tired. That’s not a contradiction. It’s the clearest signal you have that adaptation is underway.

Don’t respond by dialing back AI adoption. Don’t respond by pretending the fatigue isn’t real, either. Respond the way a good coach would: manage the training load. Expect intense, focused sessions of AI-augmented work followed by genuine recovery. Give people permission to operate at what feels like reduced hours while they’re building new cognitive skills. The output will still be multiples of what it was before.

The companies that get this right will have adapted teams by the end of the year. The ones that either ignore the fatigue or retreat from AI in response to it will find themselves with the worst of both outcomes: exhausted engineers who never got through the hardest part of the curve.

We are not experiencing the side effects of a new technology. We are in the early weeks of training for a new way of working. The soreness is the proof that it’s working. Lean into it, manage it, and trust that your brain, like every other adaptive system in nature, will do what it has always done.

It will adapt.

Andrew Filev is founder/CEO of Zencoder. He transformed collaborative work management by founding Wrike (20k+ customers, sold for $2.25B), was featured in Forbes & The NY Times, and his passion for AI & innovation continues to shape the future of work.