For a while now, the corporate case for AI has rested on a fairly simple proposition: the tools get smarter, the work gets faster, the costs come down, and everyone applauds the foresight of leadership.
Lovely theory. Shame about the evidence.
Because once you get past the blithering hype and the usual executive fog about “unlocking efficiency”, the productivity story looks rather less triumphant. That little old bank, Goldman Sachs, just announced that it found no meaningful relationship between AI and productivity at the economy-wide level. What would they know. And some recent Harvard-linked research has pointed instead to work intensification, which seems like a polite way of saying “people are doing more, faster, and not especially loving it”. But this is only Harvard, so it’s hardly credible. Even studies from software companies, supposedly AI’s easiest win, are finding that developers can end up working longer hours, not fewer.
Oh dear, this is all getting a bit awkward given how many firms have already started talking as though the gains are in the bank. Good thing we didn’t already lay off half the workforce... oh... wait.
This is where things come a little undone. Businesses are laying people off, freezing hiring, squeezing teams and pointing to AI as though the machine has already done the job of three competent adults and barely broken a sweat. But has it? Or has AI become a useful cover story for cuts leaders wanted to make anyway? Because those are not quite the same thing.
Plenty of big talk, not much visible payoff
One of the more revealing recent pieces came via Fortune, reporting on Goldman Sachs’ review of fourth-quarter earnings calls. AI came up constantly. Roughly 70 percent of S&P 500 management teams mentioned it. More than half linked it to productivity and efficiency. Which sounds impressive until you hit the bit that matters.
Only a small share could quantify any concrete impact. Almost none could point to a clear effect on earnings. Goldman’s verdict was as blunt as an elephant's foot: there is currently no meaningful relationship between AI and productivity at the economy-wide level.
There were gains in narrower use cases, around 30 percent in customer support and software development tasks, according to the note. Fine. Useful, even. But that is not the same as the broad transformation of work we were all recently sold. It is a local improvement, not the second coming of output.
And yet the rhetoric remains absolutely enormous. This has been one of the odder features of the AI boom. The confidence arrived first. The proof seems to have missed the train.
The problem with “productivity” is that it often means “more”
The Harvard material is helpful here because it cuts through a lot of the nonsense. The issue is not simply whether AI helps somebody complete a task more quickly. In many cases it does. The issue is what happens next.
In one study covered by Dexerto and discussed in Harvard Business Review, employees using generative AI were not finding themselves with lovely open stretches of time for deeper thinking or an earlier finish. They were getting more work. The faster they moved, the more their workload expanded. Tasks multiplied. Expectations rose. Work spilled further into the day. Those sneaky capitalist bastards!

Okay, so this is one of those grim little workplace magic tricks. A tool makes something quicker. Management interprets that not as spare capacity to be protected, but as room to pile on more. Suddenly the reward for efficiency is further efficiency.
Nobody says thank you. They just send another brief.
Honestly... did anybody out there really believe we’d all suddenly be working three-day weeks just because we got Copilot to summarise our emails or write our reports for us? Really?
That is why so much of the AI productivity story feels like an acid-induced trip. It confuses output with relief. The employee has produced more, sometimes, maybe. Whether their job is better, saner or more sustainable is another matter entirely.
Quite a lot of this so-called gain seems to amount to “you can now do the work of several people”, followed immediately by “marvellous, we’ll fire a few of your colleagues and pass their work to you”.
In software, where AI was meant to shine, things get especially awkward
Software development was supposed to be the easy bit. If AI could not obviously transform coding, what exactly was it meant to transform?
And yet some of the most interesting evidence is coming from developers who appear to be getting a very modern deal: more commits, more pull requests, more pace, and many more after-hours clean-ups.
Scientific American’s recent piece on developers working longer hours is useful because it captures the contradiction nicely. AI can help teams ship more. It can also create more instability, more rework and more pressure to keep moving. Faster output, in other words, does not always mean cleaner output. Sometimes it just means the mess arrives at speed.

That fits with data from the 2025 DORA report, which found widespread AI use among developers alongside distinctly shaky confidence that it actually boosts productivity. It also fits with findings that more code does not necessarily mean better software, and may in fact mean more rollbacks, more patching and more evenings ruined by machine-generated confidence.
The really sinister part is that people can feel more productive while measurable productivity remains somewhere else entirely. It’s a new phenomenon, and it should worry leaders more than it appears to.
A worker who thinks the tool is helping while quietly spending more time correcting, checking and firefighting is less productive, and probably less happy. The illusion of productivity is likely to become a dangerous new battleground for Human Resources to get their heads around.
Then there is the little matter of layoffs
This is where the conversation tends to become suddenly vague.
Firms announce cuts. Executives mutter about automation, efficiency and the changing nature of work. Markets nod approvingly. Everyone behaves as though a law of physics has asserted itself.
But has it?
Another recent Harvard Business Review piece argued that companies are laying off workers because of AI’s potential, not its proven performance. That feels about right. Much of this is anticipatory. AI has not necessarily replaced the work in a clear, measured and durable way. It has, however, given leaders a compelling future-tense justification for making workforce cuts in the present.
This opens up three possibilities.
- They are right, and the gains are coming shortly.
- They are wrong, and a lot of firms have traded capability for a story.
- Or, perhaps most plausibly, AI is being used as a respectable public explanation for cost-cutting decisions driven by older and less fashionable motives.
Take your pick – none are exactly great news for the 99% of us who aren’t CEOs or major shareholders.
It would be unfair to say every AI-linked layoff is cynical or foolish. Some businesses will genuinely automate parts of the workflow and save money. Some already have. The point is that the evidence base remains much thinner than the conviction with which many leaders are speaking.
And this nuance is important. Why? Because when a company says AI made the cuts unavoidable, the obvious follow-up is: unavoidable on what evidence, exactly?
Measured gains? Proven quality? Sustained output? Or just a strategic hunch, dressed up in machine language? And, before you know it, trust is killed. Trust from employees, trust from the markets, trust from customers. The stakes are too high to simply fudge it.
There is also the small issue of quality, which keeps refusing to die
One reason the productivity story remains so unresolved is that businesses are often measuring the easy bit.
They can count tasks completed. They can count tickets closed. They can count code shipped, slides made, responses drafted, summaries generated. What is harder to count is the drag introduced afterwards. The checking. The rework. The customer frustration. The awkward feeling that everything is technically finished and somehow still not very good.
AI is excellent at producing something. Ask it anything and it’ll do it right away. That has never quite been the same as producing something good.
Which is why the economic promise and the lived experience have diverged so badly. On paper, a team supported by AI should be flying. In practice, many workers seem to be moving faster while spending more time managing volume, fixing mistakes and absorbing pressure.
Not quite the workplace coming of the Messiah we were all promised.
So when do the easy gains turn up?
Possibly later. Possibly unevenly. Possibly not in the glorious sweeping fashion some leaders have already put into next year’s headcount plan.
That is the trouble with grand technological claims. They are usually true in patches first. A narrow workflow improves here. A support function speeds up there. A strong team with clean systems and sensible oversight gets real value. Everyone else gets another layer of clutter.
The broad miracle, meanwhile, remains scheduled for some time after the next earnings call. This does not mean AI is useless. It plainly is not. Nor does it mean the gains will never come. They may. Some almost certainly will.
But the current version of the story feels suspiciously like this: the benefits are always just large enough to justify the decision already made, and just vague enough to avoid being pinned down. That is a very convenient kind of innovation.
What leaders may regret
The gamble being made today is a human one. It’s layoffs, livelihoods, and burning out the remaining workforce with extra workloads they didn’t ask for.
If people experience AI not as support but as acceleration, surveillance, thinning headcount and endless demands for more, they will draw their own conclusions. Candidates will hear those conclusions. So will customers. So will the decent employees with options.
At some point, employers may discover that they did not just buy software. They bought a reputation.
And if the past two years have shown anything, it is that many leaders are perfectly willing to chase savings now and sort out the consequences later. Quality can be patched. Burnout can be reframed. Human cost can be parked under “change management”.
That may work for a while. But one suspects a few of them may one day look back fondly on the era when they still had enough experienced humans around to catch the mistakes before they reached the client, the candidate or the regulator.
The real question
So no, the question is not really whether AI can improve productivity. In some places it clearly can.
The real question is what organisations do with that improvement, assuming it arrives in any meaningful form at all. Do they use it to remove drudgery, protect focus and make work more humane? Or do they use it the way many businesses use every gain: to cut a bit deeper, load a bit more onto the people left standing, and call the whole thing transformation?
So far, a fair amount of AI productivity looks less like liberation and more like compression.
The machine may be fast but the human consequences are catching up.
Takeaways
1. The productivity boom appears to be running slightly behind schedule
For all the noise, the broad-based gains are still hard to spot. Plenty of firms are talking about AI as if the savings are already in the bank. The evidence, so far, is much less obliging.
2. Faster work is not the same thing as less work
In many cases, AI seems to make tasks quicker, then management responds by adding more tasks. The result is not relief. It is a fuller plate, served at greater speed.
3. “Efficiency” can be a polite word for work intensification
Some of the clearest research suggests AI is not removing pressure so much as redistributing it. The drudgery may shrink, but the pace, scope and expectations tend to grow to fill the gap.
4. Businesses may be cutting jobs on faith, not proof
A fair number of AI-linked layoffs look less like the result of measured productivity gains and more like a bet on what AI might do eventually. That is not strategy at its finest. It is hope with a headcount reduction attached.
5. The easy metrics flatter AI, the harder ones tell a grimmer story
Yes, more tickets can be closed and more code can be shipped. The less glamorous questions – quality, rework, customer frustration, burnout – are where the story starts to wobble.
6. Leaders may not need spectacular gains for AI to feel worth it
If AI helps justify smaller teams, tighter budgets and higher output expectations, some executives may decide that is victory enough. Whether the work is better, or merely cheaper, can become an awkward detail for later.
7. The biggest risk may be reputational, not technical
If employees experience AI as a tool for pressure, cuts and corner-cutting, that story will travel. Employer brand has a way of catching up with operational reality, usually at the least convenient moment.
8. The real question is who gets the benefit
If AI does deliver meaningful gains, who keeps them? Workers, in the form of better jobs and less drudge work, or businesses, in the form of leaner teams and higher demands? That argument is only just getting started.