Why We Should Question the AI Future of Work Narrative
Developers using AI took 19% longer to complete tasks – so why are we shrinking headcount? The driver isn’t productivity; it’s belief.
Depending on the week and the papers being cited, the diffusion of new AI tools into the workplace is either already hurting employment, or it isn’t driving unemployment (at least not yet), or we simply don’t know, because the question of causality here is genuinely complex.
You can find smart people with interesting studies making good arguments and critiques on both sides, and you definitely don’t need my unqualified take on that question. Instead, I want to explore a different but related question here: If we were to go ahead and assume that the diffusion of AI tools into the workplace IS driving firms to reduce headcounts and slow hiring, would the most likely reason for those decisions actually be that the tools are increasing productivity?
I think the answer might be “No.” While the latest models and tools are increasingly capable, the capability-reliability gap persists, and that gap may account for the limited evidence of AI-enabled productivity gains in real-world work.
Earlier this summer, METR published a paper on a study designed to understand how AI tool use affects the “productivity of experienced open-source developers working on their own repositories,” and the conclusion was surprising.
“When developers are allowed to use AI tools, they take 19% longer to complete issues—a significant slowdown that goes against developer beliefs and expert forecasts. This gap between perception and reality is striking: developers expected AI to speed them up by 24%, and even after experiencing the slowdown, they still believed AI had sped them up by 20%.” (emphasis added)
The productivity gains weren’t there. But the belief was. And I think belief might be a key to understanding the complex and very weird bigger picture too.
We are living in a decision environment that is being profoundly shaped – and arguably warped – by a dominant narrative holding that the truly transformative AI-enabled future of work is very nearly at hand. (And this is actually the comparatively mild, non-“superintelligence” version of that narrative!)
This narrative anchors a whole lot of belief, and it’s being pushed relentlessly by interests whose investments and valuations depend on maintaining the belief – regardless of how long it takes to deliver a transformative future of work. For the big AI players, pushing that narrative makes sense (and mountains of dollars).
For the rest of us? Maybe not.
Firms that make hiring decisions based largely on a belief about the future of AI, which is in turn based on a story about the future of AI, might be rushing ahead of any potential productivity J-curve and jumping into an organizational future of enshittification and burnout. As a friend of mine – who has recently found herself asked to take on extra work (using AI tools!) that would previously have gone to more junior staff – put it to me: “Yes, I can have a thousand designers now, but they’re shitty designers. A thousand copywriters, but they’re shitty copywriters.”
This isn’t to say that the big AI boosters will be wrong in the long run. They very well might not. But it’s looking like they were wrong about 2025, and they might be wrong about 2027 and 2029 too.
The timeline matters – for leaders, learners, and labor markets. We need to recognize the effect that the dominant AI future narrative has on the decisions we make today that will dramatically shape tomorrow. And while we’re at it, we should also recognize that the dominant narrative isn’t the only possible narrative.
It might not even be the most likely one.