Stories of Discontinuity
Every vision of the future is fiction. Some are just more comfortable than others.
Heard any wild AI stories lately? The market certainly has – and not just one of them.
Last week, it was a viral piece of AI doom-inflected speculative macro fiction from Citrini Research that sketched out a scenario in which the rapid disruption of white-collar work and service-industry business models touches off a broad economic crisis. The post wasn’t news and wasn’t even really analysis, but it sent the prices of software and finance stocks (especially those unfortunate enough to figure into the scenario by name) reeling just the same. And the Citrini post was actually the second massively viral AI-takeoff-ravages-the-labor-market story to spook investors in the span of just a few weeks.
Now, as you’d expect, plenty of commentators jumped in both times with critiques (1, 2, 3) and counterarguments (my favorite), and even a couple of full-blown speculative counternarratives. Many of those commentators rightly pointed out that the whole Citrini thing (like the Shumer post) is, well… just a story.
But well… so are all of our other visions of the future.
Thinking about – and attempting to plan for – the future is fundamentally an act of imagination. That act might be grounded in historical data and built on the extrapolation of today’s evident, quantifiable trends into the space of tomorrow, but once we step into that tomorrow, we are in the realm of imagination, assumption, projection, story.
The future stories that strike us as most plausible or even probable are often stories of continuity, where tomorrow doesn’t look so drastically different from today. The path of continuity is easier to imagine and also often feels more “real” because it’s grounded in more historical data. But all of that data is about the past, and our most important decisions are about the future.
Stories of discontinuity feel unfamiliar. That’s the point. They can widen the aperture of our imagination, expand the scope of conversation and awareness, offer a fresh perspective on present practice and strategy, and maybe even enable us to discover non-intuitive paths forward.
Now, is all of this to say that I think the Citrini narrative points to a particularly probable future – or that it’s even a particularly well-crafted bit of speculative fiction? Not really, no.
But I appreciate the opportunity that these viral narratives of discontinuity offer for us to engage critically with alternative future stories – and to then turn that same critical lens onto the spectrum of future narratives that we don’t so easily recognize as “stories”. Sometimes that’s because they’re narratives of continuity grounded in historical data and past experience. Sometimes it’s because they come from ostensible authorities. Sometimes it’s because they feel too deeply entrenched to ever be shaken loose or challenged.
We can and should do this at the macro level, more carefully examining big stories about AI and societal futures – asking where each story originates, what assumptions are baked in, whose interests and agendas are served, who has real agency, etc.
And we can do this at the micro/org level too. Some of the most interesting conversations I’ve been having lately have been with HR & People leaders about the AI-augmented futures of their organizations. One thing that keeps coming up and sticks with me: The AI-future vision of the org is rarely people-centric. It’s typically constructed around optimization and efficiency first, and almost every other value or stakeholder interest figures as an afterthought. That’s a bet and an argument, and it’s also a story that leaders within the org are telling themselves about the future.
And make no mistake: There’s a set of assumptions baked into that vision just as surely as there is in the Citrini memo or Matt Shumer’s viral blog post. And if we unpack them and find ourselves dissenting, what alternative framings or narratives are we putting out there to show other possible paths forward?
So, I’ll ask again: Heard any wild AI stories lately?
@Jeffrey

