Magnitude Over Frequency
The Mental Shift Required to Win at High-Stakes Innovation, or Why Leaders Struggle with 'Extreme Speculation' and How to Get It Right
Recently, I have been re-reading Nassim Nicholas Taleb’s excellent (and somewhat lengthy) New York Times bestselling book “Antifragile: Things That Gain from Disorder” from 2012. The first time I read the book, I was mostly just curious about the broader implications, having come off reading his breakthrough book “The Black Swan: The Impact of the Highly Improbable.” Little did I (and probably most of the rest of the world) know that only a few short years later, Taleb’s ideas would be put to the ultimate test when the planet officially declared “We’re Closed” as a result of the COVID-19 pandemic.
My current re-read looks at the book from a different perspective, though: The question that has lingered in my mind ever since I finished writing “Disrupt Disruption – How to Decode the Future, Disrupt Your Industry, and Transform Your Business” was somehow less “how do companies innovate – and maybe even disrupt,” but rather “how do you create companies that thrive in a world that becomes more uncertain by the day?” Related, but not the same. One looks at what Taleb would call a “robust” company – one that can absorb shocks and stay on the path. The other asks how you create organizations that not merely absorb but integrate and grow from uncertainty and the resulting barrage of (external) shocks.
In Taleb’s book, you’ll rediscover, under a different name, a good old friend of ours – the concept of core and edge, or, in Taleb’s world, the “barbell strategy.” Antifragility requires you to invest in both the known and safe and the unknown, speculative, and inherently unsafe. None of this is new; both Taleb’s telling and our own stand on the shoulders of countless giants, and much ink has been spilled explaining the concept of, and necessity for, a diversified portfolio (and the challenges organizations have with managing those two aspects of their business – which is what we focused on in “Disrupt Disruption”).
Taleb’s choice of words to describe this approach made me pause and connect it to an observation that, I believe, points to one of the main challenges organizations and leaders face in acting on this insight. Instead of core and edge, Taleb argues for investing about 80-90% of your resources in “extreme safety,” with the remaining 10-20% going into projects classified as “extreme speculation.” To distinguish the two ends of the spectrum:
Extreme Safety: The bulk of the company’s resources is dedicated to its core, reliable, and predictable business. This is the cash cow that provides stability and funds for experimentation. It is protected at all costs.
Extreme Speculation: A small, capped portion of resources is invested in high-risk, high-reward “bets.” These could be R&D projects, exploring new markets, or skunkworks teams. The key is that their failure is non-fatal to the company, but their success can be transformative.
I am fairly sure – and know from countless conversations – that leaders across industries and organizations will inherently agree that this barbell strategy is a firm necessity for building antifragile organizations. And yet they typically struggle to implement this admittedly simple concept. I believe, at least to a degree, it comes down to this distinction:
If you think about any (innovation) initiative, you can assess it on both an axis of its impact on your bottom line if you get it right (the “magnitude of correctness”) and the likelihood of getting it right (the “frequency of correctness”). I doubt any company and its leaders struggle with the idea that for bets in the “extreme safety” category, you need to get things right pretty much all of the time – and companies regularly do. It’s typically things they have been doing for a long time, incremental innovation, with lots of available market data to minimize risk. Where the struggle gets real is when companies embark on their “extreme speculation” journey – a fair number of companies never get there, as it’s too scary, too ambiguous, and too complex. But even those who try often fall for the illusion that you can (and should) be able to get outlandish returns with a high (or average) level of correctness.
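To make the distinction concrete, here is a minimal sketch in Python. All numbers are purely illustrative assumptions of mine (not figures from Taleb or from this post): treating each bet’s expected value as frequency of correctness times magnitude of correctness shows why a low-frequency, high-magnitude bet can still pull its weight in a barbell portfolio.

```python
# Illustrative sketch only: win rates, payoffs, and the 85/15 split below
# are hypothetical assumptions, not data from Taleb's book or this post.

def expected_return(frequency: float, magnitude: float) -> float:
    """Expected multiple on capital: probability of being right times payoff if right."""
    return frequency * magnitude

# "Extreme safety": right ~95% of the time, but the win only returns ~1.1x.
safe = expected_return(0.95, 1.1)        # ≈ 1.045

# "Extreme speculation": right only ~5% of the time, but a win returns ~30x.
speculative = expected_return(0.05, 30)  # ≈ 1.5

# A barbell allocation: 85% in the safe sleeve, 15% in the speculative one.
barbell = 0.85 * safe + 0.15 * speculative

print(f"safe: {safe:.3f}, speculative: {speculative:.3f}, barbell: {barbell:.3f}")
```

Note that the speculative sleeve’s expected value comes almost entirely from rare, outsized wins – which is exactly why judging it on frequency of correctness alone would kill it, even though (under these assumed numbers) it is the better bet in expectation.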
This, of course, is illusory and wishful thinking (as much as actual unicorns are a fantasy). By definition, you will suffer countless false starts, wrong turns, and dead ends when you aim for “extreme speculation.” This requires you to not only mentally but practically treat these projects radically differently from initiatives in your core (the haven of “extreme safety”) – something many companies fail to do. Projects in the “extreme speculation” category are frequently over-resourced, over-staffed, over-engineered, suffer from premature optimization, aren’t killed quickly enough, and learnings aren’t extracted properly. The KPIs are all wrong, expectations are messed up, the wrong type of people are responsible… The list goes on and on (and warrants another post – or maybe we write a book about how to design and lead antifragile organizations).
So, I leave you with a challenge: Look at your own organization’s “barbell.” How much of your portfolio is truly dedicated to extreme speculation? And more importantly, how do you treat those bets? Are you judging them on their frequency of correctness, or are you giving them the space to achieve a magnitude of correctness? The answer will determine whether you’re building a company that is merely robust or one that is truly antifragile.
@Pascal
Not to be pedantic, but this misconception about Covid being a black swan keeps cropping up.
The association even irritated Taleb himself:
https://www.newyorker.com/news/daily-comment/the-pandemic-isnt-a-black-swan-but-a-portent-of-a-more-fragile-global-system
Covid has been more accurately called a "gray rhino": a low-probability but very well-known thing that has coexisted with us this whole time. It was only a matter of time before it appeared. There were movies, Bill Gates reports, SARS, Zika, MERS, the 1918 epidemic, etc. that were flashing warnings left and right. Yet few paid attention, and people acted dumbfounded when it arrived.
Side note: this gets into my own theory of social forgetting and humanity's finite knowledge capacity. That so little of the 1918 pandemic entered history lessons and public consciousness (masking?! quarantines?!) that we seem wired to move on and forget pandemics in order to deny the risk.
But to put this in the context of your post, "Covid = black swan" thinking is akin to a business going all-in on AGI and acting shocked and dumbfounded if there's a coming AI market correction or bubble. The barbell approach would be to build out half of Texas as a giant datacenter ... while still hedging with extreme speculative bets that, say, quantum computing will completely upend any AI based on binary logic to represent our quantum universe.
Pascal, this is awesome! Love a good revisit to a classic. The "type of people" for staffing consideration also brings to mind the Pioneers, Settlers, Town Planners model from Simon Wardley.
That unicorn quadrant sure is what people are looking for, isn't it?