Winning Is A Vanity Metric
Why Spotify is optimizing for zero-success experiments, the reason most innovation teams get fired, and the 2026 blueprint for outlearning the market
Happy New Year, friend! We hope you had a wonderful holiday season and a fantastic start to 2026 – may the year ahead be nothing short of epic!! And welcome (back) to the radical Briefing. 🤗
Before we get into it – in case you missed it, we would love your feedback on how to improve the Briefing in the year(s) to come! We have three quick questions for you – we promise it won’t take longer than a minute to respond: Take our survey here
A member of the radical community recently pointed us to a post on Spotify’s R&D team’s blog, “Beyond Winning: Spotify’s Experiments with Learning Framework.” Spotify, alongside companies such as Shopify and the FAANGs (Facebook, Amazon, Apple, Netflix, and Google), has long been at the forefront of experimentation, doing so in a planned, systematic, and data-driven way.
What Spotify has done – and what we can all learn from – is, first, reduce the cost of experimentation by building a dedicated experimentation platform (aptly called “Confidence”), and then shift what the company measures from the sheer number of experiments to the insights gained from them (an approach the company calls “Experiments with Learning,” or EwL).
The important bit here – and what we would like to direct your attention to – isn’t the specific approach, the platform, or the framework Spotify developed, but the shift in focus: Instead of merely looking at the “win rate” of its experiments (which is, arguably, what most companies focus on), the company widens its aperture to the learnings those experiments produce. In essence, you can have a zero percent win rate (i.e., none of your experiments pans out) and still gain insights of enormous business value. Oftentimes, and somewhat counterintuitively, knowing what not to do and why is as valuable as knowing what to do.
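To make the distinction concrete, here is a minimal sketch (ours, not Spotify’s – the `Experiment` fields and the “learning rate” metric are illustrative assumptions, not part of their EwL framework) of scoring the same portfolio on both dimensions:

```python
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    won: bool            # did the variant beat control?
    insights: list[str]  # documented, actionable learnings

# Hypothetical quarter: zero wins, but most experiments taught us something.
portfolio = [
    Experiment("paywall copy v2", won=False,
               insights=["price framing matters less than trial length"]),
    Experiment("onboarding quiz", won=False,
               insights=["drop-off happens before the quiz, not during it"]),
    Experiment("dark-mode default", won=False, insights=[]),
]

win_rate = sum(e.won for e in portfolio) / len(portfolio)
learning_rate = sum(bool(e.insights) for e in portfolio) / len(portfolio)

print(f"win rate:      {win_rate:.0%}")       # 0%
print(f"learning rate: {learning_rate:.0%}")  # 67%
```

Same quarter, two very different stories: judged on wins it is a total failure; judged on documented learnings it is two-thirds productive.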
All of which sounds logical and straightforward, until you remember that we’ve built entire careers around the mythology of winning. Most leaders would recoil at an innovation team with a zero percent success rate, no matter what those teams learned along the way. We’ve watched this play out countless times: the innovation team gets disbanded, the learning evaporates, and the organization remains exactly where it started, only now more risk-averse than before. We eliminate the team but keep the conditions that made real innovation impossible in the first place.
As we’re writing this during the first week of 2026, let us offer you something more valuable than a resolution: Make this the year you shift your aperture from outcomes to insights. What if success wasn’t about how many experiments worked, but about how much your organization learned? What if the metric that mattered was how quickly you could integrate those learnings into your next move?
As Dashun Wang, professor of management and organizations at the Kellogg School of Management, puts it in his seminal research paper “Quantifying the dynamics of failure across science, startups and security”: “[…] learning reduces the number of failures required to achieve success […]”
Wang’s research confirms what many of us have learned the hard way: it’s not the failure that stops us, it’s the failure to extract and apply the learning. Every setback either moves you forward or leaves you circling the same challenges. The difference isn’t luck or timing, it’s whether you’ve built the capacity to learn from what didn’t work.
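Wang’s actual models are considerably richer, but a toy simulation (entirely our own construction, with made-up numbers) illustrates the quoted mechanism: if each failure nudges your per-attempt success probability upward, the expected number of failures before the first success drops sharply.

```python
import random

def failures_before_success(p0: float, boost: float, trials: int = 100_000) -> float:
    """Average number of failures before the first success, where each
    failure raises the per-attempt success probability by `boost`."""
    total = 0
    for _ in range(trials):
        p, fails = p0, 0
        while random.random() >= p:   # this attempt fails
            fails += 1
            p = min(1.0, p + boost)   # "learning": get better after each failure
        total += fails
    return total / trials

random.seed(42)
print(failures_before_success(p0=0.05, boost=0.00))  # no learning:   ~19 failures
print(failures_before_success(p0=0.05, boost=0.02))  # with learning: ~6 failures
```

The parameters are arbitrary; the point is the shape of the effect Wang quantifies: learning compounds, so the cost of each failure buys down the price of the next one.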
And if this interests you, we are writing a book on the larger topic of experimentation and learning (“OUTLEARN – The Art of Learning Faster Than the World Can Change”), which we aim to publish at the end of Q1 2026. Join our authors community to get early access to the book and follow along with the writing process.
@Pascal and @Jane


Great post. The other side is you can quite literally win too often, which is absolutely counterintuitive.
Having led & orchestrated global teams on experimentation, we would monitor our win rates for a variety of reasons. Sometimes it was as an indirect measure, such as catching an increased rate of failures in experiment setup and/or planning. But some of it was purely cultural: essentially Goodhart's law.
Marissa Mayer's "41 shades of blue" infamy at Google may have made many designers quit, but Google has the sheer volume to make even the slightest incremental improvement worth millions of dollars. But most of us are not Google.
Teams that value winning over learning end up in an anti-pattern of incrementalism: really long experiments (= opportunity costs) chasing minimal statistical significance. In other words, risk-taking goes out the window, and risk-taking is why you should be experimenting in the first place: safe-to-fail bets and learning. You don't go from 10% to 10x with safe bets.
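To put numbers on that opportunity cost, here's a back-of-the-envelope sample-size calculation (standard two-proportion power math with a pooled-variance approximation; the baseline rate and lifts are made-up examples, not data from any team):

```python
from math import ceil

def n_per_arm(p: float, lift: float, z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate users needed per arm to detect an absolute `lift` over a
    baseline conversion rate `p` at ~95% confidence and ~80% power."""
    p_bar = p + lift / 2  # average rate across the two arms
    return ceil(2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / lift ** 2)

# Baseline conversion of 5%: a bold bet vs. an incremental tweak.
print(n_per_arm(0.05, 0.010))  # +1.0 point lift:  ~8,150 users per arm
print(n_per_arm(0.05, 0.001))  # +0.1 point lift: ~752,000 users per arm
```

A 10x smaller effect needs roughly 100x the sample. At Google's traffic that's an afternoon; at most companies it's months of runtime locked up per experiment, which is exactly the incrementalism trap described above.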
My team sometimes held an "intervention" when a product team got too cozy with win rates. Armed with data science, we'd show alternative growth curves: sporadic but higher-value jumps instead of continuous incrementalism. We even created an annual company award for our "Biggest Loser" of the year (framed as "Biggest Learning"), elevating the team that took the greatest (losing) risk and yet produced one of the best learnings.