Discussion about this post

Swag Valance

Great post. The other side is that you can quite literally win too often, which is absolutely counterintuitive.

Having led & orchestrated global teams on experimentation, we would monitor our win rates for a variety of reasons. Sometimes it was for indirect signals, such as an increased rate of failure in experiment setup and/or planning. But some of it was purely cultural: essentially Goodhart's law.

Marissa Mayer's "41 shades of blue" infamy at Google may have made many designers quit, but Google has the collective volume to make the slightest incremental improvements worth millions of dollars. But most of us are not Google.

Teams that value winning over learning fall into an anti-pattern of incrementalism: really long experiments (= opportunity costs) chasing marginal statistical significance. In other words, taking risks goes out the window, and risk-taking is why you should be experimenting in the first place: safe-to-fail bets and learning. You don't go from 10% to 10x with safe bets.

My team sometimes held an "intervention" when a product team got too cozy with its win rate. Armed with data science, we'd show alternative growth curves: sporadic but higher-value wins instead of continuous incrementalism. We even created an annual company award for the "Biggest Loser" of the year (framed as "Biggest Learning"), celebrating the team that took the greatest (losing) risk and yet produced one of the best learnings.
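To make the "sporadic but higher-value" point concrete, here is a minimal sketch (not from the original comment; all win rates and effect sizes are made-up parameters) comparing the cumulative lift of many safe incremental experiments against a few risky bets that rarely win but pay off big when they do:

```python
import random

# Hypothetical illustration: cumulative multiplicative lift from a
# portfolio of experiments. Parameters below are invented for the sketch.
random.seed(42)

def simulate(n_experiments, win_rate, lift_per_win):
    """Return cumulative lift (as a multiplier) after n_experiments."""
    total = 1.0
    for _ in range(n_experiments):
        if random.random() < win_rate:
            total *= (1 + lift_per_win)
    return total

# Incrementalism: 50 experiments, 80% "win rate", ~1% lift each.
safe = simulate(n_experiments=50, win_rate=0.80, lift_per_win=0.01)

# Risk-taking: 50 experiments, 15% win rate, ~30% lift on the rare wins.
risky = simulate(n_experiments=50, win_rate=0.15, lift_per_win=0.30)

print(f"Safe-bet cumulative lift:  {safe:.2f}x")
print(f"Risky-bet cumulative lift: {risky:.2f}x")
```

With numbers in this ballpark, the high-win-rate portfolio compounds to a modest lift while the low-win-rate, high-payoff portfolio ends up several times larger, which is the growth-curve comparison described above.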

