Measuring the Other Thing that Matters Before It Really Matters
Building learning metrics when the stakes are still low – before change demands them
If you were to pick 100 leaders from 100 different organizations across a wide range of industries and leadership functions, wake them up in the dead of night, and ask each to name a key metric highly valued and carefully tracked at their respective firm, I suspect you’d hear the same things over and over: metrics focused on operational efficiency, throughput, productivity, revenue growth, profitability and so on.
You’d probably hear a lot about doing things effectively at speed and scale. And you’d probably hear a lot less about learning. That’s a problem.
Metrics focused on scaling efficiency are great in times of relative stability, when we maximize the benefits of what we know (product-market fit, a clear roadmap, and so on) and make all the money we can off hard-earned and perhaps exclusive knowledge. But the relative stability – like the exclusivity of the knowledge and the benefits it confers – won’t last forever. It never does. And when volatility and uncertainty make their inevitable return, scaling efficiency can only take us so far. Sadly, that’s a diminishing returns curve, friends.
Whatever opportunity exists in the volatility and uncertainty has to be realized through learning and by leaders and teams that know how to learn effectively – which brings us back to metrics. If we want to build organizational learning as a foundation of agility and sustainable relevance (and more generally, the future), we better know how to measure and incentivize it, and the measurement comes first.
So how might we start to measure what matters here (before it really matters – at, like, an existential level)?
A decade ago, Bezos was already preaching the gospel of rapid experimentation: “Our success at Amazon is a function of how many experiments we do per year, per month, per week, per day.” If you’re looking for solid learning agility metrics, Experiment Velocity is a good place to start your measurement.
Pair that with Learning Cycle Time, essentially the time from hypothesis to experiment to validated learning, so that you’re not only measuring the number of experiments but also how good the organization is at running, coordinating, and eventually accelerating them by designing shorter experimental loops.
But don’t stop there. Data from experimentation, like feedback from customers, is just information. An effective learning organization needs to be able to turn information into a valuable, transferable body of shared knowledge and insights by building learning infrastructure rather than just learning moments. Consider tracking something like Knowledge Reuse Rate to measure how often a documented solution or insight originating with one team is cited, adapted, or implemented by others.
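To make these three metrics concrete, here is a minimal sketch of how they might be computed from an experiment log. The data model, field names, and numbers are all hypothetical, purely for illustration – any real implementation would depend on how your organization actually records experiments and insights.

```python
from datetime import date

# Hypothetical experiment log: (hypothesis_date, validated_learning_date, team)
experiments = [
    (date(2025, 1, 6), date(2025, 1, 20), "growth"),
    (date(2025, 1, 13), date(2025, 2, 3), "growth"),
    (date(2025, 2, 1), date(2025, 2, 15), "platform"),
]

# Hypothetical insight registry: documented insight -> other teams that reused it
insight_reuse = {
    "onboarding-friction": {"platform", "payments"},  # reused by two other teams
    "pricing-page-copy": set(),                       # documented but never reused
}

# Experiment Velocity: experiments completed per unit time (here, per month)
months_observed = 2
experiment_velocity = len(experiments) / months_observed

# Learning Cycle Time: days from hypothesis to validated learning, averaged
cycle_times = [(done - start).days for start, done, _ in experiments]
avg_cycle_time_days = sum(cycle_times) / len(cycle_times)

# Knowledge Reuse Rate: share of documented insights picked up by another team
reuse_rate = sum(1 for teams in insight_reuse.values() if teams) / len(insight_reuse)

print(experiment_velocity, avg_cycle_time_days, reuse_rate)
```

The point of the sketch isn’t the arithmetic, which is trivial, but the instrumentation it implies: you can only compute these numbers if experiments and insights are logged somewhere consistently in the first place.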
And now, it’s worth zooming out to capture one more key metric that should inform (and meaningfully align) the entire learning agenda. Call this one Contact with Your Ultimate Reality. We mentioned customers a moment ago: How much time is actually being spent with customers – and not only by those on the front lines – to understand how their world is changing?
Without the right metrics in place, it’s hard to appropriately value and incentivize learning. The present trumps the possible, and the latter goes unexplored. The org defaults to what it knows, builds everything around playing that version of the game exceedingly well, and is perfectly positioned to be wrong-footed when the game changes.
Which it will – because it always does.
@Jeffrey


Great post.
Of course, as we continuously learn about what metrics we should focus on, more complex, composite metrics can sometimes get us one step closer to what is important.
Netflix has taken stabs at this (https://netflixtechblog.com/improve-your-next-experiment-by-learning-better-proxy-metrics-from-past-experiments-64c786c2a3ac) as have academics (https://arxiv.org/abs/2402.03915).
But one of my favorite illustrations came a couple of months ago from Spotify, which developed an "Experiments with Learning" (EwL) framework: https://engineering.atspotify.com/2025/9/spotifys-experiments-with-learning-framework