“Stop asking people for roadmaps to places they’ve never been.”
This was the capstone to an insightful rant-in-miniature that our friend and colleague Stefanie Falconi delivered against the expectation that a broadly applicable playbook for AI in the public interest should have arrived by now.
That statement has been lodged in my brain for weeks, and the more I turn it over, the more it strikes me as a perfect encapsulation of where we are, not only with AI strategy but also with the larger challenge of leading teams and organizations in the present moment and probable future.
Operating without a map, we face uncertainty. Waiting for a map, we face the loss of agency. And while the former is uncomfortable, it’s also inescapable. The latter is a choice, and it’s a bad one.
Before generalizing too much to the field of leadership at large, let’s talk AI strategy, or more specifically, adoption strategy as it pertains to generative AI/LLM-based tools. As things stand, this is a domain characterized by:
High Urgency - Leaders are bombarded with messaging about the transformative potential of AI and the risk that their orgs will be left behind if they miss the wave of opportunity. AI adoption FOMO is real.
Low Clarity - The capability of Gen AI tools far outstrips reliability. And the hype narrative far outstrips both. Leaders struggle to map solutions from a bewildering marketplace onto problems/opportunities that are often poorly defined themselves.
High Complexity - Again, the reliability problem, compounded by a jagged frontier of capability that is still fitfully advancing. And all of this plays out within human organizations and systems that often have low risk tolerance and high consequences for failure.
Low Certainty - Everything from the complexity bucket, plus murkiness around definitions, benchmarks, metrics, demonstrated ROI, bigger-picture timelines for the industry, effects on the labor market, regulation, etc. Did we mention that there are no roadmaps?!?
Yikes. So what to do?
The answer is most assuredly not “nothing” – and we don’t think it’s “wait for someone to hand you a map” either.
Rather, leaders should focus on the dimensions of the challenge that they can meaningfully address within their organizations – i.e., clarity and urgency – while acknowledging the complexity and uncertainty. They can pursue clarity even amid uncertainty with a strategy whose core principles are demystification and iterative learning through systematic experimentation.
As a rule, experimentation drives learning, and learning improves clarity (with regard to problem definition, the specific tasks to be automated or augmented, the reliable and scalable capabilities of tools, how those tools can fail, what rates and kinds of failure can be tolerated, etc.). Improved clarity, in turn, brings a better sense of the degree of urgency that might – or might not – be warranted and how to proceed accordingly.
The best AI adoption strategy for a given team or org or sector isn’t going to be found on a roadmap or even in a readily applicable body of established best practice. More likely, it’s going to develop as emergent practice that can only come through exercising our agency as learners and learning leaders.
Getting comfortable with and skilled in a “test and learn” adaptive approach to strategy allows leaders to exercise their agency even in complex environments – and to find their way to the opportunity that always exists in uncertainty and times of change.
There is no roadmap – no single, definitive framework or model. No known path. Leaders will only discover the path by walking it.