Problem Definition and the Growing Automation Trap
Why Problem Definition Remains Critical in the Age of AI: A Healthcare Leader's Perspective on Digital Transformation
While co-facilitating a leadership program for a healthcare client earlier this year, I had a chance to sit in on a Q&A with the company’s chief digital officer. Not surprisingly, one of the first questions out of the gate was about how to plan an AI-forward digital transformation of processes and business units (that might take months – conservatively) when the capabilities of AI tools seem to be advancing almost by the day.
The CDO made several thoughtful points in her answer, but the insight that resonated most with me – and, judging by their reactions, with the audience – was her emphasis on the critical importance of problem definition. She argued that a rigorous understanding and definition of the problems to be solved wasn’t just a first step but actually the linchpin in the whole process – a necessary foundation for any meaningful digital solution to be built, regardless of the tools ultimately available.
On one hand, rigorous problem definition is hugely helpful in breaking a problem down into component tasks, understanding which can be attacked in parallel, and selecting the appropriate tool(s) to bring to the table – while also identifying aspects of the proposed solution for which those tools might not be appropriate after all. On the other hand, diving into digital transformation initiatives without clarity on the core problems to be solved heightens the risk of digitizing and automating aspects of legacy processes that not only don’t fit the future of the business but aren’t even models of efficiency or customer-centricity today.
This is one version of the contemporary AI-accelerated automation trap: Organizations can uncritically equate digitization and automation with progress and wind up entrenching inefficiencies and bad practices rather than eliminating them. The rapid evolution of AI today means that we can now generate “solutions” faster than ever, but without precise problem framing, we risk “solving” the wrong issues – and doing stupid things more easily at unprecedented speed and scale.
As we’ve discussed in previous pieces, the new era of AI-enabled “superagility” raises the stakes for strategic alignment. Speed isn’t a virtue when you’re going in the wrong direction, and one significant unintended consequence of gen AI-driven productivity is our enhanced ability to rapidly create, deploy, and scale poorly designed solutions that we might not even understand (shoutout to vibe coding and deep research!) to problems that were themselves not properly understood in the first place.
The counterintuitive lesson once again: The ability to slow down and think critically may confer the ultimate competitive advantage in an age of relentless acceleration.
@Jeffrey
As a rule, this is a holdover from old IT/software projects. Garbage in, garbage out. There is a huge difference between building the software right and building the right software.
This gets to the heart of the efficiency-versus-effectiveness question, since AI operates squarely in the efficiency domain.