2 Comments

I like how you rightly frame agility/alignment as one of Barry Johnson's polarity maps here: an unsolvable rather than binary problem that depends on continual, context-driven re-weighting between the two poles.

But agents in every field of knowledge work? This definition and vision of superagility is heavily burdened with reductionist thinking: the idea that everyone in a company becomes maximally efficient by ruthlessly pursuing their own individual goals, without ever having to slow down and coordinate (and re-coordinate) with each other.

This is also one of the reasons CEOs are now questioning the limits of company effectiveness - not so much efficiency - when no one meets in person anymore (in a shared office or otherwise).

What's missing is the recognition that organizational intelligence is not the same as individual intelligence. UC Berkeley ML/AI legend Michael I. Jordan touched on this last week at the AI Action Summit in Paris, pointing to the microeconomics, collective intelligence, and market-based mitigation of uncertainty that these reductionist visions of AI agents do not possess:

https://www.youtube.com/live/W0QLq4qEmKg?t=3811s


Love the concept of superagility, and totally agree with the 'AND' of superagility ALIGNED with your organization's purpose, vision, and values. Four years ago Pascal highlighted that Finance and Accounting are operating in an environment of rapid change and high uncertainty, and that finance needs a new skill - agility - defined broadly here as the ability to experiment and iteratively learn and unlearn at speed. The concept of 'superagility' just adds a big exclamation point to that. Great work, and thanks for continuing to inspire us, Jeffrey & Pascal!
