Meet Us in the Middle for Better Conversations about the Future
Exploring the Messy Middle: How Nuanced Dialogue Can Unlock Innovative Solutions for Complex Futures
Here’s an interesting idea to play with: What if the best, richest & most valuable conversations about the future require us to find—and focus on—the messy middle ground?
Think back to tumultuous 2020. In the US, it seemed easy, almost instinctive, for people to envision the unfolding pandemic as conforming to one of two extreme scenarios: a near-apocalyptic doomsday contagion (that might look a lot like a Hollywood viral disaster film) or a barely-real threat that had been wildly exaggerated to serve the interests of some shadowy cabal. Neither of these imaginings came close to capturing the complex reality the human community would actually have to navigate, and neither was socially adaptive or particularly useful once it came time to do that navigating.
Surely we would all have been better served if we could have staked out those two extreme possibilities and then asked ourselves what might exist between them: perhaps a range of futures with greater probability, and with more room to exercise agency and to hold nuanced conversations about collective safety, reasonable risks, evolving knowledge, and so on.
Pascal often argues for the importance of really understanding the middle ground when envisioning the possible futures of technologies like fully autonomous shipping trucks. Trucking, specifically, is a domain where the biggest ROI is most likely to be realized with tech that's just good enough to get the human driver out of the truck, perhaps to work remotely as a teleoperator. Full autonomy isn't required to achieve meaningful change, and it isn't likely to be feasibly or safely scaled in the near future anyway. The real space for creative innovation and action isn't waiting at the theoretical end of the exponential curve: It's somewhere in the middle.
Similarly, it’s been fascinating to observe the conversation following recent announcements and teases from OpenAI and Google as both rush to package their respective AI models as productivity-enhancing smart assistants (“Clippy 2.0,” as less-charitable commentators have quipped). Should we simply read this latest turn in the AI (hype) race as confirmation that the existential-threat debates of just a few months ago were wildly, even comically overblown? Or can we see this moment as an invitation to find and explore the middle ground, and to engage in the conversations to be had once we acknowledge that, while AGI isn’t exactly at hand, the impact of today’s AI tools is likely to be significant and far-reaching even if they advance little beyond their current state?
Finding and focusing on the messy middle (neither utopian nor dystopian, neither all-powerful nor total hoax) is difficult. In the AI conversation, it seems pretty clear that the big players have had an interest in stoking existential fears about the very tech they were developing: doing so strengthens their hands, keeps funding flush, and distracts policymakers and other interested parties from digging into near-term concerns like rampant copyright violation. Beyond that, we humans, living in a state of near-perpetual overwhelm (polycrisis, if you like), are probably drawn to big headlines, clear binaries, a sense of certainty, and black/white or good/evil thinking because it makes the decision trees seem simpler and easier to navigate. But I’d argue that this draw also poses a real threat to our collective ability to exercise agency in shaping futures for the better.
Consider a final example: If we can only envision ecological futures where either (1) we’re doomed to complete environmental catastrophe or (2) we’re saved by some deus-ex-machina horizon technology (e.g., scalable carbon removal), aren’t we likely to limit ourselves to a set of actions that might not maximize our collective capacity to innovate, collaborate, empathize, and generally do our creative human best to make a preferred future just a bit more likely?
@Jeffrey
Yes, important points, Jeffrey - one of the reasons I have enjoyed using Joseph Voros's futures cone, which distinguishes the following:
Preposterous Futures
The outermost cone represents futures that we currently consider ridiculous, impossible, or that will "never" happen. However, Voros included this cone to remind us not to dismiss such futures entirely, as they may become possible with new knowledge or technological advances.
Possible Futures
The next cone inwards represents futures that we think "might" happen based on some future knowledge we do not yet possess but could potentially acquire.
Plausible Futures
This cone contains futures that we think "could" happen based on our current understanding of how the world works (physical laws, social processes, etc.).
Probable Futures
Narrower still, this cone represents futures that are likely to happen, usually based on current trends and extrapolations from the present.
Preferable Futures
These are futures that we think "should" or "ought to" happen, reflecting our aspirations and goals. A preferable future can belong to any of the other future categories.
Projected Future
At the core is the singular projected future, which is the default "business as usual" extrapolation of current trends.