Everything is awesome. Everything is terrible.
How to Make Sense of the Generative AI Hype and Ground Your Organization in Reality
Another week, another set of wildly contradictory headlines, hot takes, and deep dives on the future of Generative AI. If you’re still trying to keep your head above water amid the flood of AI news, you might be struck – as I often am – by the wide and persistent gulf between the consistently bullish analysts and the deeply skeptical ones. And you might well be asking yourself: How can we reconcile all of this, and what’s an engaged and strategically inclined leader/reader to do?
Fret not, gentle reader. Your friends at radical are here with a few suggestions to orient you toward sensemaking and coherence as we close out one crazy year in the development of AI – and look forward to yet another.
Be sure to distinguish between your AI future and the bigger-picture future of the AI industry. Many – but not all – of the “Gen AI is a hype bubble bound to burst” hot takes are really about the latter. Success for an industry poised to spend another quarter of a trillion dollars on AI infrastructure in the next year is radically different from (and measured on a radically different scale than) success at the level of genuinely useful consumer or enterprise AI products. The biggest AI companies may need to achieve AGI – or something like it – to make good on their eye-watering investments, but transformative products and services for users could be (and arguably already have been) realized well short of an AGI future that may remain stubbornly out of reach.
Related: Mind the moving goalposts. For Big Tech/AI, the goalpost-moving is largely self-inflicted. The more outrageous the sums plowed into scaling models and amassing GPUs, the more radical the return has to be for the industry not to look like it has lost its collective mind. Hundreds of billions of dollars should buy seriously world-changing stuff, right?
For the rest of us, the goalposts have also moved – arguably as a side effect of the dynamic playing out within Big AI. The cycles of debate – not only over AGI but over how to measure human-level intelligence and how we’ll know when we’ve surpassed it – threaten to distract us from just how wild a capabilities leap we’ve already witnessed.
Over at Platformer last week, Casey Newton made a strong argument about the dangers of AI skepticism, and in one section he compiled a compelling (and still very much incomplete) list of the impressive and highly varied things people managed to do with Gen AI tools in 2024. I’m quoting Newton’s list here for effect:
Cut customer losses from scams in half through proactive detection, according to the Commonwealth Bank of Australia.
Preserved some of the 200 endangered Indigenous languages spoken in North America.
Detected the presence of tuberculosis by listening to a patient’s voice.
Enabled persecuted Venezuelan journalists to resume delivering the news via digital avatars.
Pieced together fragments of the Epic of Gilgamesh, one of the world’s oldest texts.
Caused hundreds of thousands of people to develop intimate relationships with chatbots.
Created engaging and surprisingly natural-sounding podcasts out of PDFs.
Remember the Turing test, the now-quaint benchmark we held up for… decades? It seems almost trivial now. And make no mistake, friends: We already have capabilities that the 2022 versions of ourselves would have regarded as genuinely astonishing. But most organizations remain far, far behind the state-of-the-art curve when it comes to actually putting those capabilities to work.
The ground truth, then, is something like this: The future of your organization may have already played out in a research lab somewhere this year, and you’re just catching up.
Accordingly, be intentional about curating your AI information diet. Find sources that help you cut through the noise, launch useful experiments, and develop clarity on current (and near-future) capabilities against your specific use cases and objectives. We hope to be one of those sources for you, and we make a point of promoting sources that do the same work for us at radical.
Lastly, a reminder: Watch out for motivated reasoning. The higher the stakes for Big AI grow in 2025, the more likely we are to encounter information and analyses deployed to support an investor’s POV or agenda – or valuation. Stay critical. And always remember: Anyone who tells you that they know what the AI future holds in the next couple of years is almost certainly trying to sell you something. :)
@Jeffrey