The Great AI Delusion
While Microsoft begs for permission to burn energy, MIT finds ChatGPT weakens your brain, and the first bespoke gene-editing treatment arrives for a baby.
Dear Friend,
In the great, holy land of the future, in which we have all become AI-enhanced cyborgs, a rift is forming: Vision and reality are diverging. While our AI overlords keep touting the earth-shattering benefits of their creations, the reality is rather sobering. Section AI’s latest report digs into the thorny issue of AI proficiency – and it doesn’t look good. The subtitle says it all: “Leaders think their AI deployments are succeeding. The data tells a different story.” Give it a read – it’s well worth contemplating.
Speaking of “disconnects”: Jane and I will be heading out to Patagonia for a truly epic adventure – for the next two weeks we will be sailing through the Beagle Channel and summiting some of the countless peaks and glaciers in the area. This means that our trusted Briefing will be on a short hiatus – I’ll have next Tuesday’s deep dive ready, and then we will see each other again for the Friday, February 13th edition (hopefully not an ominous sign).
And now, this…
Headlines from the Future
How AI Destroys Institutions. Here’s a sobering read – in the form of a research paper – on how and why AI might destroy institutions. I am not saying I agree, nor disagree, with the authors, but it is too important a topic to ignore.
Unfortunately, the affordances of AI systems extinguish these institutional features at every turn. In this essay, we make one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. These systems are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. In short, current AI systems are a death sentence for civic institutions, and we should treat them as such.
━━━━━
Personalized Gene Editing Is Here. First we had general-purpose gene editing to treat (and cure) diseases caused by single genetic mutations, such as sickle cell anemia. And that was already a big deal. Personalized gene editing kept being an elusive goal, but now it’s here. A baby (KJ) was successfully treated for a rare genetic disorder that left his body unable to remove toxic ammonia from his blood. It’s still early days, but this could be the beginning of something big (and important).
KJ’s doctors will monitor him for years, and they can’t yet say how effective this gene-editing approach is. But they plan to launch a clinical trial to test such personalized treatments in children with similar disorders caused by “misspelled” genes that can be targeted with base editing.
━━━━━
AI Is Here. Now What? Microsoft’s CEO, at this year’s World Economic Forum, warned that the industry must “do something useful” with AI or it will lose “social permission” to burn electricity on it. Amen. Yet, as the author of this article points out:
I also find automatic transcription tools useful, but if I were banking on general purpose LLMs being as revolutionary as personal computers and the internet, I’d find it worrying how many applications boil down to transcribing audio, summarizing text, and fetching code snippets.
Amen. Again.
━━━━━
Your Brain on ChatGPT. A study from MIT’s Media Lab compared the neural and behavioral consequences of LLM-assisted essay writing. Comparing groups of participants who wrote an essay with no tools, with a search engine, or with ChatGPT, the researchers found that:
EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity.
Not good. In this context, read the post below as well…
━━━━━
Giving University Exams in the Age of Chatbots. Fascinating insight into higher education’s effort to triage the use of AI in students’ work. Now, take this with a grain of salt, as it is a single class’s experience – plus arguably one where students might be somewhat self-selecting (the class in question is on “Open Source Strategies”).
Before the exam, I copy/pasted my questions into some LLMs and, yes, the results were interesting enough. So I came up with the following solution: I would let the students choose whether they wanted to use an LLM or not. This was an experiment.
Good read. Even if you are not in higher education.
What We Are Reading
We Are Living in a Time of Polycrisis. If You Feel Trapped – You’re Not Alone We are living through a time of radical uncertainty, but we are also more resilient than we think. @Jane
DeepMind and Anthropic CEOs: AI Is Already Coming for Junior Roles at Our Companies Regarding how to deal with AI taking over more and more jobs, the Anthropic CEO says: “My worry is as this exponential keeps compounding, and I don’t think it’s going to take that long – again, somewhere between a year and five years – it will overwhelm our ability to adapt.” @Mafe
Ads Are Coming to ChatGPT: Here’s How They’ll Work A textbook early signal of enshittification: once revenue incentives creep into a trusted interface, the question stops being if the experience degrades – and becomes how fast. @Kacee
America Is Slow-Walking Into a Polymarket Disaster Americans have discovered a new pastime: gambling on real-world events. The implications extend far beyond an individual’s bank account. @Pascal
Down the Rabbit Hole
💡 It used to be that we said, “Ideas are cheap and plentiful. Execution is hard.” Not anymore – at least when it comes to AI-assisted execution.
🤖 Skills are reusable capabilities for AI agents. Install them with a single command to enhance your agents with access to procedural knowledge. Here is a repository.
🚀 Now that you have skills for your AI agent (see above), you need production-ready patterns. Together you’ll have a solid foundation for using AI agents in your software development workflow.
👨🏼💻 Let’s take agentic coding to its logical conclusion: No code at all. Here is a software library with no code.
🗺️ Super nerdy, but if you have a little bit of technical understanding, this is pretty cool: Run this Python script, give it a city, and it generates a neat grid-map as a poster.
🖊️ This is pretty fun (especially if you, like me, grew up on this thing): Seletti’s Bic Lamp can be hung from the ceiling, mounted to a wall, or used as a standing floor lamp.
↗ Dive into the deep end: Access our complete collection of 2,500+ radical links.
Pascal is all packed up and excited to be back in Tierra del Fuego.
Should We Work Together?
Hi! I’m Pascal from radical. This newsletter is our labor of love. When we’re not writing, we run radical, a firm that helps organizations navigate the future without the “innovation theater.” Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don’t run “projects”; we build your organization’s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you’re interested, let’s jump on a call to see if we’re a good fit. Click here to speak with us.

