CEOs Are Volunteering to Be Replaced
The Internet tips majority-bot, the encryption window closes in 2029, and a new Wharton paper argues AI has fundamentally restructured how humans think – not just what they do
Dear Friend,
This week has been one of contemplation – as evidenced by our “Headlines from the Future” section. While AI keeps moving at lightning speed, it feels like we (the collective “we”) are starting to get our feet under us and figure things out…
Meanwhile, a quick personal note before the links: my new book OUTLEARN – The Art of Learning Faster Than the World Can Change – launches April 28 on Amazon. It’s the first volume in a new series called Built for Turbulence: short, framework-dense field manuals for leaders operating in volatile environments. I’ll share more next week in the Tuesday deep-dive. 🤘🏼
And now, this…
Headlines from the Future
The AI-CEO Threat. Here’s an interesting one – CEOs of major companies are stepping down to make room for people with a better grip on AI. Coca-Cola’s James Quincey is among them:
“In a pre-AI, a pre-gen-AI mode, we made a lot of progress. But now there’s a huge new shift coming along,” Quincey said. While he said he’s leaning into the technological advances, he believes the beverage company needs “someone with the energy to pursue a completely new transformation of the enterprise.”
It does make you wonder a) how many CEOs are hanging on to their jobs by the skin of their teeth, b) how many CEOs are oblivious to what the AI transformation actually means for their companies, and c) how many more CEOs we will see throwing in the towel and handing over the reins to new generations. Now might be a good time for folks with CEO aspirations (and a solid grip on AI) to step up…
━━━━━
Thinking Fast, Slow, and Artificial. In 2011, Nobel Prize winner Daniel Kahneman published his bestselling book “Thinking, Fast and Slow.” In it, he describes the two modes of thinking we all operate in: System 1, which is fast and intuitive, and System 2, which is slow and deliberate. Now, in a new paper, Steven D. Shaw and Gideon Nave from The Wharton School argue that AI introduced a third mode of thinking:
People increasingly consult generative artificial intelligence (AI) while reasoning. As AI becomes embedded in daily thought, what becomes of human judgment? We introduce Tri-System Theory, extending dual-process accounts of reasoning by positing System 3: artificial cognition that operates outside the brain. System 3 can supplement or supplant internal processes, introducing novel cognitive pathways.
And, as you would expect, with it comes a whole host of questions: “System 3 reframes human reasoning and may reshape autonomy and accountability in the age of AI.” The study is worth reading…
━━━━━
AI Learning Curves Are Real. Anthropic, maker of Claude, released yet another report on the usage of AI (I applaud them for doing this – their reports tend to be actually useful, not the usual company-sponsored “look how great we are” puffery). This time, they dug into the use of AI across the economy. Lots of good nuggets in the paper; the one standout for me is their insight into how the “jagged frontier,” the concept popularized by Ethan Mollick, plays out in the real world (this is paraphrased):
There’s a compounding dynamic at play: experienced users bring harder problems, get better results, and develop sharper instincts for working with AI – while later adopters are still figuring out the basics.
In essence: Early adopters with high-skill tasks have more successful interactions with Claude than later, less technical adopters – and these early-adopting users may simultaneously be the most exposed to AI-driven disruption and most aided by AI in these initial, augmentative waves of adoption. As my mom used to say: Be careful what you wish for.
━━━━━
Is AI Slop Our Future? AI Slop is seemingly everywhere these days. And it’s getting worse. But here is an interesting counter-argument (at least when it comes to code):
[…] AI models will write good code because of economic incentives. Good code is cheaper to generate and maintain. Competition is high between the AI models right now, and the ones that win will help developers ship reliable features fastest, which requires simple, maintainable code. Good code will prevail, not only because we want it to (though we do!), but because economic forces demand it. Markets will not reward slop in coding, in the long term.
In simple words: “AI will write good code because it is economically advantageous to do so.” I do believe this to be true (we already see this with the quality of code generated by frontier models such as Claude Opus/Sonnet 4.6). It will be interesting to see how this plays out – there might be a real incentive for AI companies to compete on quality, which would be a very “free market” thing to do.
What We Are Reading
The Jobs AI Can’t Do – and the Young Adults Doing Them. A new generation is redefining what a good job looks like. Hands-on trades are shedding their stigma, replaced by something more compelling: skilled work no machine can replicate. @Jane
Apple Turns 50. Wozniak on Apple: The secret to the company’s success was that it managed its brand well and didn’t make “lousy junk” that breaks down. @Mafe
The Case Against Political Prediction Markets. Straight from dystopia, a valuable lesson that we keep relearning: Maybe not everything should be a market. @Jeffrey
What Leaders Get Wrong About Responsibility. Leaders love to “hold people accountable” – far fewer know how to build systems where responsibility shows up organically. @Kacee
PIEZODANCE. Not a read this week, but a video. And not just a video, but a contemporary dance video – this year’s winner of the “Dance Your PhD” competition is all about energy – and it’s stunning. @Pascal
Down the Rabbit Hole
🔒 We have been talking about this since the early days of Singularity University, and now it’s closer than ever: “Google warns quantum computers could hack encrypted systems by 2029.” Time to review your cryptography (post-quantum encryption algorithms exist – NIST has already standardized several; you just have to adopt them).
🚦 Similarly, we have been talking about vertical AI models for a while now (well, “a while” in AI-timeline terms) – they, also, are closer than ever: The age of vertical models is here.
💸 BlackRock’s Larry Fink warns that “artificial intelligence could widen wealth inequality if ownership does not broaden alongside it” – i.e., those who invest in stocks will benefit; those who cannot will be left behind.
🧑🏼‍🏫 Tech up, test scores down: Amid declining test scores, Sweden has pivoted away from screens and invested in back-to-basics school materials (i.e., books).
🐭 Hold my beer: Disney’s Sora disaster shows AI will not revolutionize Hollywood.
🔬 Surprised is no one: The shocking speed of China’s scientific rise.
🦜 All it takes is five seconds of your voice – Mistral’s newest voice cloning AI is scarily good.
👔 The easy way out: Tech CEOs suddenly love blaming AI for mass job cuts. Why?
🎭 This is just too good: Someone trained a large language model solely on Victorian-era literature. The result: Mr. Chatterbox
🤖 AI bots now make up more than 50% of all Internet traffic.
↗ Dive into the deep end: Access our complete collection of 2,700+ radical links.
Should We Work Together?
Hi! I’m Pascal from radical. This newsletter is our labor of love. When we’re not writing, we run radical, a firm that helps organizations navigate the future without the “innovation theater.” Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don’t run “projects”; we build your organization’s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you’re interested, let’s jump on a call to see if we’re a good fit. Click here to speak with us.

