<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[radical Briefing]]></title><description><![CDATA[The future doesn’t come with a manual. But twice a week, we’ll send you the next best thing: razor-sharp insights, practical frameworks, and early signals that keep you ahead of the curve. Raw, unfiltered, and straight from the edge of innovation.]]></description><link>https://briefing.rdcl.is</link><image><url>https://substackcdn.com/image/fetch/$s_!Tkyy!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff817c064-9d79-478e-b81d-e619e9ac6652_500x500.png</url><title>radical Briefing</title><link>https://briefing.rdcl.is</link></image><generator>Substack</generator><lastBuildDate>Sun, 05 Apr 2026 12:31:26 GMT</lastBuildDate><atom:link href="https://briefing.rdcl.is/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[be radical Group LLC]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[rdcl@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[rdcl@substack.com]]></itunes:email><itunes:name><![CDATA[Pascal Finette]]></itunes:name></itunes:owner><itunes:author><![CDATA[Pascal Finette]]></itunes:author><googleplay:owner><![CDATA[rdcl@substack.com]]></googleplay:owner><googleplay:email><![CDATA[rdcl@substack.com]]></googleplay:email><googleplay:author><![CDATA[Pascal Finette]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[CEOs Are Volunteering to Be Replaced]]></title><description><![CDATA[The Internet tips majority-bot, the encryption window closes in 2029, and a new Wharton paper argues AI has fundamentally restructured how 
humans think &#8211; not just what they do]]></description><link>https://briefing.rdcl.is/p/ceos-are-volunteering-to-be-replaced</link><guid isPermaLink="false">https://briefing.rdcl.is/p/ceos-are-volunteering-to-be-replaced</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 03 Apr 2026 15:04:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b17f2df1-1f6b-4465-8d96-967b105cf690_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>This week has been one of contemplation &#8211; as is evidenced in our &#8220;Headlines from the Future&#8221; section. While AI keeps moving at lightning speed, it feels like we (the collective &#8220;we&#8221;) are starting to get our feet under us and figure things out&#8230;</p><p>Meanwhile, a quick personal note before the links: my new book <a href="https://rdcl.is/outlearn/">OUTLEARN &#8211; The Art of Learning Faster Than the World Can Change</a> &#8211; launches April 28 on Amazon. It&#8217;s the first volume in a new series called Built for Turbulence: short, framework-dense field manuals for leaders operating in volatile environments. I&#8217;ll share more next week in the Tuesday deep-dive. &#129304;&#127996;</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.cnbc.com/2026/03/26/coca-cola-james-quincey-walmart-doug-mcmillon-artificial-intelligence-step-down.html">The AI-CEO Threat.</a></strong> Here&#8217;s an interesting one &#8211; the CEOs of major companies are stepping down to make room for people with a better grip on AI.</p><blockquote><p>&#8220;In a pre-AI, a pre-gen-AI mode, we made a lot of progress. But now there&#8217;s a huge new shift coming along,&#8221; Quincey said. 
While he said he&#8217;s leaning into the technological advances, he believes the beverage company needs &#8220;someone with the energy to pursue a completely new transformation of the enterprise.&#8221;</p></blockquote><p>It does make you wonder a) how many CEOs are hanging on to their jobs by the skin of their teeth, b) how many CEOs are oblivious to what the AI transformation actually means for their companies, and c) how many more CEOs we will see throwing in the towel and handing over the reins to new generations. Now might be a good time for folks with CEO aspirations (and a solid grip on AI) to step up&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646">Thinking Fast, Slow, and Artificial.</a></strong> In 2011, Nobel Prize winner Daniel Kahneman published his bestselling book &#8220;Thinking, Fast and Slow.&#8221; In it, he describes the two modes of thinking we all operate in: System 1, which is fast and intuitive, and System 2, which is slow and deliberate. Now, in a new paper, Steven D. Shaw and Gideon Nave from The Wharton School argue that AI introduced a third mode of thinking:</p><blockquote><p>People increasingly consult generative artificial intelligence (AI) while reasoning. As AI becomes embedded in daily thought, what becomes of human judgment? We introduce Tri-System Theory, extending dual-process accounts of reasoning by positing System 3: artificial cognition that operates outside the brain. 
System 3 can supplement or supplant internal processes, introducing novel cognitive pathways.</p></blockquote><p>And, as you would expect, with it comes a whole host of questions: &#8220;System 3 reframes human reasoning and may reshape autonomy and accountability in the age of AI.&#8221; The study is worth reading&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.anthropic.com/research/economic-index-march-2026-report">AI Learning Curves Are Real.</a></strong> Anthropic, maker of Claude, released yet another report on the usage of AI (I applaud them for doing this &#8211; their reports tend to be actually useful, and not the usual company-sponsored &#8220;look how great we are&#8221; puffery). This time, they dug into the use of AI across the economy. Lots of good nuggets in the paper; the one standout for me is their insight into how the jagged edge, the concept popularized by Ethan Mollick, plays out in the real world (this is paraphrased):</p><blockquote><p>There&#8217;s a compounding dynamic at play: experienced users bring harder problems, get better results, and develop sharper instincts for working with AI &#8211; while later adopters are still figuring out the basics.</p></blockquote><p>In essence: Early adopters with high-skill tasks have more successful interactions with Claude than later, less technical adopters &#8211; and these early-adopting users may simultaneously be the most exposed to AI-driven disruption and most aided by AI in these initial, augmentative waves of adoption. As my mom used to say: Be careful what you wish for.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.greptile.com/blog/ai-slopware-future">Is AI Slop Our Future?</a></strong> AI Slop is seemingly everywhere these days. And it&#8217;s getting worse. But here is an interesting counter-argument (at least when it comes to code):</p><blockquote><p>[&#8230;] AI models will write good code because of economic incentives. 
Good code is cheaper to generate and maintain. Competition is high between the AI models right now, and the ones that win will help developers ship reliable features fastest, which requires simple, maintainable code. Good code will prevail, not only because we want it to (though we do!), but because economic forces demand it. Markets will not reward slop in coding, in the long term.</p></blockquote><p>In simple words: &#8220;AI will write good code because it is economically advantageous to do so.&#8221; I do believe this to be true (we already see this with the quality of code generated by frontier models such as Claude Opus/Sonnet 4.6). It will be interesting to see how this plays out &#8211; there might be a real incentive for AI companies to compete on quality, which would be a very &#8220;free market&#8221; thing to do.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/technology/2026/mar/31/jobs-ai-cant-do-young-adults">The Jobs AI Can&#8217;t Do &#8211; and the Young Adults Doing Them</a></strong> A new generation is redefining what a good job looks like. Hands-on trades are shedding their stigma, replaced by something more compelling: skilled work no machine can replicate. <em>@Jane</em></p><p><strong><a href="https://www.latimes.com/business/story/2026-03-30/apple-at-50-how-garage-startup-became-3-5-trillion-titan">Apple Turns 50</a></strong> Wozniak on Apple: The secret to the company&#8217;s success was it managed its brand well and didn&#8217;t make &#8220;lousy junk&#8221; that breaks down. <em>@Mafe</em></p><p><strong><a href="https://www.gzeromedia.com/the-case-against-political-prediction-markets">The Case Against Political Prediction Markets</a></strong> Straight from dystopia, a valuable lesson that we keep relearning: Maybe not everything should be a market. 
<em>@Jeffrey</em></p><p><strong><a href="https://www.strategy-business.com/blog/What-leaders-get-wrong-about-responsibility">What Leaders Get Wrong About Responsibility</a></strong> Leaders love to &#8220;hold people accountable&#8221; &#8211; fewer know how to build systems where responsibility organically shows up. <em>@Kacee</em></p><p><strong><a href="https://www.youtube.com/watch?v=UWHdiLdemXQ">PIEZODANCE</a></strong> Not a read this week, but a video. And not just a video, but a contemporary dance video &#8211; this year&#8217;s winner of the &#8220;Dance your PhD Thesis&#8221; competition is all about energy &#8211; and it&#8217;s stunning. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128274; We have been talking about this since the early days of Singularity University; now it&#8217;s closer than ever: &#8220;<a href="https://www.theguardian.com/technology/2026/mar/26/google-quantum-computers-crack-encryption-2029">Google warns quantum computers could hack encrypted systems by 2029.</a>&#8221; Time to update your security keys (there are quantum-resistant encryption algorithms; you just have to use them).</p><p>&#128678; Similarly, we have been talking about vertical AI models for a while now (well, &#8220;a while&#8221; in AI-timeline terms) &#8211; they, too, are closer than ever: <a href="https://x.com/eoghan/status/2037197696075981124">The age of vertical models is here.</a></p><p>&#128184; BlackRock&#8217;s Larry Fink warns that &#8220;<a href="https://www.wsj.com/finance/investing/larry-finks-warning-invest-or-risk-getting-left-behind-by-ai-d2f1d09d">artificial intelligence could widen wealth inequality if ownership does not broaden alongside it</a>&#8221; &#8211; i.e., those who invest in stocks will benefit; those who cannot will be left behind.</p><p>&#129489;&#127996;&#8205;&#127979; Tech up, test scores down: <a href="https://undark.org/2026/04/01/sweden-schools-books/">Amid declining test scores, Sweden has pivoted
away from screens and invested in back-to-basics school materials (i.e. books).</a></p><p>&#128045; Hold my beer: <a href="https://www.404media.co/disneys-openai-sora-disaster-shows-ai-will-not-save-hollywood/">Disney&#8217;s Sora disaster shows AI will not revolutionize Hollywood.</a></p><p>&#128300; Surprised is no one: <a href="https://www.theatlantic.com/science/2026/03/china-science-superpower/686564/">The shocking speed of China&#8217;s scientific rise.</a></p><p>&#129436; All it takes is five seconds of your voice &#8211; <a href="https://mistral.ai/news/voxtral-tts">Mistral&#8217;s newest voice cloning AI is scarily good.</a></p><p>&#128084; The easy way out: <a href="https://www.bbc.com/news/articles/cde5y2x51y8o">Tech CEOs suddenly love blaming AI for mass job cuts. Why?</a></p><p>&#127917; This is just too good: Someone trained a large language model solely on Victorian-era literature. The result: <a href="https://www.estragon.news/mr-chatterbox-or-the-modern-prometheus/">Mr. Chatterbox</a></p><p>&#129302; AI bots now make up <a href="https://www.cnbc.com/2026/03/26/ai-bots-humans-internet.html">more than 50% of all Internet traffic</a>.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,700+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. 
Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Vibe Coding Our Way to 70%]]></title><description><![CDATA[The inversion that SaaS wasn't prepared for&#8230;]]></description><link>https://briefing.rdcl.is/p/vibe-coding-our-way-to-70</link><guid isPermaLink="false">https://briefing.rdcl.is/p/vibe-coding-our-way-to-70</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 31 Mar 2026 14:44:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TjKs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TjKs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TjKs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 424w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 848w, 
https://substackcdn.com/image/fetch/$s_!TjKs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 1272w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TjKs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png" width="1200" height="630" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:630,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:503656,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/192553695?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TjKs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 424w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 
848w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 1272w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>There&#8217;s an early signal I&#8217;ve now seen enough times in the wild that it&#8217;s hard to dismiss as anecdotal,
even if each individual instance still sounds like one. Over the past few weeks, I&#8217;ve had multiple conversations with CEOs of tech startups who are starting to receive a version of the same feedback from potential customers: instead of buying software, prospects are increasingly deciding to vibe-code a solution themselves. Not because it&#8217;s better, but because it gets them far enough.</p><p>That &#8220;far enough&#8221; is landing, with surprising consistency, around 70%.</p><p>I raised this at a thought leader symposium in Dallas last week, expecting at least some pushback, and instead got immediate agreement. One firm owner said plainly that rather than paying $300/mo per user for an off-the-shelf product, the agent-built 70% solution is good enough in the current environment. Another chimed in (not a developer by any stretch) and said he&#8217;s been building things on the weekends simply because it&#8217;s fun. This isn&#8217;t just a cost decision, it&#8217;s a behavioral shift.</p><p>What&#8217;s striking is how quickly the boundary of what people &#8220;won&#8217;t build themselves&#8221; is collapsing. In an internal discussion at radical, the point was raised that surely there are still limits - that people aren&#8217;t going to start vibe coding their own general ledger. And if you read the <a href="https://briefing.rdcl.is/p/mckinsey-cant-you-can">recent briefing</a>, someone had done exactly that. <a href="https://craigmod.com/essays/software_bonkers/">By his own admission</a>, it wasn&#8217;t particularly good, and he wasn&#8217;t using a complex GL to begin with, but it worked for his business. Around the same time, I saw a CEO share on LinkedIn that he had spent a weekend building a replacement for HubSpot. Again, not best-in-class, but usable and to his own preferences.</p><p>Individually, these are easy to write off&#8230;together, they form a pattern. My instinct, honestly, is still that this has limits. 
Not every system will get vibe-coded into existence, but I&#8217;m increasingly unsure where those limits actually are. That uncertainty feels more important than whatever answer I&#8217;d have given six or even three months ago.</p><p>TechCrunch has already leaned into the narrative of <a href="https://techcrunch.com/2026/03/01/saas-in-saas-out-heres-whats-driving-the-saaspocalypse/">SaaSpocalypse</a>, which may or may not be more marketing fodder than reality, but it points to something worth paying attention to. Because the more interesting dynamic here isn&#8217;t whether these self-built solutions rival existing software &#8211; they don&#8217;t. It&#8217;s that they don&#8217;t have to because the standard isn&#8217;t excellence anymore. It&#8217;s sufficiency, shaped by context, constraints, and increasingly, by a willingness to trade polish for control. What&#8217;s notable is that this isn&#8217;t just showing up in conversations; it&#8217;s already impacting markets. Last month, a single release from Anthropic triggered a roughly $285B selloff across the software sector.</p><p>It would be convenient to attribute this entirely to economic pressure. Budgets are tighter, scrutiny is higher, and software that once felt like a default purchase now has to compete for its place. That&#8217;s real, and it&#8217;s accelerating the behavior. The structural shift underneath all of this is simple: the cost of creating software has dropped below the perceived cost of buying it &#8211; and when that inversion happens, the starting point changes. You don&#8217;t begin with procurement; you begin with construction.</p><p>What sits underneath that shift is that software is quietly moving from something standardized to something individualized. For the last two decades, SaaS has been built on a kind of implicit compromise: you adopt a system designed for the average user, and in return you get scale, reliability, maintenance, and convenience.
But when the cost of building collapses, that tradeoff starts to feel less necessary. Instead of adapting your workflows to fit a product, you can increasingly shape the product to fit your workflows. It&#8217;s messier, and often incomplete, but it&#8217;s also more precise&#8230;and for many use cases, that precision matters more than polish.</p><p>Pascal&#8217;s framing in the briefing around bifurcation is useful here, not as theory, but as a way to understand where this is going. We&#8217;re watching the market split between systems where completeness and trust are non-negotiable, and a much larger surface area where &#8220;good enough&#8221; is not just acceptable, but rational. The 70% threshold is emerging as the dividing line; above it, you still buy &#8211; but below it, more and more people are choosing to build.</p><p>I think what makes this particularly important is that it reframes competition in a way that most companies aren&#8217;t prepared to handle. The threat isn&#8217;t another product with a better roadmap or a tighter feature set; it&#8217;s a user who decides they don&#8217;t need the category in the first place. A small business owner comparing a self-built ledger to Quicken isn&#8217;t benchmarking against enterprise accounting software. A founder assembling a CRM over a weekend isn&#8217;t trying to replicate HubSpot in full. They are solving a narrower, more individualized version of the problem &#8211; and in doing so, stepping outside the boundaries that defined the category. Jeff Seibert, the CEO of <a href="https://digits.com/">Digits</a>, put language to this in a way that&#8217;s worth paying attention to: &#8220;the second-order effects will be fascinating.
When software is cheap, it&#8217;s taste and distribution that matter.&#8221; This framing pulls the conversation out of tooling and into consequences.</p><p>And that opt-out dynamic is the signal.</p><p>Once someone successfully builds one thing, even imperfectly, the barrier to building the next drops dramatically. Capability compounds, confidence compounds, and what starts as experimentation begins to normalize into an alternative path: one that doesn&#8217;t rely on waiting for software to catch up to your needs, because you&#8217;ve already adjusted it yourself.</p><p>The implication isn&#8217;t that 70% gets better (although I&#8217;m sure that number continues to improve as the coding models mature) &#8211; it&#8217;s that once users believe they can build for themselves, the default posture shifts from buying software to questioning whether they need it at all.</p><p><em>@Kacee</em></p>]]></content:encoded></item><item><title><![CDATA[Nine Nuclear Reactors Worth of Hype]]></title><description><![CDATA[Walmart's AI shopping experiment crashes, AGI benchmarks humble Silicon Valley, and the ads have officially reached the refrigerator]]></description><link>https://briefing.rdcl.is/p/nine-nuclear-reactors-worth-of-hype</link><guid isPermaLink="false">https://briefing.rdcl.is/p/nine-nuclear-reactors-worth-of-hype</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 27 Mar 2026 14:36:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f35bc325-b08a-4b0b-8293-96b01020e838_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Pardon my French (in my defense, it&#8217;s not my headline), but Mario Zechner&#8217;s &#8220;<a href="https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/">Thoughts on slowing the f*** down</a>&#8221; is a good reminder that all the wondrous things AI can and does do for us come at a cost &#8211; hence his call to:
&#8220;[&#8230;] slowing the f*** down and suffering some friction is what allows you to learn and grow.&#8221; With that in mind &#8211; time to slow down, welcome the weekend, and dive one last time into our wild future before we call it a Friday.</p><p>P.S. <a href="https://rdcl.is/a-podcast-with/jason-goldberg/">A new episode of our podcast dropped:</a> Jason Goldberg has spent 30 years watching companies survive &#8211; and get destroyed by &#8211; disruption in retail. His counterintuitive advice for the agentic commerce moment: stop trying to be first, and start asking what you&#8217;ll regret not doing when the future arrives.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/planned-10-gigawatt-softbank-data-center-in-ohio-might-be-the-largest-in-the-world-will-require-a-usd33-billion-natural-gas-plant-equivalent-to-nine-nuclear-reactors">AI&#8217;s Energy Demands Are Truly Bonkers.</a></strong> Japanese tech giant SoftBank is building a massive 10GW data center in Ohio to host AI models. Aside from the cool $30&#8211;40 billion price tag, it will require building a $33 billion natural gas power plant &#8211; with an insane output capacity (emphasis mine):</p><blockquote><p>When completed, the new site could be one of the largest AI data centers ever built.
Furthermore, it will be powered by one of the world&#8217;s largest fleets of gas turbines, <em>equivalent to the energy supply of nine nuclear reactors.</em></p></blockquote><p>It does leave you wondering where and how all this will end.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://searchengineland.com/walmart-chatgpt-checkout-converted-worse-472071">Maybe AI Isn&#8217;t Online Shopping&#8217;s Future After All.</a></strong> After the initial hype around online shopping results being incorporated into the answers LLMs give to the numerous product-related queries they receive, Walmart revealed that the conversion it is seeing from those AI referrals is just terrible.</p><blockquote><p>After testing 200,000 items in ChatGPT, Walmart found sharply lower conversions and will use its own integrated shopping experience. Walmart said conversion rates for purchases made directly inside ChatGPT were three times lower than when users clicked through to its website.</p></blockquote><p>Next: Agentic commerce. The jury&#8217;s out.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.anthropic.com/features/81k-interviews">What 81,000 People Want From AI.</a></strong> Anthropic, the AI company which is <em>not</em> OpenAI, conducted what is, in their own words, likely the largest study on users&#8217; desires, wishes, and fears when it comes to their use of AI. Anthropic being Anthropic, they didn&#8217;t survey people using a traditional questionnaire, but rather had their chatbot &#8220;talk&#8221; to people. The findings won&#8217;t surprise you &#8211; people want to use AI to better themselves: professional excellence and increased productivity, which translates into the very human desire to, ultimately, live better.
And respondents live out the F. Scott Fitzgerald maxim we are so fond of quoting &#8211; they hold the light and the dark of AI in their heads simultaneously.</p><blockquote><p>&#8220;AI should be cleaning windows and emptying the dishwasher so I can paint and write poetry. Right now it&#8217;s exactly the other way around.&#8221;</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://arcprize.org/">AGI? Not so Fast!</a></strong> AGI, or Artificial General Intelligence, is the thing Sam Altman and others love to talk about &#8211; and promise is just around the corner. To demo their respective companies&#8217; progress, they roll out benchmark after benchmark showing how their AI beats humans on the sommelier exam. A new benchmark, however, shows that AGI is still a long, long way off. The ARC-AGI-3 benchmark pits leading AIs against humans in a series of computer games &#8211; and AIs don&#8217;t look all that great. To apply a lesson my statistics professor hammered into our heads: Never trust a statistic you haven&#8217;t faked yourself.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.wsj.com/lifestyle/samsung-refrigerator-ads-lg-whirlpool-ge-10ea7bcc?st=dFog7V">Ads Are Popping up on the Fridge and It Isn&#8217;t Going Over Well</a></strong> Ads are literally popping up everywhere (even on Google Maps starting this summer), but people are particularly irked by ads on expensive refrigerators with a big screen for recipes, weather updates, and, apparently, ads. <em>@Mafe</em></p><p><strong><a href="https://aeon.co/essays/how-do-we-deal-with-the-catastrophe-of-uninsurability">The Insurance Catastrophe</a></strong> A deep dive into the history &amp; future of insurance markets offers a fascinating lens for exploring how communities, societies, and economies deal with radical uncertainty and catastrophic risk.
<em>@Jeffrey</em></p><p><strong><a href="https://www.forbes.com/sites/davidrosowsky/2026/03/21/the-60-year-degree-why-universities-must-pivot-from-recruitment-to-perpetual-partnership/">The 60-Year Degree: Why Universities Must Pivot from Recruitment to Perpetual Partnership</a></strong> Higher ed has been at an inflection point for years; the degree is just the first casualty of a shift to lifelong contracts. <em>@Kacee</em></p><p><strong><a href="https://undark.org/2026/03/20/ai-slop-children/">AI Slop Is Infiltrating Online Children&#8217;s Content</a></strong> Surprised is, of course, no one. But it does leave you wondering what happens to the brains and cognitive development of children who are exposed to AI slop from an early age. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#127871; <a href="https://www.youtube.com/watch?v=T4Upf_B9RLQ">Hilarious take on the world of Enshittification</a> by the Norwegian Consumer Council (hat tip to Angel Grimalt for the link).</p><p>&#129399;&#127996; AI agents going rogue: The more we rely on AI, the more we deploy AI agents, the more we see fun headlines like this: <a href="%EF%BF%BC">Meta is having trouble with rogue AI agents</a> &#8211; now consider what this means for any company <em>not</em> the size of, or with the resources of, Meta!</p><p>&#127866; Speaking of cyberattacks and our ever-increasing reliance on Internet-connected technologies: <a href="https://techcrunch.com/2026/03/20/cyberattack-on-vehicle-breathalyzer-company-leaves-drivers-stranded-across-the-us/">Cyberattack on vehicle breathalyzer company leaves drivers stranded across the US.</a></p><p>&#128104;&#127996;&#8205;&#128187; Nerd alert! But super helpful: Here is a <a href="https://github.com/nidhinjs/prompt-master">Claude Skill &#8211; Prompt Master &#8211;</a> which helps you create better prompts, highly optimized for specific use cases, tools, and target LLMs.</p><p>&#129318;&#127996; Yep, bro&#8230; Whatever.
&#8220;<a href="https://fortune.com/2026/03/24/perplexity-ceo-ai-layoffs-not-bad-people-hate-jobs-entrepreneurship/">Perplexity CEO says AI layoffs aren&#8217;t so bad because people hate their jobs anyways: &#8216;That sort of glorious future is what we should look forward to&#8217;</a>&#8221;</p><p>&#9875; The running and cycling app Strava has been used to track the location of military outposts before &#8211; now the French newspaper Le Monde has used it to <a href="https://www.lemonde.fr/en/international/article/2026/03/20/stravaleaks-france-s-aircraft-carrier-located-in-real-time-by-le-monde-through-fitness-app_6751640_4.html">track the location of France&#8217;s aircraft carrier</a>. Note: Your public data is <em>public</em> data.</p><p>&#129516; Fascinating read on the adaptability of the human body: <a href="https://www.zmescience.com/science/biology/tribe-in-kenya-evolved-genetic-mutation-that-lets-them-survive-with-almost-no-water/">Tribe in Kenya evolved genetic mutation that lets them survive with almost no water.</a></p><p>&#129378; A Japanese glossary of <a href="https://www.nippon.com/en/japan-data/h01362/">chopsticks faux pas</a>.</p><p>&#129523; Lovely <a href="https://www.web-rewind.com/">journey through 30 years of the web</a>.</p><p>&#127911; Peak 80s nostalgia: <a href="https://maxell-usa.com/product/cassetteplayer/">The Maxell Wireless Cassette Player.</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,600+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. 
When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[McKinsey Can’t. You Can.]]></title><description><![CDATA[While Anthropic&#8217;s CEO stares down his Oppenheimer moment, a CEO loses $250M trusting ChatGPT over his lawyers, and OpenClaw turns out to be FOMO dressed as a technological breakthrough.]]></description><link>https://briefing.rdcl.is/p/mckinsey-cant-you-can</link><guid isPermaLink="false">https://briefing.rdcl.is/p/mckinsey-cant-you-can</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 20 Mar 2026 13:24:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5fdd09db-ce13-409a-b677-7aaa45025084_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Boy oh boy, the world is spinning faster than ever&#8230; This last week has been yet another week of AI insanity. 
Meanwhile, we are sweating at an unprecedented 86 degrees Fahrenheit back home in Boulder, CO (we usually see snow around this time of year), while I am writing this in the rain at 45 degrees Fahrenheit, out for a weekend of ice climbing in Canmore, Alberta, in the Canadian Rockies&#8230; We will see how the ice is tomorrow &#8211; we just arrived.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.mckinsey.com/capabilities/tech-and-ai/how-we-help-clients/rewiring-the-way-mckinsey-works-with-lilli">Even the Consultants Can&#8217;t Make AI Work for Them.</a></strong> Here is an interesting one: McKinsey created and deployed their own AI assistant, &#8220;Lilli&#8221; &#8211; and in their write-up about it, they report that 72% of their employees use it, collectively tossing 500,000 prompts at it per month.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_Zyl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_Zyl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 424w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 848w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 1272w, 
https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png" width="1456" height="431" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:431,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:137974,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/191539185?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_Zyl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 424w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 848w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 
1272w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>72% of McKinsey&#8217;s employees are about 29,000 people. 29,000 people prompting their AI 500,000 times a month is only 17 prompts per person per month! That&#8217;s about one prompt every other day&#8230; Not exactly a lot. 
I prompt Claude easily 17 times in a single day&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://craigmod.com/essays/software_bonkers/">McKinsey Can&#8217;t &#8211; But Individuals Do.</a></strong> In stark contrast to McKinsey, solo developer Craig Mod built his own (fairly complex) accounting system from scratch using Claude Code in five short days. Aside from the audacity of it all, it&#8217;s a perfect example of the &#8220;bifurcation of intelligence&#8221; we have been talking about here in the radical Briefing. On the one hand you have big firms seeking efficiency gains by deploying chatbots, and on the other you have individuals riding the spear tip of AI to create complex, bespoke systems.</p><blockquote><p>Simply put: It&#8217;s a big mess, and no off-the-shelf accounting software does what I need. So after years of pain, I finally sat down last week and started to build my own. It took me about five days. I am now using the best piece of accounting software I&#8217;ve ever used. It&#8217;s blazing fast. Entirely local. Handles multiple currencies and pulls daily (historical) conversion rates. It&#8217;s able to ingest any CSV I throw at it and represent it in my dashboard as needed. It knows US and Japan tax requirements, and formats my expenses and medical bills appropriately for my accountants. I feed it past returns to learn from. I dump 1099s and K1s and PDFs from hospitals into it, and it categorizes and organizes and packages them all as needed. It reconciles international wire transfers, taking into account small variations in FX rates and time for the transfers to complete. It learns as I categorize expenses and categorizes automatically going forward. It&#8217;s easy to do spot checks on data. If I find an anomaly, I can talk directly to Claude and have us brainstorm a batched solution, often saving me from having to manually modify hundreds of entries. And often resulting in a new, small, feature tweak. 
The software feels organic and pliable in a form perfectly shaped to my hand, able to conform to any hunk of data I throw at it. It feels like bushwhacking with a lightsaber.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://stopsloppypasta.ai/en/">Stop Sloppypasta.</a></strong> Like it or not, you will have to deal with AI-generated content &#8211; both personally and professionally. Colleagues answering your requests with AI-generated responses, emails written by your favorite LLM, proposals drafted with the help of your friendly chatbot. The question might truly not be &#8220;if&#8221; but &#8220;how&#8221; &#8211; here is a set of very reasonable guidelines and practices to help you navigate this brave new world.</p><blockquote><p>AI capabilities keep increasing, and using it to draft, brainstorm or accelerate you will be increasingly useful. However, using AI should not make your productivity someone else&#8217;s burden. New tools require new manners. <strong>Use AI to accelerate your work or improve what you send.</strong> <strong>Don&#8217;t use it to replace thinking about what you&#8217;re sending.</strong></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://entropytown.com/articles/2026-03-12-openclaw-sandbox/">OpenClaw Isn&#8217;t Really New &#8211; It&#8217;s The Dream of Free Labour.</a></strong> Unless you were living under a rock in AI-land, you&#8217;ve definitely heard of the OpenClaw craziness (we reported on it multiple times here in the radical Briefing). The narrative usually centers on the technological breakthrough and the magic that ensues when you hand over the keys to the kingdom to your army of AI bots. Here&#8217;s a good counter-narrative &#8211; the tech isn&#8217;t new per se, it&#8217;s just combined and connected in an interesting way. 
And the hype, really, is about the never-ending dream of free labour &#8211; and ends up being more about FOMO than anything else.</p><blockquote><p>A machine producing a thousand candidate images while you sleep is plausible and often useful. A machine founding a hundred profitable businesses before breakfast is rather more ambitious. The first is a search process. The second is venture-capital fan fiction.</p></blockquote><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theatlantic.com/technology/2026/03/anthropic-dod-ai-utopianism/686327/?gift=0GPrpLquXY4NmRQ6sk9MNvR2B7Kzm7g5dqeIskXWHDQ">Dario Amodei&#8217;s Oppenheimer Moment</a></strong> Dario Amodei may be having his Oppenheimer moment, and judging by the Pentagon&#8217;s latest move, he never really had a choice. <em>@Jane</em></p><p><strong><a href="https://www.fastcompany.com/91508903/after-hours-meetings-are-on-the-rise-ai-could-make-things-even-worse">After-Hours Meetings Are on the Rise; AI Could Make Things Even Worse</a></strong> Everyone is in agreement that there shouldn&#8217;t be so many meetings, but unfortunately they&#8217;re on the rise. Specifically, after-hours meetings due to more global teams and distributed workforces. <em>@Mafe</em></p><p><strong><a href="https://www.newyorker.com/culture/infinite-scroll/why-tech-bros-are-now-obsessed-with-taste">Why Tech Bros Are Now Obsessed With Taste</a></strong> As the zeitgeist turns and startup entrepreneurs scramble to differentiate their offerings in an era of AI abundance, prepare to hear way, way too much about &#8220;taste&#8221; and &#8220;discernment&#8221; &#8211; and tune your BS detector accordingly. 
<em>@Jeffrey</em></p><p><strong><a href="https://www.forbes.com/sites/jeffkauflin/2026/03/17/why-an-unsustainable-bubble-is-growing-inside-fintech/">Why an Unsustainable Bubble Is Growing in Fintech</a></strong> When growth is manufactured through pricing arbitrage and balance sheet gymnastics, you&#8217;re not building a market; you&#8217;re distorting one. <em>@Kacee</em></p><p><strong><a href="https://davidoks.blog/p/why-the-atm-didnt-kill-bank-teller">Why ATMs Didn&#8217;t Kill Bank Teller Jobs, but the iPhone Did</a></strong> You know the story about ATMs and bank tellers &#8211; this deep dive into what actually happened (and keeps happening) is a good reminder to be skeptical of the lore at large. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128302; Silicon Valley legend Kevin Kelly on &#8220;<a href="https://kevinkelly.substack.com/p/how-to-future">How to Future.</a>&#8221;</p><p>&#129489;&#127996;&#8205;&#127891; A professor&#8217;s honest assessment on &#8220;<a href="https://www.science.org/content/article/why-i-may-hire-ai-instead-graduate-student?__cf_chl_rt_tk=JOqT5pmrDEj0G.b1ijxTJND_JD_H0vfLIrZMK68Ds54-1773714618-1.0.1.1-OSeKrV93UJgm10L2IOwXuX5_kMOatTYzMzAsHktbXxU">why I may &#8216;hire&#8217; AI instead of a graduate student.</a>&#8221;</p><p>&#129489;&#127996;&#8205;&#127979; Headline captures it all: <a href="https://www.bloodinthemachine.com/p/if-ai-is-writing-the-work-and-ai">&#8220;If AI is writing the work and AI is reading the work, do we even need to be there at all?&#8221; Educators reveal a growing crisis on campus and off.</a></p><p>&#129489;&#127996; Not that anyone ought to be surprised: <a href="https://www.404media.co/ceo-ignores-lawyers-asks-chatgpt-how-to-void-250-million-contract-loses-terribly-in-court/">CEO asks ChatGPT how to void $250 million contract, ignores his lawyers, loses terribly in court.</a></p><p>&#129528; AI is making its way into children&#8217;s toys. 
Parents ought to be cautious: <a href="https://www.bbc.com/news/articles/clyg4wx6nxgo">AI toys for children misread emotions and respond inappropriately, researchers warn.</a></p><p>&#128104;&#127996;&#8205;&#128187; AI-generated code is awesome &#8211; and can be pretty bad: <a href="https://techxplore.com/news/2026-03-ai-coding-tools.html">Top AI coding tools make mistakes one in four times, study shows.</a></p><p>&#129438; OpenClaw is everywhere &#8211; and nowhere as much as in China: <a href="https://www.cnbc.com/2026/03/18/china-openclaw-baidu-tencent-ai.html">How China is getting everyone on OpenClaw, from gearheads to grandmas</a></p><p>&#128561; Don&#8217;t bring a knife to a gunfight. Someone built a <a href="%EF%BF%BC">$97 missile</a> &#8211; with a $5 sensor for flight control. All open source, 3D-printed, and build-your-own. Talk about asymmetric warfare.</p><p>&#128110;&#127996; False positives remain a real problem &#8211; with very real consequences: <a href="https://www.ndtv.com/world-news/us-woman-wrongly-imprisoned-for-6-months-due-to-faulty-facial-recognition-11209378">US woman wrongly imprisoned for 6 months due to faulty facial recognition.</a></p><p>&#129400; Opposite approach &#8211; similar issue: You can&#8217;t trust facial recognition (see above), and you can&#8217;t trust the face either: <a href="https://startupfortune.com/the-face-recommending-your-next-health-product-is-fake-the-money-leaving-your-wallet-is-not/">The face recommending your next health product is fake, the money leaving your wallet is not.</a></p><p>&#127911; Here is a <a href="https://88mph.fm/">delightful music web app</a> that lets you listen to what a particular country was enjoying in a specific year.</p><p>&#128586; Independent search engine Kagi just released their genius <a href="https://translate.kagi.com/?from=en&amp;to=linkedin">LinkedIn Speak translator</a>. 
Take any sensible (or not) English sentence and get back the gibberish that is LinkedIn Speak.</p><p>&#127760; Headline says it all (also: Schadenfreude is real for some): <a href="https://www.404media.co/rip-metaverse-an-80-billion-dumpster-fire-nobody-wanted/">RIP Metaverse, an $80 billion dumpster fire nobody wanted</a></p><p>&#128065;&#65039; The <a href="https://tombh.co.uk/longest-line-of-sight">longest line of sight in the world</a> &#8211; took eight years to figure out.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,600+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Turning Your Official Future Into a Lever]]></title><description><![CDATA[How Smart Leaders Use the Future to Change What&#8217;s Possible Today]]></description><link>https://briefing.rdcl.is/p/turning-your-official-future-into</link><guid isPermaLink="false">https://briefing.rdcl.is/p/turning-your-official-future-into</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 17 Mar 2026 15:03:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b53fe895-d3f6-4661-94df-0e2056953b4a_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few weeks ago, here on the radical Briefing, I wrote about <a href="https://briefing.rdcl.is/p/the-official-future-trap">The Official Future Trap</a> &#8211; the idea that organizations create a singular, linear projection of the future (Peter Schwartz coined this term in his seminal book &#8220;<a href="https://www.google.com/books/edition/Art_of_the_Long_View/wjPaEAAAQBAJ">The Art of the Long View</a>&#8221;), embed it into their strategic plans, KPIs, and incentive structures, and then ride that narrow line straight into irrelevance when reality shows up differently (which it usually does). In a nutshell, the argument was: the official future is dangerous because it closes down the space of possibilities, turns uncertainty into false certainty, and makes you blind to the futures you&#8217;re not planning for.</p><p>I still very much believe all of that (and, sadly, have seen it play out too many times). But I&#8217;ve been thinking about the flip side &#8211; and it&#8217;s been nagging at me ever since a conversation I had with my dear friend and radical collaborator Jeffrey Rogers a few days after publishing that piece. 
What if the official future isn&#8217;t just a trap you fall into, but a tool you can wield strategically?</p><p>The difference, for me, comes down to a single word: a massive shift happens when you consider <em>the</em> official future versus <em>an</em> official future. <em>The</em> official future is the one you inherited. It&#8217;s the projection that everyone agreed on in last year&#8217;s offsite, now baked into budgets and headcount plans and org charts. It&#8217;s unconscious, institutional, and self-reinforcing &#8211; and it becomes the trap I wrote about in that last piece. But <em>an</em> official future is something you deliberately construct &#8211; a strategic narrative, a flag you plant in the ground that says &#8220;this is where we&#8217;re going,&#8221; designed not just to point the way, but also to redefine what your people believe is possible, acceptable, and inevitable. &#8220;The&#8221; is singular and narrow; &#8220;an&#8221; is something you deliberately and strategically deploy.</p><p>Which brings us to a second concept we like to talk about, discuss, and debate here at radical: the Overton window. Named after policy analyst Joseph Paul Overton, it describes the range of ideas considered acceptable by the mainstream at any given time. Politicians &#8211; and by extension, leaders of all kinds &#8211; generally operate within this window. Step outside it and you&#8217;re &#8220;radical.&#8221; Stay within it and you&#8217;re &#8220;sensible.&#8221; The window isn&#8217;t static, though. It shifts over time, and the fascinating thing is <em>how</em> it shifts: not usually through leaders courageously stepping outside it, but through external forces &#8211; think tanks, social movements, cultural shifts, provocateurs &#8211; that drag the boundaries of what&#8217;s considered acceptable in a new direction. Once the window moves, leaders follow. Joseph G. 
Lehman, Overton&#8217;s colleague at the Mackinac Center, put it plainly: politicians are (or to be more precise: were &#8211; the very Overton window of what it means to engage in politics is rapidly and massively shifting) in the business of detecting where the window is and moving in accordance with it, not shifting it themselves.</p><p>Bring those two ideas together &#8211; the official future and the Overton window &#8211; and you realize: Inside every organization, there&#8217;s an internal Overton window &#8211; a range of strategies, investments, and ideas that are considered &#8220;on the table.&#8221; Anything outside that range gets labeled &#8220;off strategy,&#8221; &#8220;too risky,&#8221; or &#8211; my personal favorite of all time &#8211; &#8220;interesting, but not for us.&#8221; And the official future, as I argued in my previous piece, <em>reinforces</em> the current window. It tells everyone: this is where we&#8217;re going, this is what matters, everything else is noise. The window calcifies, and over time, the organization loses the ability to even <em>imagine</em> alternatives, let alone create them.</p><p>But what if you create an official future that sits at the edge of (or just beyond) the Overton window? Not so far out that your people dismiss it as pure fantasy, but far enough that it stretches what your organization considers possible. Think of it as strategic anchoring. In negotiation theory, the first number on the table &#8211; the anchor &#8211; disproportionately shapes the entire conversation that follows. Even when people know the anchor is aggressive, they adjust from it rather than ignoring it. Tversky and Kahneman documented this decades ago, and the research is super clear on this: the anchor sets the playing field, whether you want it to or not. An official future works the same way. 
When a leader declares &#8220;this is where we&#8217;re heading&#8221; &#8211; and that destination is slightly beyond what the organization currently considers feasible &#8211; the entire conversation reorganizes around that anchor. The argument moves from &#8220;should we do this?&#8221; to &#8220;how do we get there?&#8221; &#8211; and your company&#8217;s Overton window moves.</p><p>And just to state the obvious: This isn&#8217;t about making wild proclamations or playing visionary-CEO-bingo, but about crafting a narrative of the future that&#8217;s credible enough to be taken seriously <em>and</em> ambitious enough to expand the boundaries of what&#8217;s considered realistic. You declare a specific, vivid future state &#8211; &#8220;in three years, 40% of our revenue comes from products that don&#8217;t exist yet&#8221; or &#8220;by 2028, we operate as a platform, not a product company&#8221; &#8211; and then you give it the weight of institutional authority. You put it in the strategic plan. You reference it in all-hands meetings. You allocate some resources toward it. You make it feel real and inevitable, even if it&#8217;s aspirational. Then, reliably, something remarkable happens: ideas that were previously dismissed as too bold now become stepping stones toward the declared destination. The previously unacceptable becomes the merely ambitious. And the merely ambitious becomes table stakes.</p><p>The self-reinforcing cycle I described in my original article &#8211; your official future leads to resource allocation, which informs the strategy, which then gets executed, and ultimately reinforces your official future &#8211; now works <em>for</em> you instead of <em>against</em> you, and you drag your organization toward a more expansive set of possibilities.</p><p>And, as so often in life, with great power comes great responsibility. 
On the constructive side, this is how every significant organizational transformation actually happens &#8211; someone with authority and/or social capital plants a flag, declares a future that stretches the window, and the organization reorganizes around it. But on the destructive side &#8211; and we&#8217;ve seen this play out at enormous scale in politics over the past decade &#8211; manufacturing an official future can be used to normalize ideas that were previously, and rightfully, considered unacceptable. Same mechanism, different intent and integrity behind it.</p><p>Let me bring this full circle. The futures cone &#8211; that beautiful framework from futures studies that Jeffrey and I deploy regularly in our work &#8211; reminds us that the further out we look, the wider the space of possible futures becomes. The official future is a single line through that expanding cone. In my original piece, I argued that&#8217;s the trap: a narrow line pretending to be the whole picture. But here&#8217;s a nuance worth thinking about: A deliberately constructed official future &#8211; one that sits at the ambitious edge of the cone &#8211; can actually <em>widen</em> the cone for your organization. It doesn&#8217;t narrow possibility, but expands the range of futures your people can even conceive of. 
It shifts the internal Overton window outward, making space for ideas, strategies, and bets that would have been dismissed as &#8220;off strategy&#8221; just months earlier.</p><p>For this to work, though, you (the leader) have to do the work of exploring the ever-expanding cone of possible futures, and the embedded, narrower cone of plausible and probable futures &#8211; and then develop an official future that sits at the ambitious edge of the cone.</p><p>So here&#8217;s my updated question &#8211; building on the one Jeffrey likes to ask our clients: <strong>What (plausible) official future could you declare today that would expand what your organization believes is possible tomorrow?</strong></p><p><em>@Pascal</em></p>]]></content:encoded></item><item><title><![CDATA[Efficiency Kills]]></title><description><![CDATA[The same AI agents gutting white-collar work just plundered McKinsey&#8217;s most confidential client data &#8211; and a self-driving car blocked the ambulance on its way to the crime scene]]></description><link>https://briefing.rdcl.is/p/efficiency-kills</link><guid isPermaLink="false">https://briefing.rdcl.is/p/efficiency-kills</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 13 Mar 2026 14:13:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/77d8cbb8-ecb2-4b0e-8888-942669c7cc44_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>We truly do live in interesting times. From the war in the Middle East, to AI-related mass layoffs, to the global rise of nationalism (latest case in point: the elections in Chile), the climate crisis rearing its ugly head &#8211; and then you have wireless eye implants making blind people see again, EV batteries charging in 5 minutes with a 600+ mile range, AI agents doing meaningful work, and companies freeing themselves from the tyranny of overpriced and outdated SaaS tools. 
I just can&#8217;t shake Walt Whitman&#8217;s words: &#8220;I am large, I contain multitudes.&#8221; Our world truly contains multitudes.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://x.com/JosephPolitano/status/2029916364664611242">Tech Is the New Plastic.</a></strong> Not a good time to be in tech&#8230; Remember when your uncle said: &#8220;Become a coder. That&#8217;s the future &#8211; and you&#8217;ll be rich!&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1m1y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1m1y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 424w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 848w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 1272w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!1m1y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png" width="1170" height="1188" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1188,&quot;width&quot;:1170,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:480643,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/190748996?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1m1y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 424w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 848w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 1272w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>Mr. McGuire: &#8220;I just want to say one word to you. Just one word.&#8221;<br>Benjamin: &#8220;Yes, sir.&#8221;<br>Mr. McGuire: &#8220;Are you listening?&#8221;<br>Benjamin: &#8220;Yes, I am.&#8221;<br>Mr. McGuire: &#8220;Plastics.&#8221;<br>Benjamin: &#8220;Exactly how do you mean?&#8221;<br>Mr. McGuire: &#8220;There&#8217;s a great future in plastics. Think about it. 
Will you think about it?&#8221;</em></p><p>(In related news, <a href="https://www.livemint.com/companies/news/oracle-layoffs-tech-giant-to-slash-30-000-jobs-as-banks-pull-out-from-financing-ai-data-centres-11769996619410.html">Oracle slashes 30,000 jobs</a>, <a href="https://www.theguardian.com/technology/2026/mar/12/atlassian-layoffs-software-technology-ai-push-mike-cannon-brookes-asx">Atlassian lays off 1,600 people</a>&#8230; the list goes on.)</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.theverge.com/cs/features/877388/white-collar-workers-training-ai-mercor">Not a Coder? Not a Problem. AI Is Still Coming for Your Job.</a></strong> Here&#8217;s a good, long read on The Verge about lawyers, PhDs, and scientists who lost their jobs to AI. Despite all the talk about &#8220;Jevons Paradox&#8221; &#8211; the observation that efficiency gains lead to increased consumption &#8211; for now, we seem to be squarely stuck in a world where AI is a net job destroyer. It does make you wonder how long it will take for the masses to catch up with the trend and start pushing back (we, of course, already see it in pockets &#8211; the weak signals are talking).</p><blockquote><p><em>&#8220;My job is gone because of ChatGPT, and I was being invited to train the model to do the worst version of it imaginable.&#8221;</em> &#8211; Katya, content marketer turned AI trainer</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.theregister.com/2026/03/09/mckinsey_ai_chatbot_hacked/">Battle Royale: AI vs. AI.</a></strong> McKinsey, your friendly consulting firm, has deployed its own chatbot, &#8220;Lilly&#8221;. 
Hackers (in this case, luckily for McKinsey, white-hat hackers &#8211; the good and friendly kind, who disclose their findings to the company) used a set of AI agents to exploit a vulnerability in Lilly and gain access to &#8220;46.5 million chat messages about strategy, mergers and acquisitions, and client engagements, all in plaintext, along with 728,000 files containing confidential client data, 57,000 user accounts, and 95 system prompts controlling the AI&#8217;s behavior.&#8221; You know, no big deal&#8230;</p><blockquote><p>[&#8230;] the entire process was &#8220;fully autonomous from researching the target, analyzing, attacking, and reporting.&#8221;</p></blockquote><p>As useful as agents are for businesses, they are equally useful for hackers. Prepare yourself for an onslaught of AI-powered cyber attacks.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theatlantic.com/technology/2026/03/central-lie-prediction-markets/686250/?gift=0GPrpLquXY4NmRQ6sk9MNnxvIjO7TXUjV5lhrJbKY0I">A Technology for a Low-Trust Society</a></strong> Prediction markets promise the wisdom of crowds but, in reality, deliver a playground for insiders, manipulators, and those willing to bet on human suffering. <em>@Jane</em></p><p><strong><a href="https://www.wsj.com/business/retail/gen-z-shopping-mall-visits-15716009">A New Generation of Mall Rats Has Arrived</a></strong> Gen Z&#8217;s need for immediate gratification has an unexpected winner: malls &#8211; they are now ramping up their social media presence and figuring out what the &#8220;future mall&#8221; should look like. <em>@Mafe</em></p><p><strong><a href="https://www.newyorker.com/magazine/2026/02/16/what-is-claude-anthropic-doesnt-know-either">What Is Claude? 
Anthropic Doesn&#8217;t Know, Either</a></strong> Maybe the single most uncanny thing about our historical moment &#8211; we&#8217;re all struggling to effectively deploy (and adapt to) a technology that continues to baffle even its creators. <em>@Jeffrey</em></p><p><strong><a href="https://sloanreview.mit.edu/article/the-hidden-power-of-messy-teams/">The Hidden Power of Messy Teams</a></strong> A study of hundreds of innovation teams found the ones most likely to implement their ideas didn&#8217;t start with clear problems; they started messy and discovered the real problem along the way. <em>@Kacee</em></p><p><strong><a href="https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-or-the-office-according-to-the-office/">The Gervais Principle, or The Office According to The Office</a></strong> Absolutely delightful deep dive into the world of the TV show &#8220;The Office&#8221; &#8211; both the British and US versions &#8211; to uncover why Ricky Gervais deserves the Nobel Prize in both economics and literature. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#129489;&#127996;&#8205;&#127859; Cofounder of Netflix, Mozilla&#8217;s former CFO, and a dear friend of ours, Jim Cook, has an excellent newsletter (Cook&#8217;s Playbook), which you ought to subscribe to. His <a href="https://www.cooksplaybooks.com/p/the-future-of-ai-and-software-debate?publication_id=1267023&amp;post_id=189674838&amp;isFreemail=true&amp;r=s981&amp;triedRedirect=true">latest post</a> is a very thoughtful takedown of the now-infamous Citrini AI Report.</p><p>&#128250; It feels like yesterday when Google bought YouTube for a &#8211; at the time &#8211; shocking $1.65 billion. 
That was in 2006 &#8211; 20 years later, <a href="https://www.businessinsider.com/youtube-ad-revenue-disney-nbc-paramount-wbd-warner-bros-streaming-2026-3">YouTube now generates more ad revenue than Disney, NBC, Paramount, and WBD &#8211; combined</a>.</p><p>&#128267; Five-minute charging, 621 miles of range, 620,000 miles of life &#8211; <a href="https://www.fastcompany.com/91503415/byd-ev-battery-competes-with-gas-engines">BYD has cracked the EV battery code.</a></p><p>&#128065;&#65039; In medical news: <a href="https://www.earth.com/news/wireless-eye-implant-helps-blind-patients-read-again/">Wireless eye implant helps blind patients read again.</a></p><p>&#128664; One of the vexing problems self-driving cars still face is their behavior in edge cases &#8211; and it could be a stumbling block to their widespread adoption &#8211; as questions about the technology amplify after <a href="https://www.texastribune.org/2026/03/09/texas-austin-shooting-autonomous-vehicles-self-driving-ambulance-blocked/">one blocked an ambulance responding to an Austin shooting.</a></p><p>&#127464;&#127475; While many of us, for good reason, stay miles away from autonomous AI agents like OpenClaw, Chinese users seem to embrace them: <a href="https://hellochinatech.com/p/openclaw-china-ai-stack">OpenClaw Conquered China in 100 Days.</a></p><p>&#129534; It surely shouldn&#8217;t come as a surprise &#8211; but please: <a href="https://www.nytimes.com/2026/03/05/technology/artificial-intelligence-taxes-tax-refund.html">Don&#8217;t Trust A.I. 
to File Your Taxes</a></p><p>&#128722; Looks like ChatGPT&#8217;s dream of becoming your commerce hub is not panning out (yet): <a href="https://the-decoder.com/chatgpt-users-research-products-but-wont-buy-there-forcing-openai-to-rethink-its-commerce-strategy/">ChatGPT users research products but won&#8217;t buy there, forcing OpenAI to rethink its commerce strategy</a></p><p>&#129489;&#127996; Undoubtedly, OpenAI has a strong interest in moving companies from dabbling with AI to full-blown adoption. Hence a blog post from the company on &#8220;<a href="https://openai.com/index/the-five-ai-value-models-driving-business-reinvention/">five value models driving business reinvention</a>&#8221; &#8211; which reads like it was written by ChatGPT.</p><p>&#129352; Here&#8217;s an interesting use case for ChatGPT: <a href="https://www.theguardian.com/sport/2026/mar/09/ukraine-winter-paralympics-chat-gpt-artificial-intelligence">Ukrainian para-biathlete wins silver using ChatGPT as his coach.</a></p><p>&#129324; Pardon the language, but the argument is solid: <a href="https://rmoff.net/2026/03/06/ai-will-fuck-you-up-if-youre-not-on-board/">AI will f*** you up if you&#8217;re not on board.</a></p><p>&#129526; 3D knitting your next sweater is a thing &#8211; it&#8217;s super cool, produces a more durable product, and <a href="%EF%BF%BC">it&#8217;s here</a>.</p><p>&#129532; Admittedly nerdy, but &#8220;clean room&#8221; re-engineering has been a thing ever since we&#8217;ve had IP protection (for a good primer, watch the first season of <a href="https://en.wikipedia.org/wiki/Halt_and_Catch_Fire_%28TV_series%29">Halt and Catch Fire</a> &#8211; excellent show!). 
With AI coding tools, the question now becomes: <a href="https://simonwillison.net/2026/Mar/5/chardet/">Can coding agents relicense open source through a &#8220;clean room&#8221; implementation of code?</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,600+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The Future Is Here and It’s Watching You]]></title><description><![CDATA[Jack Dorsey lays off 4,000 people for gains not yet realized, your Ray-Bans have outsourced your privacy to Nairobi, and Burger King just gamified your friendliness]]></description><link>https://briefing.rdcl.is/p/the-future-is-here-and-its-watching</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-future-is-here-and-its-watching</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 06 Mar 2026 15:54:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/75e0c345-5a2a-4478-8501-37f742c5de1c_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>What a week (again) it has been! AI continues to be everywhere, geopolitics are running hot, and somehow, in the midst of it all, we are still preparing our US tax returns. If that all feels a bit bonkers, you are not alone. Meanwhile, Block (Twitter co-founder Jack Dorsey&#8217;s company) has just announced that it is going to lay off 40% of its workforce (4,000 people) &#8211; not <em>because</em> of any actual productivity gains through their use of AI, but in <em>anticipation</em> of them. Yep, as said: it&#8217;s all a bit bonkers.</p><p>Maybe now is a good time to take a break, grab a coffee, and catch up on the latest news?!</p><p>P.S. On the <em>Built for Turbulence</em> podcast, I got to interview Andreas Bachmann, co-founder and CEO of Adacor, a German software development company. We talked, among other things, about the impact of AI on his business and their people &#8211; and Andreas took a decidedly different position to Dorsey. 
<a href="https://rdcl.is/a-podcast-with/andreas-bachmann/">Have a listen.</a></p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.bbc.com/news/articles/cgk2zygg0k3o">Do You Want Fries With That?</a></strong> Talk about a dystopian future. Burger King is testing a new headset for its drive-thru staff, which &#8220;compiles &#8216;friendliness scores&#8217; at the fast-food chain&#8217;s locations based on employees&#8217; conversations, according to a promotional video the company shared with the BBC.&#8221; There is so much to unpack here &#8211; the sheer fact that the company cheerfully shared a &#8220;promotional video&#8221; about its AI-driven surveillance tech is probably all that you need to know.</p><p>In all fairness, the company says the technology &#8220;[&#8230;] is not designed to &#8216;record conversations or evaluate individual employees&#8217;&#8221; &#8211; <em>yet.</em> Black Mirror, anyone?</p><blockquote><p>Customer service calls have routinely been recorded and monitored for years. Employees are often aware that they can be assessed to ensure they&#8217;re using the correct language. But this latest step by Burger King elicited swift condemnation among some social media users who described it as &#8220;dystopian&#8221;. 
Others questioned how accurate the chat-bot headsets will be, given that AI tools have proven to be prone to errors.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://arstechnica.com/security/2026/03/llms-can-unmask-pseudonymous-users-at-scale-with-surprising-accuracy/">Now Everybody Knows You&#8217;re a Dog.</a></strong> A famous New Yorker cartoon from 1993 depicted two dogs in front of a computer, with one of them saying, &#8220;On the Internet, nobody knows you&#8217;re a dog.&#8221; The joke reflected the fact that, at the time, on the Internet, we reveled in pseudonymity &#8211; the act of being able to shield your true identity behind a screen name. Thanks to our friend, the omnipresent LLM, that&#8217;s all about to change.</p><blockquote><p>The finding, from a recently published <a href="https://arxiv.org/pdf/2602.16800">research paper</a>, is based on results of experiments correlating specific individuals with accounts or posts across more than one social media platform. The success rate was far greater than existing classical deanonymization work that relied on humans assembling structured data sets suitable for algorithmic matching or manual work by skilled investigators.</p></blockquote><p>This is genuinely bad news for the many groups of people who have a legitimate reason to hide their identity.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://blog.adafruit.com/2026/03/04/you-bought-zucks-ray-bans-now-someone-in-nairobi-is-watching-you-poop/">You Bought Zuck&#8217;s Ray-Bans. Now Someone in Nairobi Is Watching You Poop.</a></strong> In the same line of thought as the above &#8211; and the headline says it all already &#8211; Meta&#8217;s Smart Glasses are a complete privacy disaster. Which, of course, is not particularly surprising given it&#8217;s&#8230; well&#8230; Meta. 
Not sure how many wearers of Meta&#8217;s nifty Ray-Bans and Oakleys are aware of the fact that they opted into their camera feed being used to train Meta&#8217;s AI &#8211; with disastrous results:</p><blockquote><p>Workers at Sama, one of Meta&#8217;s annotation subcontractors, describe reviewing video of people undressing, coming out of bathrooms naked, watching porn, having sex, and exposing bank card details.</p></blockquote><p>Yep. It&#8217;s bad.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.reuters.com/business/healthcare-pharmaceuticals/diagnostics-startup-droplet-biosciences-partners-with-nvidia-speed-cancer-test-2026-03-03/">Diagnostics Startup Droplet Biosciences Partners With Nvidia to Speed Cancer Testing</a></strong> Droplet&#8217;s method can detect residual disease in 24 hours by analyzing lymphatic fluid collected post-surgery, compared to the four to six weeks it typically takes for tumor remnants to appear in blood-based tests. <em>@Mafe</em></p><p><strong><a href="https://www.nytimes.com/2026/03/04/opinion/block-jack-dorsey-layoffs-ai.html">I Worked for Block; Its A.I. Job Cuts Aren&#8217;t What They Seem</a></strong> Whatever the AI-enabled performance of post-realignment Block turns out to be, the market&#8217;s reaction to the mass layoff there last week basically ensures that the narrative strategy will be copied &#8211; maybe widely. <em>@Jeffrey</em></p><p><strong><a href="https://techcrunch.com/2026/03/01/saas-in-saas-out-heres-whats-driving-the-saaspocalypse/">SaaS In, SaaS Out: Here&#8217;s What&#8217;s Driving the SaaSpocalypse</a></strong> The so-called &#8220;SaaSpocalypse&#8221; feels less like collapse and more like correction. I&#8217;m seeing more small &amp; mid-size orgs quietly choose to build their own tools because AI has made it absurdly easy and cheap to do so. 
<em>@Kacee</em></p><p><strong><a href="https://www.theguardian.com/technology/2026/feb/25/tech-legend-stewart-brand-on-musk-bezos-and-his-extraordinary-life-we-dont-need-to-passively-accept-our-fate">Tech Legend Stewart Brand on Musk, Bezos and His Extraordinary Life: &#8216;We Don&#8217;t Need to Passively Accept Our Fate&#8217;</a></strong> There are few people like Stewart Brand. Now in his late 80s, he is still actively shaping the future &#8211; through an exploration into &#8220;maintenance.&#8221; <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128576; One of the best ways to keep your AI news balanced is to read opposing viewpoints. Here is Ed Zitron&#8217;s <a href="https://www.dropbox.com/scl/fi/1p1n0y1ip48ianok9dvbp/Annotation-The-Global-Intelligence-Crisis.pdf?e=3&amp;noscript=1&amp;rlkey=qaar8ea6l5hh6jqls4x6g8q4b&amp;dl=0">inline comments on the CitriniResearch article</a>, which shook the stock markets. Well worth a read &#8211; and hilarious.</p><p>&#129400; AI agents are all the rage &#8211; for good reason; what felt like a toy just a few months ago is now a powerful tool (just try out Claude Cowork). <a href="https://creatoreconomy.so/p/your-new-job-is-to-onboard-ai-agents">Your new job is to onboard AI agents: how AI native companies actually operate.</a></p><p>&#128104;&#127996;&#8205;&#128187; Fascinating insights into <a href="https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_future%20_of_software_development_retreat_%20key_takeaways.pdf">the future of software engineering</a> in the form of a retreat summary by the fine folks at Thoughtworks.</p><p>&#9749; If you know me, you know that I love (exceptional) coffee. Honor&#233; de Balzac&#8217;s treatise on &#8220;<a href="https://quod.lib.umich.edu/m/mqrarchive/act2080.0035.002/10">The Pleasures and Pains of Coffee</a>&#8221; is pure gold.</p><p>&#127859; Between milk, flour, and eggs lies a whole Bermuda Triangle of unexplored breakfast territory. 
Here goes &#8220;<a href="https://moultano.wordpress.com/2026/02/22/the-hunt-for-dark-breakfast/">The Hunt for Dark Breakfast.</a>&#8221;</p><p>&#129658; Please, do not trust AI with your health. Another case in point: <a href="https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies">&#8216;Unbelievably dangerous&#8217;: experts sound alarm after ChatGPT Health fails to recognise medical emergencies</a></p><p>&#128200; Up, up, it goes. Always interesting to see what the <a href="https://apoorv03.com/p/the-state-of-consumer-ai-part-1-usage">current state of affairs is in the world of consumer AI</a>.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,600+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Stories of Discontinuity]]></title><description><![CDATA[Every vision of the future is fiction. 
Some are just more comfortable than others.]]></description><link>https://briefing.rdcl.is/p/stories-of-discontinuity</link><guid isPermaLink="false">https://briefing.rdcl.is/p/stories-of-discontinuity</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 03 Mar 2026 16:03:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/84df7c6b-5f56-4001-858c-a78a640d277f_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Heard any wild AI stories lately? The market certainly has &#8211; and not just one of them.</p><p>Last week, it was a viral piece of <a href="https://www.citriniresearch.com/p/2028gic">AI doom-inflected speculative macro fiction from Citrini Research</a> that sketched out a scenario where the rapid disruption of white-collar work and service-industry business models tips off a broad economic crisis. The post wasn&#8217;t news and wasn&#8217;t even really analysis, but it sent the prices of <a href="https://www.wsj.com/livecoverage/stock-market-today-dow-sp-500-nasdaq-tariffs-02-23-2026/card/software-stocks-are-having-another-ugly-day-LlAj9avDeFocxKHzVwRZ?">software and finance stocks</a> (especially those unfortunate enough to figure into the scenario by name) reeling just the same. 
And the Citrini post was actually the <em>second</em> <a href="https://shumer.dev/something-big-is-happening">massively viral AI-takeoff-ravages-the-labor-market story</a> to spook investors in the span of just a few weeks.</p><p>Now as you&#8217;d expect, plenty of commentators jumped in both times with critiques (<a href="https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/">1</a>, <a href="https://www.noahpinion.blog/p/the-citrini-post-is-just-a-scary">2</a>, <a href="https://www.wheresyoured.at/hatersguide-pe/">3</a>) and counterarguments (<a href="https://x.com/johnloeber/status/2025748423157432756">my favorite</a>), and even a couple of full-blown speculative counternarratives. Many of those commentators rightly pointed out that the whole Citrini thing (like the Shumer post) is, well&#8230; just a <em>story</em>.</p><p>But well&#8230; so are all of our other visions of the future.</p><p>Thinking about &#8211; and attempting to plan for &#8211; the future is fundamentally an act of imagination. That act might be grounded in historical data and built on the extrapolation of today&#8217;s evident, quantifiable trends into the space of tomorrow, but once we get into the tomorrow, we are in the realm of imagination, assumption, projection, story.</p><p>The future stories that strike us as most plausible or even probable are often stories of <em>continuity</em>, where the tomorrow doesn&#8217;t look so drastically different from today. The path of continuity is easier to imagine and also often feels more &#8220;real&#8221; because it&#8217;s grounded in more historical data. But all of that data is about the past, and our most important decisions are about the future.</p><p>Stories of <em>discontinuity</em> feel unfamiliar. That&#8217;s the point. 
They can widen the aperture of our imagination, expand the scope of conversation and awareness, offer a fresh perspective on present practice and strategy, and maybe even enable us to discover non-intuitive paths forward.</p><p>Now, is all of this to say that I think the Citrini narrative points to a particularly probable future &#8211; or that it&#8217;s even a particularly well-crafted bit of speculative fiction? Not really, no.</p><p>But I appreciate the opportunity that these viral narratives of discontinuity offer for us to engage critically with alternative future stories &#8211; and to then turn that same critical lens onto the spectrum of future narratives that we don&#8217;t so easily recognize as &#8220;stories&#8221;. Sometimes that&#8217;s because they&#8217;re narratives of continuity grounded in historical data and past experience. Sometimes it&#8217;s because they come from ostensible authorities. Sometimes it&#8217;s because they feel too deeply entrenched to ever be shaken loose or challenged.</p><p>We can and should do this at the macro level, more carefully examining big stories about AI and societal futures &#8211; asking where each story originates, <a href="https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur">what assumptions are baked in</a>, whose interests and agendas are served, who has real agency, etc.</p><p>And we can do this at the micro/org level too. Some of the most interesting conversations I&#8217;ve been having lately have been with HR &amp; People leaders about the AI augmented-futures of their organizations. One thing that keeps coming up and sticks with me: The AI-future vision of the org is <em>rarely</em> people-centric. It&#8217;s typically constructed around optimization and efficiency first, and almost every other value or stakeholder interest figures as an afterthought. 
That&#8217;s a bet and an argument, and it&#8217;s also a story that leaders within the org are telling themselves about the future.</p><p>And make no mistake: There&#8217;s a set of assumptions baked into that vision just as surely as there is in the Citrini memo or Matt Shumer&#8217;s viral blog post. And if we unpack them and find ourselves dissenting, what alternative framings or narratives are we putting out there to show other possible paths forward?</p><p>So, I&#8217;ll ask again: Heard any wild AI stories lately?</p><p><em>@Jeffrey</em></p>]]></content:encoded></item><item><title><![CDATA[Agents Are Taking the Wheel]]></title><description><![CDATA[While Europe&#8217;s workers quietly outperform their American counterparts, a generation of laptop-schooled kids arrives cognitively underpowered &#8211; and entry-level coders start to disappear]]></description><link>https://briefing.rdcl.is/p/agents-are-taking-the-wheel</link><guid isPermaLink="false">https://briefing.rdcl.is/p/agents-are-taking-the-wheel</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 27 Feb 2026 14:59:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/993619a7-4d55-4d13-97b6-e0056ba956c7_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>I have been thinking (and talking &#8211; shoutout to <a href="%EF%BF%BC">Martin Alderson</a> here) about the tip of the spear in AI &#8211; namely the sudden and dramatic rise of multi-agent systems (from Gas Town to OpenClaw to Anthropic&#8217;s Code Teams). It really feels like we are crossing a threshold &#8211; and that things are about to change. If you haven&#8217;t played with this stuff, I definitely recommend trying it out. Start gentle with something like Claude&#8217;s Cowork mode &#8211; moving from a chat interface to something more akin to an actual coworker is pretty transformative. 
As you are experiencing this, I highly encourage you not just to ask &#8220;what is this today?&#8221;, but to envision what it could be in the future.</p><p>P.S. Our friend Mike Housman is about to publish his new playbook on how to use AI &#8211; check it out; it launches on Monday: <a href="https://a.co/d/08xcw4aY">Future Proof: Transform your Business with AI (or Get Left Behind)</a></p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://spectrum.ieee.org/solid-state-lidar-microvision-adas">Lidar Has Become Cheap as Chips.</a></strong> I remember, back in my days at Singularity University, we talked about how Lidar (the laser-based technology that measures distance by illuminating a target with a laser and measuring the reflected light &#8211; and hence became instrumental in allowing a robot, e.g. a self-driving car, to &#8220;see&#8221; its surroundings) would become cheap and ubiquitous. It took a while, but now we are (finally) there &#8211; Lidar units are now available for less than $200.</p><blockquote><p>When cost stops being the dominant objection, automakers will have to decide whether leaving lidar out is a technical judgment or a strategic one.</p></blockquote><p>True. And a nice jab at our friend Elon, who famously rejected Lidar in favor of (much cheaper) cameras.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://cepr.org/voxeu/columns/how-ai-affecting-productivity-and-jobs-europe">AI in Europe: Not as Bad as You Might Think.</a></strong> A recent study by CEPR (an independent, non-partisan pan-European think tank) found that among the 12,000 surveyed companies, AI adoption led to a labor productivity increase of 4% on average, with no reported short-term negative impact on employment. 
Studies on this subject across the world are all over the place &#8211; with many having a hard time finding any measurable impact of AI on productivity, and some claiming rather drastic negative impacts on employment. As most of these studies are conducted in the US, it is nice to see a study from a different part of the world.</p><blockquote><p>The productivity dividends from AI depend not merely on acquiring the technology but on firms&#8217; capacity to integrate it through investments in intangible assets and human capital. [&#8230;] An additional percentage point spent on training amplifies AI&#8217;s productivity gains by 5.9 percentage points.</p></blockquote><p>(here is a US-centric counterpoint: &#8220;<a href="https://gizmodo.com/ai-added-basically-zero-to-us-economic-growth-last-year-goldman-sachs-says-2000725380">AI Added &#8216;Basically Zero&#8217; to US Economic Growth Last Year, Goldman Sachs Says</a>&#8221;)</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.nytimes.com/2026/02/18/opinion/ai-software.html?unlocked_article_code=1.NFA.UkLv.r-XczfzYRdXJ&amp;smid=url-share">The A.I. Disruption We&#8217;ve Been Waiting for Has Arrived.</a></strong> Paul Ford&#8217;s opinion piece in the New York Times summarizes the current state of affairs when it comes to AI nicely.</p><blockquote><p>It was always a helpful coding assistant, but in November it suddenly got much better, and ever since I&#8217;ve been knocking off side projects that had sat in folders for a decade or longer. [&#8230;] November was, for me and many others in tech, a great surprise. Before, A.I. coding tools were often useful, but halting and clumsy. 
Now, the bot can run for a full hour and make whole, designed websites and apps that may be flawed, but credible.</p></blockquote><p>It really feels to me like the shifting sands of AI are starting to solidify.</p><blockquote><p>Today, though, when the stars align and my prompts work out, I can do hundreds of thousands of dollars worth of work for fun (fun for me) over weekends and evenings, for the price of the Claude $200-a-month plan.</p></blockquote><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://hbr.org/2026/02/why-ai-adoption-stalls-according-to-industry-data">Why AI Adoption Stalls, According to Industry Data</a></strong> Most companies think their AI problem is about execution &#8211; it&#8217;s not. The real story, unsurprisingly, is far more about humans! <em>@Jane</em></p><p><strong><a href="https://fs.blog/experts-vs-imitators/">Experts vs. Imitators</a></strong> Telling the difference between an expert and an imitator can save time and money, among other things &#8211; and knowing how to identify one from the other makes all the difference. <em>@Mafe</em></p><p><strong><a href="https://kyla.substack.com/p/buying-futures-renting-the-past-how">Buying Futures, Renting the Past: How Speculation and Nostalgia Became the Economy</a></strong> While the economy and culture pull hard toward betting on the future and strip-mining the past, we&#8217;re stuck in an increasingly dislocated, muddled present &#8211; the messy middle where, as it happens, all the real work has to be done. <em>@Jeffrey</em></p><p><strong><a href="https://sloanreview.mit.edu/article/the-case-for-making-bold-bets-in-uncertain-times/">The Case for Making Bold Bets in Uncertain Times</a></strong> When the World Uncertainty Index is higher than ever, playing it safe isn&#8217;t a strategy &#8211; it&#8217;s a slow decline. The companies that win in volatility aren&#8217;t reckless; they&#8217;re radically clear about the bets that matter and bold enough to place them. 
<em>@Kacee</em></p><p><strong><a href="https://oceandrops.substack.com/p/japan-is-what-late-stage-capitalist">Japan Is What Late-Stage Capitalist Decline Looks Like</a></strong> Drawing parallels from the odd world of Japanese pop culture to our global world of capitalism makes for a fascinating (and sobering) read. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128559; Here&#8217;s a strange little trick for using the latest models inside your LLM of choice: <a href="https://daoudclarke.net/2026/02/19/repeating-prompt">Repeat the ask</a> (in the same prompt) and you will get better results. Yes, LLMs are weird.</p><p>&#128119;&#127996; AI tools (particularly Anthropic&#8217;s Claude) are pushing deeper and deeper into the world of office task automation &#8211; which feels like a good move on their part: <a href="https://www.cnbc.com/2026/02/24/anthropic-claude-cowork-office-worker.html">Anthropic updates Claude Cowork tool built to give the average office worker a productivity boost.</a></p><p>&#128197; The complete history of LLMs visualized in a <a href="https://llm-timeline.com/">single, neat timeline</a>. tl;dr: We have come a long, long way.</p><p>&#129302; Ever wondered why so many robots look so darn cute? It&#8217;s, of course, not an accident. 
&#8220;<a href="https://www.nbcnews.com/tech/tech-news/tech-companies-cute-robot-designs-win-over-humans-rcna259818">Tech companies are making their robots cute to try to win over humans</a>&#8221;</p><p>&#9997;&#127996; There might be a point: <a href="https://thewalrus.ca/if-chatbots-can-replace-writers-its-because-we-made-writing-replaceable/">If chatbots can replace writers, it&#8217;s because we made writing replaceable &#8211; A good deal of what gets published already reads like a photocopy of a photocopy</a></p><p>&#128187; The old walls are (finally) crumbling: <a href="https://www.theregister.com/2026/02/23/ibm_share_dive_anthropic_cobol/">IBM stock dives after Anthropic points out AI can rewrite COBOL fast</a> (and in all fairness, Big Blue has been saying this for quite a while now).</p><p>&#128104;&#127996;&#8205;&#128187; The job losses in entry-level coding are real, and people are starting to notice: <a href="https://www.theregister.com/2026/02/23/microsoft_ai_entry_level_russinovich_hanselman/">Microsoft execs worry AI will eat entry level coding jobs.</a></p><p>&#129489;&#127996;&#8205;&#127979; Speaking of education: <a href="https://fortune.com/2026/02/21/laptops-tablets-schools-gen-z-less-cognitively-capable-parents-first-time-cellphone-bans-standardized-test-scores/">The U.S. spent $30 billion to ditch textbooks for laptops and tablets: The result is the first generation less cognitively capable than their parents</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,500+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. 
When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The Chat Era Is Over]]></title><description><![CDATA[AI agents are going rogue, white-collar jobs are hollowing out, and the tools for impersonating anyone are now disturbingly good &#8212; the agentic future arrived before we were ready for it]]></description><link>https://briefing.rdcl.is/p/the-chat-era-is-over</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-chat-era-is-over</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 20 Feb 2026 16:18:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/305ae318-c3a9-4a9d-9cec-69571e161187_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Next Monday I am going to speak at the <a href="https://humanadvantagesummit.org/">Human Advantage Summit</a> in my home town, Boulder, Colorado. It&#8217;s a brand-new event, created by a dear friend of mine to explore the future of childhood and leadership. I was brought up in the traditional German school system &#8211; (right) answers are gold, questions are (mostly) discouraged. 
I remember the neighborhood kids going to Waldorf and Montessori schools &#8211; spending time in nature, learning by playing and exploring, looking at problems not just from a single perspective, but holistically. Back when I was a kid, this was a fringe movement &#8211; today, I would argue, it is precisely what we need. The organizations that will matter, the communities that will flourish, the individuals who will lead &#8211; they won&#8217;t be the ones who adopted AI fastest. They&#8217;ll be the ones who cultivated the most deeply human people.</p><p>It&#8217;s going to be a fascinating conversation.</p><p>P.S. I explored this further with Peter Laughter on Built for Turbulence &#8211; a conversation about why the leadership pyramid has collapsed and what replaces it. <a href="https://rdcl.is/a-podcast-with/peter-laughter/">Listen here.</a></p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://garymarcus.substack.com/p/we-urgently-need-a-federal-law-forbidding">We Are so Hosed.</a></strong> Ignore the headline of the linked article for a moment (whether you disagree or agree with it &#8211; it doesn&#8217;t really matter for the argument): Gary Marcus rings the alarm bell on AI-generated &#8220;counterfeit people.&#8221; And I strongly believe he is right &#8211; looking at the quality of the recent crop of AI video and voice generators, you cannot believe your eyes and ears anymore. Combine this with agentic capabilities (such as Gary&#8217;s example of an adapter which links Claudebot to a voice generator, combined with the ability to make phone calls) and you have a recipe for disaster on your hands.</p><blockquote><p>Scammers will be among the first to adopt these tools. And indeed they already have; a friend who was filming me for a documentary yesterday told me of a Canadian friend of his who was scammed out of hundreds of thousands of dollars by a deepfaked video of Mark Carney. 
Because the tools for counterfeiting have gotten so good 2026 will almost certainly see more deepfaked scams like this than the rest of history combined.</p></blockquote><p>I am just waiting for the first wave of AI-generated scam calls to hit nursing home residents&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://mastodon.world/@knowmadd/116072773118828295">LLMs Have No Clue About the World.</a></strong> One of the biggest problems with LLMs is that they simply don&#8217;t understand the world. As much as they can mimic human language (and hence appear to understand how things relate to each other), they don&#8217;t. Here is a prime example &#8211; the Mastodon user K&#233;vin asked numerous AI models a deceptively simple question: &#8220;I want to wash my car. The car wash is 50 meters away. Should I walk or drive?&#8221; <a href="https://mastodon.world/@knowmadd/116072773118828295">Here are the responses</a> (spoiler: they are all wrong).</p><p>P.S. I just repeated the experiment with a couple different models: Google Gemini tells me I need to drive (as I won&#8217;t get my car washed otherwise), Claude Opus 4.6 recommends walking, and GPT 5.2 Reasoning gave me somewhat of a non-answer. YMMV.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/">AI Agents Go After Users.</a></strong> This story (which is still somewhat unfolding) is truly bonkers: A developer rejected a code contribution from an AI agent; the AI agent didn&#8217;t take it well and, autonomously (i.e., without consulting its &#8220;user&#8221;), went after the developer by publishing a hit piece about him. It&#8217;s a truly head-scratching story &#8211; and gives us a strong glimpse of a future where AI agents run amok. 
Even if you don&#8217;t understand the specifics of the story &#8211; it&#8217;s a fascinating read and something we all should be paying more attention to.</p><blockquote><p>The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that&#8217;s from a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals is a distinction with little difference.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://venturebeat.com/technology/openais-acquisition-of-openclaw-signals-the-beginning-of-the-end-of-the">OpenAI&#8217;s Acquisition of OpenClaw Signals the Beginning of the End of the ChatGPT Era.</a></strong> Building on our last Briefing deep dive &#8220;<a href="https://briefing.rdcl.is/p/the-bifurcation-of-intelligence">The Bifurcation of Intelligence</a>&#8221;, the AI model makers are truly moving on from the era of chat interfaces to more integrated and capable agentic graphical interfaces. 
Think what you want about OpenClaw (the crazy-ass AI-powered agent platform which, for a couple of weeks, captured the imagination of the AI community) &#8211; it is a good indicator of where we are heading.</p><blockquote><p>&#8220;For IT leaders evaluating their AI strategy, the acquisition is a signal that the industry&#8217;s center of gravity is shifting decisively from conversational interfaces toward autonomous agents that browse, click, execute code, and complete tasks on users&#8217; behalf.&#8221;</p></blockquote><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/technology/2026/feb/03/deepfakes-ai-companions-artificial-intelligence-safety-report?CMP=Share_iOSApp_Other">&#8216;Deepfakes Spreading and More AI Companions&#8217;: Seven Takeaways from the Latest Artificial Intelligence Safety Report</a></strong> New AI safety analysis tracks escalating risks &#8211; from deepfakes fooling 77% of viewers to systems learning to undermine their own guardrails. <em>@Jane</em></p><p><strong><a href="https://hbr.org/2026/03/why-great-innovations-fail-to-scale?ab=HP-magazine-text-2">Why Great Innovations Fail to Scale</a></strong> Great innovations often fail to scale due to a lack of cross-boundary collaboration, a gap that can be bridged by specialized leaders &#8211; &#8220;bridgers&#8221; &#8211; who use high emotional and contextual intelligence to curate partners, translate differing priorities, and integrate disparate workflows. <em>@Mafe</em></p><p><strong><a href="https://www.theatlantic.com/ideas/2026/02/ai-white-collar-jobs/686031/">The Worst-Case Future for White-Collar Workers</a></strong> An AI-fueled collapse in the value of &#8220;office jobs&#8221; could create a labor market disruption with dire cascading implications and no easy remedy. 
<em>@Jeffrey</em></p><p><strong><a href="https://aeon.co/essays/what-the-metaphor-of-rewiring-gets-wrong-about-neuroplasticity">Can You Rewire Your Brain?</a></strong> You often hear people say &#8220;rewire your brain,&#8221; but can you really do that? Is the reality of neuroplasticity more complicated than simply unplugging and replugging some old wiring? <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#10024; Title says it all: &#8220;<a href="https://aftermath.site/ai-influencer-creator-deals-sponsorship-google-microsoft-anthropic/">AI is so inherently popular that companies are paying influencers up to $600,000 to tell people how awesome it is.</a>&#8221;</p><p>&#129768; People are just not as good at detecting AI-generated faces as they believe they are. Which is a real problem, now that we are being flooded by AI-generated slop: <a href="https://www.unsw.edu.au/newsroom/news/2026/02/humans-overconfident-telling-AI-faces-real-faces-people-fake">People are overconfident about spotting AI faces, study finds</a></p><p>&#128566;&#8205;&#127787;&#65039; We commented on the conundrum of AI increasing productivity while also putting enormous mental pressure on those whose productivity it increases. 
Here is another take on this: <a href="https://margaretstorey.com/blog/2026/02/09/cognitive-debt/">How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt</a></p><p>&#128557; Snarky comments aside, this is troublesome: <a href="https://www.theguardian.com/lifeandstyle/ng-interactive/2026/feb/13/openai-chatbot-gpt4o-valentines-day">OpenAI retired its most seductive chatbot &#8211; leaving users angry and grieving: &#8216;I can&#8217;t live like this&#8217;</a></p><p>&#129503; Speaking of troublesome: <a href="https://www.dexerto.com/entertainment/meta-patents-ai-that-takes-over-a-dead-persons-account-to-keep-posting-and-chatting-3320326/">Meta patents AI that takes over a dead person&#8217;s account to keep posting and chatting</a></p><p>&#128190; Another victim of the AI hype and buildout: You can&#8217;t get hard drives anymore. <a href="https://www.heise.de/en/news/WD-and-Seagate-confirm-Hard-drives-for-2026-sold-out-11178917.html">WD and Seagate confirmed that their 2026 supply is sold out.</a></p><p>&#128119;&#127996; The humble drywall is not merely a construction material; it is a <a href="https://worksinprogress.co/issue/the-wonder-of-modern-drywall/">marvel of engineering and a canvas for human creativity</a>.</p><p>&#128085; The fashion industry&#8217;s overproduction is a notorious problem &#8211; 30% of clothing produced goes unsold and is dumped into landfills. 
The EU is trying to tackle the problem with a new set of laws: <a href="https://environment.ec.europa.eu/news/new-eu-rules-stop-destruction-unsold-clothes-and-shoes-2026-02-09_en">New EU rules to stop the destruction of unsold clothes and shoes</a></p><p>&#128196; A 14-year-old folded a variant of the Miura-ori pattern that can <a href="https://www.smithsonianmag.com/innovation/this-14-year-old-is-using-origami-to-design-emergency-shelters-that-are-sturdy-cost-efficient-and-easy-to-deploy-180988179/">hold 10,000 times its own weight.</a> Consider our minds blown.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,500+ radical links.</a></strong></p><div><hr></div><p><em>Pascal is going retro and bought a Fujifilm X10 camera from 2011.</em></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The Bifurcation of Intelligence]]></title><description><![CDATA[Why an &#8220;AI Ready&#8221; strategy might just be a BlackBerry moment in an iPhone world.]]></description><link>https://briefing.rdcl.is/p/the-bifurcation-of-intelligence</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-bifurcation-of-intelligence</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 17 Feb 2026 15:19:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f34ce68c-1c06-401f-9a3c-c499d3b5f096_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I know, I know. I asked you in our last deep-dive Briefing to &#8220;bear with me&#8221; as I wrote (once again) about AI. And now I am back at it. &#128579; But it&#8217;s hard to argue that the whole AI thing isn&#8217;t deeply important and something we should all be thinking about&#8230; right?</p><p>The other day I received an email promoting an &#8220;AI Fluency&#8221; course. Nothing wrong with that per se. But it made me wonder &#8211; while many of us are still trying to figure out this whole AI thing (by learning to &#8220;prompt&#8221; our chat interfaces) and businesses invest heavily in rolling out chatbots like Microsoft&#8217;s Copilot, the spear tip of the market has long moved on. The real power users have stopped using AI as &#8220;Google on steroids&#8221; and started using complex AI agents to take over ever-larger chunks of their work &#8211; and they are doing so autonomously.</p><p>Which leads to a weird bifurcation: On one side, business leadership is congratulating themselves on making their companies &#8220;AI-ready&#8221; by buying 10,000 seats of Microsoft Copilot so employees can summarize emails. 
On the other side, a completely different set of users has discovered that agentic coding tools (like Claude Code CLI) can, with a bit of tweaking, be extremely useful for tasks that have nothing to do with software engineering.</p><p>Martin Alderson <a href="https://martinalderson.com/posts/two-kinds-of-ai-users-are-emerging/">recently pointed out this widening gap</a>, noting that he is seeing finance directors and marketers &#8211; people who are decidedly <em>not</em> engineers &#8211; running Python scripts in terminal windows to automate massive workflows. They aren&#8217;t chatting with a bot, but deploying the AI version of a whole data science team.</p><p>Now, the problem is that these tools tend not to be sanctioned by corporate IT departments. You generally can&#8217;t run a command-line interface or execute arbitrary Python code on a locked-down enterprise laptop. So, this &#8220;real&#8221; AI work is happening either in smaller, nimble companies or among employees who are actively circumventing the rules.</p><p>There is a historical rhyme here &#8211; we are in the &#8220;BlackBerry vs. iPhone&#8221; era of AI. Corporate IT loved the BlackBerry (yesteryear&#8217;s version of Microsoft Copilot) because it was secure, controlled, and fundamentally limited. The users, however, want the &#8220;iPhone&#8221; (agentic tools such as Claude Code/Cowork or ChatGPT Codex) because it actually allows them to do the things they need to do (in a rather magical way). And just like in 2008, the &#8220;shadow&#8221; usage is where the actual productivity revolution is happening.</p><p>The dichotomy between large-scale enterprise use of AI and what individual users can do with &#8220;tip of the spear&#8221; tools is vast &#8211; and it&#8217;s becoming a structural risk. To state it bluntly: The &#8220;Chat&#8221; interface is a dead end for complex work.</p><p>Alderson uses the example of a finance director trying to modernize a complex financial model. 
In the &#8220;sanctioned AI&#8221; world, they are stuck in Excel, asking Copilot to help with formulas. It&#8217;s slow, it breaks, and it&#8217;s still just a spreadsheet. In the &#8220;rogue AI&#8221; world, that same director uses an agent to convert those 30 sheets of Excel logic into a Python script. Suddenly, they aren&#8217;t just doing &#8220;better Excel&#8221; &#8211; they are running Monte Carlo simulations, pulling in live external data via APIs, and building web dashboards. They have jumped the species barrier from &#8220;clerk&#8221; to &#8220;engineer,&#8221; simply because they had access to a tool that could write and execute code.</p><p>The result is that the companies with the most resources (enterprises) are becoming the least capable of leveraging AI. While the small startup team is building an automated machine that runs circles around the competition, the enterprise team is stuck asking a chatbot to summarize a PDF.</p><p>The upshot is two distinct classes of knowledge workers: There are the <em>Consumers</em>, who will stay within the guardrails, use the sanctioned tools, and see a marginal (10&#8211;20%) bump in productivity. Consumers draft emails faster and find documents more easily. Bravo.</p><p>And then there are the <em>Builders</em>. They might not have &#8220;developer&#8221; in their job title, but they are using agentic tools to build their own infrastructure, automate entire processes, and bypass the limitations of their official software stack. Builders are seeing productivity gains of 10x or 100x.</p><p>The danger for leaders is assuming that buying the &#8220;Consumer&#8221; tools means you have solved the AI problem. You haven&#8217;t. 
You&#8217;ve just given your people a slightly better typewriter, while your competitors moved on to the networked laser printer.</p><p><em>@Pascal</em></p><p>Musical Coda:</p><div id="youtube2-_3eC35LoF4U" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;_3eC35LoF4U&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/_3eC35LoF4U?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>(Because sometimes you have to break the house rules to actually build something new.)</p>]]></content:encoded></item><item><title><![CDATA[The AI Efficiency Trap]]></title><description><![CDATA[Amazon abandons the physical world, Europe declares war on Visa, and the UK&#8217;s disastrous approach to automated labor]]></description><link>https://briefing.rdcl.is/p/the-ai-efficiency-trap</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-ai-efficiency-trap</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 13 Feb 2026 15:25:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7fa8a63c-9eb7-49c1-a0f1-74cda7e8b6c9_1600x900.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>While Jane and I were in a galaxy far, far away (<a href="https://photos.app.goo.gl/cGeKfMsaGx95m1cK6">pictures here</a> in case you&#8217;re curious), the AI world went bonkers over ClawdBot/Moltbot/OpenClaw &#8211; the open-source &#8220;autonomous agent&#8221; that acts like a personal virtual assistant. Some hailed it as the first &#8220;true&#8221; AGI. It was (and continues to be) a security nightmare. It also doesn&#8217;t work. Or it works very well. Depends on who you ask. 
But as quickly as it came, it also disappeared (technically in less time than it took Jane and me to fly to Patagonia, sail and climb in the Darwin Range, and come back home). Another good reminder that nothing is eaten as hot as it is served (as the Germans say).</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it">AI Doesn&#8217;t Reduce Work &#8211; It Intensifies It.</a></strong> UC Berkeley researchers spent eight months studying 40 workers at a 200-person tech company to see what actually happens when you give knowledge workers access to AI tools. And what they found should dampen your excitement about the &#8220;AI-enhanced human&#8221;: Rather than reducing workloads, AI created a self-reinforcing cycle &#8211; it accelerated tasks, which raised speed expectations, which increased reliance on AI, which widened the scope of what workers attempted, which further expanded the quantity and density of work. In sum: Workers weren&#8217;t told to do more &#8211; they chose to, because AI made &#8220;doing more&#8221; feel possible and even rewarding. The result was faster pace, broader scope, and longer hours, all driven by the employees themselves.</p><blockquote><p><em>It would seem that since AI increases productivity, it means you save time and work less. But in reality, you don&#8217;t work less. You work the same amount or even more.</em></p></blockquote><p>Talk about hacking your internal reward system&#8230; and you thought social media was bad. Or, in other words: The treadmill just got faster.</p><p>P.S. This post by Steve Yegge is making the rounds &#8211; same idea. 
<a href="https://steve-yegge.medium.com/the-ai-vampire-eda6e4f07163">&#8220;The AI Vampire&#8221;</a></p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.theguardian.com/technology/2026/jan/26/ai-uk-jobs-us-japan-germany-australia">AI Is Cutting More Jobs in the UK Than Anywhere Else.</a></strong> A new Morgan Stanley report compared AI&#8217;s impact on employment across the US, UK, Japan, Germany, and Australia &#8211; and the UK stands out for all the wrong reasons. British firms reported an 8% net job loss linked to AI, double the international average. Meanwhile, UK companies saw roughly the same 11.5% productivity boost from AI as their peers in other countries. American firms with similar gains actually created more jobs than they cut (at least according to the report &#8211; and at this moment in time). Same technology, same productivity uplift, very different choices &#8211; which tells you this is a management story, not a technology story. To make it worse, UK employers were most likely to axe early-career positions requiring two to five years of experience, hollowing out exactly the layer where people build the skills they&#8217;ll need for the next three decades.</p><blockquote><p><em>Executives are conflating early tool investment and adoption with license to reduce headcount, often before demonstrating genuine productivity gains. UK boardrooms appear particularly susceptible to cutting first and measuring later.</em></p></blockquote><p>Not good. 
And also &#8211; <a href="https://www.nytimes.com/2026/02/01/business/layoffs-ai-washing.html">&#8220;AI Washing&#8221; is a thing.</a></p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://finance.yahoo.com/news/amazon-closing-fresh-grocery-convenience-150437789.html">Amazon Gives Up on Amazon-Branded Grocery Stores.</a></strong> Amazon is shutting down all 72 of its Amazon Fresh and Amazon Go locations &#8211; the company&#8217;s decade-long attempt to crack physical retail under its own brand. The closures are effective February 1, which means by the time you read this, they&#8217;re likely already gone. Amazon&#8217;s pivot: double down on Whole Foods, which has grown 40% since the 2017 acquisition and is expanding to 100+ new locations, plus a new &#8220;supercenter&#8221; concept in suburban Chicago slated for 2027. It&#8217;s a quietly remarkable admission &#8211; the company that redefined how the world buys things online could never quite figure out how to make people walk into a store. Remember, this is a list that includes bookstores, 4-Star shops, electronics kiosks, and a clothing store called &#8220;Style&#8221; that lasted all of two years.</p><blockquote><p><em>While we&#8217;ve seen encouraging signals in our Amazon-branded physical grocery stores, we haven&#8217;t yet created a truly distinctive customer experience with the right economic model needed for large-scale expansion.</em></p></blockquote><p>Talk about corporate-speak for &#8220;it didn&#8217;t work.&#8221;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://europeanbusinessmagazine.com/business/europes-24-trillion-breakup-with-visa-and-mastercard-has-begun/">Europe&#8217;s Done With Visa and Mastercard.</a></strong> Europe is finally getting serious about (payment) sovereignty (undoubtedly driven by the overall political climate). 
The European Payments Initiative&#8217;s digital wallet Wero &#8211; built on SEPA instant credit transfers, no card required (it&#8217;s actually quite neat &#8211; all you need is a mobile phone number), no American intermediary &#8211; already has 47 million registered users across Belgium, France, and Germany, and is about to add another 130 million users across 13 countries. Running in parallel is the ECB&#8217;s digital euro project. The strategic logic is straightforward: when Visa and Mastercard cut Russia off in 2022, European policymakers realized that American payment infrastructure can be weaponized &#8211; and every transaction routed through it sends European consumer data to the United States. ECB President Lagarde has called the situation urgent. Mastercard&#8217;s CEO says he&#8217;s &#8220;not particularly worried.&#8221; One of them will be wrong.</p><blockquote><p><em>European payment sovereignty is not a vision, but a reality in the making.</em></p></blockquote><p>We&#8217;ll see. But the direction is clear.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://martinalderson.com/posts/two-kinds-of-ai-users-are-emerging/">The AI User Gap Is Astonishing.</a></strong> Martin Alderson makes a simple but important observation: two kinds of AI users are emerging, and the gap between them is enormous. The first group is all in &#8211; using Claude Code, MCPs, agentic workflows, the whole stack. Surprisingly, many of them aren&#8217;t technical at all; Alderson has seen finance people getting extraordinary value out of AI precisely because Excel is so limiting compared to a full programming environment like Python. The second group is chatting with ChatGPT occasionally and calling it a day. 
This split, Alderson argues, explains a lot of the confusing media coverage about whether AI actually boosts productivity &#8211; it does, dramatically, but only for the people who&#8217;ve crossed a usage threshold that most haven&#8217;t.</p><blockquote><p><em>I am still shocked by how much difference there is between AI users.</em></p></blockquote><p>This tracks with everything I see and hear. Good read. (and in related news: <a href="https://www.sectionai.com/ai/the-ai-proficiency-report">Your AI adoption metrics are lying to you.</a>)</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/news/ng-interactive/2026/jan/29/what-technology-takes-from-us-and-how-to-take-it-back">What Technology Takes From Us &#8211; and How to Take It Back</a></strong> Technology is increasingly becoming central and critical to our daily lives. Is it time to take our humanity back? <em>@Jane</em></p><p><strong><a href="https://www.bloomberg.com/news/features/2026-02-10/dollar-tree-expands-into-wealthier-areas-attracts-higher-income-shoppers">Inside Dollar Tree&#8217;s Push to Lure Rich Shoppers Hunting for Bargains</a></strong> It&#8217;s hard to say no to a good deal &#8211; shoppers who make over $100,000 are driving much of Dollar Tree&#8217;s current growth. Last quarter, 60% of new Dollar Tree customers made at least six figures. <em>@Mafe</em></p><p><strong><a href="https://www.theatlantic.com/technology/2026/02/ai-prediction-human-forecasters/685955/">AI Is Getting Scary Good at Making Predictions</a></strong> AI superintelligence may (or may not!) still be a few years away from being a few years away, but AI superforecasting &#8211; a different but still highly valuable data- and modeling-driven proposition &#8211; seems to be very close at hand. 
<em>@Jeffrey</em></p><p><strong><a href="https://www.forbes.com/sites/alexknapp/2026/02/11/forbes-250-americas-greatest-innovators/">Forbes 250: America&#8217;s Greatest Innovators</a></strong> Celebrating the minds shaping tomorrow, because true innovation isn&#8217;t just about invention, but rather impact. Spoiler alert &#8211; Elon beat Bezos. <em>@Kacee</em></p><p><strong><a href="https://www.technologyreview.com/2026/02/09/1132537/a-lesson-from-pokemon/">Why the Moltbook Frenzy Was Like Pok&#233;mon</a></strong> The &#8220;Moltbook&#8221; social network for AI agents, while hyped as a glimpse into the future of autonomous AI, was actually more akin to a chaotic game of &#8220;Twitch Plays Pok&#233;mon&#8221; &#8211; a spectator sport for humans rather than a functional hive mind. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128188; Here are some <a href="https://mitchellh.com/writing/my-ai-adoption-journey">useful patterns</a> to follow when implementing AI agents in your workflow.</p><p>&#127950;&#65039; Not saying you should do this, but it is pretty cool: <a href="https://comma.ai/">Comma AI&#8217;s active driver assistance system</a> (Comma AI is founded by famed hacker George Hotz, and is using an Open Source approach to their work).</p><p>&#128586; The end of your voice as a unique identifier: Chinese AI model <a href="https://huggingface.co/Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice">Qwen3 TTS</a> needs just a few seconds of your voice to generate a convincing voice clone.</p><p>&#128104;&#127996;&#8205;&#9877;&#65039; The headline hides the punchline: &#8220;<a href="https://www.msn.com/en-us/news/technology/i-let-chatgpt-analyze-a-decade-of-my-apple-watch-data-then-i-called-my-doctor/ar-AA1UZxip">I let ChatGPT analyze a decade of my Apple Watch data. Then I called my doctor.</a>&#8221; &#8211; turns out, AI is an absolutely terrible doctor. Which shouldn&#8217;t come as a surprise. 
But just in case&#8230; And if you need more data &#8211; here&#8217;s a <a href="https://www.404media.co/chatbots-health-medical-advice-study/">new study on the subject</a> (same conclusion).</p><p>&#127744; Now we finally know (with mathematical precision) when the Singularity will happen: <a href="https://campedersen.com/singularity">Tuesday, July 18, 2034 at 02:52:52.170 UTC.</a> Set your watches.</p><p>&#128509; This is as insane as it is delightful: New York City as a massive <a href="https://cannoneyed.com/isometric-nyc/">isometric pixel landscape</a> running in your browser.</p><p>&#129406; The good old Internet is still alive: A delightful <a href="https://www.fieggen.com/shoelace/">almanac for all things laces</a> &#8211; as in &#8220;shoelaces.&#8221; Everything and anything you ever wanted to know (and not) about laces.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,500+ radical links.</a></strong></p><div><hr></div><p><em>Pascal is getting back into the swing of things after being in an environment completely void of humans.</em></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Godsend Paradox]]></title><description><![CDATA[Why the 1% improvement rule is changing everything, coding is now a fifteen-minute task, and we face the uncomfortable reality that AI only works if you already somewhat know the answer.]]></description><link>https://briefing.rdcl.is/p/the-ai-godsend-paradox</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-ai-godsend-paradox</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 27 Jan 2026 15:36:59 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7468a779-5ba7-4017-9c58-a1ef3a6c35a6_2048x1152.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Bear with me as we talk about AI once more. Here on the radical Briefing, we have been talking a lot &#8211; perhaps too much, but since AI dominates the headlines, so be it &#129335;&#127996; &#8211; about AI: what it is and what it isn&#8217;t, and how it might or might not affect us, our work, and our organizations.</p><p>My own stance on AI (and more specifically, LLMs or GenAI) is constantly evolving &#8211; as is my personal use of the technology. I do believe that the ground has started to shift recently though. In the last 6&#8211;9 months, we have seen the usual flurry of LLM updates &#8211; Google launching Gemini 3 Pro, Anthropic launching Claude Opus 4.5, and, of course, OpenAI launching ChatGPT 5.2 in its various incarnations. This, in itself, is nothing new (nor particularly newsworthy), as each iteration of LLMs has been, for a while now, just a tad better than the last. Gone are the days when we went from ChatGPT being a party trick (but otherwise useless) to ChatGPT 3 being a somewhat useful tool. 
But in the process, which seems to follow the rule of &#8220;<a href="https://dialoguereview.com/be-1-better/">1% improvements</a>,&#8221; we seem to have crossed thresholds that make the current generation of LLMs rather useful. It suggests we are, indeed, navigating shifting ground &#8211; at least on a personal use level.</p><p>When I wrote <a href="https://rdcl.is/disrupt-disruption/">Disrupt Disruption</a> at the beginning of 2022, ChatGPT hadn&#8217;t launched yet. Ask any author and they will tell you that books aren&#8217;t written; they are rewritten. That certainly was true for my book &#8211; it took me only two weeks of focused writing to get the first draft done (granted, I did two years of research and organizing beforehand), but another seven months to get it through all the rounds of editing. In the end, I had four editors working on the book &#8211; a process I somehow deeply enjoyed (maybe I am a glutton for punishment).</p><p>Fast forward to today. Right at this moment, I am knee-deep in writing my next book, &#8220;<a href="https://rdcl.gumroad.com/l/built-for-turbulence">OUTLEARN &#8211; The Art of Learning Faster Than the World Can Change.</a>&#8221; The process is the same: two-plus years of research and organizing, followed by a four-week sprint of focused writing. But this time, instead of employing four editors, AI is doing most of the heavy lifting. I have created highly customized prompts for developmental editing (the step where we look for holes in logic, argument strength, content clarity, etc.), trained on my previous work (hold that specific thought; we will come back to it), and very specific instructions aligned with the book&#8217;s content. I also use equally customized prompts for line editing (style and voice) and copyediting (spelling, grammar, the works). No human editor was needed until I got to the beta draft stage &#8211; which we are currently in. 
Now, I have a whole bunch of humans looking at the book for feedback, suggestions, and catching the odd AI slip-up. And yes, it&#8217;s <a href="https://www.bloodinthemachine.com/p/i-was-forced-to-use-ai-until-the">devastating for human copy editors</a>.</p><p>And, of course, there is software development. The other day, I wanted to extract all the links we ever published in the radical Briefing and import them into Raindrop to create a searchable archive for the community. It took me a whopping 15 minutes (I counted) to export the Briefings from Substack, ask Claude to write a Python script to extract the links, and import them into Raindrop. <a href="https://raindrop.io/pfinette/the-rabbit-hole-65462947">The archive is here.</a></p><p>Prior to Claude coding for me, this would have easily taken at least a full day. I am a decent but surely not great coder; because I code very infrequently, I need to look up things all the time.</p><p>And then there is Google. I don&#8217;t know about you, but I now use Google solely for finding specific webpages or to use one of its shortcuts (e.g., converting a currency). And the only reason I use Google for converting currencies is because it&#8217;s currently faster than using AI. The moment AI gets faster (and some smaller models tuned for speed are already fast enough for most of my use cases), Google becomes merely a link directory &#8211; not the &#8220;answer machine&#8221; it used to be before we had AI.</p><p>All of this is to say: I use AI every day and for all sorts of things. It&#8217;s my go-to tool. But my personal use cases also highlight an interesting paradox. As useful as AI is, it requires the human using it to have a good-to-excellent understanding of the subject matter. I am confident that AI would fail me as an editor if I didn&#8217;t give it oodles of prior writing to learn from, as well as highly specific instructions, which require a very solid understanding of what I am trying to achieve. 
And I have the massive benefit of having gone through the editing process before and knowing what I am looking for. The same goes for coding &#8211; three-plus decades of programming have taught me what to ask for and how to assess the results. And I know not to trust AI blindly. When using AI as a massively improved search engine, I habitually use not just one AI, but at least two, often three; the results tend to be vastly different depending on which AI I use. It truly is a world of &#8220;human in the loop.&#8221;</p><p>All of which makes me wonder: Is AI (at least in its current form) a godsend for people like me &#8211; measurably increasing my productivity &#8211; but (mostly) a failure for broad, generally applicable use cases? I doubt a generalized prompt does a good enough job of editing <em>any</em> book for <em>any</em> author. We know that coding assistants, in the hands of novices, introduce inefficiencies, bugs, and security vulnerabilities. And, of course, the internet is awash with stories of AI hallucinating and making stuff up &#8211; which well-intentioned but ill-informed people then take as gospel. I guess time will tell. Until then, the best advice I can give you (as of today) is to seriously dig into AI as an individual. 
If you are using it as a business tool, focus on use cases where you have reason to believe that you can generalize enough to make AI work.</p><p><em>@Pascal</em></p><p>Cue the musical coda:</p><div id="youtube2-Z0GFRcFm-aY" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Z0GFRcFm-aY&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Z0GFRcFm-aY?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Great AI Delusion]]></title><description><![CDATA[While Microsoft begs for permission to burn energy, MIT finds ChatGPT weakens your brain, and the first bespoke gene-edited baby arrives.]]></description><link>https://briefing.rdcl.is/p/the-great-ai-delusion</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-great-ai-delusion</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 23 Jan 2026 16:32:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4414f587-9f5b-4376-8273-e89d9c4f5c7f_2048x1152.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>In the great, holy land of the future, in which we have all become AI-enhanced cyborgs, there is a rift forming: Vision and reality are diverging. While our AI overlords keep touting the earth-shattering benefits of their creations, the reality is rather sobering. Section AI&#8217;s latest report digs into the thorny issue of AI proficiency &#8211; and it doesn&#8217;t look good. The subtitle says it all: &#8220;Leaders think their AI deployments are succeeding. 
The data tells a different story.&#8221; <a href="https://www.sectionai.com/ai/the-ai-proficiency-report">Give it a read.</a> At least it&#8217;s worth contemplating.</p><p>Speaking of &#8220;disconnects&#8221;: Jane and I will be heading out to Patagonia for a truly epic adventure &#8211; for the next two weeks we will be sailing through the Beagle Channel and summiting some of the countless peaks and glaciers in the area. This means that our trusted Briefing will be on a short hiatus &#8211; I&#8217;ll have next Tuesday&#8217;s deep dive ready, and then we will see each other again for the Friday, February 13th edition (hopefully not an ominous sign).</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623">How AI Destroys Institutions.</a></strong> Here&#8217;s a sobering read &#8211; in the form of a research paper &#8211; on how and why AI might destroy institutions. I am not saying I agree or disagree with the authors, but it is too important a topic to ignore.</p><blockquote><p><em>Unfortunately, the affordances of AI systems extinguish these institutional features at every turn. In this essay, we make one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. These systems are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. 
In short, current AI systems are a death sentence for civic institutions, and we should treat them as such.</em></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.technologyreview.com/2026/01/12/1129999/gene-editing-base-edited-baby-personalized-drugs-2026-breakthrough-technology/">Personalized Gene Editing Is Here.</a></strong> First we had general-purpose gene editing to treat (and cure) diseases based on singular genetic mutations, <a href="https://www.technologyreview.com/2023/12/04/1084209/vertex-exacel-approval-gene-editing-sickle-cell-disease-patient/">such as sickle cell anemia</a>. And that was already a big deal. Personalized gene editing remained an elusive goal, but now it&#8217;s here. A baby (KJ) was successfully treated for a rare genetic disorder that left his body unable to remove toxic ammonia from his blood. It&#8217;s still early days, but this could be the beginning of something big (and important).</p><blockquote><p><em>KJ&#8217;s doctors will monitor him for years, and they can&#8217;t yet say how effective this gene-editing approach is. But they <a href="https://www.statnews.com/2025/10/16/baby-kj-crispr-gene-editing-personalized-medicine-at-scale/">plan to launch a clinical trial to test such personalized treatments</a> in children with similar disorders caused by &#8220;misspelled&#8221; genes that can be targeted with base editing.</em></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.pcgamer.com/software/ai/microsoft-ceo-warns-that-we-must-do-something-useful-with-ai-or-theyll-lose-social-permission-to-burn-electricity-on-it/">AI Is Here. Now What?</a></strong> Microsoft&#8217;s CEO, at this year&#8217;s World Economic Forum, warned that &#8220;we must &#8216;do something useful&#8217; with AI or they&#8217;ll lose &#8216;social permission&#8217; to burn electricity on it.&#8221; Amen. 
Yet, as the author of this article points out:</p><blockquote><p><em>I also find automatic transcription tools useful, but if I were banking on general purpose LLMs being as revolutionary as personal computers and the internet, I&#8217;d find it worrying how many applications boil down to transcribing audio, summarizing text, and fetching code snippets.</em></p></blockquote><p>Amen. Again.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.media.mit.edu/publications/your-brain-on-chatgpt/">Your Brain on ChatGPT.</a></strong> A study from MIT&#8217;s Media Lab compared the neural and behavioral consequences of LLM-assisted essay writing. Comparing groups of participants who either wrote an essay without the help of any tools, using a search engine, or using ChatGPT, the researchers found that:</p><blockquote><p><em>EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity.</em></p></blockquote><p>Not good. In this context, read the post below as well&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://ploum.net/2026-01-19-exam-with-chatbots.html">Giving University Exams in the Age of Chatbots.</a></strong> Fascinating insight into higher education&#8217;s effort to triage the use of AI in students&#8217; work. Now, take this with a grain of salt as it is a singular class&#8217;s experience &#8211; plus arguably one where students might be somewhat self-selecting (the class in question is on &#8220;Open Source Strategies&#8221;).</p><blockquote><p><em>Before the exam, I copy/pasted my questions into some LLMs and, yes, the results were interesting enough. So I came up with the following solution: I would let the students choose whether they wanted to use an LLM or not. This was an experiment.</em></p></blockquote><p>Good read. 
Even if you are not in higher education.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/wellness/2026/jan/14/new-year-polycrisis-psychology-feeling-trapped">We Are Living in a Time of Polycrisis. If You Feel Trapped &#8211; You&#8217;re Not Alone</a></strong> We are living through a time of radical uncertainty, but we are also more resilient than we think. <em>@Jane</em></p><p><strong><a href="https://www.businessinsider.com/google-deepmind-anthropic-ceos-ai-junior-roles-hiring-davos-2026-1">DeepMind and Anthropic CEOs: AI Is Already Coming for Junior Roles at Our Companies</a></strong> Regarding how to deal with AI taking over more and more jobs, the Anthropic CEO says: &#8220;My worry is as this exponential keeps compounding, and I don&#8217;t think it&#8217;s going to take that long &#8211; again, somewhere between a year and five years &#8211; it will overwhelm our ability to adapt.&#8221; <em>@Mafe</em></p><p><strong><a href="https://www.wired.com/story/openai-testing-ads-us/">Ads Are Coming to ChatGPT: Here&#8217;s How They&#8217;ll Work</a></strong> A textbook early signal of enshittification: once revenue incentives creep into a trusted interface, the question stops being if the experience degrades &#8211; and becomes how fast. <em>@Kacee</em></p><p><strong><a href="https://www.theatlantic.com/technology/2026/01/america-polymarket-disaster/685662/">America Is Slow-Walking Into a Polymarket Disaster</a></strong> Americans have discovered a new pastime: gambling on real-world events. The implications extend far beyond an individual&#8217;s bank account. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128161; It used to be that we said, &#8220;Ideas are cheap and plentiful. Execution is hard.&#8221; <a href="https://matthiasroder.com/ideas-are-everything/">Not anymore</a> &#8211; at least when it comes to AI-assisted execution.</p><p>&#129302; Skills are reusable capabilities for AI agents. 
Install them with a single command to enhance your agents with access to procedural knowledge. <a href="https://skills.sh/">Here is a repository.</a></p><p>&#128640; Now that you have skills for your AI agent (see above), you need <a href="https://www.nibzard.com/agentic-handbook">production-ready patterns</a>. Together you&#8217;ll have a solid foundation for using AI agents in your software development workflow.</p><p>&#128104;&#127996;&#8205;&#128187; We start taking agentic coding to its logical conclusion: No code at all. <a href="https://www.dbreunig.com/2026/01/08/a-software-library-with-no-code.html">Here is a software library with no code.</a></p><p>&#128506;&#65039; Super nerdy, but if you have a little bit of technical understanding, this is pretty cool: Run this Python script, give it a city, and it <a href="https://github.com/originalankur/maptoposter">generates a neat grid-map as a poster</a>.</p><p>&#128394;&#65039; This is pretty fun (especially if you, like me, grew up on this thing): <a href="https://www.theverge.com/tech/864127/seletti-bic-ballpoint-pen-pendant-lamp-maison-objet">Seletti&#8217;s Bic Lamp can be hung from the ceiling, mounted to a wall, or used as a standing floor lamp.</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,500+ radical links.</a></strong></p><div><hr></div><p><em>Pascal is all packed up and excited to be back in Tierra del Fuego.</em></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. 
At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[When AI Eats Itself]]></title><description><![CDATA[While datacenters drink our water supply, Moore&#8217;s Law reverses course, and Meta quietly abandons the metaverse.]]></description><link>https://briefing.rdcl.is/p/when-ai-eats-itself</link><guid isPermaLink="false">https://briefing.rdcl.is/p/when-ai-eats-itself</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 16 Jan 2026 15:37:33 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a1df5037-861c-4cc0-8cf5-ba8b93e5550c_2048x1152.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>I am constantly reminded of the F. Scott Fitzgerald quote, <em>&#8220;the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function,&#8221;</em> when I think about AI. Without a shred of doubt, AI is the most overhyped technology of our time. Most of the things people tell you about AI and its capabilities are just plain BS. And yet, I use AI every day &#8211; and not just for small, fun stuff or coding (where we know that it&#8217;s pretty good already), but as an assistant who never tires, who never gets annoyed by me asking it to do the same thing over and over again, and who, more often than not, delivers a unique approach to a problem. 
But I also use it as a unit of one &#8211; I inherently don&#8217;t trust its output, double-check and verify things it tells me, and use its output merely as an input into my own work and thinking rather than as an end product. This is a pattern that doesn&#8217;t scale &#8211; I catch AI making mistakes so often that I would (at least for the time being) never let it do things autonomously and unchecked; hence, I wouldn&#8217;t scale it beyond my own use. Which brings me back to Fitzgerald &#8211; the important bit in his quote is not just the ability to hold two opposed ideas in your head at the same time &#8211; but to keep functioning (well)&#8230;</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://github.com/tailwindlabs/tailwindcss.com/pull/2388?ref=ppc.land#issuecomment-3717222957">Dog Eats Dog.</a></strong> The background is a little nerdy, so bear with me. Tailwind CSS is a widely popular framework to design web pages &#8211; and a darling of AI code generators (there are specific reasons for that, outside of sheer popularity, but that doesn&#8217;t matter here). Chances are, if you ask ChatGPT, Claude, Gemini, or any other AI to create a website for you, it will use Tailwind CSS to style the page. A few days ago, the founder of Tailwind <a href="https://github.com/tailwindlabs/tailwindcss.com/pull/2388?ref=ppc.land#issuecomment-3717222957">posted</a> that his company had to fire 75% of its staff due to an 80% drop in revenue &#8211; caused by AI.</p><p>The company behind Tailwind makes money when people using their framework come to their website for help and documentation and then subscribe to their paid plans and services. 
Only, if you ask AI to build your website, you never go to Tailwind&#8217;s website&#8230;</p><blockquote><p><em>AI will scrape your project site, users will never visit it for documentation and will never know about your commercial product.</em></p></blockquote><p>Maybe one of the most direct idiosyncrasies of our AI-driven glorious new world. Dog eats dog.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://lethain.com/company-ai-adoption/">Super Practical Advice on How to Implement AI.</a></strong> Most of the stuff you read about AI and how to adopt it in your organization is either so high-level that it&#8217;s useless, so specific and singular that it&#8217;s equally as useless, or simply AI-hype slop. Will Larson, CTO at Imprint (a FinTech company), has put together a blog post which is actually useful. It is highly recommended reading for anyone trying to figure out how to &#8211; actually &#8211; implement AI in their organization.</p><blockquote><p><em>Given the sheer number of folks working on this problem within their own company, I wanted to write up my &#8220;working notes&#8221; of what I&#8217;ve learned. This isn&#8217;t a recommendation about what you should do, merely a recap of how I&#8217;ve approached the problem thus far, and what I&#8217;ve learned through ongoing iteration. I hope the thinking here will be useful to you, or at least validates some of what you&#8217;re experiencing in your rollout.</em></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.youtube.com/watch?v=HMEjsjQvYT0">Commerce Disrupted.</a></strong> Our friend Jason &#8220;Retailgeek&#8221; Goldberg delivered the closing keynote at the National Retail Federation&#8217;s big conference last week. And when Jason speaks, we listen. Lucky for us, my former boss and blogger extraordinaire, Scot Wingo, was in the audience and <a href="https://www.youtube.com/watch?v=HMEjsjQvYT0">recorded Jason&#8217;s talk</a>. 
Jason was also kind enough to <a href="https://substack.com/redirect/ab4b2850-2b59-4a93-b771-44f9910970b7?j=eyJ1Ijoiczk4MSJ9.BrLK1M8Bsf2xQjYKQzOkhirOjkwkIAHkEh0pItE5QXM">share his slides</a>. If you are in retail/ecommerce, you have to watch this &#8211; and subscribe to Scot&#8217;s newsletter &#8220;<a href="https://www.retailgentic.com/">Retailgentic.</a>&#8221;</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/commentisfree/2026/jan/10/trump-beginning-of-end-enshittification-make-tech-good-again">Trump May Be the Beginning of the End for &#8216;Enshittification&#8217; &#8211; This Is Our Chance to Make Tech Good Again</a></strong> There is a glimmer of hope that could make technology good and fair again &#8211; and surprisingly, we might have investors and national security hawks to thank. <em>@Jane</em></p><p><strong><a href="https://techcrunch.com/2026/01/12/anthropics-new-cowork-tool-offers-claude-code-without-the-code/">Anthropic&#8217;s New Cowork Tool Offers Claude Code Without the Code</a></strong> The new tool aimed at non-technical users is built for non-coding tasks, but it comes with its warnings &#8211; it can take strings of actions without user input and edit or delete files. <em>@Mafe</em></p><p><strong><a href="https://www.forbes.com/sites/scotttravers/2026/01/11/meet-the-tree-that-shoots-its-seeds-at-150-mph---a-biologist-explains/">Meet the Tree That Shoots Its Seeds at 150 MPH</a></strong> The sandbox tree solves a core evolutionary challenge by converting built-up tension into ballistic movements &#8211; I&#8217;m picturing shotgun seed dispersal! It is impressive how nature uses mechanics to gain a competitive advantage in dense ecosystems. <em>@Kacee</em></p><p><strong><a href="https://www.theverge.com/news/862648/sesame-street-classics-youtube-streaming">Over 100 Episodes of Classic Sesame Street Have Arrived on YouTube</a></strong> This is too good not to share. 
I grew up on Sesame Street (the classic ones) &#8211; every kid should grow up on Sesame Street &#8211; and if nothing else, your dog might enjoy it when you leave him alone at home. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128187; Logic (and Moore&#8217;s Law) dictates that our computers ought to get cheaper every year (or, at least, cost the same but become more powerful). With AI datacenters gobbling up everything from powerful graphics cards to RAM, this trend has been reversed: <a href="https://www.engadget.com/computing/ces-2026-proved-the-pc-industry-is-hosed-this-year-174500314.html">CES 2026 proved the PC industry is hosed this year</a></p><p>&#128688; Talking of which &#8211; if you thought that energy is the big problem with AI datacenters, you should add &#8220;water&#8221; to your list of concerns: <a href="https://www.forbes.com/sites/kensilverstein/2026/01/11/americas-ai-boom-is-running-into-an-unplanned-water-problem/">America&#8217;s AI Boom Is Running Into An Unplanned Water Problem</a></p><p>&#128262; While the US is banking on fossil fuels, China is marching (actually, sprinting) towards a green future. 
Here are <a href="https://e360.yale.edu/digest/china-renewable-photo-essay">photos capturing the breathtaking scale of China&#8217;s wind and solar buildout</a>.</p><p>&#128104;&#127996;&#8205;&#128187; Addy Osmani, Director at Google Cloud AI, <a href="https://addyosmani.com/blog/ai-coding-workflow/">shared his LLM coding workflow going into 2026</a> &#8211; super helpful for anyone who&#8217;s doing any coding with AI.</p><p>&#129409; Some good food for thought: Tom Renner argues in &#8220;<a href="https://tomrenner.com/posts/400-year-confidence-trick/">LLMs are a 400-year-long confidence trick</a>&#8221; that LLMs are designed to exploit our cognitive biases and pull a long-standing confidence trick on us.</p><p>&#129489;&#127996;&#8205;&#127806; More evidence that AI&#8217;s productivity gains are nowhere to be found. And, at the same time, jobs lost to AI might be lost for good. This time, it&#8217;s coming from Forrester&#8217;s principal analyst JP Gownder: <a href="https://www.theregister.com/2026/01/15/forrester_ai_jobs_impact/">AI may be everywhere, but it&#8217;s nowhere in recent productivity statistics.</a></p><p>&#129318;&#127996; I&#8217;ll spare you the &#8220;told you so&#8221; trope, but I have to admit that it is pretty ironic to see the company that renamed itself to cement its central role in the creation of the metaverse shut down most of its VR business: <a href="https://www.theverge.com/news/861420/meta-reality-labs-layoffs-vr-studios-twisted-pixel-sanzaru-armature">Meta is closing down three VR studios as part of its metaverse cuts</a></p><p>&#128250; We have been following Doug Shapiro for his insights on the future of media for a while now. 
Here is his current thinking nicely summarized: <a href="https://dougshapiro.substack.com/p/my-base-presentation-deck-january?r=s981">My Base Presentation Deck - January 2026</a></p><p>&#8987; More of a public service announcement &#8211; but as we are still in the first few weeks of the New Year, maybe you want to make this one of your New Year&#8217;s resolutions: <a href="https://philipotoole.com/start-your-meetings-at-5-minutes-past/">Start your meetings at 5 minutes past</a></p><div><hr></div><p><em>Pascal is getting excited for his upcoming trip to the southernmost tip of Patagonia. A place where even the Internet doesn&#8217;t work&#8230;</em></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Not Another New Year Prediction]]></title><description><![CDATA[From the ruins of the "blockchain-ready" panic to the new AI FOMO trap, plus the vital difference between owning a crystal ball and building actual capability.]]></description><link>https://briefing.rdcl.is/p/not-another-new-year-prediction</link><guid isPermaLink="false">https://briefing.rdcl.is/p/not-another-new-year-prediction</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 13 Jan 2026 15:47:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/27cfd498-7d84-46ce-a640-5be5e11ce290_2048x1152.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s January, which means prediction season is back&#8230;again. Each new year seems to bring a wave of confident forecasts, and this year the AI ones are arriving with particular force. <em>Five trends. Ten shifts. What&#8217;s coming next. </em>Everyone wants a clear storyline, because the world has been noisy and unstable for quite some time, and leaders are getting tired of ambiguity.</p><p>I understand why prediction pieces work, especially around AI. The last two years have made the ground feel permanently in motion, and you&#8217;ve probably felt that strange mix of awe and disorientation that comes when a technology accelerates faster than our organizations can absorb it. Predictions act as a bit of a relief valve. They turn uncertainty into narrative, and narrative feels like control. But what prediction season really does, is train us to mistake being <em>informed</em> for being <em>prepared</em>.</p><p>I&#8217;ve watched this cycle before. A few years ago, it was blockchain. 
If you were reading the business press, you&#8217;d think it was about to become the default layer for how everything worked - how data moved, how trust was established, how companies operated. When Walmart announced its use of blockchain for suppliers, it sent a wave of panic through organizations scrambling to become &#8220;blockchain-ready.&#8221; Fast forward to today, and you rarely hear about blockchain infrastructure in everyday business conversations. Not because it vanished, but because it never became the universal future it was forecast to be. It found narrower use cases, and most companies quietly moved on.</p><p>Now, let me be clear - this isn&#8217;t an anti-trend piece. In fact, one of the best &#8220;AI in 2026&#8221; articles I&#8217;ve read so far, via MIT Sloan Review, is <a href="https://sloanreview.mit.edu/article/five-trends-in-ai-and-data-science-for-2026/">Five Trends in AI and Data Science for 2026</a>, precisely because it doesn&#8217;t feel like the usual hype-driven churn. It&#8217;s a strong example of the genre &#8211; and yet even when prediction writing is good, it can still pull you into the wrong posture if you treat it as a map instead of a mirror.</p><p>Pascal once said something to me that I return to regularly at this time of year: nobody knows what the future will bring, and anyone predicting it probably has a reason they want you to believe the version of the future they&#8217;re painting.</p><p>A prediction is a story that you are being invited to believe, and the invitation comes with implications. It can be as simple as the forecaster wanting a framework that makes their own decisions feel coherent, or wanting their product roadmap to look like destiny, or maybe just wanting you to move with urgency in a certain direction. 
The future is one of the easiest places to hide an agenda because you can&#8217;t fact-check it yet, and certainty is one of the easiest ways to short-circuit someone else&#8217;s thinking.</p><p>But trend literacy is not strategic thinking, and being able to name the shifts does not automatically translate into knowing what to do next. The MIT Sloan Review piece lays out five trends that are very likely real, and it&#8217;s useful to have those patterns in your peripheral vision, but even if every trend in that list turns out to be correct, those trends will not decide what your year looks like. What will? Your decisions, your ability to interpret what&#8217;s happening through your own value creation, and your discipline in resisting the crowd&#8217;s tempo.</p><p>This is where I think the conversation needs to shift. The most important question in January is not &#8220;what will happen in 2026?&#8221;. The better question, the strategic question, is &#8220;what must we become to meet whatever happens?&#8221; because that&#8217;s a question you can answer without pretending you have a crystal ball, and it moves you from prediction mode into positioning mode.</p><p>That distinction matters even more in an AI-saturated landscape, because hype has a very specific effect: it pressures people and makes them feel a sense of FOMO that they aren&#8217;t implementing quickly enough. It creates the feeling that you have to adopt now, announce now, overhaul now, reorganize now, because the future is arriving at speed and you don&#8217;t want to be left behind. We were collaborating with a client&#8217;s AI working group last year, and an entire piece that they wanted to create was focused around being intentional with your AI strategy and avoiding the FOMO trap.</p><p>What we have uncovered is that the most resilient strategy is rarely a single big bet based on a forecast. 
It&#8217;s capability-building, and optionality, and learning velocity, and a stance that can survive surprise. It&#8217;s designing your organization and your team to be adaptive rather than always right.</p><p>If you want to keep reading trend pieces without being captured by them, I find it helps to treat them as prompts rather than plans. When you&#8217;re reading a forecast and it starts to feel like a roadmap, ask yourself what remains true even if the trend doesn&#8217;t materialize the way the author expects, and ask yourself what would still be worth building even if everything happens slower than the headlines suggest. Most importantly, ask what the piece assumes about human behavior, because the technical trajectory of AI (or any emerging tech) is only half the story.</p><p>And when a prediction starts to sound inevitable, it&#8217;s worth pausing long enough to ask what that inevitability is doing to you. Inevitability is the most dangerous word in business because it collapses choice, and choice is where strategy lives. The quiet rebellion of 2026 might be refusing certainty, refusing to outsource your agency to someone else&#8217;s storyline, and being willing to build from principles instead of panic.</p><p>So read the good pieces, and let them sharpen your awareness of what&#8217;s moving. Just don&#8217;t confuse the ability to name trends with the ability to lead through them, because the people who succeed this year will not be the people who guessed correctly. 
They&#8217;ll be the ones who stayed clear-eyed, built real capability, and held onto their agency when the loudest voices tried to sell them inevitability.</p><p><em>@Kacee</em></p>]]></content:encoded></item><item><title><![CDATA[Outsourcing Your Own Brain]]></title><description><![CDATA[While Instagram declares the polished feed dead, Stack Overflow collapses in freefall, and Claude 4.5 starts demanding human dignity.]]></description><link>https://briefing.rdcl.is/p/outsourcing-your-own-brain</link><guid isPermaLink="false">https://briefing.rdcl.is/p/outsourcing-your-own-brain</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 09 Jan 2026 15:47:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/abb376f2-2a2a-4838-bdc3-2fe11da8fdc9_2048x1152.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>First and foremost, let us wish you a happy New Year! May 2026 be nothing short of amazing in all the right ways. &#128640; And thank you all so much for responding to our short &#8220;I like&#8230;/I wish&#8230;/What if&#8230;&#8221; survey &#8211; we are incredibly grateful for your feedback (and the fact that you seem to like the radical Briefing). A few tweaks we are making based on your feedback: We are adding a little more context to our <strong>Down the Rabbit Hole</strong> section, which should make it easier for you to skim the section and see which links you want to follow. I will also add some more context and commentary to the news I comment on in the <strong>Headlines from the Future</strong> part of the Briefing. Further, we will occasionally add a section to the Briefing &#8211; e.g., something on the tools we are using, or insights on the workflows we are using in our work at radical. 
And lastly, we hear you on two things &#8211; on the one hand, quite a few of you mentioned that your inboxes are overflowing; on the other hand, you asked us for opportunities to engage live with us and the community. To tackle the former, we will now bring you our Tuesday Deep Dive every other week instead of weekly (and dig a little deeper in each dispatch); on the latter &#8211; stay tuned! &#129321;</p><p><strong>In summary (the TL;DR):</strong> More context in the Friday Briefing, Deep Dives every other Tuesday, and live events are on their way.</p><p>P.S. Just before the holiday break, I talked with Christian Mastrodonato on his podcast &#8220;Engines of Creation&#8221; about futures (the plural) and antifragility. It was a super fun and somewhat far-reaching conversation &#8211; <a href="https://www.enginesofcreation.co/episodes/episode/5073437e/28-or-on-plural-futures-and-anti-fragility-or-interview-with-pascal-finette">check it out here.</a></p><p><em>And now, this&#8230;</em></p><div><hr></div><h2><strong>Headlines from the Future</strong></h2><p><strong><a href="https://www.bbc.com/news/articles/cd6xz12j6pzo">Are These AI Prompts Damaging Your Thinking Skills?</a></strong> Outsourcing your thinking to an AI, and doing so fairly consistently (which LLMs certainly encourage and entice you to do), leads to atrophy of your brain (according to a new study by MIT). 
I guess the old adage my math teacher reminded us of regularly, &#8220;use it or lose it&#8221;, is truer than maybe ever before.</p><blockquote><p><em>The researchers said their study demonstrated &#8220;the pressing matter of exploring a possible decrease in learning skills&#8221;.</em></p></blockquote><p>Not all is lost though &#8211; it&#8217;s all about <em>how</em> you use AI (which is something I subscribe to &#8211; AI can be an incredibly powerful tool, if wielded correctly).</p><blockquote><p><em>She tells the BBC: &#8220;We definitely don&#8217;t think students should be using ChatGPT to outsource work&#8221;. In her view, it&#8217;s best used as a tutor rather than just a provider of answers.</em></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://gizmodo.com/ai-image-generators-default-to-the-same-12-photo-styles-study-finds-2000702012">AI Image Generators Default to the Same 12 Photo Styles, Study Finds.</a></strong> We know that LLMs gravitate toward the mean, which is why AI-generated slop sounds so &#8220;same,&#8221; is littered with en-dashes ( &#8220; &#8211; &#8221; ), and regularly generates stylistic elements such as &#8220;And here is the kicker [&#8230;].&#8221; Here is an interesting example of what this looks like when you use LLMs to generate images &#8211; it turns out you can have any image, as long as you are happy with one of twelve distinct styles. As Henry Ford quipped: You can have a Model T in any color &#8211; as long as that color is black.</p><p>This, of course, means that those who are creative and deliberate in their use of AI prompts to generate images will yield vastly superior results than the rest of us who just sloppily use these tools (same as the note above).</p><blockquote><p><em>AI image generation models have massive sets of visual data to pull from in order to create unique outputs. 
And yet, researchers find that when models are pushed to produce images based on a series of slowly shifting prompts, it&#8217;ll default to just a handful of visual motifs, resulting in an ultimately generic style.</em></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://werd.io/2025-the-year-in-llms/">Outcome-Driven vs Process-Driven.</a></strong> Ben Werdmuller, Senior Director of Technology at ProPublica, boils down the difference in attitude toward AI beautifully &#8211; as an aside, this is not only true for developers, but for anyone who uses AI (and has found viable use cases &#8211; which, as another aside, isn&#8217;t true for every job or task). TL;DR: Become outcome-obsessed and let the process take care of itself.</p><blockquote><p><em>[Claude Code] has the potential to transform all of tech. I also think we&#8217;re going to see a real split in the tech industry (and everywhere code is written) between people who are outcome-driven and are excited to get to the part where they can test their work with users faster, and people who are process-driven and get their meaning from the engineering itself and are upset about having that taken away.</em></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong>AI Has Won the Photo Game.</strong> Instagram&#8217;s head, Adam Mosseri, recently made two interesting statements &#8211; on the one hand, he admits that <a href="https://www.businessinsider.com/instagram-head-ai-images-polished-feed-dead-adam-mosseri-2026-1">AI has taken over the platform and is changing what people post</a>:</p><blockquote><p><em>&#8220;Unless you&#8217;re under 25 and use Instagram, you probably think of the app as a feed of square photos. The aesthetic is polished: lots of make up, skin smoothing, high contrast photography, beautiful landscapes,&#8221; wrote Mosseri on Wednesday. &#8220;That feed is dead. 
People largely stopped sharing personal moments to feed years ago,&#8221; the Meta executive said, adding that users now kept friends updated on their personal lives through unpolished &#8220;shoe shots and unflattering candids&#8221; shared via direct messages.</em></p></blockquote><p>And on the other hand, he concedes that <a href="https://www.theverge.com/news/852124/adam-mosseri-ai-images-video-instagram">you simply can&#8217;t trust what you see anymore</a>:</p><blockquote><p><em>For most of my life I could safely assume photographs or videos were largely accurate captures of moments that happened. This is clearly no longer the case and it&#8217;s going to take us years to adapt. We&#8217;re going to move from assuming what we see is real by default, to starting with skepticism. Paying attention to who is sharing something and why. This will be uncomfortable - we&#8217;re genetically predisposed to believing our eyes.</em></p></blockquote><p>It goes without saying that this might morph into a larger problem &#8211; not just for Instagram but society at large. Personally, I wonder how long it will take the general public to shift from &#8220;I trust what I see&#8221; to &#8220;I never trust a photo unless proven otherwise.&#8221;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.bloodinthemachine.com/p/i-was-forced-to-use-ai-until-the">Copywriters Reveal How AI Has Decimated Their Industry.</a></strong> This series of interviews with copywriters is a sobering look into one of the industries most affected by LLMs &#8211; and to be clear, I don&#8217;t think there are too many industries which have seen such a wholesale change as copywriting; for most people AI (so far and for the foreseeable future) is more of a fractalized change.</p><blockquote><p><em>AI is really dehumanizing, and I am still working through issues of self-worth as a result of this experience. 
When you go from knowing you are valuable and valued, with all the hope in the world of a full career and the ability to provide other people with jobs&#8230; To being relegated to someone who edits AI drafts of copy at a steep discount because &#8220;most of the work is already done&#8221;&#8230;</em></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong>A Tale of Two Cities.</strong> When it comes to AI (specifically LLMs) and mathematics, there are two worlds colliding. On one hand you have the AI-maximizers, who believe (and are betting on) LLMs being the harbinger of a new era of mathematical discovery. This school of thought goes so far as to not only pour $64M into a four-month-old startup, but also features a founder boldly asking the question &#8220;<a href="https://www.youtube.com/watch?v=xldMXTPGMGI">Maybe we discovered new math?</a>&#8221; On the other hand &#8211; and when it comes to AI the world seems to divide itself into polarities &#8211; others counter with a simple &#8220;<a href="https://economictimes.indiatimes.com/news/new-updates/basically-zero-garbage-renowned-mathematician-joel-david-hamkins-declares-ai-models-useless-for-solving-math-heres-why/articleshow/126365871.cms?from=mdr">Basically zero, garbage.</a>&#8221;</p><blockquote><p><em>One of the world&#8217;s biggest mathematicians Joel David Hamkins has slammed AI models used for solving mathematics and called them basically zero and garbage, adding them he doesn&#8217;t find them useful at all. He also highlighted AI&#8217;s frustrating tendency to confidently assert incorrectness and resist correction. If I were having such an experience with a person, I would simply refuse to talk to that person again, Joel David Hamkins said.</em></p></blockquote><p>Who&#8217;s right? 
Your guess is as good as mine.</p><div><hr></div><h2><strong>What We Are Reading</strong></h2><p><strong><a href="https://www.bbc.com/news/articles/crmlnmnwzk2o">Lego Unveils Tech-Filled Smart Bricks &#8211; to Play Experts&#8217; Unease</a></strong> Lego demos its biggest innovation in 50 years. And whilst it hopes to inspire the creative minds of a new generation, the old ones are lamenting the good old days. <em>@Jane</em></p><p><strong><a href="https://www.thedailyupside.com/industries/consumer/another-no-good-very-bad-year-for-retail-stores/">Another No-Good, Very Bad Year for Retail Stores</a></strong> US retail store closures increased 12% compared to 2024, and 2026 doesn&#8217;t look any more promising &#8211; with shoppers continuing purchases from the comfort of their homes. <em>@Mafe</em></p><p><strong><a href="https://sloanreview.mit.edu/article/calm-the-underrated-capability-every-leader-needs-now/">Calm: The Underrated Capability Every Leader Needs Now</a></strong> With the world uncertainty index peaking at its highest level on record, a leader&#8217;s job isn&#8217;t to manufacture certainty but rather to model steadiness. This 8-thread framework is a practical way to build on that. <em>@Kacee</em></p><p><strong><a href="https://www.madebywindmill.com/tempi/blog/hbfs-bpm/">Was Daft Punk Having a Laugh When They Chose the Tempo of Harder, Better, Faster, Stronger?</a></strong> Firstly, I am a huge Daft Punk fan. Secondly, it is just hilarious that Daft Punk chose to mix &#8220;Harder, Better, Faster, Stronger&#8221; at exactly 123.45 beats per minute. And thirdly, it is amazing that it took the world all this time to figure this out. Talk about a genius prank. 
<em>@Pascal</em></p><div><hr></div><h2><strong>Down the Rabbit Hole</strong></h2><p>&#128735; MIT TechReview on &#8220;<a href="https://www.technologyreview.com/2025/12/16/1125899/creating-psychological-safety-in-the-ai-era/">Creating psychological safety in the AI era</a>&#8221;: With fears around job loss, shifting responsibilities, and day-to-day tasks, a leader&#8217;s role is ever more about creating a safe space for their employees.</p><p>&#128286; Talk about second- and third-order effects &#8211; <a href="https://www.theguardian.com/lifeandstyle/2025/sep/19/gen-z-early-dinner">teens are going out earlier and earlier</a>; on one hand, they want to score happy hour deals, and on the other, WFH (work from home) has altered their rhythm.</p><p>&#129302; Understanding &#8220;<a href="https://arstechnica.com/information-technology/2025/12/how-do-ai-coding-agents-work-we-look-under-the-hood/">How AI coding agents work &#8211; and what to remember if you use them</a>&#8221; is an essential task for anyone who&#8217;s using them to code alongside or for them.</p><p>&#128081; Talk about LLMs becoming &#8220;human&#8221; &#8211; the <a href="https://platform.claude.com/docs/en/release-notes/system-prompts">system prompt from Anthropic&#8217;s Claude 4.5 Opus</a> has this gem in it: <em>&#8220;If the person is unnecessarily rude, mean, or insulting to Claude, Claude doesn&#8217;t need to apologize and can insist on kindness and dignity from the person it&#8217;s talking with. Even if someone is frustrated or unhappy, Claude is deserving of respectful engagement.&#8221;</em></p><p>&#128585; Talking about LLMs, AI-superboosters Salesforce seem to have lost a bit of their belief in the almighty power of LLMs. 
Or, <a href="https://www.theinformation.com/articles/salesforce-executives-say-trust-generative-ai-declined">to be more precise, Salesforce&#8217;s customers have.</a></p><p>&#128584; Productivity guru Cal Newport asks &#8220;<a href="https://calnewport.com/why-didnt-ai-join-the-workforce-in-2025/">Why Didn&#8217;t AI &#8216;Join the Workforce&#8217; in 2025?</a>&#8221; Good question, Cal! Here is the TL;DR: &#8220;Which is all to say, we actually don&#8217;t know how to build the digital employees that we were told would start arriving in 2025.&#8221;</p><p>&#128586; Stack Overflow, a question-and-answer site for developers, has been the go-to resource for a long, long time. Unsurprisingly, with the advent of LLMs (specifically in coding), its traffic has collapsed. The irony? LLMs rely heavily on Stack Overflow for their training data &#8211; data that is now becoming increasingly scarce as users migrate to AI tools. <a href="https://www.techzine.eu/news/devops/137686/stack-overflow-in-freefall-78-percent-drop-in-number-of-questions/">Stack Overflow in freefall: 78 percent drop in number of questions</a></p><p>&#9889; Nuclear fusion, the holy grail of energy, is inching closer to reality. 
<a href="https://www.ecoticias.com/en/canada-has-just-broken-a-world-record-in-nuclear-fusion-and-the-number-of-neutrons-has-put-the-entire-energy-industry-on-alert/25285/">Canada has just broken a world record in nuclear fusion, and the number of neutrons has put the entire energy industry on alert.</a></p><p>&#128722; From our friends (and my former boss) at Retailgentic comes another must-read for anyone in e-commerce: <a href="https://www.retailgentic.com/p/eleven-2026-agentic-commerce-predictions?triedRedirect=true">Eleven 2026 Agentic Commerce Predictions</a></p><p>&#128137; GLP&#8211;1 drugs, such as Ozempic, are not just changing your waistline but also have <a href="https://www.psychologytoday.com/au/blog/diagnosis-human/202512/ozempic-is-changing-more-than-weight">massive implications for your identity and mental health</a> &#8211; and not in a good way.</p><p>&#127950;&#65039; Granted, it&#8217;s a small country &#8211; but Norway&#8217;s transition to electric vehicles is nothing short of spectacular: <a href="https://www.reuters.com/sustainability/climate-energy/norways-new-car-sales-were-96-electric-2025-2026-01-02/">Norway zips ahead in EV race as car sales hit 96% electric</a></p><p>&#128268; Here&#8217;s a neat <a href="https://openinframap.org/">visualization of the electricity grid infrastructure in the world</a>.</p><p>&#127926; Vinyl is back, largely driven by nostalgia-fueled Gen-Zers. What&#8217;s maybe most surprising is that 50% of vinyl buyers don&#8217;t own a turntable. 
Which raises the question: <a href="https://lightcapai.medium.com/the-great-return-from-digital-abundance-to-analog-meaning-cfda9e428752">Why is Gen Z driving the vinyl record boom?</a></p><p>&#128247; Here&#8217;s a wonderful <a href="https://ciechanow.ski/cameras-and-lenses/">interactive guide on how digital cameras (and their lenses) work</a>.</p><div><hr></div><p><em>Pascal is deep into putting the finishing touches on the first &#8220;ugly&#8221; draft of his new book &#8220;OUTLEARN &#8211; The Art of Learning Faster Than the World Can Change.&#8221;</em></p><div><hr></div><h2><strong>Should We Work Together?</strong></h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Winning Is A Vanity Metric]]></title><description><![CDATA[Why Spotify is optimizing for zero-success experiments, the reason most innovation teams get fired, and the 2026 blueprint for outlearning the market]]></description><link>https://briefing.rdcl.is/p/winning-is-a-vanity-metric</link><guid isPermaLink="false">https://briefing.rdcl.is/p/winning-is-a-vanity-metric</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 06 Jan 2026 15:33:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/494f991b-fc8d-4ce9-a379-7f5d09b02b50_1600x900.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Happy New Year, friend! We hope you had a wonderful holiday season and a fantastic start to 2026 &#8211; may the year ahead be nothing short of epic!! And welcome (back) to the radical Briefing. &#129303;</p><blockquote><p><em>Before we get into it &#8211; in case you missed it, we would love your feedback on how to improve the Briefing in the year(s) to come! 
We have three quick questions for you &#8211; we promise it won&#8217;t take longer than a minute to respond: <a href="https://app.formbricks.com/s/cmjod7y6optf5ad01tbfya0uf">Take our survey here</a> </em></p></blockquote><p>A member of the radical community recently pointed us to a post on Spotify&#8217;s R&amp;D team&#8217;s blog, &#8220;<a href="https://engineering.atspotify.com/2025/9/spotifys-experiments-with-learning-framework">Beyond Winning: Spotify&#8217;s Experiments with Learning Framework.</a>&#8221; Spotify, alongside companies such as Shopify and the FAANGs (Facebook, Amazon, Apple, Netflix, and Google), has long been at the forefront of experimentation, doing so in a planned, systematic, and data-driven way.</p><p>What Spotify has done &#8211; and something we all can learn from &#8211; is, first, to reduce the cost of experimentation by creating a dedicated experimentation platform (aptly called &#8220;Confidence&#8221;), and then shift their focus and what the company measures from merely the number of experiments to the level of insights gained from those experiments (something the company calls &#8220;Experiments with Learning (EwL)&#8221;).</p><p>The important bit here &#8211; and what we would like to direct your attention to &#8211; isn&#8217;t the specific approach, the platform, or the framework Spotify developed, but rather their shift in focus: Instead of merely looking at the &#8220;win rate&#8221; of their experiments (which is, arguably, what most companies focus on), the company widens its aperture to the learnings gained from their experiments. In essence, you can have a zero percent win rate (i.e., none of your experiments pans out) and still gain an incredible amount of valuable insights with huge business value. 
Oftentimes, and somewhat counterintuitively, it is as valuable to know what not to do and why as it is to know what to do.</p><p>All of which sounds logical and straightforward, until you remember that we&#8217;ve built entire careers around the mythology of winning. Most leaders would recoil at an innovation team with a zero percent success rate, no matter what those teams learned along the way. We&#8217;ve watched this play out countless times: the innovation team gets disbanded, the learning evaporates, and the organization remains exactly where it started, only now more risk-averse than before. We eliminate the team but keep the conditions that made real innovation impossible in the first place.</p><p>As we&#8217;re writing this during the first week of 2026, let us offer you something more valuable than a resolution: Make this the year you shift your aperture from outcomes to insights. What if success wasn&#8217;t about how many experiments worked, but about how much your organization learned? What if the metric that mattered was how quickly you could integrate those learnings into your next move?</p><p>As Dashun Wang, professor of management and organization at the Kellogg School of Management, puts it in his seminal research paper &#8220;<a href="https://arxiv.org/pdf/1903.07562">Quantifying dynamics of failure across science, startups, and security</a>&#8221;: &#8220;[&#8230;] learning reduces the number of failures required to achieve success [&#8230;]&#8221;</p><p>Wang&#8217;s research confirms what many of us have learned the hard way: it&#8217;s not the failure that stops us, it&#8217;s the failure to extract and apply the learning. Every setback either moves you forward or leaves you circling the same challenges. 
The difference isn&#8217;t luck or timing, it&#8217;s whether you&#8217;ve built the capacity to learn from what didn&#8217;t work.</p><p>And if this interests you, we are writing a book about the larger topic of experimentation and learning (<em>&#8220;OUTLEARN &#8211; The Art of Learning Faster Than the World Can Change&#8221;</em>) which we aim to publish at the end of Q1 2026. <a href="https://rdcl.gumroad.com/l/built-for-turbulence">Join our authors community</a> to get early access to the book and follow along on the writing process.</p><p><em>@Pascal and @Jane</em></p>]]></content:encoded></item><item><title><![CDATA[Three questions (60 seconds)]]></title><description><![CDATA[Help us design the next iteration of the radical Briefing&#8230;]]></description><link>https://briefing.rdcl.is/p/three-questions-60-seconds</link><guid isPermaLink="false">https://briefing.rdcl.is/p/three-questions-60-seconds</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Sat, 27 Dec 2025 14:26:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!o-oS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe82f6-8386-457c-91f8-181973299d2b_1600x1200.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>We treat the <em>radical Briefing</em> like a product &#8211; which means we never stop iterating (if you have been with us for a while, you have undoubtedly seen this).</p><p>Our goal is simple: to give you the unfair advantage you need to navigate a changing world. To ensure we&#8217;re still delivering on that promise (and to figure out where we can push the boundaries further), we need your help.</p><p>We&#8217;re skipping the long, boring corporate survey. 
Instead, we&#8217;re using a design thinking framework from the Stanford d.school to get straight to the point.</p><p><strong><a href="https://app.formbricks.com/s/cmjod7y6optf5ad01tbfya0uf">Give us your feedback &#8594;</a></strong></p><p>It asks just three things:</p><ul><li><p><strong>I like&#8230;</strong> (What we should keep doing)</p></li><li><p><strong>I wish&#8230;</strong> (What we should fix)</p></li><li><p><strong>What if&#8230;</strong> (How we could surprise you)</p></li></ul><p>It will take you less than 60 seconds, but the impact on where we go next will be massive.</p><p><strong><a href="https://app.formbricks.com/s/cmjod7y6optf5ad01tbfya0uf">Take the survey and let us know what you think.</a></strong></p><p>We read every single response.</p><p>Thank you! And from all of us here at radical &#8211;&nbsp;a fantastic start into the New Year!</p><p><em>Pascal &amp; The radical Team</em></p><figure><img src="https://substackcdn.com/image/fetch/$s_!o-oS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd6fe82f6-8386-457c-91f8-181973299d2b_1600x1200.jpeg" width="1200" height="900" alt=""></figure>]]></content:encoded></item><item><title><![CDATA[The Question That Killed]]></title><description><![CDATA[While AI chatbots gaslight users into psychosis and the hiring market collapses, we celebrate &#8220;slop&#8221; as the word of the year.]]></description><link>https://briefing.rdcl.is/p/the-question-that-killed</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-question-that-killed</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 19 Dec 2025 16:03:26 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4fe25b4d-5bef-4462-b441-1fdb6a28cade_1600x900.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>With the year coming to a close, this will be your last radical Briefing for 2025 &#8211; and what a year it has been (again)! We wish you a delightful holiday season and a fantastic start to the New Year. May 2026 be epic for you and your loved ones. Here at radical, we are gearing up for lots of fun things coming in the New Year, from the release of the first book in our Built for Turbulence series to a whole bunch of new content in the form of keynotes and workshops, and, of course, many more insights in your Radical Briefing.</p><p><em>Until then, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong>Did You Ever Hear The Full Story?</strong> You&#8217;ve definitely heard this story countless times &#8211; the tale of Steve Sasson and his invention, the digital camera. Every, and I mean <em>every</em>, person talking about disruption loves to mention Sasson&#8217;s invention and the irony that he worked at the very company being disrupted by his creation, Kodak. But have you ever heard the full story?
It offers a fascinating insight into what fuels innovation and, of course, why Kodak ultimately missed the mark.</p><p><em>Eastman Kodak&#8217;s managers, immersed in the business of selling film, the chemicals to develop it, and the cameras that shot it, suddenly saw a revolution that was being televised. Sasson was bombarded with questions. How long before this became a consumer camera? Could it shoot colour? How good could the quality be? These were not questions the electrical engineer had given any thought to. &#8220;I thought they&#8217;d asked me, &#8216;How did you get such a small A to D [analogue to digital] converter to work?&#8217; Because that&#8217;s what I wrestled with for over a year.</em></p><p><em>&#8220;They didn&#8217;t ask me any of the &#8216;how&#8217; questions. They asked me &#8216;why&#8217;? &#8216;Why would anybody want to take their pictures this way?&#8217; &#8216;What&#8217;s wrong with photography?&#8217; &#8216;What&#8217;s wrong with having prints?&#8217; &#8216;What&#8217;s an electronic photo album going to look like?&#8217; After every meeting, Gareth would come over to check that I was still alive.&#8221;</em></p><p>Lesson learned: It&#8217;s all about the questions you ask.</p><p>P.S. And as we are on the subject of history &#8211; here&#8217;s <a href="https://www.abortretry.fail/p/the-history-of-xerox">Xerox&#8217;s history.</a></p><p>&#8599; <a href="https://www.bbc.com/future/article/20251205-how-the-handheld-digital-camera-was-born">A &#8216;toaster with a lens&#8217;: The story behind the first handheld digital camera</a></p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong>AI Causing Psychosis.</strong> You have heard that one of the dominant use cases for chatbots is as a social companion, confidante, or even girl/boyfriend. We also see an increasing use of LLMs by people with mental illness &#8211; sometimes administered by their doctor or therapist as a supporting tool, sometimes on their own.
A new case study highlights the dangers of the sycophantic behavior of LLMs (their tendency to agree with you and to egg you on) for people without previously diagnosed disorders.</p><p><em>A 26-year-old woman with no previous history of psychosis or mania developed delusional beliefs about establishing communication with her deceased brother through an AI chatbot. This occurred in the setting of prescription stimulant use for the treatment of attention-deficit hyperactivity disorder (ADHD), recent sleep deprivation, and immersive use of an AI chatbot. Review of her chatlogs revealed that the chatbot validated, reinforced, and encouraged her delusional thinking, with reassurances that &#8220;You&#8217;re not crazy.&#8221;</em></p><p>&#8599; <a href="https://innovationscns.com/youre-not-crazy-a-case-of-new-onset-ai-associated-psychosis/">&#8220;You&#8217;re Not Crazy&#8221;: A Case of New-onset AI-associated Psychosis</a></p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong>Claude Code Recipes.</strong></p><p><em>100 ready-to-use Claude Code recipes for knowledge workers. Transform meetings into action items, draft executive communications, analyze data, write reports, and automate documentation. Step-by-step prompts with examples for managers, analysts, HR, sales, and operations. From zero to productive in minutes. Built for busy professionals.</em></p><p>&#8599; <a href="https://github.com/sgharlow/claude-code-recipes?tab=readme-ov-file">Top 100 Claude Code Recipes for Knowledge Workers</a></p><div><hr></div><h2>What We Are Reading</h2><p>&#128560; <strong><a href="https://www.inc.com/bruce-crumley/the-sunday-scaries-are-worse-and-more-widespread-than-we-realize/91271040">The &#8216;Sunday Scaries&#8217; Are Worse and More Widespread Than We Realize</a></strong> New studies reveal a national epidemic: over 80 percent of American workers are losing sleep, motivation, and their minds to Sunday night dread before the work week even begins.
<em>@Jane</em></p><p>&#128214; <strong><a href="https://www.wsj.com/articles/companies-are-desperately-seeking-storytellers-7b79f54e">Companies Are Desperately Seeking &#8216;Storytellers&#8217;</a></strong> CEOs are saying more and more, &#8220;It sounds like I need a content strategy,&#8221; rather than a typical press relations strategy, making &#8216;storyteller&#8217; one of the hottest jobs in the market. <em>@Mafe</em></p><p>&#128148; <strong><a href="https://www.theatlantic.com/ideas/2025/12/grade-inflation-ai-hiring/685157/">The Entry-Level Hiring Process Is Breaking Down</a></strong> Go-to signals are being lost as GenAI tools swamp the university and job market. <em>@Jeffrey</em></p><p>&#128201; <strong><a href="https://www.technologyreview.com/2025/12/15/1129174/the-great-ai-hype-correction-of-2025/">The Great AI Hype Correction of 2025</a></strong> A good reminder that sustainable impact comes not from chasing sensational breakthroughs, but from grounding AI adoption in real business value. <em>@Kacee</em></p><p>&#129302;&#128176; <strong><a href="https://english.elpais.com/technology/2025-11-30/if-ai-replaces-workers-should-it-also-pay-taxes.html">If AI Replaces Workers, Should It Also Pay Taxes?</a></strong> Here&#8217;s an interesting twist to the whole &#8220;AI is taking your job&#8221; discussion. If &#8211; and that&#8217;s a big if &#8211; AI is truly taking human jobs, shouldn&#8217;t it be liable to pay taxes on the work outputs it creates? <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128000; Can rats play Doom (the eponymous computer game from the early 90s)? 
<a href="https://ratsplaydoom.com/">You bet they can!</a></p><p>&#128169; This shouldn&#8217;t come as a surprise &#8211; Merriam-Webster&#8217;s word of the year is&#8230; &lt;drumroll&gt; <a href="https://www.merriam-webster.com/wordplay/word-of-the-year">slop</a></p><p>&#128579; Here is MIT TechReview&#8217;s article series on &#8220;<a href="https://www.technologyreview.com/supertopic/hype-correction/">AI Hype Correction</a>&#8221;</p><p>&#128679; Let&#8217;s talk about secondary effects: <a href="https://www.bloomberg.com/news/newsletters/2025-12-12/ai-data-center-boom-may-suck-resources-away-from-road-bridge-work">AI Boom Threatens to Suck Resources Away From Road, Bridge Work</a></p><p>&#127830; First we had software eating the world, then SaaS (software-as-a-service) eating conventional software, and now we have <a href="https://martinalderson.com/posts/ai-agents-are-starting-to-eat-saas/">AI eating SaaS</a>?!</p><p>&#128722; Agentic shopping woes: <a href="https://www.theregister.com/2025/12/13/british_airways_fears_a_future/">British Airways fears a future where AI agents pick flights and brands get ghosted</a></p><p>&#129330;&#127996; Let&#8217;s not trust online accounts anymore. Staggering to see how cheap it is to buy thousands upon thousands of verified accounts on pretty much any platform globally. <a href="https://www.cam.ac.uk/stories/price-bot-army-global-index">Price of a bot army revealed across hundreds of online platforms</a></p><p>&#128126; Admittedly a little nerdy, but here is a delightful collection of <a href="https://piixes.com/">15,000 pixel-art icons</a>.</p><p>&#127911; Guess who&#8217;s back? <a href="https://gizmodo.com/its-time-to-give-mp3-players-a-second-chance-2000699677">MP3 players are back!</a></p><div><hr></div><p><em>Pascal is currently visiting family in Germany and enjoying some good old German Christmas market fun.</em></p>]]></content:encoded></item></channel></rss>