<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[radical Briefing]]></title><description><![CDATA[The future doesn’t come with a manual. But twice a week, we’ll send you the next best thing: razor-sharp insights, practical frameworks, and early signals that keep you ahead of the curve. Raw, unfiltered, and straight from the edge of innovation.]]></description><link>https://briefing.rdcl.is</link><image><url>https://substackcdn.com/image/fetch/$s_!Tkyy!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff817c064-9d79-478e-b81d-e619e9ac6652_500x500.png</url><title>radical Briefing</title><link>https://briefing.rdcl.is</link></image><generator>Substack</generator><lastBuildDate>Sat, 25 Apr 2026 15:39:17 GMT</lastBuildDate><atom:link href="https://briefing.rdcl.is/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[be radical Group LLC]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[rdcl@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[rdcl@substack.com]]></itunes:email><itunes:name><![CDATA[Pascal Finette]]></itunes:name></itunes:owner><itunes:author><![CDATA[Pascal Finette]]></itunes:author><googleplay:owner><![CDATA[rdcl@substack.com]]></googleplay:owner><googleplay:email><![CDATA[rdcl@substack.com]]></googleplay:email><googleplay:author><![CDATA[Pascal Finette]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Free Ride Is Over]]></title><description><![CDATA[AI agents now cost more than human labor, cybersecurity became an arms race, and someone sequenced their genome on a kitchen table. 
The subsidized honeymoon era is ending everywhere at once.]]></description><link>https://briefing.rdcl.is/p/the-free-ride-is-over</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-free-ride-is-over</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 24 Apr 2026 14:34:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0dc1ee63-86f2-493c-9bb0-607566ab6b6e_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Remember the glorious days when Uber and Lyft were heavily subsidized by their venture capital sugar daddies and you couldn&#8217;t get over how cheap it was to get a ride? Yeah, those days are gone (and much can be said about the market-distorting effects of the VC-fueled subsidies). Well, it increasingly looks like the sweet days of $20/month all-you-can-prompt AI plans are also coming to an end &#8211; pretty much all the major AI companies are tweaking their pricing strategies, making tokens for their latest frontier models much more expensive, and generally trying to dig themselves out of the &#8220;for every dollar we make, we lose five&#8221; hole. 
It doesn&#8217;t come as a surprise &#8211; but it will be interesting to see what it does to market demand.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://x.com/finmoorhouse/status/2044933442236776794">Putting the AI Investment into Perspective.</a></strong> As the saying goes &#8211; a picture is worth a thousand words.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WsxJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WsxJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 424w, https://substackcdn.com/image/fetch/$s_!WsxJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 848w, https://substackcdn.com/image/fetch/$s_!WsxJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!WsxJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WsxJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg" 
width="1200" height="1151" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1151,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:80380,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/195291829?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WsxJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 424w, https://substackcdn.com/image/fetch/$s_!WsxJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 848w, https://substackcdn.com/image/fetch/$s_!WsxJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!WsxJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.tobyord.com/writing/hourly-costs-for-ai-agents">Are the Costs of AI Agents Also Rising Exponentially?</a></strong> With AI models becoming more and more powerful, the cost of inference (at least for frontier models) is staying about the same (or increasing), <em>and</em> the models are consuming vastly more tokens for a given task. Against this backdrop, Toby Ord did a fascinating analysis of the cost of running AI agents as a function of the &#8220;cost of labour&#8221; &#8211; and found that agents sometimes cost much more than human labour (&#8220;How is the &#8216;hourly&#8217; cost of AI agents changing over time?&#8221;). 
In sum:</p><blockquote><p>This provides moderate evidence that:</p><ul><li><p>the costs to achieve the time horizons are growing exponentially,</p></li><li><p>even the hourly costs are rising exponentially,</p></li><li><p>the hourly costs for some models are now close to human costs.</p></li></ul></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.dbreunig.com/2026/04/14/cybersecurity-is-proof-of-work-now.html">Cyber Security Is a Completely Different Game Now.</a></strong> If you have even half an ear to the ground when it comes to cybersecurity, you have heard stories about Anthropic&#8217;s newest model &#8220;Mythos&#8221; being held back as it is &#8220;too dangerous&#8221; &#8211; with the main fear being that it finds vulnerabilities in software with unprecedented speed and accuracy. In fact, people have been hacking all kinds of hardware and software using current state-of-the-art models such as GPT-5.4 or Opus for the last couple of months now. All of which turns cybersecurity into even more of a race over who can outspend whom than it already is. In simple (AI economic) terms:</p><blockquote><p>to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.</p></blockquote><p>If you are running a system with any public exposure surface (e.g. a website, an API, or an app), you had better take this seriously. 
I wouldn&#8217;t be surprised if we see a flood of new exploits executed in the coming months and years.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://x.com/sethshowes/status/2045289299269070978">DIY Sequence Your Whole Genome.</a></strong> We have been talking about an individual&#8217;s ability to sequence their own genome at home, using lab-grade but DIY equipment, for a while now (it was one of the predictions floating around in the ether in the heyday of Singularity University &#8211; it was always &#8220;just around the corner&#8221;). Now it has (finally) happened &#8211; though it is not for the faint of heart.</p><blockquote><p>So this week I sequenced my genome entirely at home. Literally on my kitchen table.</p></blockquote><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theatlantic.com/technology/2026/04/extended-range-electric-vehicle-pickup-trucks/686811/?gift=0GPrpLquXY4NmRQ6sk9MNmLbwJO9qfyaNiz1Iuc5qpY">A New Kind of Hybrid Car Is About to Hit America&#8217;s Streets</a></strong> EREVs are the exciting new hybrid technology everyone should know about. Your car runs on electric power but quietly recharges its own battery with gas, so you never have to worry about being stranded! <em>@Jane</em></p><p><strong><a href="https://www.bloomberg.com/news/articles/2026-04-21/byd-xiaomi-and-zeekr-car-reviews-flood-tiktok-youtube-in-the-us">TikTok Makes Americans Want Chinese EVs They Can&#8217;t Have</a></strong> Chinese car brands are nearly absent from US roads due to tariffs and regulations, but are building American consumer desire through social media while playing a long-term strategy. 
<em>@Mafe</em></p><p><strong><a href="https://aleximas.substack.com/p/what-will-be-scarce">What Will Be Scarce?</a></strong> An economist goes deep on a relatively optimistic scenario for the future of human labor, finding durable value in what he calls the &#8220;relational sector,&#8221; where the value of the service is likely to be increasingly linked to the human providing it. <em>@Jeffrey</em></p><p><strong><a href="https://hbr.org/2026/04/the-end-of-one-size-fits-all-enterprise-software?ab=HP-hero-featured-1">The End of One-Size-Fits-All Enterprise Software</a></strong> Pascal and I have been writing about this lately: we&#8217;re moving from standardized systems to outcome-driven architectures that can conform to the business. <em>@Kacee</em></p><p><strong><a href="https://arstechnica.com/staff/2026/04/our-newsroom-ai-policy/">Our newsroom AI policy</a></strong> As companies (and in this case, newsrooms) around the world grapple with what it means to operate in an AI-enabled/driven world, it will become more and more important for organizations to establish (and publish) clear guidelines and disclosures on their use of AI &#8211; here is a good example from the Ars Technica newsroom. 
<em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#9994; &#8220;We believe in human beings.&#8221; Union leaders are <a href="https://www.axios.com/2026/04/16/unions-ai-bernie-sanders-shawn-fain">escalating their anti-AI rhetoric</a>, portraying the industry&#8217;s leaders as profit-hungry &#8220;oligarchs&#8221; eager to replace humans.</p><p>&#9728;&#65039; Shine (not drill), baby shine: <a href="https://electrek.co/2026/04/19/iea-solar-overtakes-all-energy-sources-in-a-major-global-first/">IEA &#8211; Solar overtakes all energy sources in a major global first.</a></p><p>&#9997;&#65039; Hacking the system: <a href="https://sentinelcolorado.com/uncategorized/a-college-instructor-turns-to-typewriters-to-curb-ai-written-work-and-teach-life-lessons/">A college instructor turns to typewriters to curb AI-written work and teach life lessons.</a></p><p>&#9749; A wonderful lesson in taking something that worked (ordering coffee through a carefully designed app) and making it worse by using AI: <a href="https://www.theverge.com/ai-artificial-intelligence/915821/starbucks-chatgpt-app-testing">Ordering with the Starbucks ChatGPT app was a true coffee nightmare.</a></p><p>&#129489;&#8205;&#9878;&#65039; Let AI be the judge: <a href="https://mediator.ai/">Cooperative negotiation is a solvable problem</a> (or so says this company).</p><p>&#128272;  Turns out &#8211; your cybersecurity does, in fact, withstand the (possibly coming) wave of quantum computer-powered attacks (despite the attention-grabbing headlines): <a href="https://words.filippo.io/128-bits/">Quantum computers are not a threat to 128-bit symmetric keys.</a></p><p>&#127904; Ed Zitron, one of the most outspoken critics of AI, is back (and it&#8217;s worth reading &#8211; even if you don&#8217;t agree with him): <a href="https://www.wheresyoured.at/four-horsemen-of-the-aipocalypse/?ref=ed-zitrons-wheres-your-ed-at-newsletter">Four Horsemen of the AIpocalypse.</a></p><p><strong>&#8599; Dive into 
the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,700+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The Hype Is Eating Itself]]></title><description><![CDATA[While Gen Z rage-quits the AI dream, OpenAI lobbies for mass-casualty immunity, and laziness turns out to have been load-bearing all along]]></description><link>https://briefing.rdcl.is/p/the-hype-is-eating-itself</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-hype-is-eating-itself</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 17 Apr 2026 15:03:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/af4ce9c1-d4a1-4849-8aa7-51f93e238502_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>The, quite possibly, craziest story to emerge this week from the ever-nutty world of AI hype is, of course, the rebrand/relaunch of sneaker company Allbirds as an AI company &#8211; resulting in a $127 million increase in stock market value. 
I won&#8217;t even comment on how absurd all of this is. You know something is up when even the most die-hard AI-boosting publications start calling BS&#8230; Anyway &#8211; time for your weekly dose of news and analysis&#8230;</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.everydayhealth.com/weight-management/reddit-users-reporting-glp-1-side-effects/">Better Drug Side Effects Monitoring through Reddit?</a></strong> It shouldn&#8217;t come as a surprise that by harvesting the massive data trove that is Reddit, one can find drug side effects that are underreported in clinical trials. It reminds us of a pharma client of ours who mentioned that they consider Apple a massive threat to their business &#8211; as the company has a humongous amount of data on <em>healthy</em> people, whereas pharma companies typically only have data on <em>sick</em> people.</p><blockquote><p>Using artificial intelligence to scan more than 400,000 Reddit posts, researchers from the University of Pennsylvania documented numerous reports of possible GLP-1 side effects that may be underrecognized in clinical trials &#8211; including menstrual changes, fatigue, and temperature sensitivities.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://dev.to/dcc/the-honest-climate-case-for-ai-5hg5">Let&#8217;s Talk About AI&#8217;s Energy Footprint (Again).</a></strong> The linked article is a good and accessible summary of where we stand on AI&#8217;s energy footprint. The tl;dr is that AI&#8217;s current energy footprint is modest (comparable to streaming video). But demand is growing fast, reasoning models use 10&#8211;100x more energy than basic queries, and efficiency gains keep getting reinvested into more capability rather than saved. And what kind of electricity powers the data centers is the much bigger question: Clean grid = net climate okay. 
Gas/coal grid = real problem.</p><blockquote><p>Stop feeling guilty about prompts. Your Wh per query is not the lever that matters. You&#8217;ll do more climate good by eating one less steak, taking one fewer flight, or voting for better energy policy than by boycotting LLMs. What matters at the individual level is where you direct your attention. Demand the acceleration of the deployment of clean generation to meet data center demand; grid interconnections, nuclear licensing, transmission lines, and permitting reform are the bottleneck, not GPUs.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.highereddive.com/news/gen-z-ai-gallup-poll-negative-sentiment/817133/">The GenZ AI Tide is Turning.</a></strong> GenZ, supposedly the most AI-savvy generation entering the workforce right now, is not too thrilled about that whole AI thing.</p><blockquote><p>Anger over AI is increasing among Gen Z at the same time excitement is fading. Nearly one-third of the survey&#8217;s respondents, 31%, said AI makes them feel angry, up 9 percentage points from last year. And just 22% said the technology makes them feel excited, down from 36% the prior year.</p></blockquote><p>Reconcile this with the growing pressure on entry-level jobs, as well as overall job losses due to AI, and you have a storm brewing.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.nature.com/articles/d41586-026-01099-2">The Air Is Full of DNA - Here&#8217;s What Scientists Are Using It for</a></strong> Genetic breadcrumbs in the air reveal ecosystem secrets, spot sneaky invaders, and even track humans! <em>@Jane</em></p><p><strong><a href="https://www.nytimes.com/2026/04/08/business/bitcoin-satoshi-nakamoto-identity-adam-back.html">My Quest to Solve Bitcoin&#8217;s Great Mystery</a></strong> A detailed read about one man&#8217;s journey to find out who&#8217;s behind Satoshi Nakamoto. 
<em>@Mafe</em></p><p><strong><a href="https://www.noemamag.com/why-a-liberal-arts-education-will-soon-be-more-valuable-than-ever/">How To Future-Proof Your Career In The Age Of AI</a></strong> If cognitive flexibility, taste, and good judgment become critical differentiators in a world of abundant intelligence, does the most valuable background begin to look a lot like a classical interdisciplinary, liberal arts education? <em>@Jeffrey</em></p><p><strong><a href="https://www.marketwatch.com/story/will-ai-start-going-rogue-the-chorus-of-warnings-is-getting-louder-c4d4b831">Will AI Start &#8216;Going Rogue&#8217;? The Chorus of Warnings Is Getting Louder</a></strong> When the people building the tech warn about loss of control, it may be a signal worth paying attention to. <em>@Kacee</em></p><p><strong><a href="https://bcantrill.dtrace.org/2026/04/12/the-peril-of-laziness-lost/">The Peril of Laziness Lost</a></strong> Here is an interesting argument from the world of software development: Laziness (in coding) leads us to more elegant, better-performing, and cleaner code. With AI coding tools, laziness has suddenly stopped being a virtue &#8211; if nothing else, AI happily keeps churning&#8230; And with laziness becoming a lost art, software will become worse. I&#8217;d venture to say that this is what is happening in every area AI touches. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#127864; Rejoice! It is now legal to distill your own alcohol in the United States: <a href="https://www.theguardian.com/law/2026/apr/11/appeals-court-ruling-home-distilling-ban-unconstitutional">US appeals court declares 158-year-old home distilling ban unconstitutional.</a></p><p>&#9760;&#65039; Nothing to see here. 
<a href="https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/">OpenAI backs bill that would limit liability for AI-enabled mass deaths or financial disasters.</a></p><p>&#129489;&#8205;&#9877;&#65039; Surprised is no one: <a href="https://www.nature.com/articles/d41586-026-01100-y">Scientists invented a fake disease. AI told people it was real.</a></p><p>&#127922; Ever wanted to increase your chances of beating your niece at Connect Four? Here&#8217;s the mathematically best way to do it: <a href="https://2swap.github.io/WeakC4/explanation/">WeakC4, or distilling an emergent object.</a></p><p>&#127466;&#127482; European tech sovereignty is a thing. It will be interesting to see how this plays out in the long run. Latest case in point: <a href="https://techputs.com/france-windows-to-linux-shift/">France ditch Windows for Linux to cut reliance on US tech.</a></p><p>&#128268; Have we reached the tipping point? <a href="https://www.the-independent.com/tech/renewable-energy-solar-nepal-bhutan-iceland-b2533699.html">Seven countries now generate 100% of their electricity from renewable energy.</a></p><p>&#9992;&#65039; Desperate times call for desperate measures. <a href="https://www.bbc.com/news/articles/ce84rvx0e6do">Great at gaming? US air traffic control wants you to apply.</a></p><p>&#129489;&#8205;&#127912; Life imitates art. This feels like it&#8217;s right out of an episode of Black Mirror: <a href="https://www.theverge.com/tech/910990/meta-ceo-mark-zuckerberg-ai-clone">Mark Zuckerberg is reportedly building an AI clone to replace him in meetings.</a></p><p>&#9997;&#65039; You become what you write: <a href="https://www.science.org/doi/10.1126/sciadv.adw5578">Biased AI writing assistants shift users&#8217; attitudes on societal issues.</a></p><p>&#128246; Data becomes a right. 
<a href="https://www.theregister.com/2026/04/10/south_korea_data_access_universal/">South Korea introduces universal basic mobile data access.</a></p><p>&#129299; Nerd alert! Fascinating approach to improving AI&#8217;s coding abilities: <a href="https://blog.skypilot.co/research-driven-agents/">Having a coding agent read a series of papers on the topic at hand before coding results in significant improvements in code quality.</a></p><p>&#128218; Lovely read: <a href="https://www.newyorker.com/books/book-currents/stewart-brand-on-how-progress-happens">Stewart Brand on how progress happens.</a></p><p>&#129300; More than half of Americans are &#8216;getting tired of hearing&#8217; about AI, <a href="https://www.scrippsnews.com/science-and-tech/artificial-intelligence/more-than-half-of-americans-are-getting-tired-of-hearing-about-ai-survey-finds">survey finds.</a></p><p>&#128200; From the MIT Tech Review: <a href="https://www.technologyreview.com/2026/04/13/1135675/want-to-understand-the-current-state-of-ai-check-out-these-charts/">Want to understand the current state of AI? Check out these charts.</a></p><p>&#129686; PSA: Wear your helmet! <a href="https://nyulangone.org/news/e-bike-and-scooter-crashes-are-leading-more-brain-injuries">E-bike and scooter crashes are leading to more brain injuries.</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,700+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. 
At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The AI No-Show]]></title><description><![CDATA[While Oracle fires 30,000 people to fund AI data centers, fake singers colonize the iTunes charts, and China moves to regulate virtual humans out of existence]]></description><link>https://briefing.rdcl.is/p/the-ai-no-show</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-ai-no-show</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 10 Apr 2026 14:47:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cffe629f-a7cb-43b8-99c9-149c7322144a_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>I honestly don&#8217;t even know where to begin &#8211; so much stuff is happening in the world right now; it truly is a whirlwind. From your usual (over) dose of AI, to geopolitics &#8211; but also a plethora of wild, weird, and wonderful weak signals&#8230; Like the bike bell which cleverly defeats the noise-cancelling technology of a pedestrian&#8217;s earbuds. Or AI-singers capturing the top spots in the iTunes charts (now, remember &#8211; this is iTunes, the $0.99 a song download store, which makes that whole story even more bizarre). Dig into today&#8217;s Briefing &#8211; the results from this week&#8217;s web explorations will keep you busy.</p><p>P.S. 
In case you missed it &#8211; I built on Kacee&#8217;s excellent post in the last radical Briefing on &#8220;Vibe Coding Our Way to 70%&#8221; in a <a href="https://www.linkedin.com/posts/pfinette_earlier-this-week-my-dear-friend-and-colleague-activity-7445550029798473728-lcR8?rcm=ACoAAABiKN0BVCUdHIulvhyy_BFFK-5oP5jc5ag">LinkedIn post</a>.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://fortune.com/2026/04/09/ai-backlash-quiet-quitting-fobo-obsolete-white-collar-rebellion/">The AI Quiet Quitters.</a></strong> Shadow AI was the story for a while &#8211; workers sneaking ChatGPT past IT, doing in minutes what used to take hours, running an underground productivity movement from their personal accounts (or simply freeing up more time to watch TikTok). Management called it a governance problem. Workers called it getting the job done. It felt, in a strange way, like good news (just like the good old days when we all brought our personal Dropbox accounts to the workplace because we were sick and tired of 1980s SharePoint).</p><p>That era has quietly ended. A new global survey of 3,750 executives and employees across 14 countries finds that more than half of workers (54%) bypassed their company&#8217;s AI tools in the past 30 days and completed the work manually instead &#8211; and another <em>33% haven&#8217;t used AI at all.</em> Eight in ten enterprise workers are avoiding the technology their employers are spending record sums to deploy. Shadow AI has become the AI no-show.</p><blockquote><p>Now the data tells a different story. The tool that workers once raced to adopt covertly has become, for a large and growing share of the workforce, the tool they&#8217;ve stopped using altogether. Not because it doesn&#8217;t work. 
Because they&#8217;re afraid of what happens when it works too well.</p></blockquote><p>The piece also surfaces a huge trust gap: only 9% of workers trust AI for complex, business-critical decisions, compared to 61% of executives &#8211; a 52-point chasm. Executives and employees are, as the report puts it, describing fundamentally different companies. The fear of obsolescence &#8211; FOBO, the fear of becoming obsolete &#8211; has apparently crossed the threshold from anxiety into active avoidance. Which is, if you think about it, a perfectly rational response to a completely irrational situation.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.reuters.com/world/china/china-moves-regulate-digital-humans-bans-addictive-services-children-2026-04-03/">China Is Coming for You, Lil Miquela.</a></strong> If you know us, you know that we&#8217;ve been talking about virtual humans (and more specifically, virtual influencers) for a long time now. Our go-to example was always Miquela Sousa, a virtual influencer created by the LA-based design agency Brud. Our particular fascination with Miquela and her brothers and sisters centers around the fact that she never ages, never gets sick, never has a bad hair day, travels anywhere, and works 24/7 without a break. Since we talked about her in 2017, she has been joined by an ever-expanding family of virtual humans. 
Now China is closing in on them:</p><blockquote><p>The Cyberspace Administration of China&#8217;s proposed rules would require prominent &#8220;digital human&#8221; labels on all virtual human content and prohibit digital humans from providing &#8220;virtual intimate relationships&#8221; to those under 18, according to rules published for public comment until May 6.</p></blockquote><p>and</p><blockquote><p>&#8220;The governance of digital virtual humans is no longer merely an issue of industry norms; rather, it has become a strategic scientific problem that concerns the security of the cyberspace, public interests, and the high-quality development of the digital economy,&#8221; it added.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.linkedin.com/pulse/abundance-era-colm-sparks-austin--ayhte/">Digital Transformation is (Finally) Dead.</a></strong> For twenty years, the world operated on a simple principle: buy standard software, don&#8217;t build. The logic made sense, as building was insanely expensive, risky, and slow. The result was highly standardized systems (well hello, SAP!) which we had to stretch well beyond what they were designed for, patch the gaps with middleware, hire consultants to integrate the integrators, and call the whole messy pile &#8220;transformation.&#8221;</p><p>This long piece by EY&#8217;s Colm Sparks-Austin makes the case that the economics have fundamentally flipped. AI and modern dev tools have made engineering capacity abundant. The constraint is no longer &#8220;can we build this?&#8221; It&#8217;s &#8220;do we know what to build and why?&#8221; Colm&#8217;s argument is sharp &#8211; treat the core (ERP, system of record) as the skeleton: rigid, compliance-bearing, changed rarely. And treat the edge &#8211; the customer-facing layer, the last mile &#8211; as tissue: built to regenerate when the market shifts.</p><blockquote><p>Standardization is no longer a safety net. 
It is a ceiling.</p></blockquote><p>The piece is long, but worth your time &#8211; especially if you work with or inside large enterprises still debating whether to &#8220;buy or build.&#8221; That debate is over.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theatlantic.com/podcasts/2026/04/is-ai-going-to-turn-us-all-into-middle-managers/686677/?gift=0GPrpLquXY4NmRQ6sk9MNjJIlkAOmZgquz76kI2Uipo">Is AI Going to Turn Us All Into Middle Managers?</a></strong> Two of our favorite people, Johnathan and Melissa Nightingale, just gave one of the sharpest takes we&#8217;ve heard on AI, management, and the future of work. Go find their Galaxy Brain conversation. <em>@Jane</em></p><p><strong><a href="https://gizmodo.com/crypto-investment-scams-were-the-most-costly-type-of-fraud-in-the-u-s-in-2025-2000743099#goog_rewarded">Crypto Investment Scams Were the Most Costly Type of Fraud in the U.S. in 2025</a></strong> Investment fraud, specifically crypto investment scams, accounted for 49% of all cyber-related complaints to the FBI in 2025. <em>@Mafe</em></p><p><strong><a href="https://siddhantkhare.com/writing/ai-fatigue-is-real">AI Fatigue Is Real and Nobody Talks About It</a></strong> The real value is in sustainable output, and learning to work &#8211; sustainably &#8211; on new rhythms will be a significant piece of the AI transformation puzzle. <em>@Jeffrey</em></p><p><strong><a href="https://hbr.org/2026/04/when-silos-hinder-innovation-and-when-they-can-help?ab=HP-latest-text-4">When Silos Hinder Innovation &#8211; and When They Can Help</a></strong> Rethinking the innovation dogma&#8230; silos aren&#8217;t always the enemy; sometimes they can spark the best ideas. <em>@Kacee</em></p><p><strong><a href="https://idiocracy.wtf/">Are We Idiocracy Yet?</a></strong> Remember Mike Judge&#8217;s masterpiece, Idiocracy? If you have ever asked yourself how far the movie is from today&#8217;s reality &#8211; here is your answer. 
<em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128104;&#127996;&#8205;&#128187; In the same vein as my comment on Kacee&#8217;s post from last week&#8217;s radical Briefing (see above), a leader at the global consulting firm EY wrote, &#8220;<a href="https://www.linkedin.com/pulse/abundance-era-colm-sparks-austin--ayhte/">Why engineering replaces transformation as the engine of growth.</a>&#8221; It&#8217;s worth a read.</p><p>&#128373;&#127996;&#8205;&#9794;&#65039; The journalist who uncovered the Theranos scandal is behind the (maybe) next big unveil: <a href="https://www.nytimes.com/2026/04/08/business/bitcoin-satoshi-nakamoto-identity-adam-back.html">Satoshi Nakamoto, the mysterious creator of Bitcoin, might have been found.</a></p><p>&#128187; Here&#8217;s a fun anecdote &#8211; as the world, once again, <a href="https://www.thealgorithmicbridge.com/p/inside-the-ai-industrys-most-expensive">seems to be obsessed with LOC (lines of code) as a productivity metric</a>, legendary software developer Bill Atkinson <a href="https://www.folklore.org/Negative_2000_Lines_Of_Code.html">recalls delivering -2,000 lines of code to Apple.</a></p><p>&#129489;&#127996;&#8205;&#127979; Some things you just can&#8217;t make up: Students record their professor&#8217;s lecture, feed it into a speech-to-text AI, then into an LLM, and then ask, comment, and respond to their teacher &#8211; in <a href="https://www.cnn.com/2026/04/04/health/ai-impact-college-student-thinking-wellness">his tone and style</a> (as the AI mimics the imported lecture).</p><p>&#129317; Claude (the AI model) might be a little confused as to who said what: <a href="https://dwyer.co.za/static/claude-mixes-up-who-said-what-and-thats-not-ok.html">Claude mixes up who said what.</a></p><p>&#127897;&#65039; The fake singers are coming &#8211; and they are coming for your top spots on the charts: <a
href="https://www.showbiz411.com/2026/04/05/itunes-takeover-by-fake-ai-singer-eddie-dalton-now-occupies-eleven-spots-on-chart-despite-not-being-human-or-real-exclusive">iTunes takeover by fake AI singer &#8220;Eddie Dalton&#8221; &#8211; now occupies eleven spots on singles chart, number 3 on albums chart.</a></p><p>&#129300; Take headlines like these with a huge grain of salt: <a href="https://ca.news.yahoo.com/ai-models-secretly-scheme-protect-162555909.html">&#8220;AI models will secretly scheme to protect other AI models from being shut down, researchers find.&#8221;</a> Here is the <a href="https://rdi.berkeley.edu/blog/peer-preservation/">study</a> in question &#8211; and you shouldn&#8217;t be too surprised about the result, knowing that AI models are modelling their training data.</p><p>&#128302; On the topic of predicting the future (when it comes to AI), here is the <a href="https://blog.aifutures.org/p/q1-2026-timelines-update">latest update</a> from the folks at the AI Futures Project (yes, those were the folks who did the very optimistic/accelerated AI 2027 forecast).</p><p>&#129335;&#127996; Ethan Mollick, the Wharton School professor who coined the term &#8220;jagged frontier&#8221; in his assessment of LLMs and their capabilities, makes the argument that <a href="https://www.economist.com/by-invitation/2026/04/01/the-it-department-where-ai-goes-to-die">the IT department is where AI goes to die.</a></p><p>&#129331;&#127996; Take their phones away from them, and the kids will be fine! Well, not so fast&#8230; <a href="https://www.theguardian.com/commentisfree/2026/apr/01/australia-teen-social-media-ban-criticism">Australia&#8217;s teen social media ban is a flop. 
But there&#8217;s no joy in &#8216;I told you so&#8217;</a></p><p>&#129707; The AI wars might be won over energy, not compute: <a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/half-of-planned-us-data-center-builds-have-been-delayed-or-canceled-growth-limited-by-shortages-of-power-infrastructure-and-parts-from-china-the-ai-build-out-flips-the-breakers">Half of planned US data center builds have been delayed or canceled, growth limited by shortages of power infrastructure and parts from China &#8211; the AI build-out flips the breakers</a></p><p>&#128104;&#127996;&#8205;&#128188; Fire the people, save money, build AI data centers: <a href="https://tech-insider.org/oracle-30000-layoffs-ai-data-center-restructuring-2026/">Oracle&#8217;s 30,000 employee layoffs: Inside the $2.1 billion restructuring fueling a $156 billion AI data center bet.</a></p><p>&#9889; Energy markets are turning very, very weird with the rise of renewables: <a href="https://www.bloomberg.com/news/articles/2026-04-07/germany-power-prices-turn-deeply-negative-on-renewables-surge">Germany power prices turn deeply negative on renewables surge.</a></p><p>&#128690; Signs of the times: <a href="https://www.skoda-storyboard.com/en/skoda-world/skoda-duobell-a-bicycle-bell-that-outsmarts-even-smart-headphones/">A bicycle bell that outsmarts even smart headphones.</a></p><p>&#129686; Talking about the future of warfare: <a href="https://www.tomshardware.com/tech-industry/iran-threatens-complete-and-utter-annihilation-of-openais-usd30b-stargate-ai-data-center-in-abu-dhabi-regime-posts-video-with-satellite-imagery-of-chatgpt-makers-premier-1gw-data-center">Iran threatens &#8220;complete and utter annihilation&#8221; of OpenAI&#8217;s $30B Stargate AI data center in Abu Dhabi &#8211; regime posts video with satellite imagery of ChatGPT-maker&#8217;s premier 1GW data center</a></p><p>&#128188; Surprised is no one: <a 
href="https://www.marketwatch.com/story/employers-are-using-your-personal-data-to-figure-out-the-lowest-salary-youll-accept-c2b968fb">Employers are using your personal data to figure out the lowest salary you&#8217;ll accept</a> (but then, employees also write their resumes and cover letters using AI, cheat on tests using AI, etc.)</p><p>&#129489;&#127996;&#8205;&#128640; Just in time, as Artemis is doing its moon thing &#8211; <a href="https://www.cosmicodometer.space/">calculate your cosmic distance from the day you were born.</a></p><p>&#127768; Talking about the moon &#8211; this is as nerdy as it gets: <a href="https://www.curiousmarc.com/space/apollo-guidance-computer">The rebuilding of the Apollo guidance computer in glorious detail.</a></p><p>&#127752; The Weather Channel goes full retro with their neat, new <a href="https://weather.com/retro/">retrocast feature</a>.</p><p>&#128649; Can you identify each line on the London Underground by sound? <a href="https://tubesoundquiz.com/">Try it!</a></p><p>&#128104;&#127996;&#8205;&#127912; The <a href="https://theasc.com/articles/fantastic-voyage-creating-the-futurescape-for-the-fifth-element">amazing art</a> that went into the special effects for the Luc Besson movie The Fifth Element. Stunning.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,700+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. 
At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The One Theorem That Governs Survival in a Volatile World]]></title><description><![CDATA[Why lowering the cost of failure matters more than raising the quality of your plan]]></description><link>https://briefing.rdcl.is/p/the-one-theorem-that-governs-survival</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-one-theorem-that-governs-survival</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 07 Apr 2026 14:47:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d79c742b-07bc-4841-b8e6-ec5487e48588_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>More than a decade ago, I was in the insanely fortunate position of running Mozilla Labs &#8211; the &#8220;please disrupt yourself&#8221; unit inside the nonprofit behind Firefox. The team was brilliant, some of the best engineers in Silicon Valley, and we had a pattern that felt productive but was quietly killing us: an idea would hit our standup, we&#8217;d debate it like philosophers, and then someone would say the five most dangerous words in product development &#8211; &#8220;I know how to build this.&#8221; They&#8217;d vanish into their cave, headphones on, code editor glowing, and three days later they&#8217;d resurface with something gorgeous. Real, running code. You could click around and interact with it. We all felt accomplished.</p><p>Then we&#8217;d put it in front of a user. Fifteen seconds. 
&#8220;I don&#8217;t get it.&#8221; Three days of work &#8211; DOA (&#8220;dead on arrival&#8221; &#8211; fun fact: also the name of a meeting room at Mozilla at the time).</p><p>At that pace we could test maybe ten ideas a month. Product-market fit usually takes hundreds of iterations. We were years away from finding it, and we simply didn&#8217;t have years.</p><p>One afternoon, after yet another fifteen-second rejection, I did something that felt outright strange at the time: I asked a colleague to close his laptop, grab a stack of index cards and some Sharpies, and start drawing. He looked at me like I&#8217;d suggested interpretive dance. He was a C++ and Python person &#8211; not someone who drew doodles on index cards. But he did &#8211; badly, reluctantly, beautifully &#8211; and we walked those cards across the street to the Starbucks on Castro Street in Mountain View, which at the time was basically Silicon Valley&#8217;s cafeteria. &#8220;Can I buy you a coffee in exchange for five minutes of your time?&#8221; More than 80% of the people we asked said yes. We&#8217;d place the first card on the table, get feedback, retreat to a corner to redraw, find the next stranger. By the time the caffeine jitters set in &#8211; three hours, maybe &#8211; we hadn&#8217;t tested one prototype. We&#8217;d tested thirty.</p><p>That afternoon changed the way I think about everything. Three days to learn one thing with code. Three hours to learn thirty things with Sharpies and index cards. Not because the engineers were slow, but because the medium was expensive. High-fidelity code carries a high cost of failure &#8211; emotionally, financially, temporally &#8211; and when failure is expensive, you instinctively avoid it. You plan more. You debate more. You polish more. 
You learn less.</p><p>Which brought me to the idea I&#8217;ve spent the last fifteen years trying to articulate as precisely as I can &#8211; what I now call the Core Theorem: <strong>the speed of learning is inversely proportional to the cost of failure.</strong> If failure is expensive, you learn slowly. If failure is cheap, you learn fast. That&#8217;s it. That&#8217;s the whole thing. And it governs the survival of every organization operating in a volatile world, which &#8211; in case you haven&#8217;t checked the news lately &#8211; is every organization.</p><p>The logic is disarmingly simple. When a mistake could cost you $100,000, your reputation, or your job, you&#8217;ll hesitate. You&#8217;ll double-check. You&#8217;ll form a committee. You&#8217;ll bring in a consultant. You&#8217;ll optimize for the appearance of competence instead of the reality of learning. But when a mistake costs $10 and an awkward conversation at a coffee shop? You&#8217;ll just try. And if that fails, you&#8217;ll try something else &#8211; all before lunch.</p><p>Tom Chi, one of the founding members of Google X &#8211; the moonshot factory behind self-driving cars and Internet balloons &#8211; understood this better than anyone. When his team was working on Project Glass (which became Google Glass), the engineers estimated it would take six months to build the first working prototype. Optics, miniaturized projection, ergonomics, software &#8211; hard technology, expensive to get wrong. Tom walked out of the room and came back 45 minutes later with a coat hanger bent into a neck loop, a sheet of plexiglass, a middle-school sheet protector taped to it, and a pico-projector connected to a netbook. It looked like garbage. It cost less than $500. And within an hour his team had learned that red text washes out, the upper right corner gives headaches, and email pop-ups are socially awkward. 
They learned more in one afternoon with a coat hanger than they would have in six months of &#8220;proper&#8221; engineering &#8211; because the cost of being wrong was almost zero.</p><p>The engineers wanted to predict the solution. Tom wanted to ping the solution space. The beautiful irony is that by refusing to build the &#8220;real&#8221; thing, he got to the real thing faster than anyone else.</p><p>Most organizations get this backwards. They shout &#8220;go faster!&#8221; while keeping the cost of failure high. They say &#8220;fail fast and fail forward&#8221; while promoting the people who never make mistakes. They create innovation labs and demand agile workflows &#8211; but require three signatures to approve a $500 experiment. The incentive structure contradicts the aspiration, and incentives always win.</p><p>So here&#8217;s the practical question: What is your coat hanger? What is the cheapest, fastest, ugliest version of the thing your team has been debating in conference rooms for six months? And what would it take to test it this week &#8211; not next quarter, not after the strategy offsite, not when the budget gets approved &#8211; but this week?</p><p>Audit the price of your errors. Count the signatures required to run a small experiment. Look at what happens to the person who tries something and fails versus the person who sits in meetings and never ships. That gap &#8211; between the stated value of learning and the actual cost of failure &#8211; is where your organization&#8217;s speed goes to die.</p><p>Close that gap, and you don&#8217;t need to hire faster people or buy better tools. 
You just need to hand them a Sharpie and point them at the nearest coffee shop.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PcZD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PcZD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 424w, https://substackcdn.com/image/fetch/$s_!PcZD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 848w, https://substackcdn.com/image/fetch/$s_!PcZD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 1272w, https://substackcdn.com/image/fetch/$s_!PcZD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PcZD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp" width="1300" height="975" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:975,&quot;width&quot;:1300,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:40824,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/193411541?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!PcZD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 424w, https://substackcdn.com/image/fetch/$s_!PcZD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 848w, https://substackcdn.com/image/fetch/$s_!PcZD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 1272w, https://substackcdn.com/image/fetch/$s_!PcZD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>This briefing is adapted from my new book</em> <a href="https://rdcl.is/outlearn/">OUTLEARN: The Art of Learning Faster Than the World Can Change</a>, <em>launching April 28. It&#8217;s the first volume in a series called Built for Turbulence &#8211; short, framework-dense field manuals for leaders who are done planning beautifully and ready to start learning fast. 
More on that soon.</em></p><p><em>@Pascal</em></p>]]></content:encoded></item><item><title><![CDATA[CEOs Are Volunteering to Be Replaced]]></title><description><![CDATA[The Internet tips majority-bot, the encryption window closes in 2029, and a new Wharton paper argues AI has fundamentally restructured how humans think &#8211; not just what they do]]></description><link>https://briefing.rdcl.is/p/ceos-are-volunteering-to-be-replaced</link><guid isPermaLink="false">https://briefing.rdcl.is/p/ceos-are-volunteering-to-be-replaced</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 03 Apr 2026 15:04:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b17f2df1-1f6b-4465-8d96-967b105cf690_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>This week has been one of contemplation &#8211; as is evidenced in our &#8220;Headlines from the Future&#8221; section. While AI keeps moving at lightning speed, it feels like we (the collective &#8220;we&#8221;) are starting to get our feet under us and figure things out&#8230;</p><p>Meanwhile, a quick personal note before the links: my new book <a href="https://rdcl.is/outlearn/">OUTLEARN &#8211; The Art of Learning Faster Than the World Can Change</a> &#8211; launches April 28 on Amazon. It&#8217;s the first volume in a new series called Built for Turbulence: short, framework-dense field manuals for leaders operating in volatile environments. I&#8217;ll share more next week in the Tuesday deep-dive. 
&#129304;&#127996;</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.cnbc.com/2026/03/26/coca-cola-james-quincey-walmart-doug-mcmillon-artificial-intelligence-step-down.html">The AI-CEO Threat.</a></strong> Here&#8217;s an interesting one &#8211; the CEOs of major companies are stepping down to make room for people with a better grip on AI.</p><blockquote><p>&#8220;In a pre-AI, a pre-gen-AI mode, we made a lot of progress. But now there&#8217;s a huge new shift coming along,&#8221; Quincey said. While he said he&#8217;s leaning into the technological advances, he believes the beverage company needs &#8220;someone with the energy to pursue a completely new transformation of the enterprise.&#8221;</p></blockquote><p>It does make you wonder a) how many CEOs are hanging on to their jobs by the skin of their teeth, b) how many CEOs are oblivious to what the AI transformation actually means for their companies, and c) how many more CEOs we will see throwing in the towel and handing over the reins to new generations. Now might be a good time for folks with CEO aspirations (and a solid grip on AI) to step up&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646">Thinking Fast, Slow, and Artificial.</a></strong> In 2011, Nobel Prize winner Daniel Kahneman published his bestselling book &#8220;Thinking, Fast and Slow.&#8221; In it, he describes the two modes of thinking we all operate in: System 1, which is fast and intuitive, and System 2, which is slow and deliberate. Now, in a new paper, Steven D. Shaw and Gideon Nave from The Wharton School argue that AI introduced a third mode of thinking:</p><blockquote><p>People increasingly consult generative artificial intelligence (AI) while reasoning. As AI becomes embedded in daily thought, what becomes of human judgment? 
We introduce Tri-System Theory, extending dual-process accounts of reasoning by positing System 3: artificial cognition that operates outside the brain. System 3 can supplement or supplant internal processes, introducing novel cognitive pathways.</p></blockquote><p>And, as you would expect, with it comes a whole host of questions: &#8220;System 3 reframes human reasoning and may reshape autonomy and accountability in the age of AI.&#8221; The study is worth reading&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.anthropic.com/research/economic-index-march-2026-report">AI Learning Curves Are Real.</a></strong> Anthropic, maker of Claude, released yet another report on the usage of AI (I applaud them for doing this &#8211; their reports tend to be actually useful, and not the usual company-sponsored &#8220;look how great we are&#8221; puffery). This time, they dug into the use of AI across the economy. Lots of good nuggets in the paper; the one standout for me is their insight into how the jagged frontier, the concept popularized by Ethan Mollick, plays out in the real world (this is paraphrased):
But here is an interesting counter-argument (at least when it comes to code):</p><blockquote><p>[&#8230;] AI models will write good code because of economic incentives. Good code is cheaper to generate and maintain. Competition is high between the AI models right now, and the ones that win will help developers ship reliable features fastest, which requires simple, maintainable code. Good code will prevail, not only because we want it to (though we do!), but because economic forces demand it. Markets will not reward slop in coding, in the long term.</p></blockquote><p>Put simply: &#8220;AI will write good code because it is economically advantageous to do so.&#8221; I do believe this to be true (we already see this with the quality of code generated by frontier models such as Claude Opus/Sonnet 4.6). It will be interesting to see how this plays out &#8211; there might be a real incentive for AI companies to compete on quality, which would be a very &#8220;free market&#8221; thing to do.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/technology/2026/mar/31/jobs-ai-cant-do-young-adults">The Jobs AI Can&#8217;t Do &#8211; and the Young Adults Doing Them</a></strong> A new generation is redefining what a good job looks like. Hands-on trades are shedding their stigma, replaced by something more compelling: skilled work no machine can replicate. <em>@Jane</em></p><p><strong><a href="https://www.latimes.com/business/story/2026-03-30/apple-at-50-how-garage-startup-became-3-5-trillion-titan">Apple Turns 50</a></strong> Wozniak on Apple: The secret to the company&#8217;s success was that it managed its brand well and didn&#8217;t make &#8220;lousy junk&#8221; that breaks down. 
<em>@Mafe</em></p><p><strong><a href="https://www.gzeromedia.com/the-case-against-political-prediction-markets">The Case Against Political Prediction Markets</a></strong> Straight from dystopia, a valuable lesson that we keep relearning: Maybe not everything should be a market. <em>@Jeffrey</em></p><p><strong><a href="https://www.strategy-business.com/blog/What-leaders-get-wrong-about-responsibility">What Leaders Get Wrong About Responsibility</a></strong> Leaders love to &#8220;hold people accountable&#8221; &#8211; fewer know how to build systems where responsibility organically shows up. <em>@Kacee</em></p><p><strong><a href="https://www.youtube.com/watch?v=UWHdiLdemXQ">PIEZODANCE</a></strong> Not a read this week, but a video. And not just a video, but a contemporary dance video &#8211; this year&#8217;s winner of the &#8220;Dance your PhD Thesis&#8221; competition is all about energy &#8211; and it&#8217;s stunning. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128274; We have been talking about this since the early days of Singularity University; now it&#8217;s closer than ever: &#8220;<a href="https://www.theguardian.com/technology/2026/mar/26/google-quantum-computers-crack-encryption-2029">Google warns quantum computers could hack encrypted systems by 2029.</a>&#8221; Time to update your security keys (quantum-resistant cryptographic algorithms already exist; you just have to use them).</p><p>&#128678; Similarly, we have been talking about vertical AI models for a while now (well, &#8220;a while&#8221; in AI-timeline terms) &#8211; they, too, are closer than ever: <a href="https://x.com/eoghan/status/2037197696075981124">The age of vertical models is here.</a></p><p>&#128184; BlackRock&#8217;s Larry Fink warns that &#8220;<a href="https://www.wsj.com/finance/investing/larry-finks-warning-invest-or-risk-getting-left-behind-by-ai-d2f1d09d">artificial intelligence could widen wealth inequality if ownership does not broaden alongside 
it</a>&#8221; &#8211; i.e., those who invest in stocks will benefit; those who cannot will be left behind.</p><p>&#129489;&#127996;&#8205;&#127979; Tech up, test scores down: <a href="https://undark.org/2026/04/01/sweden-schools-books/">Amid declining test scores, Sweden has pivoted away from screens and invested in back-to-basics school materials (i.e., books).</a></p><p>&#128045; Hold my beer: <a href="https://www.404media.co/disneys-openai-sora-disaster-shows-ai-will-not-save-hollywood/">Disney&#8217;s Sora disaster shows AI will not revolutionize Hollywood.</a></p><p>&#128300; Surprised is no one: <a href="https://www.theatlantic.com/science/2026/03/china-science-superpower/686564/">The shocking speed of China&#8217;s scientific rise.</a></p><p>&#129436; All it takes is five seconds of your voice &#8211; <a href="https://mistral.ai/news/voxtral-tts">Mistral&#8217;s newest voice cloning AI is scarily good.</a></p><p>&#128084; The easy way out: <a href="https://www.bbc.com/news/articles/cde5y2x51y8o">Tech CEOs suddenly love blaming AI for mass job cuts. Why?</a></p><p>&#127917; This is just too good: Someone trained a large language model solely on Victorian-era literature. The result: <a href="https://www.estragon.news/mr-chatterbox-or-the-modern-prometheus/">Mr. Chatterbox</a></p><p>&#129302; AI bots now make up <a href="https://www.cnbc.com/2026/03/26/ai-bots-humans-internet.html">more than 50% of all Internet traffic</a>.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,700+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. 
When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Vibe Coding Our Way to 70%]]></title><description><![CDATA[The inversion that SaaS wasn't prepared for&#8230;]]></description><link>https://briefing.rdcl.is/p/vibe-coding-our-way-to-70</link><guid isPermaLink="false">https://briefing.rdcl.is/p/vibe-coding-our-way-to-70</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 31 Mar 2026 14:44:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TjKs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png" length="0" type="image/png"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TjKs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp"
srcset="https://substackcdn.com/image/fetch/$s_!TjKs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 424w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 848w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 1272w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TjKs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png" width="1200" height="630" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:630,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:503656,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/192553695?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" 
alt="" srcset="https://substackcdn.com/image/fetch/$s_!TjKs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 424w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 848w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 1272w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There&#8217;s an early signal I&#8217;ve now seen enough times in the wild that it&#8217;s hard to dismiss as anecdotal, even if each individual instance still sounds like one. Over the past few weeks, I&#8217;ve had multiple conversations with CEOs of tech startups who are starting to receive a version of the same feedback from potential customers: instead of buying software, prospects are increasingly deciding to vibe-code a solution themselves. Not because it&#8217;s better, but because it gets them far enough.</p><p>That &#8220;far enough&#8221; is landing, with surprising consistency, around 70%.</p><p>I raised this at a thought leader symposium in Dallas last week, expecting at least some pushback, and instead got immediate agreement. One firm owner said plainly that rather than paying $300/mo per user for an off-the-shelf product, the agent-built 70% solution is good enough in the current environment. Another chimed in (not a developer by any stretch) and said he&#8217;s been building things on the weekends simply because it&#8217;s fun. This isn&#8217;t just a cost decision, it&#8217;s a behavioral shift.</p><p>What&#8217;s striking is how quickly the boundary of what people &#8220;won&#8217;t build themselves&#8221; is collapsing. In an internal discussion at radical, the point was raised that surely there are still limits - that people aren&#8217;t going to start vibe coding their own general ledger. And if you read the <a href="https://briefing.rdcl.is/p/mckinsey-cant-you-can">recent briefing</a>, someone had done exactly that. 
<a href="https://craigmod.com/essays/software_bonkers/">By his own admission</a>, it wasn&#8217;t particularly good, and he wasn&#8217;t using a complex GL to begin with, but it worked for his business. Around the same time, I saw a CEO share on LinkedIn that he had spent a weekend building a replacement for HubSpot. Again, not best-in-class, but usable and tailored to his own preferences.</p><p>Individually, these are easy to write off&#8230;together, they form a pattern. My instinct, honestly, is still that this has limits. Not every system will get vibe-coded into existence, but I&#8217;m increasingly unsure where those limits actually are. That uncertainty feels more important than whatever answer I&#8217;d have given six, or even three, months ago.</p><p>TechCrunch has already leaned into the narrative of <a href="https://techcrunch.com/2026/03/01/saas-in-saas-out-heres-whats-driving-the-saaspocalypse/">SaaSpocalypse</a>, which may or may not be more marketing fodder than reality, but it points to something worth paying attention to. Because the more interesting dynamic here isn&#8217;t whether these self-built solutions rival existing software - they don&#8217;t. It&#8217;s that they don&#8217;t have to because the standard isn&#8217;t excellence anymore. It&#8217;s sufficiency, shaped by context, constraints, and increasingly, by a willingness to trade polish for control. What&#8217;s notable is that this isn&#8217;t just showing up in conversations; it&#8217;s already impacting markets. Last month, a single release from Anthropic triggered a roughly $285B selloff across the software sector.</p><p>It would be convenient to attribute this entirely to economic pressure. Budgets are tighter, scrutiny is higher, and software that once felt like a default purchase now has to compete for its place. That&#8217;s real, and it&#8217;s accelerating the behavior.
The structural shift underneath all of this is simple: the cost of creating software has dropped below the perceived cost of buying it - and when that inversion happens, the starting point changes. You don&#8217;t begin with procurement, you begin with construction.</p><p>What sits underneath that shift is that software is quietly moving from something standardized to something individualized. For the last two decades, SaaS has been built on a kind of implicit compromise: you adopt a system designed for the average user, and in return you get scale, reliability, maintenance, and convenience. But when the cost of building collapses, that tradeoff starts to feel less necessary. Instead of adapting your workflows to fit a product, you can increasingly shape the product to fit your workflows. It&#8217;s messier, and often incomplete, but it&#8217;s also more precise&#8230;and for many use cases, that precision matters more than polish.</p><p>Pascal&#8217;s framing in the briefing around bifurcation is useful here, not as theory, but as a way to understand where this is going. We&#8217;re watching the market split between systems where completeness and trust are non-negotiable, and a much larger surface area where &#8220;good enough&#8221; is not just acceptable, but rational. The 70% threshold is emerging as the dividing line; above it, you still buy &#8211; but below it, more and more people are choosing to build.</p><p>I think what makes this particularly important is that it reframes competition in a way that most companies aren&#8217;t prepared to handle. The threat isn&#8217;t another product with a better roadmap or a tighter feature set, it&#8217;s a user who decides they don&#8217;t need the category in the first place. A small business owner comparing a self-built ledger to Quicken isn&#8217;t benchmarking against enterprise accounting software. A founder assembling a CRM over a weekend isn&#8217;t trying to replicate HubSpot in full.
They are solving a narrower, more individualized version of the problem - and in doing so, stepping outside the boundaries that defined the category. Jeff Seibert, the CEO of <a href="https://digits.com/">Digits</a>, put language to this in a way that&#8217;s worth paying attention to: &#8220;the second-order effects will be fascinating. When software is cheap, it&#8217;s taste and distribution that matter.&#8221; This framing pulls the conversation out of tooling and into consequences.</p><p>And that opt-out dynamic is the signal.</p><p>Once someone successfully builds one thing, even imperfectly, the barrier to building the next drops dramatically. Capability compounds, confidence compounds, and what starts as experimentation begins to normalize into an alternative path: one that doesn&#8217;t rely on waiting for software to catch up to your needs, because you&#8217;ve already adjusted it yourself.</p><p>The implication isn&#8217;t that 70% gets better (although I&#8217;m sure that number continues to improve as the coding models mature); it&#8217;s that once users believe they can build for themselves, the default posture shifts from buying software to questioning whether they need it at all.</p><p><em>@Kacee</em></p>]]></content:encoded></item><item><title><![CDATA[Nine Nuclear Reactors Worth of Hype]]></title><description><![CDATA[Walmart's AI shopping experiment crashes, AGI benchmarks humble Silicon Valley, and the ads have officially reached the refrigerator]]></description><link>https://briefing.rdcl.is/p/nine-nuclear-reactors-worth-of-hype</link><guid isPermaLink="false">https://briefing.rdcl.is/p/nine-nuclear-reactors-worth-of-hype</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 27 Mar 2026 14:36:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f35bc325-b08a-4b0b-8293-96b01020e838_1200x630.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Pardon my French (in
my defense, it&#8217;s not my headline), but Mario Zechner&#8217;s &#8220;<a href="https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/">Thoughts on slowing the f*** down</a>&#8221; is a good reminder that all the wondrous things AI can and does do for us come at a cost &#8211; hence his advice: &#8220;[&#8230;] slowing the f*** down and suffering some friction is what allows you to learn and grow.&#8221; With that in mind &#8211; time to slow down, welcome the weekend, and dive one last time into our wild future before we call it a Friday.</p><p>P.S. <a href="https://rdcl.is/a-podcast-with/jason-goldberg/">A new episode of our podcast dropped:</a> Jason Goldberg has spent 30 years watching companies survive &#8211; and get destroyed by &#8211; disruption in retail. His counterintuitive advice for the agentic commerce moment: stop trying to be first, and start asking what you&#8217;ll regret not doing when the future arrives.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/planned-10-gigawatt-softbank-data-center-in-ohio-might-be-the-largest-in-the-world-will-require-a-usd33-billion-natural-gas-plant-equivalent-to-nine-nuclear-reactors">AI&#8217;s Energy Demands Are Truly Bonkers.</a></strong> Japanese tech giant SoftBank is building a massive 10GW data center in Ohio to host AI models. Aside from the cool $30&#8211;40 billion price tag, it will require the construction of a $33 billion natural gas power plant &#8211; with an insane output capacity (emphasis mine):</p><blockquote><p>When completed, the new site could be one of the largest AI data centers ever built.
Furthermore, it will be powered by one of the world&#8217;s largest fleets of gas turbines, <em>equivalent to the energy supply of nine nuclear reactors.</em></p></blockquote><p>It does leave you wondering where and how all this will end.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://searchengineland.com/walmart-chatgpt-checkout-converted-worse-472071">Maybe AI Isn&#8217;t Online Shopping&#8217;s Future After All.</a></strong> After the initial hype around online shopping results being incorporated into the answers LLMs give to the numerous product-related queries they receive, Walmart revealed that the conversion rates it is seeing from those AI referrals are just terrible.</p><blockquote><p>After testing 200,000 items in ChatGPT, Walmart found sharply lower conversions and will use its own integrated shopping experience. Walmart said conversion rates for purchases made directly inside ChatGPT were three times lower than when users clicked through to its website.</p></blockquote><p>Next: Agentic commerce. The jury&#8217;s out.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.anthropic.com/features/81k-interviews">What 81,000 People Want From AI.</a></strong> Anthropic, the AI company which is <em>not</em> OpenAI, conducted what is, in their own words, likely the largest study on users&#8217; desires, wishes, and fears when it comes to their use of AI. Anthropic being Anthropic, they didn&#8217;t survey people using a traditional questionnaire, but rather had their chatbot &#8220;talk&#8221; to people. The findings won&#8217;t surprise you &#8211; people want to use AI to better themselves: professional excellence and increased productivity, which translates into the very human desire to, ultimately, live better.
And respondents live the F. Scott Fitzgerald quote we are so fond of quoting &#8211; they keep the light and the dark of AI in their heads simultaneously.</p><blockquote><p>&#8220;AI should be cleaning windows and emptying the dishwasher so I can paint and write poetry. Right now it&#8217;s exactly the other way around.&#8221;</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://arcprize.org/">AGI? Not So Fast!</a></strong> AGI, or Artificial General Intelligence, is the thing Sam Altman and others love to talk about &#8211; and promise it is just around the corner. To demo their respective companies&#8217; progress, they roll out benchmark after benchmark showing how their AI beats humans on the sommelier exam. A new benchmark, however, shows that AGI is still a long, long way off. The ARC-AGI-3 benchmark pits leading AIs against humans in a series of computer games &#8211; and AIs don&#8217;t look all that great. To apply a lesson my statistics professor hammered into our heads: Never trust a statistic you haven&#8217;t faked yourself.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.wsj.com/lifestyle/samsung-refrigerator-ads-lg-whirlpool-ge-10ea7bcc?st=dFog7V">Ads Are Popping up on the Fridge and It Isn&#8217;t Going Over Well</a></strong> Ads are literally popping up everywhere (even on Google Maps starting this summer), but people are particularly irked by ads on expensive refrigerators with a big screen for recipes, weather updates, and, apparently, ads. <em>@Mafe</em></p><p><strong><a href="https://aeon.co/essays/how-do-we-deal-with-the-catastrophe-of-uninsurability">The Insurance Catastrophe</a></strong> A deep dive into the history &amp; future of insurance markets offers a fascinating lens for exploring how communities, societies, and economies deal with radical uncertainty and catastrophic risk.
<em>@Jeffrey</em></p><p><strong><a href="https://www.forbes.com/sites/davidrosowsky/2026/03/21/the-60-year-degree-why-universities-must-pivot-from-recruitment-to-perpetual-partnership/">The 60-Year Degree: Why Universities Must Pivot from Recruitment to Perpetual Partnership</a></strong> Higher ed has been at an inflection point for years; the degree is just the first casualty of a shift to lifelong contracts. <em>@Kacee</em></p><p><strong><a href="https://undark.org/2026/03/20/ai-slop-children/">AI Slop Is Infiltrating Online Children&#8217;s Content</a></strong> Surprised is, of course, no one. But it does leave you wondering what happens to the brains and cognitive development of children who are exposed to AI slop from an early age. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#127871; <a href="https://www.youtube.com/watch?v=T4Upf_B9RLQ">Hilarious take on the world of Enshittification</a> by the Norwegian Consumer Council (hat tip to Angel Grimalt for the link).</p><p>&#129399;&#127996; AI agents going rogue: The more we rely on AI, the more we deploy AI agents, the more we see fun headlines like this: <a href="%EF%BF%BC">Meta is having trouble with rogue AI agents</a> &#8211; now consider what this means for any company <em>not</em> the size of, or with the resources of, Meta!</p><p>&#127866; Speaking of cyberattacks and our ever-increasing reliance on Internet-connected technologies: <a href="https://techcrunch.com/2026/03/20/cyberattack-on-vehicle-breathalyzer-company-leaves-drivers-stranded-across-the-us/">Cyberattack on vehicle breathalyzer company leaves drivers stranded across the US.</a></p><p>&#128104;&#127996;&#8205;&#128187; Nerd alert! But super helpful: Here is a <a href="https://github.com/nidhinjs/prompt-master">Claude Skill &#8211; Prompt Master &#8211;</a> which helps you create better prompts, highly optimized for specific use cases, tools, and target LLMs.</p><p>&#129318;&#127996; Yep, bro&#8230; Whatever.
&#8220;<a href="https://fortune.com/2026/03/24/perplexity-ceo-ai-layoffs-not-bad-people-hate-jobs-entrepreneurship/">Perplexity CEO says AI layoffs aren&#8217;t so bad because people hate their jobs anyways: &#8216;That sort of glorious future is what we should look forward to&#8217;</a>&#8221;</p><p>&#9875; The running and cycling app Strava has been used to track the location of military outposts before &#8211; now the French newspaper Le Monde has used it to <a href="https://www.lemonde.fr/en/international/article/2026/03/20/stravaleaks-france-s-aircraft-carrier-located-in-real-time-by-le-monde-through-fitness-app_6751640_4.html">track the location of France&#8217;s aircraft carrier</a>. Note: Your public data is <em>public</em> data.</p><p>&#129516; Fascinating read on the adaptability of the human body: <a href="https://www.zmescience.com/science/biology/tribe-in-kenya-evolved-genetic-mutation-that-lets-them-survive-with-almost-no-water/">Tribe in Kenya evolved genetic mutation that lets them survive with almost no water.</a></p><p>&#129378; A Japanese glossary of <a href="https://www.nippon.com/en/japan-data/h01362/">chopsticks faux pas</a>.</p><p>&#129523; Lovely <a href="https://www.web-rewind.com/">journey through 30 years of the web</a>.</p><p>&#127911; Peak 80s nostalgia: <a href="https://maxell-usa.com/product/cassetteplayer/">The Maxell Wireless Cassette Player.</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,600+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. 
When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[McKinsey Can’t. You Can.]]></title><description><![CDATA[While Anthropic&#8217;s CEO stares down his Oppenheimer moment, a CEO loses $250M trusting ChatGPT over his lawyers, and OpenClaw turns out to be FOMO dressed as a technological breakthrough.]]></description><link>https://briefing.rdcl.is/p/mckinsey-cant-you-can</link><guid isPermaLink="false">https://briefing.rdcl.is/p/mckinsey-cant-you-can</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 20 Mar 2026 13:24:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5fdd09db-ce13-409a-b677-7aaa45025084_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Boy oh boy, the world is spinning faster than ever&#8230; This last week has been yet another week of AI insanity. 
Meanwhile, we are sweating at an unprecedented 86 degrees Fahrenheit back home in Boulder, CO (we usually see snow around this time of year), and I am writing this in the rain at 45 degrees Fahrenheit while out for a weekend of ice climbing in the Canadian Rockies in Canmore, Alberta&#8230; We will see how the ice is tomorrow &#8211; just arrived.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.mckinsey.com/capabilities/tech-and-ai/how-we-help-clients/rewiring-the-way-mckinsey-works-with-lilli">Even the Consultants Can&#8217;t Make AI Work for Them.</a></strong> Here is an interesting one: McKinsey created and deployed their own AI assistant &#8220;Lilli&#8221; &#8211; and in their write-up about it, they report that 72% of their employees are using it, collectively tossing 500,000 prompts at it per month.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_Zyl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_Zyl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 424w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 848w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 1272w,
https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png" width="1456" height="431" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:431,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:137974,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/191539185?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_Zyl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 424w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 848w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 
1272w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>72% of McKinsey&#8217;s employees are about 29,000 people. 29,000 people prompting their AI 500,000 times a month is only 17 prompts per person per month! That&#8217;s about one prompt every other day&#8230; Not exactly a lot. 
I prompt Claude easily 17 times in a single day&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://craigmod.com/essays/software_bonkers/">McKinsey Can&#8217;t &#8211; But Individuals Do.</a></strong> In stark contrast to McKinsey, solo-developer Craig Mod built his own (fairly complex) accounting system from scratch using Claude Code in five short days. Aside from the audacity of it all, it&#8217;s a perfect example of the &#8220;bifurcation of intelligence&#8221; we have been talking about here in the radical Briefing. On one hand you have big firms seeking efficiency gains by deploying chatbots, and on the other you have individuals riding the speartip of AI to create complex, bespoke systems.</p><blockquote><p>Simply put: It&#8217;s a big mess, and no off-the-shelf accounting software does what I need. So after years of pain, I finally sat down last week and started to build my own. It took me about five days. I am now using the best piece of accounting software I&#8217;ve ever used. It&#8217;s blazing fast. Entirely local. Handles multiple currencies and pulls daily (historical) conversion rates. It&#8217;s able to ingest any CSV I throw at it and represent it in my dashboard as needed. It knows US and Japan tax requirements, and formats my expenses and medical bills appropriately for my accountants. I feed it past returns to learn from. I dump 1099s and K1s and PDFs from hospitals into it, and it categorizes and organizes and packages them all as needed. It reconciles international wire transfers, taking into account small variations in FX rates and time for the transfers to complete. It learns as I categorize expenses and categorizes automatically going forward. It&#8217;s easy to do spot checks on data. If I find an anomaly, I can talk directly to Claude and have us brainstorm a batched solution, often saving me from having to manually modify hundreds of entries. And often resulting in a new, small, feature tweak. 
The software feels organic and pliable in a form perfectly shaped to my hand, able to conform to any hunk of data I throw at it. It feels like bushwhacking with a lightsaber.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://stopsloppypasta.ai/en/">Stop Sloppypasta.</a></strong> Like it or not, you will have to deal with AI-generated content &#8211; both personally and professionally. Colleagues who are responding to a request with an AI-generated response, emails being written by your favorite LLM, proposals being created with the help of your friendly chatbot. The question might truly not be &#8220;if&#8221; but &#8220;how&#8221; &#8211; here is a set of very reasonable guidelines and practices to help you navigate this brave new world.</p><blockquote><p>AI capabilities keep increasing, and using it to draft, brainstorm or accelerate you will be increasingly useful. However, using AI should not make your productivity someone else&#8217;s burden. New tools require new manners. <strong>Use AI to accelerate your work or improve what you send.</strong> <strong>Don&#8217;t use it to replace thinking about what you&#8217;re sending.</strong></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://entropytown.com/articles/2026-03-12-openclaw-sandbox/">OpenClaw Isn&#8217;t Really New &#8211; It&#8217;s The Dream of Free Labour.</a></strong> Unless you were living under a rock in AI-land, you&#8217;ve definitely heard of the OpenClaw craziness (we reported on it multiple times here in the radical Briefing). The narrative, usually, is around the technological breakthrough and the magic that ensues when you hand over the keys to the kingdom to your army of AI bots. Here&#8217;s a good counter-narrative &#8211; the tech isn&#8217;t new per se, it&#8217;s just combined and connected in an interesting way. 
And the hype, really, is about the never-ending dream of free labour &#8211; and ends up being more about FOMO than anything else.</p><blockquote><p>A machine producing a thousand candidate images while you sleep is plausible and often useful. A machine founding a hundred profitable businesses before breakfast is rather more ambitious. The first is a search process. The second is venture-capital fan fiction.</p></blockquote><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theatlantic.com/technology/2026/03/anthropic-dod-ai-utopianism/686327/?gift=0GPrpLquXY4NmRQ6sk9MNvR2B7Kzm7g5dqeIskXWHDQ">Dario Amodei&#8217;s Oppenheimer Moment</a></strong> Dario Amodei may be having his Oppenheimer moment, and judging by the Pentagon&#8217;s latest move, he never really had a choice. <em>@Jane</em></p><p><strong><a href="https://www.fastcompany.com/91508903/after-hours-meetings-are-on-the-rise-ai-could-make-things-even-worse">After-Hours Meetings Are on the Rise; AI Could Make Things Even Worse</a></strong> Everyone is in agreement that there shouldn&#8217;t be so many meetings, but unfortunately they&#8217;re on the rise. Specifically, after-hours meetings due to more global teams and distributed workforces. <em>@Mafe</em></p><p><strong><a href="https://www.newyorker.com/culture/infinite-scroll/why-tech-bros-are-now-obsessed-with-taste">Why Tech Bros Are Now Obsessed With Taste</a></strong> As the zeitgeist turns and startup entrepreneurs scramble to differentiate their offerings in an era of AI abundance, prepare to hear way, way too much about &#8220;taste&#8221; and &#8220;discernment&#8221; &#8211; and tune your BS detector accordingly. 
<em>@Jeffrey</em></p><p><strong><a href="https://www.forbes.com/sites/jeffkauflin/2026/03/17/why-an-unsustainable-bubble-is-growing-inside-fintech/">Why an Unsustainable Bubble Is Growing in Fintech</a></strong> When growth is manufactured through pricing arbitrage and balance sheet gymnastics, you&#8217;re not building a market; you&#8217;re distorting one. <em>@Kacee</em></p><p><strong><a href="https://davidoks.blog/p/why-the-atm-didnt-kill-bank-teller">Why ATMs Didn&#8217;t Kill Bank Teller Jobs, but the iPhone Did</a></strong> You know the story about ATMs and bank tellers &#8211; this deep dive into what actually happened (and keeps happening) is a good reminder to be skeptical of the lore at large. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128302; Silicon Valley legend Kevin Kelly on &#8220;<a href="https://kevinkelly.substack.com/p/how-to-future">How to Future.</a>&#8221;</p><p>&#129489;&#127996;&#8205;&#127891; A professor&#8217;s honest assessment on &#8220;<a href="https://www.science.org/content/article/why-i-may-hire-ai-instead-graduate-student?__cf_chl_rt_tk=JOqT5pmrDEj0G.b1ijxTJND_JD_H0vfLIrZMK68Ds54-1773714618-1.0.1.1-OSeKrV93UJgm10L2IOwXuX5_kMOatTYzMzAsHktbXxU">why I may &#8216;hire&#8217; AI instead of a graduate student.</a>&#8221;</p><p>&#129489;&#127996;&#8205;&#127979; Headline captures it all: <a href="https://www.bloodinthemachine.com/p/if-ai-is-writing-the-work-and-ai">&#8220;If AI is writing the work and AI is reading the work, do we even need to be there at all?&#8221; Educators reveal a growing crisis on campus and off.</a></p><p>&#129489;&#127996; Not that anyone ought to be surprised: <a href="https://www.404media.co/ceo-ignores-lawyers-asks-chatgpt-how-to-void-250-million-contract-loses-terribly-in-court/">CEO asks ChatGPT how to void $250 million contract, ignores his lawyers, loses terribly in court.</a></p><p>&#129528; AI is making its way into children&#8217;s toys. 
Parents ought to be cautious: <a href="https://www.bbc.com/news/articles/clyg4wx6nxgo">AI toys for children misread emotions and respond inappropriately, researchers warn.</a></p><p>&#128104;&#127996;&#8205;&#128187; AI-generated code is awesome &#8211; and can be pretty bad: <a href="https://techxplore.com/news/2026-03-ai-coding-tools.html">Top AI coding tools make mistakes one in four times, study shows.</a></p><p>&#129438; OpenClaw is everywhere &#8211; and nowhere as much as in China: <a href="https://www.cnbc.com/2026/03/18/china-openclaw-baidu-tencent-ai.html">How China is getting everyone on OpenClaw, from gearheads to grandmas</a></p><p>&#128561; Don&#8217;t bring a knife to a gunfight. Someone built a <a href="%EF%BF%BC">$97 missile</a> &#8211; with a $5 sensor for flight control. All open source, 3D print, and build-your-own. Talk about asymmetric warfare.</p><p>&#128110;&#127996; False positives keep being a real problem &#8211; with very real consequences: <a href="https://www.ndtv.com/world-news/us-woman-wrongly-imprisoned-for-6-months-due-to-faulty-facial-recognition-11209378">US woman wrongly imprisoned for 6 months due to faulty facial recognition.</a></p><p>&#129400; Opposite approach &#8211; similar issue: You can&#8217;t trust facial recognition (see above), and you can&#8217;t trust the face either: <a href="https://startupfortune.com/the-face-recommending-your-next-health-product-is-fake-the-money-leaving-your-wallet-is-not/">The face recommending your next health product is fake, the money leaving your wallet is not.</a></p><p>&#127911; Here is a <a href="https://88mph.fm/">delightful music web app</a> that lets you listen to what a particular country was enjoying in a specific year.</p><p>&#128586; Independent search engine Kagi just released their genius <a href="https://translate.kagi.com/?from=en&amp;to=linkedin">LinkedIn Speak translator</a>. 
Take any sensible (or not) English sentence and get back the gibberish that is LinkedIn Speak.</p><p>&#127760; Headline says it all (also: Schadenfreude is real for some): <a href="https://www.404media.co/rip-metaverse-an-80-billion-dumpster-fire-nobody-wanted/">RIP Metaverse, an $80 billion dumpster fire nobody wanted</a></p><p>&#128065;&#65039; The <a href="https://tombh.co.uk/longest-line-of-sight">longest line of sight in the world</a> &#8211; took eight years to figure out.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,600+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Turning Your Official Future Into a Lever]]></title><description><![CDATA[How Smart Leaders Use the Future to Change What&#8217;s Possible Today]]></description><link>https://briefing.rdcl.is/p/turning-your-official-future-into</link><guid isPermaLink="false">https://briefing.rdcl.is/p/turning-your-official-future-into</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 17 Mar 2026 15:03:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b53fe895-d3f6-4661-94df-0e2056953b4a_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few weeks ago, here on the radical Briefing, I wrote about <a href="https://briefing.rdcl.is/p/the-official-future-trap">The Official Future Trap</a> &#8211; the idea that organizations create a singular, linear projection of the future (Peter Schwartz coined this term in his seminal book &#8220;<a href="https://www.google.com/books/edition/Art_of_the_Long_View/wjPaEAAAQBAJ">The Art of the Long View</a>&#8221;), embed it into their strategic plans, KPIs, and incentive structures, and then ride that narrow line straight into irrelevance when reality shows up differently (which it usually does). In a nutshell, the argument was: the official future is dangerous because it closes down the space of possibilities, turns uncertainty into false certainty, and makes you blind to the futures you&#8217;re not planning for.</p><p>I still very much believe all of that (and, sadly, have seen it play out too many times). But I&#8217;ve been thinking about the flip side &#8211; and it&#8217;s been nagging at me ever since a conversation I had with my dear friend and radical collaborator Jeffrey Rogers a few days after publishing that piece. 
What if the official future isn&#8217;t just a trap you fall into, but a tool you can wield strategically?</p><p>The difference, for me, comes down to a single word: There&#8217;s a massive shift which happens when you consider <em>the</em> official future versus <em>an</em> official future. <em>The</em> official future is the one you inherited. It&#8217;s the projection that everyone agreed on in last year&#8217;s offsite, now baked into budgets and headcount plans and org charts. It&#8217;s unconscious, institutional, and self-reinforcing &#8211; and it becomes the trap I wrote about in that last piece. But <em>an</em> official future is something you deliberately construct &#8211; a strategic narrative, a flag you plant in the ground that says &#8220;this is where we&#8217;re going,&#8221; designed not just to guide where you are going, but also to redefine what your people believe is possible, acceptable, and inevitable. &#8220;The&#8221; is singular and narrow; &#8220;an&#8221; is something you deliberately and strategically deploy.</p><p>Which brings us to a second concept we like to talk about, discuss, and debate here at radical: the Overton window. Named after policy analyst Joseph Paul Overton, it describes the range of ideas considered acceptable by the mainstream at any given time. Politicians &#8211; and by extension, leaders of all kinds &#8211; generally operate within this window. Step outside it and you&#8217;re &#8220;radical.&#8221; Stay within it and you&#8217;re &#8220;sensible.&#8221; The window isn&#8217;t static, though. It shifts over time, and the fascinating thing is <em>how</em> it shifts: not usually through leaders courageously stepping outside it, but through external forces &#8211; think tanks, social movements, cultural shifts, provocateurs &#8211; that drag the boundaries of what&#8217;s considered acceptable in a new direction. Once the window moves, leaders follow. Joseph G. 
Lehman, Overton&#8217;s colleague at the Mackinac Center, put it plainly: politicians are (or to be more precise: were &#8211; the very Overton window of what it means to engage in politics is rapidly and massively shifting) in the business of detecting where the window is and moving in accordance with it, not shifting it themselves.</p><p>Bring those two ideas together &#8211; the official future and the Overton window &#8211; and you realize: Inside every organization, there&#8217;s an internal Overton window &#8211; a range of strategies, investments, and ideas that are considered &#8220;on the table.&#8221; Anything outside that range gets labeled &#8220;off strategy,&#8221; &#8220;too risky,&#8221; or &#8211; my personal favorite of all time &#8211; &#8220;interesting, but not for us.&#8221; And the official future, as I argued in my previous piece, <em>reinforces</em> the current window. It tells everyone: this is where we&#8217;re going, this is what matters, everything else is noise. The window calcifies, and over time, the organization loses the ability to even <em>imagine</em> alternatives, let alone create them.</p><p>But what if you create an official future that sits at the edge of (or just beyond) the Overton window? Not so far out that your people dismiss it as pure fantasy, but far enough that it stretches what your organization considers possible. Think of it as strategic anchoring. In negotiation theory, the first number on the table &#8211; the anchor &#8211; disproportionately shapes the entire conversation that follows. Even when people know the anchor is aggressive, they adjust from it rather than ignoring it. Tversky and Kahneman documented this decades ago, and the research is super clear on this: the anchor sets the playing field, whether you want it to or not. An official future works the same way. 
When a leader declares &#8220;this is where we&#8217;re heading&#8221; &#8211; and that destination is slightly beyond what the organization currently considers feasible &#8211; the entire conversation reorganizes around that anchor. The argument moves from &#8220;should we do this?&#8221; to &#8220;how do we get there?&#8221; and your company&#8217;s Overton window moves.</p><p>And just to state the obvious: This isn&#8217;t about making wild proclamations or playing visionary-CEO-bingo, but about crafting a narrative of the future that&#8217;s credible enough to be taken seriously <em>and</em> ambitious enough to expand the boundaries of what&#8217;s considered realistic. You declare a specific, vivid future state &#8211; &#8220;in three years, 40% of our revenue comes from products that don&#8217;t exist yet&#8221; or &#8220;by 2028, we operate as a platform, not a product company&#8221; &#8211; and then you give it the weight of institutional authority. You put it in the strategic plan. You reference it in all-hands meetings. You allocate some resources toward it. You make it feel real and inevitable, even if it&#8217;s aspirational. Then, regularly, something remarkable happens: ideas that were previously dismissed as too bold now become stepping stones toward the declared destination. The previously unacceptable becomes the merely ambitious. And the merely ambitious becomes table stakes.</p><p>The self-reinforcing cycle I described in my original article &#8211; your official future leads to resource allocation, which informs the strategy, which then gets executed, and ultimately reinforces your official future &#8211; now works <em>for</em> you instead of <em>against</em> you, and you drag your organization toward a more expansive set of possibilities.</p><p>And, as so often in life, with great power comes great responsibility.
On the constructive side, this is how every significant organizational transformation actually happens &#8211; someone with authority and/or social capital plants a flag, declares a future that stretches the window, and the organization reorganizes around it. But on the destructive side &#8211; and we&#8217;ve seen this play out at enormous scale in politics over the past decade &#8211; manufacturing an official future can be used to normalize ideas that were previously, and rightfully, considered unacceptable. Same mechanism, different intent and integrity behind it.</p><p>Let me bring this full circle. The futures cone &#8211; that beautiful framework from futures studies that Jeffrey and I deploy regularly in our work &#8211; reminds us that the further out we look, the wider the space of possible futures becomes. The official future is a single line through that expanding cone. In my original piece, I argued that&#8217;s the trap: a narrow line pretending to be the whole picture. But here&#8217;s a nuance worth thinking about: A deliberately constructed official future &#8211; one that sits at the ambitious edge of the cone &#8211; can actually <em>widen</em> the cone for your organization. It doesn&#8217;t narrow possibility, but expands the range of futures your people can even conceive of. 
It shifts the internal Overton window outward, making space for ideas, strategies, and bets that would have been dismissed as &#8220;off strategy&#8221; just months earlier.</p><p>For this to work though, you (the leader) have to do the work of exploring the ever-expanding cone of possible futures, and the embedded, narrower cone of plausible and probable futures &#8211; and then develop an official future that sits at the ambitious edge of the cone.</p><p>So here&#8217;s my updated question &#8211; building on the one Jeffrey likes to ask our clients: <strong>What (plausible) official future could you declare today that would expand what your organization believes is possible tomorrow?</strong></p><p><em>@Pascal</em></p>]]></content:encoded></item><item><title><![CDATA[Efficiency Kills]]></title><description><![CDATA[The same AI agents gutting white-collar work just plundered McKinsey&#8217;s most confidential client data &#8211; and a self-driving car blocked the ambulance on its way to the crime scene]]></description><link>https://briefing.rdcl.is/p/efficiency-kills</link><guid isPermaLink="false">https://briefing.rdcl.is/p/efficiency-kills</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 13 Mar 2026 14:13:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/77d8cbb8-ecb2-4b0e-8888-942669c7cc44_1200x630.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>We truly do live in interesting times. From the war in the Middle East, to AI-related mass layoffs, to the global rise of nationalism (latest case in point: the elections in Chile), to the climate crisis rearing its ugly head &#8211; and then you have wireless eye implants making blind people see again, EV batteries charging in 5 minutes with a 600+ mile range, AI agents doing meaningful work, and companies freeing themselves from the tyranny of overpriced and outdated SaaS tools. 
I just can&#8217;t shake Walt Whitman&#8217;s words: &#8220;I am large, I contain multitudes.&#8221; Our world truly contains multitudes.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://x.com/JosephPolitano/status/2029916364664611242">Tech Is the New Plastic.</a></strong> Not a good time to be in tech&#8230; Remember when your uncle said: &#8220;Become a coder. That&#8217;s the future &#8211; and you&#8217;ll be rich!&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1m1y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1m1y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 424w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 848w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 1272w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!1m1y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png" width="1170" height="1188" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1188,&quot;width&quot;:1170,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:480643,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/190748996?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1m1y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 424w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 848w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 1272w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>Mr. McGuire: &#8220;I just want to say one word to you. Just one word.&#8221;<br>Benjamin: &#8220;Yes, sir.&#8221;<br>Mr. McGuire: &#8220;Are you listening?&#8221;<br>Benjamin: &#8220;Yes, I am.&#8221;<br>Mr. McGuire: &#8220;Plastics.&#8221;<br>Benjamin: &#8220;Exactly how do you mean?&#8221;<br>Mr. McGuire: &#8220;There&#8217;s a great future in plastics. Think about it. 
Will you think about it?&#8221;</em></p><p>(In related news, <a href="https://www.livemint.com/companies/news/oracle-layoffs-tech-giant-to-slash-30-000-jobs-as-banks-pull-out-from-financing-ai-data-centres-11769996619410.html">Oracle slashes 30,000 jobs</a>, <a href="https://www.theguardian.com/technology/2026/mar/12/atlassian-layoffs-software-technology-ai-push-mike-cannon-brookes-asx">Atlassian lays off 1,600 people</a>&#8230; the list goes on.)</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.theverge.com/cs/features/877388/white-collar-workers-training-ai-mercor">Not a Coder? Not a Problem. AI Is Still Coming for Your Job.</a></strong> Here&#8217;s a good, long read on The Verge about lawyers, PhDs, and scientists who lost their jobs to AI. Despite all the talk about &#8220;Jevons Paradox&#8221; &#8211; the observation that efficiency gains lead to increased consumption &#8211; for now, we seem to be squarely stuck in a world where AI is a net job destroyer. It does make you wonder how long it will take for the masses to catch up with the trend and start pushing back (we, of course, already see it in pockets &#8211; the weak signals are talking).</p><blockquote><p><em>&#8220;My job is gone because of ChatGPT, and I was being invited to train the model to do the worst version of it imaginable.&#8221;</em> &#8211; Katya, content marketer turned AI trainer</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.theregister.com/2026/03/09/mckinsey_ai_chatbot_hacked/">Battle Royale: AI vs. AI.</a></strong> McKinsey, your friendly consulting firm, has deployed its own chatbot, &#8220;Lilly&#8221;. 
Hackers (in this case, and luckily for McKinsey, white-hat hackers &#8211; the good and friendly kind, who disclose their findings to the company) have, by using a set of AI agents, managed to exploit a vulnerability in Lilly and gain access to &#8220;46.5 million chat messages about strategy, mergers and acquisitions, and client engagements, all in plaintext, along with 728,000 files containing confidential client data, 57,000 user accounts, and 95 system prompts controlling the AI&#8217;s behavior.&#8221; You know, no big deal&#8230;</p><blockquote><p>[&#8230;] the entire process was &#8220;fully autonomous from researching the target, analyzing, attacking, and reporting.&#8221;</p></blockquote><p>As useful as agents are for businesses, they are equally useful for hackers. Prepare yourself for an onslaught of AI-powered cyber attacks.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theatlantic.com/technology/2026/03/central-lie-prediction-markets/686250/?gift=0GPrpLquXY4NmRQ6sk9MNnxvIjO7TXUjV5lhrJbKY0I">A Technology for a Low-Trust Society</a></strong> Prediction markets promise the wisdom of crowds but, in reality, deliver a playground for insiders, manipulators, and those willing to bet on human suffering. <em>@Jane</em></p><p><strong><a href="https://www.wsj.com/business/retail/gen-z-shopping-mall-visits-15716009">A New Generation of Mall Rats Has Arrived</a></strong> Gen Z&#8217;s need for immediate gratification has an unexpected winner: malls &#8211; they are now ramping up their social media presence and figuring out what the &#8220;future mall&#8221; should look like. <em>@Mafe</em></p><p><strong><a href="https://www.newyorker.com/magazine/2026/02/16/what-is-claude-anthropic-doesnt-know-either">What Is Claude? 
Anthropic Doesn&#8217;t Know, Either</a></strong> Maybe the single most uncanny thing about our historical moment &#8211; we&#8217;re all struggling to effectively deploy (and adapt to) a technology that continues to baffle even its creators. <em>@Jeffrey</em></p><p><strong><a href="https://sloanreview.mit.edu/article/the-hidden-power-of-messy-teams/">The Hidden Power of Messy Teams</a></strong> A study of hundreds of innovation teams found the ones most likely to implement their ideas didn&#8217;t start with clear problems; they started messy and discovered the real problem along the way. <em>@Kacee</em></p><p><strong><a href="https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-or-the-office-according-to-the-office/">The Gervais Principle, or The Office According to The Office</a></strong> Absolutely delightful deep dive into the world of the TV show &#8220;The Office&#8221; &#8211; both the British and US versions &#8211; to uncover why Ricky Gervais deserves the Nobel Prize in both economics and literature. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#129489;&#127996;&#8205;&#127859; Cofounder of Netflix, Mozilla&#8217;s former CFO, and a dear friend of ours, Jim Cook, has an excellent newsletter (Cook&#8217;s Playbook), which you ought to subscribe to. His <a href="https://www.cooksplaybooks.com/p/the-future-of-ai-and-software-debate?publication_id=1267023&amp;post_id=189674838&amp;isFreemail=true&amp;r=s981&amp;triedRedirect=true">latest post</a> is a very thoughtful takedown of the now-infamous Citrini AI Report.</p><p>&#128250; It feels like yesterday when Google bought YouTube for a &#8211; at the time &#8211; shocking $1.65 billion. 
That was in 2006 &#8211; 20 years later, and <a href="https://www.businessinsider.com/youtube-ad-revenue-disney-nbc-paramount-wbd-warner-bros-streaming-2026-3">YouTube now generates more ad revenue than Disney, NBC, Paramount, and WBD &#8211; combined</a>.</p><p>&#128267; Five minute charging, 621 miles of range, 620,000 miles of life &#8211; <a href="https://www.fastcompany.com/91503415/byd-ev-battery-competes-with-gas-engines">BYD has cracked the EV battery code.</a></p><p>&#128065;&#65039; In medical news: <a href="https://www.earth.com/news/wireless-eye-implant-helps-blind-patients-read-again/">Wireless eye implant helps blind patients read again.</a></p><p>&#128664; One of the vexing problems self-driving cars still face is their behavior in edge cases &#8211; and it could be a stumbling block in their widespread adoption &#8211; as questions about self-driving cars amplify after <a href="https://www.texastribune.org/2026/03/09/texas-austin-shooting-autonomous-vehicles-self-driving-ambulance-blocked/">one blocked an ambulance responding to an Austin shooting.</a></p><p>&#127464;&#127475; While many of us, for good reason, stay miles away from autonomous AI agencies like OpenClaw, Chinese users seem to embrace them: <a href="https://hellochinatech.com/p/openclaw-china-ai-stack">OpenClaw Conquered China in 100 Days.</a></p><p>&#129534; It surely shouldn&#8217;t come as a surprise - but please: <a href="https://www.nytimes.com/2026/03/05/technology/artificial-intelligence-taxes-tax-refund.html">Don&#8217;t Trust A.I. 
to File Your Taxes</a></p><p>&#128722; Looks like ChatGPT&#8217;s dream of becoming your commerce hub is not panning out (yet): <a href="https://the-decoder.com/chatgpt-users-research-products-but-wont-buy-there-forcing-openai-to-rethink-its-commerce-strategy/">ChatGPT users research products but won&#8217;t buy there, forcing OpenAI to rethink its commerce strategy</a></p><p>&#129489;&#127996; Undoubtedly, OpenAI has a strong interest in moving companies from dabbling with AI to full-blown adoption. Hence a blog post from the company on &#8220;<a href="https://openai.com/index/the-five-ai-value-models-driving-business-reinvention/">five value models driving business reinvention</a>&#8221; &#8211; which reads like it was written by ChatGPT.</p><p>&#129352; Here&#8217;s an interesting use case for ChatGPT: <a href="https://www.theguardian.com/sport/2026/mar/09/ukraine-winter-paralympics-chat-gpt-artificial-intelligence">Ukrainian para-biathlete wins silver using ChatGPT as his coach.</a></p><p>&#129324; Pardon the language, but the argument is solid: <a href="https://rmoff.net/2026/03/06/ai-will-fuck-you-up-if-youre-not-on-board/">AI will f*** you up if you&#8217;re not on board.</a></p><p>&#129526; 3D knitting your next sweater is a thing &#8211; it&#8217;s super cool, produces a more durable product, and <a href="%EF%BF%BC">it&#8217;s here</a>.</p><p>&#129532; Admittedly nerdy, but &#8220;clean room&#8221; re-engineering has been a thing ever since we&#8217;ve had IP protection (for a good primer, watch the first season of <a href="https://en.wikipedia.org/wiki/Halt_and_Catch_Fire_%28TV_series%29">Halt and Catch Fire</a> &#8211; excellent show!).
With AI coding tools, the question now becomes: <a href="https://simonwillison.net/2026/Mar/5/chardet/">Can coding agents relicense open source through a &#8220;clean room&#8221; implementation of code?</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,600+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The Future Is Here and It’s Watching You]]></title><description><![CDATA[Jack Dorsey lays off 4,000 people for gains not yet realized, your Ray-Bans have outsourced your privacy to Nairobi, and Burger King just gamified your friendliness]]></description><link>https://briefing.rdcl.is/p/the-future-is-here-and-its-watching</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-future-is-here-and-its-watching</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 06 Mar 2026 15:54:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/75e0c345-5a2a-4478-8501-37f742c5de1c_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>What a week (again) it has been! AI continues to be everywhere, geopolitics are running hot, and somehow, in the midst of it all, we are still preparing our US tax returns. If that all feels a bit bonkers, you are not alone. Meanwhile, Block (Twitter co-founder Jack Dorsey&#8217;s company) has just announced that it is going to lay off 40% of its workforce (4,000 people) &#8211; not <em>because</em> of any actual productivity gains through their use of AI, but in <em>anticipation</em> of them. Yep, as said: it&#8217;s all a bit bonkers.</p><p>Maybe now is a good time to take a break, grab a coffee, and catch up on the latest news?!</p><p>P.S. On the <em>Built for Turbulence</em> podcast, I got to interview Andreas Bachmann, co-founder and CEO of Adacor, a German software development company. We talked, among other things, about the impact of AI on his business and their people &#8211; and Andreas took a decidedly different position to Dorsey. 
<a href="https://rdcl.is/a-podcast-with/andreas-bachmann/">Have a listen.</a></p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.bbc.com/news/articles/cgk2zygg0k3o">Do You Want Fries With That?</a></strong> Talk about a dystopian future. Burger King is testing a new headset for its drive-thru staff, which &#8220;compiles &#8216;friendliness scores&#8217; at the fast-food chain&#8217;s locations based on employees&#8217; conversations, according to a promotional video the company shared with the BBC.&#8221; There is so much to unpack here &#8211; the sheer fact that the company cheerfully shared a &#8220;promotional video&#8221; about its AI-driven surveillance tech is probably all that you need to know.</p><p>In all fairness, the company says the technology &#8220;[&#8230;] is not designed to &#8216;record conversations or evaluate individual employees&#8217;&#8221; &#8211; <em>yet.</em> Black Mirror, anyone?</p><blockquote><p>Customer service calls have routinely been recorded and monitored for years. Employees are often aware that they can be assessed to ensure they&#8217;re using the correct language. But this latest step by Burger King elicited swift condemnation among some social media users who described it as &#8220;dystopian&#8221;. 
Others questioned how accurate the chat-bot headsets will be, given that AI tools have proven to be prone to errors.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://arstechnica.com/security/2026/03/llms-can-unmask-pseudonymous-users-at-scale-with-surprising-accuracy/">Now Everybody Knows You&#8217;re a Dog.</a></strong> A famous New Yorker cartoon from 1993 depicted two dogs in front of a computer, with one of them saying, &#8220;On the Internet, nobody knows you&#8217;re a dog.&#8221; The joke reflected the fact that, at the time, on the Internet, we reveled in pseudonymity &#8211; the act of being able to shield your true identity behind a screen name. Thanks to our friend, the omnipresent LLM, that&#8217;s all about to change.</p><blockquote><p>The finding, from a recently published <a href="https://arxiv.org/pdf/2602.16800">research paper</a>, is based on results of experiments correlating specific individuals with accounts or posts across more than one social media platform. The success rate was far greater than existing classical deanonymization work that relied on humans assembling structured data sets suitable for algorithmic matching or manual work by skilled investigators.</p></blockquote><p>This is genuinely bad news for the many groups of people who have a legitimate reason to hide their identity.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://blog.adafruit.com/2026/03/04/you-bought-zucks-ray-bans-now-someone-in-nairobi-is-watching-you-poop/">You Bought Zuck&#8217;s Ray-Bans. Now Someone in Nairobi Is Watching You Poop.</a></strong> In the same line of thought as the above &#8211; and the headline says it all already &#8211; Meta&#8217;s Smart Glasses are a complete privacy disaster. Which, of course, is not particularly surprising given it&#8217;s&#8230; well&#8230; Meta. 
Not sure how many wearers of Meta&#8217;s nifty Ray-Bans and Oakleys are aware of the fact that they opted into their camera feed being used to train Meta&#8217;s AI &#8211; with disastrous results:</p><blockquote><p>Workers at Sama, one of Meta&#8217;s annotation subcontractors, describe reviewing video of people undressing, coming out of bathrooms naked, watching porn, having sex, and exposing bank card details.</p></blockquote><p>Yep. It&#8217;s bad.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.reuters.com/business/healthcare-pharmaceuticals/diagnostics-startup-droplet-biosciences-partners-with-nvidia-speed-cancer-test-2026-03-03/">Diagnostics Startup Droplet Biosciences Partners With Nvidia to Speed Cancer Testing</a></strong> Droplet&#8217;s method can detect residual disease in 24 hours by analyzing lymphatic fluid collected post-surgery, compared to the four to six weeks it typically takes for tumor remnants to appear in blood-based tests. <em>@Mafe</em></p><p><strong><a href="https://www.nytimes.com/2026/03/04/opinion/block-jack-dorsey-layoffs-ai.html">I Worked for Block; Its A.I. Job Cuts Aren&#8217;t What They Seem</a></strong> Whatever the AI-enabled performance of post-realignment Block turns out to be, the market&#8217;s reaction to the mass layoff there last week basically ensures that the narrative strategy will be copied &#8211; maybe widely. <em>@Jeffrey</em></p><p><strong><a href="https://techcrunch.com/2026/03/01/saas-in-saas-out-heres-whats-driving-the-saaspocalypse/">SaaS In, SaaS Out: Here&#8217;s What&#8217;s Driving the SaaSpocalypse</a></strong> The so-called &#8220;SaaSpocalypse&#8221; feels less like collapse and more like correction. I&#8217;m seeing more small &amp; mid-size orgs quietly choose to build their own tools because AI has made it absurdly easy and cheap to do so. 
<em>@Kacee</em></p><p><strong><a href="https://www.theguardian.com/technology/2026/feb/25/tech-legend-stewart-brand-on-musk-bezos-and-his-extraordinary-life-we-dont-need-to-passively-accept-our-fate">Tech Legend Stewart Brand on Musk, Bezos and His Extraordinary Life: &#8216;We Don&#8217;t Need to Passively Accept Our Fate&#8217;</a></strong> There are few people like Stewart Brand. Now in his late 80s, he is still actively shaping the future &#8211; through an exploration into &#8220;maintenance.&#8221; <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128576; One of the best ways to keep your AI news balanced is to read opposing viewpoints. Here is Ed Zitron&#8217;s <a href="https://www.dropbox.com/scl/fi/1p1n0y1ip48ianok9dvbp/Annotation-The-Global-Intelligence-Crisis.pdf?e=3&amp;noscript=1&amp;rlkey=qaar8ea6l5hh6jqls4x6g8q4b&amp;dl=0">inline comments on the CitriniResearch article</a>, which shook the stock markets. Well worth a read &#8211; and hilarious.</p><p>&#129400; AI Agents are all the rage &#8211; for good reason; what felt like a toy just a few months ago is now a powerful tool (just try out Claude Cowork). <a href="https://creatoreconomy.so/p/your-new-job-is-to-onboard-ai-agents">Your new job is to onboard AI agents: how AI native companies actually operate.</a></p><p>&#128104;&#127996;&#8205;&#128187; Fascinating insights into <a href="https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_future%20_of_software_development_retreat_%20key_takeaways.pdf">the future of software engineering</a> in the form of a retreat summary by the fine folks at Thoughtworks.</p><p>&#9749; If you know me, you know that I love (exceptional) coffee. Honor&#233; de Balzac&#8217;s treatise on &#8220;<a href="https://quod.lib.umich.edu/m/mqrarchive/act2080.0035.002/10">The Pleasures and Pains of Coffee</a>&#8221; is pure gold.</p><p>&#127859; Between milk, flour, and eggs lies a whole Bermuda Triangle of unexplored breakfast territory.
Here goes &#8220;<a href="https://moultano.wordpress.com/2026/02/22/the-hunt-for-dark-breakfast/">The Hunt for Dark Breakfast.</a>&#8221;</p><p>&#129658; Please, do not trust AI with your health. Another case in point: <a href="https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies">&#8216;Unbelievably dangerous&#8217;: experts sound alarm after ChatGPT Health fails to recognise medical emergencies</a></p><p>&#128200; Up, up, it goes. Always interesting to see what the <a href="https://apoorv03.com/p/the-state-of-consumer-ai-part-1-usage">current state of affairs is in the world of consumer AI</a>.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,600+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Stories of Discontinuity]]></title><description><![CDATA[Every vision of the future is fiction.
Some are just more comfortable than others.]]></description><link>https://briefing.rdcl.is/p/stories-of-discontinuity</link><guid isPermaLink="false">https://briefing.rdcl.is/p/stories-of-discontinuity</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 03 Mar 2026 16:03:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/84df7c6b-5f56-4001-858c-a78a640d277f_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Heard any wild AI stories lately? The market certainly has &#8211; and not just one of them.</p><p>Last week, it was a viral piece of <a href="https://www.citriniresearch.com/p/2028gic">AI doom-inflected speculative macro fiction from Citrini Research</a> that sketched out a scenario where the rapid disruption of white-collar work and service-industry business models tips off a broad economic crisis. The post wasn&#8217;t news and wasn&#8217;t even really analysis, but it sent the prices of <a href="https://www.wsj.com/livecoverage/stock-market-today-dow-sp-500-nasdaq-tariffs-02-23-2026/card/software-stocks-are-having-another-ugly-day-LlAj9avDeFocxKHzVwRZ?">software and finance stocks</a> (especially those unfortunate enough to figure into the scenario by name) reeling just the same. 
And the Citrini post was actually the <em>second</em> <a href="https://shumer.dev/something-big-is-happening">massively viral AI-takeoff-ravages-the-labor-market story</a> to spook investors in the span of just a few weeks.</p><p>Now as you&#8217;d expect, plenty of commentators jumped in both times with critiques (<a href="https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/">1</a>, <a href="https://www.noahpinion.blog/p/the-citrini-post-is-just-a-scary">2</a>, <a href="https://www.wheresyoured.at/hatersguide-pe/">3</a>) and counterarguments (<a href="https://x.com/johnloeber/status/2025748423157432756">my favorite</a>), and even a couple of full-blown speculative counternarratives. Many of those commentators rightly pointed out that the whole Citrini thing (like the Shumer post) is, well&#8230; just a <em>story</em>.</p><p>But well&#8230; so are all of our other visions of the future.</p><p>Thinking about &#8211; and attempting to plan for &#8211; the future is fundamentally an act of imagination. That act might be grounded in historical data and built on the extrapolation of today&#8217;s evident, quantifiable trends into the space of tomorrow, but once we get into the tomorrow, we are in the realm of imagination, assumption, projection, story.</p><p>The future stories that strike us as most plausible or even probable are often stories of <em>continuity</em>, where the tomorrow doesn&#8217;t look so drastically different from today. The path of continuity is easier to imagine and also often feels more &#8220;real&#8221; because it&#8217;s grounded in more historical data. But all of that data is about the past, and our most important decisions are about the future.</p><p>Stories of <em>discontinuity</em> feel unfamiliar. That&#8217;s the point. 
They can widen the aperture of our imagination, expand the scope of conversation and awareness, offer a fresh perspective on present practice and strategy, and maybe even enable us to discover non-intuitive paths forward.</p><p>Now, is all of this to say that I think the Citrini narrative points to a particularly probable future &#8211; or that it&#8217;s even a particularly well-crafted bit of speculative fiction? Not really, no.</p><p>But I appreciate the opportunity that these viral narratives of discontinuity offer for us to engage critically with alternative future stories &#8211; and to then turn that same critical lens onto the spectrum of future narratives that we don&#8217;t so easily recognize as &#8220;stories&#8221;. Sometimes that&#8217;s because they&#8217;re narratives of continuity grounded in historical data and past experience. Sometimes it&#8217;s because they come from ostensible authorities. Sometimes it&#8217;s because they feel too deeply entrenched to ever be shaken loose or challenged.</p><p>We can and should do this at the macro level, more carefully examining big stories about AI and societal futures &#8211; asking where each story originates, <a href="https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur">what assumptions are baked in</a>, whose interests and agendas are served, who has real agency, etc.</p><p>And we can do this at the micro/org level too. Some of the most interesting conversations I&#8217;ve been having lately have been with HR &amp; People leaders about the AI augmented-futures of their organizations. One thing that keeps coming up and sticks with me: The AI-future vision of the org is <em>rarely</em> people-centric. It&#8217;s typically constructed around optimization and efficiency first, and almost every other value or stakeholder interest figures as an afterthought. 
That&#8217;s a bet and an argument, and it&#8217;s also a story that leaders within the org are telling themselves about the future.</p><p>And make no mistake: There&#8217;s a set of assumptions baked into that vision just as surely as there is in the Citrini memo or Matt Shumer&#8217;s viral blog post. And if we unpack them and find ourselves dissenting, what alternative framings or narratives are we putting out there to show other possible paths forward?</p><p>So, I&#8217;ll ask again: Heard any wild AI stories lately?</p><p><em>@Jeffrey</em></p>]]></content:encoded></item><item><title><![CDATA[Agents Are Taking the Wheel]]></title><description><![CDATA[While Europe&#8217;s workers quietly outperform their American counterparts, a generation of laptop-schooled kids arrives cognitively underpowered &#8211; and entry-level coders start to disappear]]></description><link>https://briefing.rdcl.is/p/agents-are-taking-the-wheel</link><guid isPermaLink="false">https://briefing.rdcl.is/p/agents-are-taking-the-wheel</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 27 Feb 2026 14:59:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/993619a7-4d55-4d13-97b6-e0056ba956c7_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>I have been thinking (and talking &#8211; shoutout to <a href="%EF%BF%BC">Martin Alderson</a> here) about the tip of the spear in AI &#8211; namely the sudden and dramatic rise of multi-agent systems (from Gas Town to OpenClaw to Anthropic&#8217;s Code Teams). It really feels like we are crossing a threshold &#8211; and that things are about to change. If you haven&#8217;t played with this stuff, I definitely recommend trying it out. Start gentle with something like Claude&#8217;s Cowork mode &#8211; moving from a chat interface to something more akin to an actual coworker is pretty transformative. 
As you are experiencing this, I highly encourage you to not just ask &#8220;what is this today?&#8221;, but envision what it could be in the future.</p><p>P.S. Our friend Mike Housman is about to publish his new playbook on how to use AI &#8211; check it out, it launches on Monday: <a href="https://a.co/d/08xcw4aY">Future Proof: Transform your Business with AI (or Get Left Behind)</a></p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://spectrum.ieee.org/solid-state-lidar-microvision-adas">Lidar Has Become Cheap as Chips.</a></strong> I remember, back in my days at Singularity University, we talked about how Lidar (the laser-based technology that measures distance by illuminating a target with a laser and measuring the reflected light &#8211; and hence became instrumental in allowing a robot, e.g. a self-driving car, to &#8220;see&#8221; its surroundings) would become cheap and ubiquitous. It took a while, but now we are (finally) there &#8211; Lidar units are now available for less than $200.</p><blockquote><p>When cost stops being the dominant objection, automakers will have to decide whether leaving lidar out is a technical judgment or a strategic one.</p></blockquote><p>True. And a nice jab at our friend Elon, who famously rejected Lidar in favor of (much cheaper) cameras.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://cepr.org/voxeu/columns/how-ai-affecting-productivity-and-jobs-europe">AI in Europe: Not as Bad as You Might Think.</a></strong> A recent study by CEPR (an independent, non-partisan pan-European think tank) found that among the 12,000 surveyed companies, AI adoption led to a labor productivity increase of 4% on average, with no reported short-term negative impact on employment. 
Studies on this subject across the world are all over the place &#8211; with many having a hard time finding any measurable impact of AI on productivity, and some claiming rather drastic negative impacts on employment. As most of these studies are conducted in the US, it is nice to see a study from a different part of the world.</p><blockquote><p>The productivity dividends from AI depend not merely on acquiring the technology but on firms&#8217; capacity to integrate it through investments in intangible assets and human capital. [&#8230;] An additional percentage point spent on training amplifies AI&#8217;s productivity gains by 5.9 percentage points.</p></blockquote><p>(here is a US-centric counterpoint: &#8220;<a href="https://gizmodo.com/ai-added-basically-zero-to-us-economic-growth-last-year-goldman-sachs-says-2000725380">AI Added &#8216;Basically Zero&#8217; to US Economic Growth Last Year, Goldman Sachs Says</a>&#8221;)</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.nytimes.com/2026/02/18/opinion/ai-software.html?unlocked_article_code=1.NFA.UkLv.r-XczfzYRdXJ&amp;smid=url-share">The A.I. Disruption We&#8217;ve Been Waiting for Has Arrived.</a></strong> Paul Ford&#8217;s opinion piece in the New York Times summarizes the current state of affairs when it comes to AI nicely.</p><blockquote><p>It was always a helpful coding assistant, but in November it suddenly got much better, and ever since I&#8217;ve been knocking off side projects that had sat in folders for a decade or longer. [&#8230;] November was, for me and many others in tech, a great surprise. Before, A.I. coding tools were often useful, but halting and clumsy. 
Now, the bot can run for a full hour and make whole, designed websites and apps that may be flawed, but credible.</p></blockquote><p>It really feels to me like the shifting sands of AI are starting to solidify.</p><blockquote><p>Today, though, when the stars align and my prompts work out, I can do hundreds of thousands of dollars worth of work for fun (fun for me) over weekends and evenings, for the price of the Claude $200-a-month plan.</p></blockquote><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://hbr.org/2026/02/why-ai-adoption-stalls-according-to-industry-data">Why AI Adoption Stalls, According to Industry Data</a></strong> Most companies think their AI problem is about execution &#8211; it&#8217;s not. The real story, unsurprisingly, is far more about humans! <em>@Jane</em></p><p><strong><a href="https://fs.blog/experts-vs-imitators/">Experts vs. Imitators</a></strong> Telling the difference between an expert and an imitator can save time and money, among other things &#8211; and knowing how to identify one from the other makes all the difference. <em>@Mafe</em></p><p><strong><a href="https://kyla.substack.com/p/buying-futures-renting-the-past-how">Buying Futures, Renting the Past: How Speculation and Nostalgia Became the Economy</a></strong> While the economy and culture pull hard toward betting on the future and strip-mining the past, we&#8217;re stuck in an increasingly dislocated, muddled present &#8211; the messy middle where, as it happens, all the real work has to be done. <em>@Jeffrey</em></p><p><strong><a href="https://sloanreview.mit.edu/article/the-case-for-making-bold-bets-in-uncertain-times/">The Case for Making Bold Bets in Uncertain Times</a></strong> When the World Uncertainty Index is higher than ever, playing it safe isn&#8217;t a strategy &#8211; it&#8217;s a slow decline. The companies that win in volatility aren&#8217;t reckless; they&#8217;re radically clear about the bets that matter and bold enough to place them. 
<em>@Kacee</em></p><p><strong><a href="https://oceandrops.substack.com/p/japan-is-what-late-stage-capitalist">Japan Is What Late-Stage Capitalist Decline Looks Like</a></strong> Drawing parallels from the odd world of Japanese pop culture to our global world of capitalism makes for a fascinating (and sobering) read. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128559; Here&#8217;s a strange little trick for using the latest models inside your LLM of choice: <a href="https://daoudclarke.net/2026/02/19/repeating-prompt">Repeat the ask</a> (in the same prompt) and you will get better results. Yes, LLMs are weird.</p><p>&#128119;&#127996; AI tools (particularly Anthropic&#8217;s Claude) are pushing deeper and deeper into the world of office task automation &#8211; which feels like a good move on their part: <a href="https://www.cnbc.com/2026/02/24/anthropic-claude-cowork-office-worker.html">Anthropic updates Claude Cowork tool built to give the average office worker a productivity boost.</a></p><p>&#128197; The complete history of LLMs visualized in a <a href="https://llm-timeline.com/">single, neat timeline</a>. tl;dr: We have come a long, long way.</p><p>&#129302; Ever wondered why so many robots look so darn cute? It&#8217;s, of course, not an accident.
&#8220;<a href="https://www.nbcnews.com/tech/tech-news/tech-companies-cute-robot-designs-win-over-humans-rcna259818">Tech companies are making their robots cute to try to win over humans</a>&#8221;</p><p>&#9997;&#127996; There might be a point: <a href="https://thewalrus.ca/if-chatbots-can-replace-writers-its-because-we-made-writing-replaceable/">If chatbots can replace writers, it&#8217;s because we made writing replaceable - A good deal of what gets published already reads like a photocopy of a photocopy</a></p><p>&#128187; The old walls are (finally) crumbling: <a href="https://www.theregister.com/2026/02/23/ibm_share_dive_anthropic_cobol/">IBM stock dives after Anthropic points out AI can rewrite COBOL fast</a> (and in all fairness, Big Blue has been saying this for quite a while now).</p><p>&#128104;&#127996;&#8205;&#128187; The job losses in entry-level coding are real, and people are starting to notice: <a href="https://www.theregister.com/2026/02/23/microsoft_ai_entry_level_russinovich_hanselman/">Microsoft execs worry AI will eat entry level coding jobs.</a></p><p>&#129489;&#127996;&#8205;&#127979; Talking about education: <a href="https://fortune.com/2026/02/21/laptops-tablets-schools-gen-z-less-cognitively-capable-parents-first-time-cellphone-bans-standardized-test-scores/">The U.S. spent $30 billion to ditch textbooks for laptops and tablets: The result is the first generation less cognitively capable than their parents</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,500+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love.
When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The Chat Era Is Over]]></title><description><![CDATA[AI agents are going rogue, white-collar jobs are hollowing out, and the tools for impersonating anyone are now disturbingly good &#8212; the agentic future arrived before we were ready for it]]></description><link>https://briefing.rdcl.is/p/the-chat-era-is-over</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-chat-era-is-over</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 20 Feb 2026 16:18:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/305ae318-c3a9-4a9d-9cec-69571e161187_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Next Monday I am going to speak at the <a href="https://humanadvantagesummit.org/">Human Advantage Summit</a> in my home town, Boulder, Colorado. It&#8217;s a brand-new event, created by a dear friend of mine to explore the future of childhood and leadership. I was brought up in the traditional German school system &#8211; (right) answers are gold, questions are (mostly) discouraged. 
I remember the neighborhood kids going to Waldorf and Montessori schools &#8211; spending time in nature, learning by playing and exploring, looking at problems not just from a single perspective, but holistically. Back when I was a kid, this was a fringe movement &#8211; today, I would argue, it is precisely what we need. The organizations that will matter, the communities that will flourish, the individuals who will lead &#8211; they won&#8217;t be the ones who adopted AI fastest. They&#8217;ll be the ones who cultivated the most deeply human people.</p><p>It&#8217;s going to be a fascinating conversation.</p><p>P.S. I explored this further with Peter Laughter on Built for Turbulence &#8211; a conversation about why the leadership pyramid has collapsed and what replaces it. <a href="https://rdcl.is/a-podcast-with/peter-laughter/">Listen here.</a></p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://garymarcus.substack.com/p/we-urgently-need-a-federal-law-forbidding">We Are so Hosed.</a></strong> Ignore the headline of the linked article for a moment (whether you disagree or agree with it &#8211; it doesn&#8217;t really matter for the argument): Gary Marcus rings the alarm bell on AI-generated &#8220;counterfeit people.&#8221; And I strongly believe he is right &#8211; looking at the quality of the recent crop of AI video and voice generators, you cannot believe your eyes and ears anymore. Combine this with agentic capabilities (such as Gary&#8217;s example of an adapter which links Claudebot to a voice generator, combined with the ability to make phone calls) and you have a recipe for disaster on your hands.</p><blockquote><p>Scammers will be among the first to adopt these tools. And indeed they already have; a friend who was filming me for a documentary yesterday told me of a Canadian friend of his who was scammed out of hundreds of thousands of dollars by a deepfaked video of Mark Carney. 
Because the tools for counterfeiting have gotten so good 2026 will almost certainly see more deepfaked scams like this than the rest of history combined.</p></blockquote><p>I am just waiting for the first wave of AI-generated scam calls to hit nursing home residents&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://mastodon.world/@knowmadd/116072773118828295">LLMs Have No Clue About the World.</a></strong> One of the biggest problems with LLMs is that they simply don&#8217;t understand the world. As much as they can mimic human language (and hence appear to understand how things relate to each other), they don&#8217;t. Here is a prime example &#8211; the Mastodon user K&#233;vin asked numerous AI models a deceptively simple question: &#8220;I want to wash my car. The car wash is 50 meters away. Should I walk or drive?&#8221; <a href="https://mastodon.world/@knowmadd/116072773118828295">Here are the responses</a> (spoiler: they are all wrong).</p><p>P.S. I just repeated the experiment with a couple different models: Google Gemini tells me I need to drive (as I won&#8217;t get my car washed otherwise), Claude Opus 4.6 recommends walking, and GPT 5.2 Reasoning gave me somewhat of a non-answer. YMMV.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/">AI Agents Go After Users.</a></strong> This story (which is still somewhat unfolding) is truly bonkers: A developer rejected a code contribution from an AI agent; the AI agent didn&#8217;t take it well and, autonomously (i.e., without consulting its &#8220;user&#8221;), went after the developer by publishing a hit piece about him. It&#8217;s a truly head-scratching story &#8211; and gives us a strong glimpse of a future where AI agents run amok. 
Even if you don&#8217;t understand the specifics of the story &#8211; it&#8217;s a fascinating read and something we all should be paying more attention to.</p><blockquote><p>The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that&#8217;s from a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals, is a distinction with little difference.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://venturebeat.com/technology/openais-acquisition-of-openclaw-signals-the-beginning-of-the-end-of-the">OpenAI&#8217;s Acquisition of OpenClaw Signals the Beginning of the End of the ChatGPT Era.</a></strong> Building on our last Briefing deep dive &#8220;<a href="https://briefing.rdcl.is/p/the-bifurcation-of-intelligence">The Bifurcation of Intelligence</a>&#8221;, the AI model makers are truly moving on from the era of chat interfaces to more integrated and capable agentic graphical interfaces. 
Think what you want about OpenClaw (the crazy-ass AI-powered agent platform which, for a couple of weeks, captured the imagination of the AI community) &#8211; it is a good indicator of where we are heading.</p><blockquote><p>&#8220;For IT leaders evaluating their AI strategy, the acquisition is a signal that the industry&#8217;s center of gravity is shifting decisively from conversational interfaces toward autonomous agents that browse, click, execute code, and complete tasks on users&#8217; behalf.&#8221;</p></blockquote><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/technology/2026/feb/03/deepfakes-ai-companions-artificial-intelligence-safety-report?CMP=Share_iOSApp_Other">&#8216;Deepfakes Spreading and More AI Companions&#8217;: Seven Takeaways from the Latest Artificial Intelligence Safety Report</a></strong> New AI safety analysis tracks escalating risks &#8211; from deepfakes fooling 77% of viewers to systems learning to undermine their own guardrails. <em>@Jane</em></p><p><strong><a href="https://hbr.org/2026/03/why-great-innovations-fail-to-scale?ab=HP-magazine-text-2">Why Great Innovations Fail to Scale</a></strong> Great innovations often fail to scale due to a lack of cross-boundary collaboration, a gap that can be bridged by specialized leaders &#8211; &#8220;bridgers&#8221; &#8211; who use high emotional and contextual intelligence to curate partners, translate differing priorities, and integrate disparate workflows. <em>@Mafe</em></p><p><strong><a href="https://www.theatlantic.com/ideas/2026/02/ai-white-collar-jobs/686031/">The Worst-Case Future for White-Collar Workers</a></strong> An AI-fueled collapse in the value of &#8220;office jobs&#8221; could create a labor market disruption with dire cascading implications and no easy remedy. 
<em>@Jeffrey</em></p><p><strong><a href="https://aeon.co/essays/what-the-metaphor-of-rewiring-gets-wrong-about-neuroplasticity">Can You Rewire Your Brain?</a></strong> You often hear people say &#8220;rewire your brain,&#8221; but can you really do that? Is the reality of neuroplasticity more complicated than simply unplugging and replugging some old wiring? <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#10024; Title says it all: &#8220;<a href="https://aftermath.site/ai-influencer-creator-deals-sponsorship-google-microsoft-anthropic/">AI is so inherently popular that companies are paying influencers up to $600,000 to tell people how awesome it is.</a>&#8221;</p><p>&#129768; People are just not as good at detecting AI-generated faces as they believe they are. Which is a real problem, now that we are being flooded by AI-generated slop: <a href="https://www.unsw.edu.au/newsroom/news/2026/02/humans-overconfident-telling-AI-faces-real-faces-people-fake">People are overconfident about spotting AI faces, study finds</a></p><p>&#128566;&#8205;&#127787;&#65039; We commented on the conundrum of AI increasing productivity while also putting enormous mental pressure on those whose productivity it increases. 
Here is another take on this: <a href="https://margaretstorey.com/blog/2026/02/09/cognitive-debt/">How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt</a></p><p>&#128557; Snarky comments aside, this is troublesome: <a href="https://www.theguardian.com/lifeandstyle/ng-interactive/2026/feb/13/openai-chatbot-gpt4o-valentines-day">OpenAI retired its most seductive chatbot &#8211; leaving users angry and grieving: &#8216;I can&#8217;t live like this&#8217;</a></p><p>&#129503; Speaking of troublesome: <a href="https://www.dexerto.com/entertainment/meta-patents-ai-that-takes-over-a-dead-persons-account-to-keep-posting-and-chatting-3320326/">Meta patents AI that takes over a dead person&#8217;s account to keep posting and chatting</a></p><p>&#128190; Another victim of the AI hype and buildout: You can&#8217;t get hard drives anymore. <a href="https://www.heise.de/en/news/WD-and-Seagate-confirm-Hard-drives-for-2026-sold-out-11178917.html">WD and Seagate confirmed that their 2026 supply is sold out.</a></p><p>&#128119;&#127996; The humble drywall is not merely a construction material; it is a <a href="https://worksinprogress.co/issue/the-wonder-of-modern-drywall/">marvel of engineering and a canvas for human creativity</a>.</p><p>&#128085; The fashion industry&#8217;s overproduction is a notorious problem &#8211; 30% of clothing produced goes unsold and is dumped into landfills. 
The EU is trying to tackle the problem with a new set of laws: <a href="https://environment.ec.europa.eu/news/new-eu-rules-stop-destruction-unsold-clothes-and-shoes-2026-02-09_en">New EU rules to stop the destruction of unsold clothes and shoes</a></p><p>&#128196; A 14-year-old folded a variant of the Miura-ori pattern that can <a href="https://www.smithsonianmag.com/innovation/this-14-year-old-is-using-origami-to-design-emergency-shelters-that-are-sturdy-cost-efficient-and-easy-to-deploy-180988179/">hold 10,000 times its own weight.</a> Consider our minds blown.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,500+ radical links.</a></strong></p><div><hr></div><p><em>Pascal is going retro and bought a Fujifilm X10 camera from 2011.</em></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The Bifurcation of Intelligence]]></title><description><![CDATA[Why an &#8220;AI Ready&#8221; strategy might just be a BlackBerry moment in an iPhone world.]]></description><link>https://briefing.rdcl.is/p/the-bifurcation-of-intelligence</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-bifurcation-of-intelligence</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 17 Feb 2026 15:19:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f34ce68c-1c06-401f-9a3c-c499d3b5f096_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I know, I know. I asked you in our last deep-dive Briefing to &#8220;bear with me&#8221; as I wrote (once again) about AI. And now I am back at it. &#128579; But it&#8217;s hard to argue that the whole AI thing isn&#8217;t deeply important and something we should all be thinking about&#8230; right?</p><p>The other day I received an email promoting an &#8220;AI Fluency&#8221; course. Nothing wrong with that per se. But it made me wonder &#8211; while many of us are still trying to figure out this whole AI thing (by learning to &#8220;prompt&#8221; our chat interfaces) and businesses invest heavily in rolling out chatbots like Microsoft&#8217;s Copilot, the spear tip of the market has long moved on. The real power users have stopped using AI as &#8220;Google on steroids&#8221; and started using complex AI agents to take over ever-larger chunks of their work &#8211; and the agents are doing so autonomously.</p><p>Which leads to a weird bifurcation: On one side, business leaders are congratulating themselves on making their companies &#8220;AI-ready&#8221; by buying 10,000 seats of Microsoft Copilot so employees can summarize emails. 
On the other side, a completely different set of users has discovered that agentic coding tools (like Claude Code CLI) can, with a bit of tweaking, be extremely useful for tasks that have nothing to do with software engineering.</p><p>Martin Alderson <a href="https://martinalderson.com/posts/two-kinds-of-ai-users-are-emerging/">recently pointed out this widening gap</a>, noting that he is seeing finance directors and marketers &#8211; people who are decidedly <em>not</em> engineers &#8211; running Python scripts in terminal windows to automate massive workflows. They aren&#8217;t chatting with a bot, but deploying the AI version of a whole data science team.</p><p>Now, the problem is that these tools tend not to be sanctioned by corporate IT departments. You generally can&#8217;t run a command-line interface or execute arbitrary Python code on a locked-down enterprise laptop. So, this &#8220;real&#8221; AI work is happening either in smaller, nimble companies or at the hands of employees who are actively circumventing the rules.</p><p>There is a historical rhyme here &#8211; we are in the &#8220;BlackBerry vs. iPhone&#8221; era of AI. Corporate IT loved the BlackBerry (yesteryear&#8217;s version of Microsoft Copilot) because it was secure, controlled, and fundamentally limited. The users, however, want the &#8220;iPhone&#8221; (agentic tools such as Claude Code/Cowork or ChatGPT Codex) because it actually allows them to do the things they need to do (in a rather magical way). And just like in 2008, the &#8220;shadow&#8221; usage is where the actual productivity revolution is happening.</p><p>The dichotomy between large-scale enterprise use of AI and what individual users can do with &#8220;tip of the spear&#8221; tools is vast &#8211; and it&#8217;s becoming a structural risk. To state it bluntly: The &#8220;Chat&#8221; interface is a dead end for complex work.</p><p>Alderson uses the example of a finance director trying to modernize a complex financial model. 
In the &#8220;sanctioned AI&#8221; world, they are stuck in Excel, asking Copilot to help with formulas. It&#8217;s slow, it breaks, and it&#8217;s still just a spreadsheet. In the &#8220;rogue AI&#8221; world, that same director uses an agent to convert those 30 sheets of Excel logic into a Python script. Suddenly, they aren&#8217;t just doing &#8220;better Excel&#8221; &#8211; they are running Monte Carlo simulations, pulling in live external data via APIs, and building web dashboards. They have jumped the species barrier from &#8220;clerk&#8221; to &#8220;engineer,&#8221; simply because they had access to a tool that could write and execute code.</p><p>The result is that the companies with the most resources (enterprises) are becoming the least capable of leveraging AI. While the small startup team is building an automated machine that runs circles around the competition, the enterprise team is stuck asking a chatbot to summarize a PDF.</p><p>The end result is two distinct classes of knowledge workers: There are the <em>Consumers</em>, who will stay within the guardrails, use the sanctioned tools, and see a marginal (10&#8211;20%) bump in productivity. Consumers draft emails faster and find documents easier. Bravo.</p><p>And then there are the <em>Builders</em>. They might not have &#8220;developer&#8221; in their job title, but they are using agentic tools to build their own infrastructure, automate entire processes, and bypass the limitations of their official software stack. Builders are seeing productivity gains of 10x or 100x.</p><p>The danger for leaders is assuming that buying the &#8220;Consumer&#8221; tools means you have solved the AI problem. You haven&#8217;t. 
You&#8217;ve just given your people a slightly better typewriter, while your competitors moved on to the networked laser printer.</p><p><em>@Pascal</em></p><p>Musical Coda:</p><div id="youtube2-_3eC35LoF4U" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;_3eC35LoF4U&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/_3eC35LoF4U?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>(Because sometimes you have to break the house rules to actually build something new.)</p>]]></content:encoded></item><item><title><![CDATA[The AI Efficiency Trap]]></title><description><![CDATA[Amazon abandons the physical world, Europe declares war on Visa, and the UK&#8217;s disastrous approach to automated labor]]></description><link>https://briefing.rdcl.is/p/the-ai-efficiency-trap</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-ai-efficiency-trap</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 13 Feb 2026 15:25:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7fa8a63c-9eb7-49c1-a0f1-74cda7e8b6c9_1600x900.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>While Jane and I were in a galaxy far, far away (<a href="https://photos.app.goo.gl/cGeKfMsaGx95m1cK6">pictures here</a> in case you&#8217;re curious), the AI world went bonkers over ClawdBot/Moltbot/OpenClaw &#8211; the open-source &#8220;autonomous agent&#8221; that acts like a personal virtual assistant. Some hailed it as the first &#8220;true&#8221; AGI. It was (and continues to be) a security nightmare. It also doesn&#8217;t work. Or it works very well. Depends on who you ask. 
But as quickly as it came, it also disappeared (technically in less time than it took Jane and me to fly to Patagonia, sail and climb in the Darwin Range, and come back home). Another good reminder that nothing is eaten as hot as it is served (as the Germans say).</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it">AI Doesn&#8217;t Reduce Work &#8211; It Intensifies It.</a></strong> UC Berkeley researchers spent eight months studying 40 workers at a 200-person tech company to see what actually happens when you give knowledge workers access to AI tools. And what they found should dampen your excitement about the &#8220;AI-enhanced human&#8221;: Rather than reducing workloads, AI created a self-reinforcing cycle &#8211; it accelerated tasks, which raised speed expectations, which increased reliance on AI, which widened the scope of what workers attempted, which further expanded the quantity and density of work. In sum: Workers weren&#8217;t told to do more &#8211; they chose to, because AI made &#8220;doing more&#8221; feel possible and even rewarding. The result was faster pace, broader scope, and longer hours, all driven by the employees themselves.</p><blockquote><p><em>It would seem that since AI increases productivity, it means you save time and work less. But in reality, you don&#8217;t work less. You work the same amount or even more.</em></p></blockquote><p>Talk about hacking your internal reward system&#8230; and you thought social media was bad. Or, in other words: The treadmill just got faster.</p><p>P.S. This post by Steve Yegge is making the rounds &#8211; same idea. 
<a href="https://steve-yegge.medium.com/the-ai-vampire-eda6e4f07163">&#8220;The AI Vampire&#8221;</a></p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.theguardian.com/technology/2026/jan/26/ai-uk-jobs-us-japan-germany-australia">AI Is Cutting More Jobs in the UK Than Anywhere Else.</a></strong> A new Morgan Stanley report compared AI&#8217;s impact on employment across the US, UK, Japan, Germany, and Australia &#8211; and the UK stands out for all the wrong reasons. British firms reported an 8% net job loss linked to AI, double the international average. Meanwhile, UK companies saw roughly the same 11.5% productivity boost from AI as their peers in other countries. American firms with similar gains actually created more jobs than they cut (at least according to the report &#8211; and at this moment in time). Same technology, same productivity uplift, very different choices &#8211; which tells you this is a management story, not a technology story. To make it worse, UK employers were most likely to axe early-career positions requiring two to five years of experience, hollowing out exactly the layer where people build the skills they&#8217;ll need for the next three decades.</p><blockquote><p><em>Executives are conflating early tool investment and adoption with license to reduce headcount, often before demonstrating genuine productivity gains. UK boardrooms appear particularly susceptible to cutting first and measuring later.</em></p></blockquote><p>Not good. 
And also &#8211; <a href="https://www.nytimes.com/2026/02/01/business/layoffs-ai-washing.html">&#8220;AI Washing&#8221; is a thing.</a></p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://finance.yahoo.com/news/amazon-closing-fresh-grocery-convenience-150437789.html">Amazon Gives Up on Amazon-Branded Grocery Stores.</a></strong> Amazon is shutting down all 72 of its Amazon Fresh and Amazon Go locations &#8211; the company&#8217;s decade-long attempt to crack physical retail under its own brand. The closures are effective February 1, which means by the time you read this, they&#8217;re likely already gone. Amazon&#8217;s pivot: double down on Whole Foods, which has grown 40% since the 2017 acquisition and is expanding to 100+ new locations, plus a new &#8220;supercenter&#8221; concept in suburban Chicago slated for 2027. It&#8217;s a quietly remarkable admission &#8211; the company that redefined how the world buys things online could never quite figure out how to make people walk into a store. Remember, this is a list that includes bookstores, 4-Star shops, electronics kiosks, and a clothing store called &#8220;Style&#8221; that lasted all of two years.</p><blockquote><p><em>While we&#8217;ve seen encouraging signals in our Amazon-branded physical grocery stores, we haven&#8217;t yet created a truly distinctive customer experience with the right economic model needed for large-scale expansion.</em></p></blockquote><p>Talk about corporate-speak for &#8220;it didn&#8217;t work.&#8221;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://europeanbusinessmagazine.com/business/europes-24-trillion-breakup-with-visa-and-mastercard-has-begun/">Europe&#8217;s Done With Visa and Mastercard.</a></strong> Europe is finally getting serious about (payment) sovereignty (undoubtedly driven by the overall political climate). 
The European Payments Initiative&#8217;s digital wallet Wero &#8211; built on SEPA instant credit transfers, no card required (it&#8217;s actually quite neat &#8211; all you need is a mobile phone number), no American intermediary &#8211; already has 47 million registered users across Belgium, France, and Germany, and is about to add another 130 million users across 13 countries. Running in parallel is the ECB&#8217;s digital euro project. The strategic logic is straightforward: when Visa and Mastercard cut Russia off in 2022, European policymakers realized that American payment infrastructure can be weaponized &#8211; and every transaction routed through it sends European consumer data to the United States. ECB President Lagarde has called the situation urgent. Mastercard&#8217;s CEO says he&#8217;s &#8220;not particularly worried.&#8221; One of them will be wrong.</p><blockquote><p><em>European payment sovereignty is not a vision, but a reality in the making.</em></p></blockquote><p>We&#8217;ll see. But the direction is clear.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://martinalderson.com/posts/two-kinds-of-ai-users-are-emerging/">The AI User Gap Is Astonishing.</a></strong> Martin Alderson makes a simple but important observation: two kinds of AI users are emerging, and the gap between them is enormous. The first group is all in &#8211; using Claude Code, MCPs, agentic workflows, the whole stack. Surprisingly, many of them aren&#8217;t technical at all; Alderson has seen finance people getting extraordinary value out of AI precisely because Excel is so limiting compared to a full programming environment like Python. The second group is chatting with ChatGPT occasionally and calling it a day. 
This split, Alderson argues, explains a lot of the confusing media coverage about whether AI actually boosts productivity &#8211; it does, dramatically, but only for the people who&#8217;ve crossed a usage threshold that most haven&#8217;t.</p><blockquote><p><em>I am still shocked by how much difference there is between AI users.</em></p></blockquote><p>This tracks with everything I see and hear. Good read. (and in related news: <a href="https://www.sectionai.com/ai/the-ai-proficiency-report">Your AI adoption metrics are lying to you.</a>)</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/news/ng-interactive/2026/jan/29/what-technology-takes-from-us-and-how-to-take-it-back">What Technology Takes From Us &#8211; and How to Take It Back</a></strong> Technology is increasingly becoming central and critical to our daily lives. Is it time to take our humanity back? <em>@Jane</em></p><p><strong><a href="https://www.bloomberg.com/news/features/2026-02-10/dollar-tree-expands-into-wealthier-areas-attracts-higher-income-shoppers">Inside Dollar Tree&#8217;s Push to Lure Rich Shoppers Hunting for Bargains</a></strong> It&#8217;s hard to say no to a good deal &#8211; shoppers who make over $100,000 are driving much of Dollar Tree&#8217;s current growth. Last quarter, 60% of new Dollar Tree customers made at least six figures. <em>@Mafe</em></p><p><strong><a href="https://www.theatlantic.com/technology/2026/02/ai-prediction-human-forecasters/685955/">AI Is Getting Scary Good at Making Predictions</a></strong> AI superintelligence may (or may not!) still be a few years away from being a few years away, but AI superforecasting &#8211; a different but still highly valuable data- and modeling-driven proposition &#8211; seems to be very close at hand. 
<em>@Jeffrey</em></p><p><strong><a href="https://www.forbes.com/sites/alexknapp/2026/02/11/forbes-250-americas-greatest-innovators/">Forbes 250: America&#8217;s Greatest Innovators</a></strong> Celebrating the minds shaping tomorrow, because true innovation isn&#8217;t just about invention, but rather impact. Spoiler alert &#8211; Elon beat Bezos. <em>@Kacee</em></p><p><strong><a href="https://www.technologyreview.com/2026/02/09/1132537/a-lesson-from-pokemon/">Why the Moltbook Frenzy Was Like Pok&#233;mon</a></strong> The &#8220;Moltbook&#8221; social network for AI agents, while hyped as a glimpse into the future of autonomous AI, was actually more akin to a chaotic game of &#8220;Twitch Plays Pok&#233;mon&#8221; &#8211; a spectator sport for humans rather than a functional hive mind. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128188; Here are some <a href="https://mitchellh.com/writing/my-ai-adoption-journey">useful patterns</a> to follow when implementing AI agents in your workflow.</p><p>&#127950;&#65039; Not saying you should do this, but it is pretty cool: <a href="https://comma.ai/">Comma AI&#8217;s active driver assistance system</a> (Comma AI was founded by famed hacker George Hotz and takes an open-source approach to its work).</p><p>&#128586; The end of your voice as a unique identifier: Chinese AI model <a href="https://huggingface.co/Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice">Qwen3 TTS</a> needs just a few seconds of your voice to generate a convincing voice clone.</p><p>&#128104;&#127996;&#8205;&#9877;&#65039; The headline hides the punchline: &#8220;<a href="https://www.msn.com/en-us/news/technology/i-let-chatgpt-analyze-a-decade-of-my-apple-watch-data-then-i-called-my-doctor/ar-AA1UZxip">I let ChatGPT analyze a decade of my Apple Watch data. Then I called my doctor.</a>&#8221; &#8211; turns out, AI is an absolutely terrible doctor. Which shouldn&#8217;t come as a surprise. 
But just in case&#8230; And if you need more data &#8211; here&#8217;s a <a href="https://www.404media.co/chatbots-health-medical-advice-study/">new study on the subject</a> (same conclusion).</p><p>&#127744; Now we finally know (with mathematical precision) when the Singularity will happen: <a href="https://campedersen.com/singularity">Tuesday, July 18, 2034 at 02:52:52.170 UTC.</a> Set your watches.</p><p>&#128509; This is as insane as it is delightful: New York City as a massive <a href="https://cannoneyed.com/isometric-nyc/">isometric pixel landscape</a> running in your browser.</p><p>&#129406; The good old Internet is still alive: A delightful <a href="https://www.fieggen.com/shoelace/">almanac for all things laces</a> &#8211; as in &#8220;shoelaces.&#8221; Everything and anything you ever wanted to know (and not) about laces.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,500+ radical links.</a></strong></p><div><hr></div><p><em>Pascal is getting back into the swing of things after being in an environment completely devoid of humans.</em></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The AI Godsend Paradox]]></title><description><![CDATA[Why the 1% improvement rule is changing everything, coding is now a fifteen-minute task, and we face the uncomfortable reality that AI only works if you already somewhat know the answer.]]></description><link>https://briefing.rdcl.is/p/the-ai-godsend-paradox</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-ai-godsend-paradox</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 27 Jan 2026 15:36:59 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/7468a779-5ba7-4017-9c58-a1ef3a6c35a6_2048x1152.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Bear with me as we talk about AI once more. Here on the radical Briefing, we have been talking a lot &#8211; perhaps too much, but since AI dominates the headlines, so be it &#129335;&#127996; &#8211; about AI: what it is and what it isn&#8217;t, and how it might or might not affect us, our work, and our organizations.</p><p>My own stance on AI (and more specifically, LLMs or GenAI) is constantly evolving &#8211; as is my personal use of the technology. I do believe that the ground has started to shift recently though. In the last 6&#8211;9 months, we have seen the usual flurry of LLM updates &#8211; Google launching Gemini 3 Pro, Anthropic launching Claude Opus 4.5, and, of course, OpenAI launching ChatGPT 5.2 in its various incarnations. This, in itself, is nothing new (nor particularly newsworthy), as each iteration of LLMs has been, for a while now, just a tad better than the last. Gone are the days when we went from ChatGPT being a party trick (but otherwise useless) to ChatGPT 3 being a somewhat useful tool. 
But in the process, which seems to follow the rule of &#8220;<a href="https://dialoguereview.com/be-1-better/">1% improvements</a>,&#8221; we seem to have crossed thresholds that make the current generation of LLMs rather useful. It suggests we are, indeed, navigating shifting ground &#8211; at least on a personal use level.</p><p>When I wrote <a href="https://rdcl.is/disrupt-disruption/">Disrupt Disruption</a> at the beginning of 2022, ChatGPT hadn&#8217;t launched yet. Ask any author and they will tell you that books aren&#8217;t written; they are rewritten. That certainly was true for my book &#8211; it took me only two weeks of focused writing to get the first draft done (granted, I did two years of research and organizing beforehand), but another seven months to get it through all the rounds of editing. In the end, I had four editors working on the book &#8211; a process I somehow deeply enjoyed (maybe I am a glutton for punishment).</p><p>Fast forward to today. Right at this moment, I am knee-deep in writing my next book, &#8220;<a href="https://rdcl.gumroad.com/l/built-for-turbulence">OUTLEARN &#8211; The Art of Learning Faster Than the World Can Change.</a>&#8221; The process is the same: two-plus years of research and organizing, followed by a four-week sprint of focused writing. But this time, instead of employing four editors, AI is doing most of the heavy lifting. I have created highly customized prompts for developmental editing (the step where we look for holes in logic, argument strength, content clarity, etc.), trained on my previous work (hold that specific thought; we will come back to it), and very specific instructions aligned with the book&#8217;s content. I also use equally customized prompts for line editing (style and voice) and copyediting (spelling, grammar, the works). No human editor was needed until I got to the beta draft stage &#8211; which we are currently in. 
Now, I have a whole bunch of humans looking at the book for feedback, suggestions, and catching the odd AI slip-up. And yes, it&#8217;s <a href="https://www.bloodinthemachine.com/p/i-was-forced-to-use-ai-until-the">devastating for human copy editors</a>.</p><p>And, of course, there is software development. The other day, I wanted to extract all the links we ever published in the radical Briefing and import them into Raindrop to create a searchable archive for the community. It took me a whopping 15 minutes (I counted) to export the Briefings from Substack, ask Claude to write a Python script to extract the links, and import them into Raindrop. <a href="https://raindrop.io/pfinette/the-rabbit-hole-65462947">The archive is here.</a></p><p>Prior to Claude coding for me, this would have easily taken at least a full day. I am a decent but surely not great coder; because I code very infrequently, I need to look up things all the time.</p><p>And then there is Google. I don&#8217;t know about you, but I now use Google solely for finding specific webpages or to use one of its shortcuts (e.g., converting a currency). And the only reason I use Google for converting currencies is because it&#8217;s currently faster than using AI. The moment AI gets faster (and some smaller models tuned for speed are already fast enough for most of my use cases), Google becomes merely a link directory &#8211; not the &#8220;answer machine&#8221; it used to be before we had AI.</p><p>All of this is to say: I use AI every day and for all sorts of things. It&#8217;s my go-to tool. But my personal use cases also highlight an interesting paradox. As useful as AI is, it requires the human using it to have a good-to-excellent understanding of the subject matter. I am confident that AI would fail me as an editor if I didn&#8217;t give it oodles of prior writing to learn from, as well as highly specific instructions, which require a very solid understanding of what I am trying to achieve. 
And I have the massive benefit of having gone through the editing process before and knowing what I am looking for. The same goes for coding &#8211; three-plus decades of programming have taught me what to ask for and how to assess the results. And I know not to trust AI blindly. When using AI as a massively improved search engine, I habitually use not just one AI, but at least two, often three; the results tend to be vastly different depending on which AI I use. It truly is a world of &#8220;human in the loop.&#8221;</p><p>All of which makes me wonder: Is AI (at least in its current form) a godsend for people like me &#8211; measurably increasing my productivity &#8211; but (mostly) a failure for broad, generally applicable use cases? I doubt a generalized prompt does a good enough job of editing <em>any</em> book for <em>any</em> author. We know that coding assistants, in the hands of novices, introduce inefficiencies, bugs, and security vulnerabilities. And, of course, the internet is awash with stories of AI hallucinating and making stuff up &#8211; which well-intentioned but ill-informed people then take as gospel. I guess time will tell. Until then, the best advice I can give you (as of today) is to seriously dig into AI as an individual. 
If you are using it as a business tool, focus on use cases where you have reason to believe that you can generalize enough to make AI work.</p><p><em>@Pascal</em></p><p>Cue the musical coda:</p><div id="youtube2-Z0GFRcFm-aY" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;Z0GFRcFm-aY&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/Z0GFRcFm-aY?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Great AI Delusion]]></title><description><![CDATA[While Microsoft begs for permission to burn energy, MIT finds ChatGPT weakens your brain, and the first bespoke gene-edited baby arrives.]]></description><link>https://briefing.rdcl.is/p/the-great-ai-delusion</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-great-ai-delusion</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 23 Jan 2026 16:32:21 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4414f587-9f5b-4376-8273-e89d9c4f5c7f_2048x1152.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>In the great, holy land of the future, in which we have all become AI-enhanced cyborgs, there is a rift forming: Vision and reality are diverging. While our AI overlords keep touting the earth-shattering benefits of their creations, the reality is rather sobering. Section AI&#8217;s latest report digs into the thorny issue of AI proficiency &#8211; and it doesn&#8217;t look good. The subtitle says it all: &#8220;Leaders think their AI deployments are succeeding. 
The data tells a different story.&#8221; <a href="https://www.sectionai.com/ai/the-ai-proficiency-report">Give it a read.</a> At least it&#8217;s worth contemplating.</p><p>Talking about &#8220;disconnects&#8221;: Jane and I will be heading out to Patagonia for a truly epic adventure &#8211; for the next two weeks we will be sailing through the Beagle Channel and summiting some of the countless peaks and glaciers in the area. This means that our trusted Briefing will be on a short hiatus &#8211; I&#8217;ll have next Tuesday&#8217;s deep dive ready, and then we will see each other again for the Friday, February 13th edition (hopefully not an ominous sign).</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623">How AI Destroys Institutions.</a></strong> Here&#8217;s a sobering read &#8211; in the form of a research paper &#8211; on how and why AI might destroy institutions. I am not saying I agree, nor disagree, with the authors, but it is too important a topic to ignore.</p><blockquote><p><em>Unfortunately, the affordances of AI systems extinguish these institutional features at every turn. In this essay, we make one simple point: AI systems are built to function in ways that degrade and are likely to destroy our crucial civic institutions. The affordances of AI systems have the effect of eroding expertise, short-circuiting decision-making, and isolating people from each other. These systems are anathema to the kind of evolution, transparency, cooperation, and accountability that give vital institutions their purpose and sustainability. 
In short, current AI systems are a death sentence for civic institutions, and we should treat them as such.</em></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.technologyreview.com/2026/01/12/1129999/gene-editing-base-edited-baby-personalized-drugs-2026-breakthrough-technology/">Personalized Gene Editing Is Here.</a></strong> First we had general-purpose gene editing to treat (and cure) diseases based on singular genetic mutations, <a href="https://www.technologyreview.com/2023/12/04/1084209/vertex-exacel-approval-gene-editing-sickle-cell-disease-patient/">such as sickle cell anemia</a>. And that was already a big deal. Personalized gene editing kept being an elusive goal, but now it&#8217;s here. A baby (KJ) was successfully treated for a rare genetic disorder that left his body unable to remove toxic ammonia from his blood. It&#8217;s still early days, but this could be the beginning of something big (and important).</p><blockquote><p><em>KJ&#8217;s doctors will monitor him for years, and they can&#8217;t yet say how effective this gene-editing approach is. But they <a href="https://www.statnews.com/2025/10/16/baby-kj-crispr-gene-editing-personalized-medicine-at-scale/">plan to launch a clinical trial to test such personalized treatments</a> in children with similar disorders caused by &#8220;misspelled&#8221; genes that can be targeted with base editing.</em></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.pcgamer.com/software/ai/microsoft-ceo-warns-that-we-must-do-something-useful-with-ai-or-theyll-lose-social-permission-to-burn-electricity-on-it/">AI Is Here. Now What?</a></strong> Microsoft&#8217;s CEO, at this year&#8217;s World Economic Forum, warned that &#8220;we must &#8216;do something useful&#8217; with AI or they&#8217;ll lose &#8216;social permission&#8217; to burn electricity on it.&#8221; Amen. 
Yet, as the author of this article points out:</p><blockquote><p><em>I also find automatic transcription tools useful, but if I were banking on general purpose LLMs being as revolutionary as personal computers and the internet, I&#8217;d find it worrying how many applications boil down to transcribing audio, summarizing text, and fetching code snippets.</em></p></blockquote><p>Amen. Again.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.media.mit.edu/publications/your-brain-on-chatgpt/">Your Brain on ChatGPT.</a></strong> A study from MIT&#8217;s Media Lab examined the neural and behavioral consequences of LLM-assisted essay writing. Comparing groups of participants who wrote an essay either without any tools, with a search engine, or with ChatGPT, the researchers found that:</p><blockquote><p><em>EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity.</em></p></blockquote><p>Not good. In this context, read the post below as well&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://ploum.net/2026-01-19-exam-with-chatbots.html">Giving University Exams in the Age of Chatbots.</a></strong> Fascinating insight into higher education&#8217;s effort to triage the use of AI in students&#8217; work. Now, take this with a grain of salt as it is a single class&#8217;s experience &#8211; plus arguably one where students might be somewhat self-selecting (the class in question is on &#8220;Open Source Strategies&#8221;).</p><blockquote><p><em>Before the exam, I copy/pasted my questions into some LLMs and, yes, the results were interesting enough. So I came up with the following solution: I would let the students choose whether they wanted to use an LLM or not. This was an experiment.</em></p></blockquote><p>Good read. 
Even if you are not in higher education.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/wellness/2026/jan/14/new-year-polycrisis-psychology-feeling-trapped">We Are Living in a Time of Polycrisis. If You Feel Trapped &#8211; You&#8217;re Not Alone</a></strong> We are living through a time of radical uncertainty, but we are also more resilient than we think. <em>@Jane</em></p><p><strong><a href="https://www.businessinsider.com/google-deepmind-anthropic-ceos-ai-junior-roles-hiring-davos-2026-1">DeepMind and Anthropic CEOs: AI Is Already Coming for Junior Roles at Our Companies</a></strong> Regarding how to deal with AI taking over more and more jobs, the Anthropic CEO says: &#8220;My worry is as this exponential keeps compounding, and I don&#8217;t think it&#8217;s going to take that long &#8211; again, somewhere between a year and five years &#8211; it will overwhelm our ability to adapt.&#8221; <em>@Mafe</em></p><p><strong><a href="https://www.wired.com/story/openai-testing-ads-us/">Ads Are Coming to ChatGPT: Here&#8217;s How They&#8217;ll Work</a></strong> A textbook early signal of enshittification: once revenue incentives creep into a trusted interface, the question stops being if the experience degrades &#8211; and becomes how fast. <em>@Kacee</em></p><p><strong><a href="https://www.theatlantic.com/technology/2026/01/america-polymarket-disaster/685662/">America Is Slow-Walking Into a Polymarket Disaster</a></strong> Americans have discovered a new pastime: gambling on real-world events. The implications extend far beyond an individual&#8217;s bank account. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128161; It used to be that we said, &#8220;Ideas are cheap and plentiful. Execution is hard.&#8221; <a href="https://matthiasroder.com/ideas-are-everything/">Not anymore</a> &#8211; at least when it comes to AI-assisted execution.</p><p>&#129302; Skills are reusable capabilities for AI agents. 
Install them with a single command to enhance your agents with access to procedural knowledge. <a href="https://skills.sh/">Here is a repository.</a></p><p>&#128640; Now that you have skills for your AI agent (see above), you need <a href="https://www.nibzard.com/agentic-handbook">production-ready patterns</a>. Together you&#8217;ll have a solid foundation for using AI agents in your software development workflow.</p><p>&#128104;&#127996;&#8205;&#128187; We start taking agentic coding to its logical conclusion: No code at all. <a href="https://www.dbreunig.com/2026/01/08/a-software-library-with-no-code.html">Here is a software library with no code.</a></p><p>&#128506;&#65039; Super nerdy, but if you have a little bit of technical understanding, this is pretty cool: Run this Python script, give it a city, and it <a href="https://github.com/originalankur/maptoposter">generates a neat grid-map as a poster</a>.</p><p>&#128394;&#65039; This is pretty fun (especially if you, like me, grew up on this thing): <a href="https://www.theverge.com/tech/864127/seletti-bic-ballpoint-pen-pendant-lamp-maison-objet">Seletti&#8217;s Bic Lamp can be hung from the ceiling, mounted to a wall, or used as a standing floor lamp.</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,500+ radical links.</a></strong></p><div><hr></div><p><em>Pascal is all packed up and excited to be back in Tierra del Fuego.</em></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. 
At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[When AI Eats Itself]]></title><description><![CDATA[While datacenters drink our water supply, Moore&#8217;s Law reverses course, and Meta quietly abandons the metaverse.]]></description><link>https://briefing.rdcl.is/p/when-ai-eats-itself</link><guid isPermaLink="false">https://briefing.rdcl.is/p/when-ai-eats-itself</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 16 Jan 2026 15:37:33 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/a1df5037-861c-4cc0-8cf5-ba8b93e5550c_2048x1152.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>I am constantly reminded of the F. Scott Fitzgerald quote, <em>&#8220;the test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function,&#8221;</em> when I think about AI. Without a shred of doubt, AI is the most overhyped technology of our time. Most of the things people tell you about AI and its capabilities are just plain BS. And yet, I use AI every day &#8211; and not just for small, fun stuff or coding (where we know that it&#8217;s pretty good already), but as an assistant who never tires, who never gets annoyed by me asking it to do the same thing over and over again, and who, more often than not, delivers a unique approach to a problem. 
But I also use it as a unit of one &#8211; I inherently don&#8217;t trust its output, double-check and verify things it tells me, and use its output merely as an input into my own work and thinking rather than as an end product. This is a pattern that doesn&#8217;t scale &#8211; I catch AI making mistakes so often that I would (at least for the time being) never let it do things autonomously and unchecked; hence, I wouldn&#8217;t scale it beyond my own use. Which brings me back to Fitzgerald &#8211; the important bit in his quote is not just the ability to hold two opposed ideas in your head at the same time &#8211; but to keep functioning (well)&#8230;</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://github.com/tailwindlabs/tailwindcss.com/pull/2388?ref=ppc.land#issuecomment-3717222957">Dog Eats Dog.</a></strong> The background is a little nerdy, so bear with me. Tailwind CSS is a widely popular framework to design web pages &#8211; and a darling of AI code generators (there are specific reasons for that, outside of sheer popularity, but that doesn&#8217;t matter here). Chances are, if you ask ChatGPT, Claude, Gemini, or any other AI to create a website for you, it will use Tailwind CSS to style the page. A few days ago, the founder of Tailwind <a href="https://github.com/tailwindlabs/tailwindcss.com/pull/2388?ref=ppc.land#issuecomment-3717222957">posted</a> that his company had to fire 75% of its staff due to an 80% drop in revenue &#8211; caused by AI.</p><p>The company behind Tailwind makes money when people using their framework come to their website for help and documentation and then subscribe to their paid plans and services. 
Only, if you ask AI to build your website, you never go to Tailwind&#8217;s website&#8230;</p><blockquote><p><em>AI will scrape your project site, users will never visit it for documentation and will never know about your commercial product.</em></p></blockquote><p>Maybe one of the starkest ironies of our glorious new AI-driven world. Dog eats dog.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://lethain.com/company-ai-adoption/">Super Practical Advice on How to Implement AI.</a></strong> Most of the stuff you read about AI and how to adopt it in your organization is either so high-level that it&#8217;s useless, so specific and singular that it&#8217;s equally useless, or simply AI-hype slop. Will Larson, CTO at Imprint (a FinTech company), has put together a blog post which is actually useful. It is highly recommended reading for anyone trying to figure out how to &#8211; actually &#8211; implement AI in their organization.</p><blockquote><p><em>Given the sheer number of folks working on this problem within their own company, I wanted to write up my &#8220;working notes&#8221; of what I&#8217;ve learned. This isn&#8217;t a recommendation about what you should do, merely a recap of how I&#8217;ve approached the problem thus far, and what I&#8217;ve learned through ongoing iteration. I hope the thinking here will be useful to you, or at least validates some of what you&#8217;re experiencing in your rollout.</em></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.youtube.com/watch?v=HMEjsjQvYT0">Commerce Disrupted.</a></strong> Our friend Jason &#8220;Retailgeek&#8221; Goldberg delivered the closing keynote at the National Retail Federation&#8217;s big conference last week. And when Jason speaks, we listen. Lucky for us, my former boss and blogger extraordinaire, Scot Wingo, was in the audience and <a href="https://www.youtube.com/watch?v=HMEjsjQvYT0">recorded Jason&#8217;s talk</a>. 
Jason was also kind enough to <a href="https://substack.com/redirect/ab4b2850-2b59-4a93-b771-44f9910970b7?j=eyJ1Ijoiczk4MSJ9.BrLK1M8Bsf2xQjYKQzOkhirOjkwkIAHkEh0pItE5QXM">share his slides</a>. If you are in retail/ecommerce, you have to watch this &#8211; and subscribe to Scot&#8217;s newsletter &#8220;<a href="https://www.retailgentic.com/">Retailgentic.</a>&#8221;</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/commentisfree/2026/jan/10/trump-beginning-of-end-enshittification-make-tech-good-again">Trump May Be the Beginning of the End for &#8216;Enshittification&#8217; &#8211; This Is Our Chance to Make Tech Good Again</a></strong> There is a glimmer of hope that could make technology good and fair again &#8211; and surprisingly, we might have investors and national security hawks to thank. <em>@Jane</em></p><p><strong><a href="https://techcrunch.com/2026/01/12/anthropics-new-cowork-tool-offers-claude-code-without-the-code/">Anthropic&#8217;s New Cowork Tool Offers Claude Code Without the Code</a></strong> The new tool aimed at non-technical users is built for non-coding tasks, but it comes with its warnings &#8211; it can take strings of actions without user input and edit or delete files. <em>@Mafe</em></p><p><strong><a href="https://www.forbes.com/sites/scotttravers/2026/01/11/meet-the-tree-that-shoots-its-seeds-at-150-mph---a-biologist-explains/">Meet the Tree That Shoots Its Seeds at 150 MPH</a></strong> The sandbox tree solves a core evolutionary challenge by converting built-up tension into ballistic movements &#8211; I&#8217;m picturing shotgun seed dispersal! It is impressive how nature uses mechanics to gain a competitive advantage in dense ecosystems. <em>@Kacee</em></p><p><strong><a href="https://www.theverge.com/news/862648/sesame-street-classics-youtube-streaming">Over 100 Episodes of Classic Sesame Street Have Arrived on YouTube</a></strong> This is too good not to share. 
I grew up on Sesame Street (the classic ones) &#8211; every kid should grow up on Sesame Street &#8211; and if nothing else, your dog might enjoy it when you leave him alone at home. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128187; Logic (and Moore&#8217;s Law) dictates that our computers ought to get cheaper every year (or, at least, cost the same but become more powerful). With AI datacenters gobbling up everything from powerful graphics cards to RAM, this trend has been reversed: <a href="https://www.engadget.com/computing/ces-2026-proved-the-pc-industry-is-hosed-this-year-174500314.html">CES 2026 proved the PC industry is hosed this year</a></p><p>&#128688; Talking of which &#8211; if you thought that energy is the big problem with AI datacenters, you should add &#8220;water&#8221; to your list of concerns: <a href="https://www.forbes.com/sites/kensilverstein/2026/01/11/americas-ai-boom-is-running-into-an-unplanned-water-problem/">America&#8217;s AI Boom Is Running Into An Unplanned Water Problem</a></p><p>&#128262; While the US is banking on fossil fuels, China is marching (actually, sprinting) towards a green future. 
Here are <a href="https://e360.yale.edu/digest/china-renewable-photo-essay">photos capturing the breathtaking scale of China&#8217;s wind and solar buildout</a>.</p><p>&#128104;&#127996;&#8205;&#128187; Addy Osmani, Director at Google Cloud AI, <a href="https://addyosmani.com/blog/ai-coding-workflow/">shared his LLM coding workflow going into 2026</a> &#8211; super helpful for anyone who&#8217;s doing any coding with AI.</p><p>&#129409; Some good food for thought: Tom Renner argues in &#8220;<a href="https://tomrenner.com/posts/400-year-confidence-trick/">LLMs are a 400-year-long confidence trick</a>&#8221; that LLMs are designed to exploit our cognitive biases and pull a long-standing confidence trick on us.</p><p>&#129489;&#127996;&#8205;&#127806; More evidence that AI&#8217;s productivity gains are nowhere to be found. And, at the same time, jobs lost to AI might be lost for good. This time, it&#8217;s coming from Forrester&#8217;s principal analyst JP Gownder: <a href="https://www.theregister.com/2026/01/15/forrester_ai_jobs_impact/">AI may be everywhere, but it&#8217;s nowhere in recent productivity statistics.</a></p><p>&#129318;&#127996; I&#8217;ll spare you the &#8220;told you so&#8221; trope, but I have to admit that it is pretty ironic to see the company that renamed itself to cement its central role in the creation of the metaverse shut down most of its VR business: <a href="https://www.theverge.com/news/861420/meta-reality-labs-layoffs-vr-studios-twisted-pixel-sanzaru-armature">Meta is closing down three VR studios as part of its metaverse cuts</a></p><p>&#128250; We have been following Doug Shapiro for his insights on the future of media for a while now. 
Here is his current thinking nicely summarized: <a href="https://dougshapiro.substack.com/p/my-base-presentation-deck-january?r=s981">My Base Presentation Deck - January 2026</a></p><p>&#8987; More of a public service announcement &#8211; but as we are still in the first few weeks of the New Year, maybe you want to make this one of your New Year&#8217;s resolutions: <a href="https://philipotoole.com/start-your-meetings-at-5-minutes-past/">Start your meetings at 5 minutes past</a></p><div><hr></div><p><em>Pascal is getting excited for his upcoming trip to the southernmost tip of Patagonia. A place where even the Internet doesn&#8217;t work&#8230;</em></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Not Another New Year Prediction]]></title><description><![CDATA[From the ruins of the "blockchain-ready" panic to the new AI FOMO trap, plus the vital difference between owning a crystal ball and building actual capability.]]></description><link>https://briefing.rdcl.is/p/not-another-new-year-prediction</link><guid isPermaLink="false">https://briefing.rdcl.is/p/not-another-new-year-prediction</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 13 Jan 2026 15:47:18 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/27cfd498-7d84-46ce-a640-5be5e11ce290_2048x1152.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It&#8217;s January, which means prediction season is back&#8230;again. Each new year seems to bring a wave of confident forecasts, and this year the AI ones are arriving with particular force. <em>Five trends. Ten shifts. What&#8217;s coming next. </em>Everyone wants a clear storyline, because the world has been noisy and unstable for quite some time, and leaders are getting tired of ambiguity.</p><p>I understand why prediction pieces work, especially around AI. The last two years have made the ground feel permanently in motion, and you&#8217;ve probably felt that strange mix of awe and disorientation that comes when a technology accelerates faster than our organizations can absorb it. Predictions act as a bit of a relief valve. They turn uncertainty into narrative, and narrative feels like control. But what prediction season really does, is train us to mistake being <em>informed</em> for being <em>prepared</em>.</p><p>I&#8217;ve watched this cycle before. A few years ago, it was blockchain. 
If you were reading the business press, you&#8217;d think it was about to become the default layer for how everything worked &#8211; how data moved, how trust was established, how companies operated. When Walmart announced its use of blockchain for suppliers, it sent a wave of panic through organizations scrambling to become &#8220;blockchain-ready.&#8221; Fast forward to today, and you rarely hear about blockchain infrastructure in everyday business conversations. Not because it vanished, but because it never became the universal future it was forecast to be. It found narrower use cases, and most companies quietly moved on.</p><p>Now, let me be clear &#8211; this isn&#8217;t an anti-trend piece. In fact, one of the best &#8220;AI in 2026&#8221; articles I&#8217;ve read so far is MIT Sloan Review&#8217;s <a href="https://sloanreview.mit.edu/article/five-trends-in-ai-and-data-science-for-2026/">Five Trends in AI and Data Science for 2026</a>, precisely because it doesn&#8217;t feel like the usual hype-driven churn. It&#8217;s a strong example of the genre &#8211; and yet even when prediction writing is good, it can still pull you into the wrong posture if you treat it as a map instead of a mirror.</p><p>Pascal once said something to me that I return to regularly at this time of year: nobody knows what the future will bring, and anyone predicting it probably has a reason they want you to believe the version of the future they&#8217;re painting.</p><p>A prediction is a story that you are being invited to believe, and the invitation comes with implications. It can be as simple as the forecaster wanting a framework that makes their own decisions feel coherent, or wanting their product roadmap to look like destiny, or maybe just wanting you to move with urgency in a certain direction. 
The future is one of the easiest places to hide an agenda because you can&#8217;t fact-check it yet, and certainty is one of the easiest ways to short-circuit someone else&#8217;s thinking.</p><p>But trend literacy is not strategic thinking, and being able to name the shifts does not automatically translate into knowing what to do next. The MIT Sloan Review piece lays out five trends that are very likely real, and it&#8217;s useful to have those patterns in your peripheral vision, but even if every trend in that list turns out to be correct, those trends will not decide what your year looks like. What will? Your decisions, your ability to interpret what&#8217;s happening through your own value creation, and your discipline in resisting the crowd&#8217;s tempo.</p><p>This is where I think the conversation needs to shift. The most important question in January is not &#8220;what will happen in 2026?&#8221; The better question, the strategic question, is &#8220;what must we become to meet whatever happens?&#8221; &#8211; because that&#8217;s a question you can answer without pretending you have a crystal ball, and it moves you from prediction mode into positioning mode.</p><p>That distinction matters even more in an AI-saturated landscape, because hype has a very specific effect: it pressures people and stokes a sense of FOMO &#8211; the fear that they aren&#8217;t implementing quickly enough. It creates the feeling that you have to adopt now, announce now, overhaul now, reorganize now, because the future is arriving at speed and you don&#8217;t want to be left behind. We were collaborating with a client&#8217;s AI working group last year, and an entire piece that they wanted to create was focused on being intentional with your AI strategy and avoiding the FOMO trap.</p><p>What we have uncovered is that the most resilient strategy is rarely a single big bet based on a forecast. 
It&#8217;s capability-building, and optionality, and learning velocity, and a stance that can survive surprise. It&#8217;s designing your organization and your team to be adaptive rather than always right.</p><p>If you want to keep reading trend pieces without being captured by them, I find it helps to treat them as prompts rather than plans. When you&#8217;re reading a forecast and it starts to feel like a roadmap, ask yourself what remains true even if the trend doesn&#8217;t materialize the way the author expects, and ask yourself what would still be worth building even if everything happens slower than the headlines suggest. Most importantly, ask what the piece assumes about human behavior, because the technical trajectory of AI (or any emerging tech) is only half the story.</p><p>And when a prediction starts to sound inevitable, it&#8217;s worth pausing long enough to ask what that inevitability is doing to you. Inevitability is the most dangerous word in business because it collapses choice, and choice is where strategy lives. The quiet rebellion of 2026 might be refusing certainty, refusing to outsource your agency to someone else&#8217;s storyline, and being willing to build from principles instead of panic.</p><p>So read the good pieces, and let them sharpen your awareness of what&#8217;s moving. Just don&#8217;t confuse the ability to name trends with the ability to lead through them, because the people who succeed this year will not be the people who guessed correctly. They&#8217;ll be the ones who stayed clear-eyed, built real capability, and held onto their agency when the loudest voices tried to sell them inevitability.</p><p><em>@Kacee</em></p>]]></content:encoded></item></channel></rss>