<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[radical Briefing]]></title><description><![CDATA[The future doesn’t come with a manual. But twice a week, we’ll send you the next best thing: razor-sharp insights, practical frameworks, and early signals that keep you ahead of the curve. Raw, unfiltered, and straight from the edge of innovation.]]></description><link>https://briefing.rdcl.is</link><image><url>https://substackcdn.com/image/fetch/$s_!Tkyy!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff817c064-9d79-478e-b81d-e619e9ac6652_500x500.png</url><title>radical Briefing</title><link>https://briefing.rdcl.is</link></image><generator>Substack</generator><lastBuildDate>Fri, 15 May 2026 19:08:20 GMT</lastBuildDate><atom:link href="https://briefing.rdcl.is/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[be radical Group LLC]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[rdcl@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[rdcl@substack.com]]></itunes:email><itunes:name><![CDATA[Pascal Finette]]></itunes:name></itunes:owner><itunes:author><![CDATA[Pascal Finette]]></itunes:author><googleplay:owner><![CDATA[rdcl@substack.com]]></googleplay:owner><googleplay:email><![CDATA[rdcl@substack.com]]></googleplay:email><googleplay:author><![CDATA[Pascal Finette]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The AI Hype Machine Is Eating Itself]]></title><description><![CDATA[CEOs are flexing AI code stats while workers game the leaderboards, Americans tune out, and healthcare AI invents body parts. 
The gap between the boardroom and reality has never been wider.]]></description><link>https://briefing.rdcl.is/p/the-ai-hype-machine-is-eating-itself</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-ai-hype-machine-is-eating-itself</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 15 May 2026 11:53:20 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/8f890ba9-418a-4203-8509-022945db2ac5_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>After our recent plug for Kevin Kelly and his perspective on uncertainty (and the fact that even uncertainty is now uncertain), this week Kevin is back with a thoughtful (and thought-provoking) piece on his experience with AI (&#8220;<a href="https://kevinkelly.substack.com/p/the-emergent-self-loop?publication_id=5993260&amp;post_id=196019199&amp;isFreemail=true&amp;r=s981">The Emergent Self Loop</a>&#8221;) &#8211; it&#8217;s worth a read. Disagree with it (I do, at least in part), but ponder over it a bit. As someone once said: AI is not artificial but alien intelligence.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.businessinsider.com/latest-ceo-flex-how-much-ai-code-your-company-shipped-2026-5">First It Was AI RIFs, Now It&#8217;s AI LOCs.</a></strong> One can hardly envy CEOs trying to stay on top of the AI speed train these days. First they used AI to justify their layoffs &#8211; it just sounds so much better if you fire people due to &#8220;AI-related efficiency gains&#8221; (even better if you do so &#8220;anticipating&#8221; said gains). And now we have CEOs bragging about <a href="https://techcrunch.com/2026/05/08/airbnb-says-ai-now-writes-60-of-its-new-code/">how much of the company&#8217;s code is AI-written</a>. As if that means anything?!
Both perspectives are navel-gazing at its finest&#8230;</p><blockquote><p>Move over app downloads and EBITDA &#8211; the hot metric for CEOs is now AI productivity. In interviews and on quarterly earnings calls, CEOs are flaunting stats on how much code AI agents are generating. The trend began with AI companies like Anthropic, Meta, and Google, which have been grilled about their AI investments, and has continued with other companies eager to position themselves as AI-forward. From fintech to streaming, agentic AI adoption is the new status symbol among executives.</p></blockquote><p>When will we see CEOs talking about how they are focusing on solving their customers&#8217; problems again? It would make for a refreshing change&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://yougov.com/en-us/articles/54762-most-americans-say-artificial-intelligence-ai-development-moving-too-fast-twice-as-many-ai-pessimists-as-ai-optimists-may-9-11-2026-economist-yougov-poll">Turns out, the People Are Not so Hot on AI after All.</a></strong> A recent YouGov poll found that the average American is pretty pessimistic about the prospects of AI.</p><blockquote><p>Most Americans (71%) feel that the pace of AI development is moving too fast. [&#8230;] Most Americans are skeptical that everyone will benefit economically from AI. Nearly two-thirds (64%) of Americans say that it is slightly or very unlikely that AI will create economic gains that benefit everyone.</p></blockquote><p>Not a good showing for a technology which is supposed to be the savior of humanity (or at least business). It makes you wonder how much of that perception is due to the hype and fearmongering by the fine folks who built AI. 
It surely can&#8217;t help if, for example, the CEO of Perplexity runs around and tells everyone that AI will replace them, right?</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/world/2026/may/05/brazil-craze-whistling-only-whatsapp-groups">Brazil Caught up in Craze for Whistling-only Whatsapp Groups</a></strong> Hundreds of thousands of Brazilians are currently in WhatsApp groups where the only permitted communication is&#8230; whistling. <em>@Jane</em></p><p><strong><a href="https://hubspot-state-of-aeo-report-web-view.lovable.app/">The State of Answer Engine Optimization</a></strong> Answer engine search is only going to continue to grow; AI-sourced site visitors have higher purchasing intent and a higher conversion rate than those arriving on websites from other channels. This report highlights the state of AEO and what companies are doing to get in the game. <em>@Mafe</em></p><p><strong><a href="https://www.amazon.com/Data-Makes-World-Go-Round/dp/1394390637">Data Makes the World Go &#8216;Round</a></strong> Fern Halper surfaces the disconnect happening in transformation right now: organizations layer AI on top of foundations that were already fragmented, disconnected, or poorly governed. 
<em>@Kacee</em></p><p><strong><a href="https://www.theatlantic.com/health/2026/05/soylent-protein-shake/687120/">Admit It, That Protein Shake Is Basically Soylent</a></strong> From Soylent Green to Soylent to your modern-day protein shake&#8230; I remember well the craze of &#8220;optimize your life by skipping food and going straight to the nutrients.&#8221; Heck, we fed this stuff to people at Singularity University&#8217;s executive program&#8230; <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#129489;&#8205;&#128295; Further to the point above on CEOs touting their AI-generated code and Americans increasingly souring on AI, workers are engaging in all kinds of weird behavior: <a href="https://www.reuters.com/sustainability/society-equity/meta-us-employees-organize-protest-against-mouse-tracking-tech-2026-05-12/">Meta employees launch protest against mouse-tracking tech at US offices.</a> <a href="https://www.techspot.com/news/112386-amazon-employees-using-internal-ai-tools-inflate-usage.html">Amazon employees are inflating AI usage to top leaderboards and impress managers.</a></p><p>&#127973; Maybe it is not the best idea to use LLMs for your healthcare needs?! <a href="https://www.theverge.com/health/718049/google-med-gemini-basilar-ganglia-paper-typo-hallucination">Google&#8217;s healthcare AI made up a body part &#8211; what happens when doctors don&#8217;t notice?</a> And: <a href="https://www.cbc.ca/news/canada/toronto/ai-scribe-system-hallucinations-9.7197049">Medical AI transcriber for Ontario doctors &#8216;hallucinated,&#8217; generated errors.</a></p><p>&#128221; I recently found myself in a Zoom meeting with four AI notetakers but no people &#8211; of course, I trolled the notetakers by talking gibberish.
But there is a bigger issue here: <a href="https://www.nytimes.com/2026/05/09/business/dealbook/ai-notetakers-legal-risk.html">They are making lawyers very nervous.</a></p><p>&#129465;&#8205;&#9794;&#65039; AI models becoming eerily good at cybersecurity also means AI models becoming superbly good at hacking your systems. This is not a hypothetical anymore &#8211; it&#8217;s happening in the wild: <a href="https://fortune.com/2026/05/11/google-catches-hackers-cybersecurity-warning-ai-anthropic-mythos/">&#8216;It&#8217;s here&#8217;: Google issues dire warning after catching hackers using AI to break into computers</a></p><p>&#127891; Want to learn how AI actually works? Here is a <a href="https://learnai.robennals.org/">fantastic course</a> explaining the inner workings of LLMs using math an 11-year-old can understand.</p><p>&#128373;&#65039; Wondering what your web browser knows about you? More than you might think&#8230; <a href="https://sinceyouarrived.world/taken">Scarily more.</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,800+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit.
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Speed Is the Consolation Prize]]></title><description><![CDATA[What four hours of freed time inside a CPA firm reveals about where the next wave of advantage actually comes from.]]></description><link>https://briefing.rdcl.is/p/speed-is-the-consolation-prize</link><guid isPermaLink="false">https://briefing.rdcl.is/p/speed-is-the-consolation-prize</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 12 May 2026 13:53:25 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c339be24-9a4a-4d51-99cb-ccb57284b3cb_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was recently talking with the CEO of a startup that is building AI agents for complex, process-heavy work. In their case, it&#8217;s tax preparation &#8211; one of the most time-bound, deadline-driven functions inside CPA firms. During peak season, their agents were freeing up, on average, roughly four hours per day per tax preparer. Not at the margins, but at the center of the work. And yet, the most interesting part of the conversation wasn&#8217;t the performance of the technology, but rather the response from the organizations adopting it.</p><p>They weren&#8217;t struggling to implement the agents. They were struggling to absorb the time.</p><p>In many cases, that newly available capacity didn&#8217;t translate into a rethinking of the work itself. Instead, it triggered a kind of organizational friction (yes, politics) because when time gets freed up at that scale, it begins to challenge deeply embedded assumptions about productivity, utilization, and value. In some firms, the instinct was to quietly refill the time with more of the same work. In others, it created internal tension because the system wasn&#8217;t designed to accommodate that level of slack. 
What surfaced wasn&#8217;t a technology gap, but a mindset and culture gap.</p><p>Contrast that with another firm I spoke with &#8211; similar agent deployed with similar time savings &#8211; where leadership refused to let the freed hours get reabsorbed into more returns. They made the trade explicit: every preparer owed a minimum of five hours a week to learning client advisory work, sitting in on CFO calls, and understanding what insights showed up in real conversations. Same efficiency gains from the AI implementation, but dramatically different approaches to their portfolio of time.</p><p>This is the broader shift that is just starting to come into focus. AI is not simply making work faster or more efficient &#8211; it is collapsing the time required to perform it. And that collapse is happening unevenly across functions, roles, and industries, which makes it harder to see as a single, coherent trend. But at the organizational level, the implication is consistent: the relationship between time and output is breaking down.</p><p>For decades, most operating models have been built on the assumption that more output requires more time applied to known processes. AI is disrupting that equation. When a meaningful portion of the day is no longer required for execution, the question is no longer how to optimize the work, but what the work should become. And this is where many organizations stall, because the default response is to treat freed-up time as excess capacity to be redeployed into the existing system. It feels rational, it preserves predictability, and it aligns with how performance has historically been measured.
But it misses the larger opportunity.</p><p>The more useful way to think about this is through the lens of core and edge:</p><p><strong>Core:</strong> the work that sustains the business as it exists today &#8211; repeatable, measurable, necessary.</p><p><strong>Edge:</strong> the work that shapes what the business becomes next &#8211; exploratory, undefined, harder to quantify in the near term.</p><p>Historically, the core has consumed almost all the organizational oxygen, leaving the edge to a small subset operating on the periphery. What AI introduces is the chance to rebalance the equation&nbsp;&#8211; not by eliminating the core, but by collapsing the time it requires, and in the process opening up space to build for the future.</p><p>That space is where the real leverage sits, but it is not automatically captured. In fact, most organizations will default to reinvesting that time back into the core, driving incremental gains in efficiency or volume. There is nothing inherently wrong with that, but it is unlikely to create meaningful separation. The organizations that begin to differentiate will be the ones that deliberately redirect a portion of that freed time toward the edge &#8211; activities that expand capability, deepen customer understanding, rethink products/services, and build entirely new ways of creating value.</p><p>Of course, most organizations aren&#8217;t built for this. Performance systems, incentives, and management practices are all wired to optimize known processes. So when time is freed up, the system naturally pulls people back toward the core, because that is where success is defined and measured. This is why the friction shows up as &#8220;politics&#8221; or resistance.</p><p>The real gift of AI isn&#8217;t speed... it&#8217;s choice. Organizations are being given, perhaps for the first time at scale, the ability to decide what to do with time that was previously non-negotiable. 
That choice will shape not only how work gets done, but what kinds of capabilities are built and where differentiation emerges over time.</p><p>Most will respond by doing more work, quicker.</p><p>A smaller group will respond by doing different work altogether.</p><p>The gap between those two approaches is where the next wave of advantage will be created.<br><br><em>@Kacee</em></p>]]></content:encoded></item><item><title><![CDATA[Nobody Knows. Everyone Watches.]]></title><description><![CDATA[While most organizations deploy AI and learn nothing, a few are compounding advantage at speed. The rest are building biometric checkpoints, emotional monitoring headsets, and anonymity-destroying tex]]></description><link>https://briefing.rdcl.is/p/nobody-knows-everyone-watches</link><guid isPermaLink="false">https://briefing.rdcl.is/p/nobody-knows-everyone-watches</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 08 May 2026 14:07:37 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/9f85a4ba-80d0-41d4-8967-58297efba825_1200x630.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Silicon Valley legend Kevin Kelly recently wrote an excellent piece on &#8220;<a href="https://kk.org/thetechnium/our-uncertain-uncertainties/">our certain uncertainties.</a>&#8221; It is one of those rare pieces where I highlighted nearly every sentence. I highly recommend adding it to your weekend reading list &#8211; to whet your appetite, let me just give you a stitched-together quote:</p><blockquote><p>So for the next 10-15 years we have perpetual, continuous, severe uncertainty. This is a burdensome weight because people hate uncertainty more than bad news. [&#8230;] What we end up with is a poly-X, a multi-factored unknown, an uncertainty cascade, a pervasive lack of confidence about the future, in an era of ambiguity. 
[&#8230;] The most effective response to this multi-layered persistent uncertainty is not to seek impossible stability, but to cultivate radical adaptability and radical optionality.</p></blockquote><p>Read it. I wish I were as eloquent as Kevin &#8211; the ideas and concepts he shared are very much what we have been preaching for years as well (and I am sure will sound and look familiar).</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.robert-glaser.de/when-everyone-has-ai-and-the-company-still-learns-nothing/">When Everyone Has AI and the Company Still Learns Nothing.</a></strong> We talked about a similar idea here on the Briefing before &#8211; we called it the &#8220;bifurcation of intelligence&#8221;: a world in which some companies deploy Copilot and call it a day, while others are rethinking their business models in an age of AI agents (and the rest of it). Robert Glaser digs deeper into this idea:</p><blockquote><p>But the interesting AI work does not wait for the next community meeting. It appears inside a code review, a sales proposal, a research task, a product prototype, a production incident, a test strategy, a compliance question. Or when someone figures out that for a certain class of product components, they can set up something close to a dark factory: write the intent, let the agent run a very loose loop, apply enough backpressure to keep it on track, evaluate the outcome against strong scenarios, refine the intent, and repeatedly get high-quality results. By the time the story is cleaned up enough to become a best-practice slide, the important learning has often lost its teeth. 
What made it useful was the friction: the missing context, the test that failed, the weird API behavior, the moment where the agent sprawled into nonsense and someone had to pull it back.</p></blockquote><p>And to stay in the theme of my new book <a href="https://rdcl.is/outlearn/">OUTLEARN</a>:</p><blockquote><p>The next advantage is <em>learning velocity.</em> Who finds the real patterns faster? Who moves discoveries from individuals to teams to organizational capabilities? Who builds backpressure into agentic loops, so agents can&#8217;t sprawl? Who distributes useful agent capabilities without turning them into monolithic enterprise agents that fit nobody? Who finally uses agentic engineering to make agile real, instead of just slapping AI onto the old ceremonies?</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://ia.acs.org.au/article/2026/facebook--instagram-to-check-bone-structure-for-age-estimates.html">The Age-Gated Internet is Coming.</a></strong> In the (usually) well-meaning effort to keep minors from seeing stuff they shouldn&#8217;t, regulators around the world are pushing for age-gating the Internet. What started with sites which are obviously not for children, such as porn, is now being extended to social media and a bunch of other sites. There are a good number of reasons why this is a bad idea (and why it mostly doesn&#8217;t work anyway), but one of the more important ones is that it creates all kinds of privacy issues for all of us. Meta, not one to miss a beat, decided to take the bull by the horns:</p><blockquote><p>Meta is unleashing AI that scans users&#8217; bodies &#8211; from face shape to height &#8211; in an aggressive bid to root out underage accounts on Facebook and Instagram. The company <a href="https://about.fb.com/news/2026/05/ai-age-assurance-teens/">announced</a> Tuesday it was developing &#8220;advanced AI&#8221; that includes the use of visual analysis for detecting underage accounts. 
This new visual analysis technique will enable Meta&#8217;s AI to scan photos and videos for &#8220;visual clues&#8221; about a user&#8217;s age &#8211; including one&#8217;s height and bone structure.</p></blockquote><p>Brave new world.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theatlantic.com/culture/2026/05/worker-surveillance-emotion-ai/687029/?gift=0GPrpLquXY4NmRQ6sk9MNvTqNDS2tqZ1_A2ojZ2pLj4">The Rise of Emotional Surveillance</a></strong> Burger King&#8217;s AI headset assistant is named Patty, and she&#8217;s judging whether you&#8217;re friendly enough. It is the scary future of work. <em>@Jane</em></p><p><strong><a href="https://www.wired.com/story/when-robots-have-their-chatgpt-moment-remember-these-pincers/">I&#8217;ve Covered Robots for Years. This One Is Different</a></strong> Instead of learning from videos of humans, these robots practice entirely in simulation, inventing their own solutions through trial and error, and they are scarily accurate. <em>@Mafe</em></p><p><strong><a href="https://howwefuture.substack.com/i/196128649/listen-to-the-episode-on">How to Remove the Wrong Kind of Friction (and Add the Right Kind)</a></strong> Plugging a very insightful, practical episode of a dear friend&#8217;s podcast here. Check out Lisa Kay Solomon&#8217;s conversation with Bob Sutton on friction as a design problem and its use and abuse in process, collaboration, and work. <em>@Jeffrey</em></p><p><strong><a href="https://www.linkedin.com/pulse/abundance-era-colm-sparks-austin--ayhte/">The Abundance Era</a></strong> We&#8217;ve spent the last decade overloading the skeleton (core systems) with things they were never meant to do. Advantage comes from treating the edge as tissue: something you can continuously rebuild, not protect. 
<em>@Kacee</em></p><p><strong><a href="https://nooneshappy.com/article/appearing-productive-in-the-workplace/">Appearing Productive in The Workplace</a></strong> An eloquent exploration of what happens when we remove the &#8220;slowness&#8221; of deliberate work from our outputs and instead focus on quantity as a measure of productivity. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#129305; Action leads to reaction: &#8220;<a href="https://letsdatascience.com/news/telus-uses-ai-to-alter-call-agent-accents-a3868f63">Telus uses ai to alter call-agent accents</a>&#8221; &#8211; &#8220;<a href="https://globalnews.ca/news/11832217/canada-ai-accent-masking-call-centres/">AI &#8216;accent masking&#8217; at overseas call centres sparks union backlash in Canada</a>&#8221;</p><p>&#128660; Turns out, self-driving cars commit traffic violations after all. And now, <a href="https://www.nytimes.com/2026/04/30/us/california-ticket-driverless-car-violations.html?unlocked_article_code=1.fFA.hTVm.Ac5Gti22l7ue&amp;amp%3Bsmid=nytcore-android-share">California gives them tickets.</a></p><p>&#128104;&#8205;&#128187; AI has the potential to make everyone a little smarter &#8211; but for the top 2% of workers, it might be a very different story: <a href="https://www.businessinsider.com/ex-meta-manager-says-2-percent-engineers-winning-ai-era-2026-4">Ex-Meta manager says just 2% of engineers are winning the AI era.</a></p><p>&#129465; If you are one of the people who have been writing in public (i.e., on the Internet), you had better get used to the idea that your anonymity is gone: <a href="https://www.theargumentmag.com/p/i-can-never-talk-to-an-ai-anonymously">AI only needs 150 words to identify you. What does that mean for you?</a></p><p>&#127916; <a href="https://www.nytimes.com/2026/05/03/world/asia/china-microdrama-ai-backlash.html?unlocked_article_code=1.f1A.pEEC.CRA1amBf-88O&amp;amp%3Bsmid=url-share">How A.I.
is transforming China&#8217;s entertainment industry.</a></p><p>&#128372;&#65039; If in doubt, invoke Jevons Paradox and the world will be saved: <a href="https://fortune.com/2026/05/05/dario-amodei-jevons-paradox-will-ai-wipe-out-white-collar-jobs/">Dario Amodei spent last year warning of an AI white-collar bloodbath. Now he&#8217;s changing the narrative.</a></p><p>&#128222; Nostalgic for your rotary phone of yesteryear? You can get it back! <a href="https://skysedge.com/telecom/RUSP/index.html">Here is the Rotary Un-Smartphone.</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,800+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[AI Learns.
We Forget.]]></title><description><![CDATA[Voices stolen at scale, employees monitored to train models, AI agents deleting production databases &#8211; and somewhere underneath it all, the collective pace of human thought is quietly slowing down.]]></description><link>https://briefing.rdcl.is/p/ai-learns-we-forget</link><guid isPermaLink="false">https://briefing.rdcl.is/p/ai-learns-we-forget</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 01 May 2026 15:03:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d06683eb-18dd-4423-b06d-582f16b0476b_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>A quick personal update before we dive in: <a href="https://a.co/d/0cQYinJH">OUTLEARN</a> launched on Tuesday and &#8211; I&#8217;ll be honest &#8211; the response has been beyond what I expected. The book hit #1 New Release in Amazon&#8217;s Strategy &amp; Competition category on launch day and is currently sitting at #2 on the bestseller list in that same category.</p><p>But what&#8217;s even more awesome is your feedback. One early reader wrote that he bought three copies on launch day to hand out to colleagues at a startup they&#8217;re building together. Another described it as a &#8220;fully actionable page-turner&#8221; and said the reframing of postmortems as harvest meetings would &#8211; his words &#8211; &#8220;turbocharge how you extract practicable learnings from any project outcome.&#8221;</p><p>That&#8217;s exactly what I was hoping this book would do. Not sit on a shelf. Get used on a Monday morning.</p><p>If you haven&#8217;t grabbed a copy yet: <a href="https://a.co/d/0cQYinJH">Get it here.</a> It&#8217;s 150 pages, every chapter ends with something deployable, and it&#8217;s cheaper than lunch. 
And if you&#8217;ve already read it &#8211; an honest Amazon review in these first two weeks genuinely makes the difference between a book that reaches people and one that disappears.</p><p><em>And now, on to our usual programming&#8230;</em></p><p>Here is an interesting argument: &#8220;AI doesn&#8217;t really &#8217;think.&#8217; Rather, it remembers how we thought together. And we&#8217;re about to stop giving it anything worth remembering.&#8221; This is from a provocative <a href="https://www.theideasletter.org/essay/the-social-edge-of-intelligence/">article by Bright Simons</a>. I might not fully buy into all aspects of his argument, but his essay is very well worth reading (and pondering over). Let me leave you with just one more quote from the article: &#8220;The result is a world in which individual productivity rises while the collective pace of human thought starts to fall.&#8221;</p><p><em>Read the thing. And then, read this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://martinalderson.com/posts/figmas-woes-compound-with-claude-design/">Why (Conventional) Software Truly Is under Attack.</a></strong> Friend of radical Martin Alderson is back with a deep analysis of why many conventional software systems are under attack by AI &#8211; exemplified by the example of Figma, the once-darling Adobe-killer (and failed acquisition target of said company). Anthropic launched their new Design tool, automating 80% of what Figma does &#8211; as part of their Claude app, no design skills required.</p><blockquote><p>But the structural point is harder to wriggle out of. Figma has ~2,000 employees. Anthropic has ~2,500 total and I doubt Claude Design took more than a handful to build. Figma now needs to out-execute a competitor whose inference is ~free to them, whose marginal cost to ship is roughly zero, and who employs fewer people on the competing product than Figma has on a single pod. 
That&#8217;s a very hard position to pivot out of. This feels like a preview of where SaaS economics are heading. The companies that built big orgs on the assumption of steady seat expansion are going to find themselves competing with products built by tiny teams inside the frontier labs. Figma just happens to be the first big public name where one of their primary inference suppliers has started competing against them.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.theverge.com/podcast/917029/software-brain-ai-backlash-databases-automation">Software Brain is Eating The World.</a></strong> If you do not read anything else this week, do yourself a favor and read this article by Nilay Patel on &#8220;Software Brain.&#8221; It&#8217;s a thoughtful piece about the disconnect between what the makers of AI think they are building, and what many of us experience.</p><blockquote><p>The entire human experience cannot be captured in a database. That&#8217;s the limit of software brain. That&#8217;s why people hate AI.
It flattens them.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.theregister.com/2026/04/22/meta_employee_surveillance_software/">Who Watches the Watchmen?</a></strong> This famous question, first posed by the Roman poet Juvenal in his Satires, dating back to the 1st century AD, is increasingly answered by: As long as you are feeding the machine, nobody&#8230; Meta, reportedly, is running surveillance software on work PCs of their employees:</p><blockquote><p>Newswire Reuters reports that Meta management sent staff a memo informing them that they&#8217;ll soon run a new tool called &#8220;Model Capability Initiative&#8221; that will record their keystrokes, mouse movements, and even take occasional screenshots &#8211; all in the name of gathering data the social networking giant can use to build better AI models.</p></blockquote><p>The staff doesn&#8217;t seem to be too happy about it: &#8220;<a href="https://www.businessinsider.com/meta-new-ai-tool-tracks-staff-activity-sparks-concern-2026-4">Meta employees are up in arms over a mandatory program to train AI on their mouse movements and keystrokes.</a>&#8221;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://app.oravys.com/blog/mercor-breach-2026">4TB of Voice Samples Were Just Stolen from 40,000 AI Contractors.</a></strong> Voice cloning has become incredibly easy, good, and, in some cases, useful. Advanced AI models only need a few seconds of your voice to create a convincing clone &#8211; which also means that with those few seconds of your voice, I can create a convincing spoof of you. Which, in turn, means that when your voice data is stolen, you might be deep in the s#!%.</p><blockquote><p>On April 4, 2026, the extortion group Lapsus$ posted Mercor on its leak site. 
The dump is reported at roughly four terabytes and bundles a payload that breach analysts have been warning about for two years: voice biometrics paired with the same person&#8217;s government-issued identity document. According to the leaked sample index, the archive covers more than 40,000 contractors who signed up to label data, record reading passages, and run through verification calls for AI training.</p></blockquote><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theatlantic.com/technology/2026/04/chatbot-ai-race-emotional-intelligence/686830/?gift=0GPrpLquXY4NmRQ6sk9MNst8Q_vnV8jujBRZlPh9Ee4">AI&#8217;s Next Frontier: People Skills</a></strong> AI models now beat humans on emotional intelligence tests, which only goes to prove that acing a test and understanding a feeling are completely different things. <em>@Jane</em></p><p><strong><a href="https://www.fastcompany.com/91533742/paypal-says-ai-shopping-agents-are-creating-an-invisible-storefront-economy">Paypal Says AI Shopping Agents Are Creating an Invisible Storefront Economy</a></strong> Nearly all merchants are already seeing AI agent traffic, but fewer than 25% have the machine-readable catalogs, APIs, or the agent-compatible checkout systems needed to act in real time and convert on the spot. <em>@Mafe</em></p><p><strong><a href="https://www.notboring.co/p/scarce-assets">Scarce Assets: The Abundance-Driven Scarcity Supercycle</a></strong> Sticking with my theme from last week: Another angle on the idea (and value) of finding what remains scarce in markets where some things seem newly and wildly abundant. <em>@Jeffrey</em></p><p><strong><a href="https://www.semrush.com/blog/chatgpt-search-insights/">Chatgpt Traffic Analysis: Insights from 17 Months of Clickstream Data</a></strong> Most people assume AI search = more discovery, but this data suggests the opposite: that in fact distribution is compressing. 
If you&#8217;re not in the answer set, you&#8217;re effectively invisible. <em>@Kacee</em></p><p><strong><a href="https://www.frontporchrepublic.com/2025/09/when-the-internet-was-a-place/">When the Internet Was a Place</a></strong> If you are fortunate enough to have experienced the Internet pre-Web 2.0, you know that it was a very, very different place. It&#8217;s high time to claim back some of those properties. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#129300; Finally want to understand how LLMs actually work? <a href="https://ynarwal.github.io/how-llms-work/">Here is a wonderfully designed and easy-to-follow primer.</a></p><p>&#129297; We mentioned it in the briefing last week. The era of heavily subsidized AI models might come to an end quicker than many of us thought or had hoped for. GitHub Copilot just <a href="https://github.blog/news-insights/company-news/github-copilot-is-moving-to-usage-based-billing/">announced</a> that they will be moving solely to a usage-based billing model. Meanwhile, <a href="https://www.tobyord.com/writing/hourly-costs-for-ai-agents">tokens get more expensive</a>, and AI agents can now <a href="https://www.axios.com/2026/04/26/ai-cost-human-workers">cost as much as (or more than) human workers</a>. No more free beer!</p><p>&#127980; Someone set up a store and let an AI agent run it.
<a href="https://www.nytimes.com/2026/04/21/us/san-francisco-store-managed-ai-agent.html?unlocked_article_code=1.eFA.7jVB.5i5HUjjcUKyj&amp;smid=url-share">Here is the story of how it&#8217;s going</a> (hint: not great, but also not a complete disaster).</p><p>&#129489;&#8205;&#127979; A Catholic scholar argues that GenAI threatens authentic education by replacing the process of learning with the production of polished output, emphasizing the need for pedagogical redesign to restore the formation of thoughtful, responsible individuals: <a href="https://www.ncregister.com/commentaries/schnell-repairing-the-ruins">Repairing the ruins &#8211; Why AI can&#8217;t replace education.</a></p><p>&#128173; Your friendly public service announcement: <a href="https://www.koshyjohn.com/blog/ai-should-elevate-your-thinking-not-replace-it/">A.I. should elevate your thinking, not replace it.</a></p><p>&#128372;&#65039; Somehow I am genuinely surprised that this wasn&#8217;t the case before (and how this is news to begin with): <a href="https://finance.yahoo.com/sectors/technology/articles/accenture-roll-copilot-743-000-180346038.html">Accenture to roll out Copilot to all 743,000 employees in boost for Microsoft</a></p><p>&#128576; <a href="https://x.com/lifeof_jer/status/2048103471019434248">This story</a> is making the rounds at the moment &#8211; an AI agent goes rogue, deletes a company&#8217;s entire production database, and then apologizes for it. 
The deeper cut is that it&#8217;s not just the AI agent&#8217;s fault; the database system itself didn&#8217;t have any safeguards in place to prevent this from happening in the first place.</p><p>&#128240; The (sad) future of journalism &#8211; <a href="https://mashable.com/article/ai-generated-news-site-with-ties-to-openai">an OpenAI-linked news outlet appears to be entirely AI-generated.</a> And the bigger picture: <a href="https://www.sciencedaily.com/releases/2026/04/260420014748.htm">AI swarms could hijack democracy without anyone noticing.</a></p><p>&#129395; XOXO, the Portland-based answer to SxSW, is no more. But its legacy lives on in the form of this wonderful (and wonderful-looking) XOXOFest <a href="https://xoxofest.com/">website</a>.</p><p>&#128123; Now we (finally) know: <a href="https://www.theguardian.com/science/2026/apr/27/spooky-feelings-in-old-houses-may-be-caused-by-boiler-sounds-study-suggests">Spooky feelings in old houses may be caused by boiler sounds, study suggests.</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,700+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit.
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The Prehistoric Saboteur Running Your Company]]></title><description><![CDATA[Why &#8220;Fail Fast&#8221; is biologically impossible. What a ball-tossing experiment in an fMRI scanner reveals about Monday morning meetings. And the two words that keep the lizard asleep.]]></description><link>https://briefing.rdcl.is/p/the-prehistoric-saboteur-running</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-prehistoric-saboteur-running</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 28 Apr 2026 15:57:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!sBZF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>You lie motionless inside the narrow, humming tube of an fMRI scanner. The machine clicks and whirs. The researchers are about to make you feel something you wouldn&#8217;t expect to be painful. Instead of showing you scary pictures or shocking your finger with a current, they&#8217;re going to make you feel rejected.</p><p>In a now-famous study, participants played a virtual ball-tossing game while inside the scanner. At first, the other players passed the ball to you. You felt included. Then, abruptly, they stopped. They passed it only to each other, ignoring you completely. You were excluded. You were failing socially.</p><p>The brain scans revealed something harsh: the dorsal anterior cingulate cortex &#8211; the same region that activates when you&#8217;re punched in the stomach &#8211; lit up as though you&#8217;d been physically struck. Your brain doesn&#8217;t differentiate between the emotional pain of rejection and the physical pain of being hurt. It processes both as the same thing. 
Evolution shaped you this way. Throughout most of human history, social rejection meant expulsion from the tribe, which meant death. Your brain built a rapid alarm system to prevent it.</p><p>Which brings us to the most dangerous entity in your boardroom: not the skeptical CFO or the micromanager, but the prehistoric saboteur living inside your own skull &#8211; the amygdala.</p><p>Imagine &#8211; you&#8217;re in a quarterly review. The CFO asks a pointed question about your project&#8217;s burn rate. Your mouth goes dry. Your mind, which was sharp five seconds ago, suddenly feels foggy and slow. You know the numbers &#8211; you rehearsed the numbers &#8211; but they&#8217;ve vanished. You mumble something defensive. The meeting moves on. Later, in the elevator, the answer floods back, obvious and clear.</p><p>What happened? A war between your ears. In one corner: the prefrontal cortex, the CEO of your brain &#8211; logic, creativity, long-term planning all happen here. In the other corner: the amygdala, buried deep in your temporal lobes. Ancient. Fast. Unsophisticated but devastatingly effective. It doesn&#8217;t write poetry or design software &#8211; it keeps you alive by scanning for threats. And here&#8217;s the tragedy: these two systems operate like a seesaw. When one goes up, the other comes down.</p><p>The CFO&#8217;s pointed question? To your amygdala, it was indistinguishable from a predator in the tall grass. So it initiated a hostile takeover &#8211; flooding your bloodstream with cortisol, diverting energy to your legs and fists, and literally cutting blood flow to your prefrontal cortex. The lights in the CEO&#8217;s office went dark. You weren&#8217;t stupid in that meeting. You were chemically lobotomized.</p><p>This is the biological saboteur. 
And it&#8217;s running your company&#8217;s innovation efforts into the ground.</p><p>I want to throw something every time I walk into an innovation lab and see a poster that says &#8220;Fail Fast, Fail Forward.&#8221; It&#8217;s a beautiful sentiment. It is also biologically impossible in most organizations. You can&#8217;t just decide to be comfortable with failure any more than you can decide not to pull your hand back from a hot stove. If the environment triggers the threat response, the lizard wins. Every time.</p><p>You can buy all the bean bag chairs and install all the ping-pong tables you want, but if your culture punishes mistakes &#8211; even subtly &#8211; the amygdala wins every time. Simply stated: In most organizations, every idea that challenges the status quo is initially perceived by someone as a mistake. If your culture punishes mistakes, it is also punishing the early signals of innovation.</p><p>Think about the standard Monday morning status meeting. A project manager has to report that an initiative didn&#8217;t work. They&#8217;re nervous. The room goes quiet. You, the leader, say &#8220;It&#8217;s okay, we learned something&#8221; &#8211; but even those words betray you. &#8220;It&#8217;s okay&#8221; is a verdict disguised as comfort. It concedes that something wrong happened and positions you as the authority granting forgiveness. And forgiveness implies the possibility of its absence next time. If your tone is tight, if you sigh, if there&#8217;s a micro-expression of disappointment, the room detects it. Every brain in the room just received a signal: Error = Fear and Pain.</p><p>The result? People stop offering creative solutions (prefrontal cortex) and start offering defensive explanations (lizard). They stop saying &#8220;I wonder why that happened?&#8221; and start saying &#8220;Well, marketing didn&#8217;t give us the right assets.&#8221; All learning stops, and self-preservation begins.</p><p>So what do you do? 
You can&#8217;t wait another million years for evolution to update the firmware. You have to hack the software you have today.</p><p>Here&#8217;s one starting point: change the words you use. Words like &#8220;failure,&#8221; &#8220;mistake,&#8221; and &#8220;error&#8221; are loaded &#8211; they are threat triggers. When you ask &#8220;Why did this fail?&#8221;, you are practically begging the amygdala to wake up. Instead, try framing every outcome as a &#8220;first attempt in learning.&#8221; When you frame an outcome as a &#8220;failure,&#8221; you&#8217;re issuing a verdict. When you frame it as a &#8220;first attempt,&#8221; you&#8217;re describing a process. It implies iteration. It implies you&#8217;re not done yet. The lizard stays asleep. The CEO stays online.</p><p>And here&#8217;s the Monday morning test: at the start of your next crisis meeting, try calling out the biology explicitly. &#8220;Everyone&#8217;s lizard brain is freaking out right now. That&#8217;s normal. Let&#8217;s take two minutes to breathe so we can get our prefrontal cortexes back online.&#8221; It sounds silly. 
It works.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sBZF!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sBZF!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg 424w, https://substackcdn.com/image/fetch/$s_!sBZF!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg 848w, https://substackcdn.com/image/fetch/$s_!sBZF!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!sBZF!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sBZF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg" width="1200" height="630" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:630,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:238677,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/195656469?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!sBZF!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg 424w, https://substackcdn.com/image/fetch/$s_!sBZF!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg 848w, https://substackcdn.com/image/fetch/$s_!sBZF!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!sBZF!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F59095a73-f378-45d1-8037-59e39cfdec87_1200x630.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>This is from Part II of my new book</em> <a href="https://a.co/d/04roc8PL">OUTLEARN: The Art of Learning Faster Than the World Can Change</a>, <em>which is live today on Amazon &#8211; paperback and ebook.</em></p><p><em>The book goes much deeper into the neuroscience, the linguistic hacks, and the math of why some failures lead to breakthroughs and others lead to bankruptcy. If this essay landed for you, grab a copy &#8211; and if you&#8217;re feeling generous, an honest Amazon review in the first week helps enormously.</em></p><p><em>@Pascal</em></p>]]></content:encoded></item><item><title><![CDATA[The Free Ride Is Over]]></title><description><![CDATA[AI agents now cost more than human labor, cybersecurity became an arms race, and someone sequenced their genome on a kitchen table. 
The subsidized honeymoon era is ending everywhere at once.]]></description><link>https://briefing.rdcl.is/p/the-free-ride-is-over</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-free-ride-is-over</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 24 Apr 2026 14:34:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0dc1ee63-86f2-493c-9bb0-607566ab6b6e_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Remember the glorious days when Uber and Lyft were heavily subsidized by their venture capital sugar daddies and you couldn&#8217;t get over how cheap it was to get a ride? Yeah, those days are gone (and much can be said about the market-distorting effects of the VC-fueled subsidies). Well, it increasingly looks like the sweet days of $20/month all-you-can-prompt AI plans are also coming to an end &#8211; pretty much all the major AI companies are tweaking their pricing strategies, making tokens for their latest frontier models much more expensive, and generally trying to dig themselves out of the &#8220;for every dollar we make, we lose five&#8221;-hole.
It doesn&#8217;t come as a surprise &#8211; but it will be interesting to see what it does to market demand.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://x.com/finmoorhouse/status/2044933442236776794">Putting the AI Investment into Perspective.</a></strong> As the saying goes &#8211; a picture is worth a thousand words.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!WsxJ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!WsxJ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 424w, https://substackcdn.com/image/fetch/$s_!WsxJ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 848w, https://substackcdn.com/image/fetch/$s_!WsxJ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!WsxJ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!WsxJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg" 
width="1200" height="1151" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1151,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:80380,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/195291829?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!WsxJ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 424w, https://substackcdn.com/image/fetch/$s_!WsxJ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 848w, https://substackcdn.com/image/fetch/$s_!WsxJ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!WsxJ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffe02cd61-a2df-484f-aee5-e4dd49022fb2_1200x1151.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container 
restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.tobyord.com/writing/hourly-costs-for-ai-agents">Are the Costs of AI Agents Also Rising Exponentially?</a></strong> With AI models becoming more and more powerful, the cost of inference (at least for frontier models) is staying about the same (or increasing) <em>and</em> the models are consuming vastly more tokens for a given task. With that in mind, Toby Ord did a fascinating analysis of the cost of running AI agents as a function of &#8220;cost of labour&#8221; &#8211; and found that agents sometimes cost much more than human labour (&#8220;How is the &#8216;hourly&#8217; cost of AI agents changing over time?&#8221;).
In sum:</p><blockquote><p>This provides moderate evidence that:</p><ul><li><p>the costs to achieve the time horizons are growing exponentially,</p></li><li><p>even the hourly costs are rising exponentially,</p></li><li><p>the hourly costs for some models are now close to human costs.</p></li></ul></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.dbreunig.com/2026/04/14/cybersecurity-is-proof-of-work-now.html">Cyber Security Is a Completely Different Game Now.</a></strong> If you have even half an ear to the ground when it comes to cybersecurity, you have heard stories about Anthropic&#8217;s newest model &#8220;Mythos&#8221; being held back as it is &#8220;too dangerous&#8221; &#8211; with the main fear being that it finds vulnerabilities in software with unprecedented speed and accuracy. In fact, people have been hacking all kinds of hard- and software using current state-of-the-art models such as GPT-5.4 or Opus for the last couple of months. All of which turns cybersecurity into even more of a race over who can outspend whom than it already is. In simple (AI economic) terms:</p><blockquote><p>to harden a system we need to spend more tokens discovering exploits than attackers spend exploiting them.</p></blockquote><p>If you are running a system that has any public exposure surface (e.g. a website, an API, or an app), you better take this seriously.
I wouldn&#8217;t be surprised if we see tons of new exploits executed in the next few months and years.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://x.com/sethshowes/status/2045289299269070978">DIY Sequence Your Whole Genome.</a></strong> We have been talking about an individual&#8217;s ability to sequence their own genome at home, using lab-grade but DIY equipment, for a while now (it was one of the predictions floating around in the ether in the heyday of Singularity University &#8211; it was always &#8220;just around the corner&#8221;). Now it has (finally) happened &#8211; alas, not for the faint of heart.</p><blockquote><p>So this week I sequenced my genome entirely at home. Literally on my kitchen table.</p></blockquote><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theatlantic.com/technology/2026/04/extended-range-electric-vehicle-pickup-trucks/686811/?gift=0GPrpLquXY4NmRQ6sk9MNmLbwJO9qfyaNiz1Iuc5qpY">A New Kind of Hybrid Car Is About to Hit America&#8217;s Streets</a></strong> EREVs are the exciting new hybrid technology everyone should know about. Your car runs on electric power but quietly refuels its own battery with gas, so you never have to worry about being stranded! <em>@Jane</em></p><p><strong><a href="https://www.bloomberg.com/news/articles/2026-04-21/byd-xiaomi-and-zeekr-car-reviews-flood-tiktok-youtube-in-the-us">TikTok Makes Americans Want Chinese EVs They Can&#8217;t Have</a></strong> Chinese car brands are nearly absent from US roads due to tariffs and regulations, but are building American consumer desire through social media while playing a long-term strategy.
<em>@Mafe</em></p><p><strong><a href="https://aleximas.substack.com/p/what-will-be-scarce">What Will Be Scarce?</a></strong> An economist goes deep on a relatively optimistic scenario for the future of human labor, finding durable value in what he calls the &#8220;relational sector,&#8221; where the value of the service is likely to be increasingly linked to the human providing it. <em>@Jeffrey</em></p><p><strong><a href="https://hbr.org/2026/04/the-end-of-one-size-fits-all-enterprise-software?ab=HP-hero-featured-1">The End of One-Size-Fits-All Enterprise Software</a></strong> Pascal and I have been writing about this lately: we&#8217;re moving from standardized systems to outcome-driven architectures that can conform to the business. <em>@Kacee</em></p><p><strong><a href="https://arstechnica.com/staff/2026/04/our-newsroom-ai-policy/">Our newsroom AI policy</a></strong> As companies (and in this case, newsrooms) around the world grapple with what it means to operate in an AI-enabled/driven world, it will become more and more important for organizations to establish (and publish) clear guidelines and disclosures on their use of AI &#8211; here is a good example from the Ars Technica newsroom.
<em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#9994; &#8220;We believe in human beings.&#8221; Union leaders are <a href="https://www.axios.com/2026/04/16/unions-ai-bernie-sanders-shawn-fain">escalating their anti-AI rhetoric</a>, portraying the industry&#8217;s leaders as profit-hungry &#8220;oligarchs&#8221; eager to replace humans.</p><p>&#9728;&#65039; Shine (not drill), baby shine: <a href="https://electrek.co/2026/04/19/iea-solar-overtakes-all-energy-sources-in-a-major-global-first/">IEA &#8211; Solar overtakes all energy sources in a major global first.</a></p><p>&#9997;&#65039; Hacking the system: <a href="https://sentinelcolorado.com/uncategorized/a-college-instructor-turns-to-typewriters-to-curb-ai-written-work-and-teach-life-lessons/">A college instructor turns to typewriters to curb AI-written work and teach life lessons.</a></p><p>&#9749; A wonderful lesson in taking something that worked (ordering coffee through a carefully designed app) and making it worse by using AI: <a href="https://www.theverge.com/ai-artificial-intelligence/915821/starbucks-chatgpt-app-testing">Ordering with the Starbucks ChatGPT app was a true coffee nightmare.</a></p><p>&#129489;&#8205;&#9878;&#65039; Let AI be the judge: <a href="https://mediator.ai/">Cooperative negotiation is a solvable problem</a> (or so says this company).</p><p>&#128272;  Turns out &#8211; your cybersecurity does, in fact, withstand the (possibly coming) wave of quantum computer-powered attacks (despite the attention-grabbing headlines): <a href="https://words.filippo.io/128-bits/">Quantum computers are not a threat to 128-bit symmetric keys.</a></p><p>&#127904; Ed Zitron, one of the most outspoken critics of AI, is back (and it&#8217;s worth reading &#8211; even if you don&#8217;t agree with him): <a href="https://www.wheresyoured.at/four-horsemen-of-the-aipocalypse/?ref=ed-zitrons-wheres-your-ed-at-newsletter">Four Horsemen of the AIpocalypse.</a></p><p><strong>&#8599; Dive into 
the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,700+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The Hype Is Eating Itself]]></title><description><![CDATA[While Gen Z rage-quits the AI dream, OpenAI lobbies for mass-casualty immunity, and laziness turns out to have been load-bearing all along]]></description><link>https://briefing.rdcl.is/p/the-hype-is-eating-itself</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-hype-is-eating-itself</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 17 Apr 2026 15:03:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/af4ce9c1-d4a1-4849-8aa7-51f93e238502_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Quite possibly the craziest story to emerge this week from the ever-nutty world of AI hype is, of course, the rebrand/relaunch of sneaker company Allbirds as an AI company &#8211; resulting in a $127 million increase in stock market value. 
I won&#8217;t even comment on how absurd all of this is. You know something is up when even the most die-hard AI-boosting publications start calling BS&#8230; Anyway &#8211; time for your weekly dose of news and analysis&#8230;</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.everydayhealth.com/weight-management/reddit-users-reporting-glp-1-side-effects/">Better Drug Side Effects Monitoring through Reddit?</a></strong> It shouldn&#8217;t come as a surprise that by harvesting the massive data trove that is Reddit, one can find drug side effects that are underreported in clinical trials. Reminds us of a pharma client of ours who mentioned that they consider Apple a massive threat to their business &#8211; as the company has a humongous amount of data on <em>healthy</em> people, whereas pharma companies typically only have data on <em>sick</em> people.</p><blockquote><p>Using artificial intelligence to scan more than 400,000 Reddit posts, researchers from the University of Pennsylvania documented numerous reports of possible GLP-1 side effects that may be underrecognized in clinical trials &#8211; including menstrual changes, fatigue, and temperature sensitivities.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://dev.to/dcc/the-honest-climate-case-for-ai-5hg5">Let&#8217;s Talk About AI&#8217;s Energy Footprint (Again).</a></strong> The linked article is a good and accessible summary of where we stand on AI&#8217;s energy footprint. The tl;dr is that AI&#8217;s current energy footprint is modest (comparable to streaming video). But demand is growing fast, reasoning models use 10&#8211;100x more energy than basic queries, and efficiency gains keep getting reinvested into more capability rather than saved. And where the electricity powering the data centers comes from is a much bigger question: Clean grid = net climate okay. 
Gas/coal grid = real problem.</p><blockquote><p>Stop feeling guilty about prompts. Your Wh per query is not the lever that matters. You&#8217;ll do more climate good by eating one less steak, taking one fewer flight, or voting for better energy policy than by boycotting LLMs. What matters at the individual level is where you direct your attention. Demand the acceleration of the deployment of clean generation to meet data center demand; grid interconnections, nuclear licensing, transmission lines, and permitting reform are the bottleneck, not GPUs.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.highereddive.com/news/gen-z-ai-gallup-poll-negative-sentiment/817133/">The GenZ AI Tide is Turning.</a></strong> GenZ, supposedly the most AI-savvy generation entering the workforce right now, is not too thrilled about that whole AI thing.</p><blockquote><p>Anger over AI is increasing among Gen Z at the same time excitement is fading. Nearly one-third of the survey&#8217;s respondents, 31%, said AI makes them feel angry, up 9 percentage points from last year. And just 22% said the technology makes them feel excited, down from 36% the prior year.</p></blockquote><p>Reconcile this with the growing pressure on entry-level jobs, as well as overall job losses due to AI, and you have a storm brewing.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.nature.com/articles/d41586-026-01099-2">The Air Is Full of DNA - Here&#8217;s What Scientists Are Using It for</a></strong> Genetic breadcrumbs in the air reveal ecosystem secrets, spot sneaky invaders, and even track humans! <em>@Jane</em></p><p><strong><a href="https://www.nytimes.com/2026/04/08/business/bitcoin-satoshi-nakamoto-identity-adam-back.html">My Quest to Solve Bitcoin&#8217;s Great Mystery</a></strong> A detailed read about one man&#8217;s journey to find out who&#8217;s behind Satoshi Nakamoto. 
<em>@Mafe</em></p><p><strong><a href="https://www.noemamag.com/why-a-liberal-arts-education-will-soon-be-more-valuable-than-ever/">How To Future-Proof Your Career In The Age Of AI</a></strong> If cognitive flexibility, taste, and good judgment become critical differentiators in a world of abundant intelligence, does the most valuable background begin to look a lot like a classical interdisciplinary, liberal arts education? <em>@Jeffrey</em></p><p><strong><a href="https://www.marketwatch.com/story/will-ai-start-going-rogue-the-chorus-of-warnings-is-getting-louder-c4d4b831">Will AI Start &#8216;Going Rogue&#8217;? the Chorus of Warnings Is Getting Louder</a></strong> When the people building the tech warn about loss of control, it may be a signal worth paying attention to. <em>@Kacee</em></p><p><strong><a href="https://bcantrill.dtrace.org/2026/04/12/the-peril-of-laziness-lost/">The Peril of Laziness Lost</a></strong> Here is an interesting argument from the world of software development: Laziness (in coding) leads us to more elegant, better-performing, and cleaner code. With AI coding tools, laziness suddenly has stopped being a virtue &#8211; if nothing else, AI happily keeps churning&#8230; And with laziness becoming a lost art, software will become worse. I&#8217;d venture to say that this is what is happening in every area AI touches. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#127864; Rejoice! It is now legal to distill your own alcohol in the United States: <a href="https://www.theguardian.com/law/2026/apr/11/appeals-court-ruling-home-distilling-ban-unconstitutional">US appeals court declares 158-year-old home distilling ban unconstitutional.</a></p><p>&#9760;&#65039; Nothing to see here. 
<a href="https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/">OpenAI backs bill that would limit liability for AI-enabled mass deaths or financial disasters.</a></p><p>&#129489;&#8205;&#9877;&#65039; Surprised is no one: <a href="https://www.nature.com/articles/d41586-026-01100-y">Scientists invented a fake disease. AI told people it was real.</a></p><p>&#127922; Ever wanted to increase your chances of beating your niece at Connect Four? Here&#8217;s the mathematically best way to do it: <a href="https://2swap.github.io/WeakC4/explanation/">WeakC4, or distilling an emergent object.</a></p><p>&#127466;&#127482; European tech sovereignty is a thing. It will be interesting to see how this plays out in the long run. Latest case in point: <a href="https://techputs.com/france-windows-to-linux-shift/">France ditch Windows for Linux to cut reliance on US tech.</a></p><p>&#128268; Have we reached the tipping point? <a href="https://www.the-independent.com/tech/renewable-energy-solar-nepal-bhutan-iceland-b2533699.html">Seven countries now generate 100% of their electricity from renewable energy.</a></p><p>&#9992;&#65039; Desperate times call for desperate measures. <a href="https://www.bbc.com/news/articles/ce84rvx0e6do">Great at gaming? US air traffic control wants you to apply.</a></p><p>&#129489;&#8205;&#127912; Life imitates art. This feels like it&#8217;s right out of an episode of Black Mirror: <a href="https://www.theverge.com/tech/910990/meta-ceo-mark-zuckerberg-ai-clone">Mark Zuckerberg is reportedly building an AI clone to replace him in meetings.</a></p><p>&#9997;&#65039; You become what you write: <a href="https://www.science.org/doi/10.1126/sciadv.adw5578">Biased AI writing assistants shift users&#8217; attitudes on societal issues.</a></p><p>&#128246; Data becomes a right. 
<a href="https://www.theregister.com/2026/04/10/south_korea_data_access_universal/">South Korea introduces universal basic mobile data access.</a></p><p>&#129299; Nerd alert! Fascinating approach to improving AI&#8217;s coding abilities: <a href="https://blog.skypilot.co/research-driven-agents/">Having a coding agent read a series of papers on the topic at hand before coding results in significant improvements in code quality.</a></p><p>&#128218; Lovely read: <a href="https://www.newyorker.com/books/book-currents/stewart-brand-on-how-progress-happens">Stewart Brand on how progress happens.</a></p><p>&#129300; More than half of Americans are &#8216;getting tired of hearing&#8217; about AI, <a href="https://www.scrippsnews.com/science-and-tech/artificial-intelligence/more-than-half-of-americans-are-getting-tired-of-hearing-about-ai-survey-finds">survey finds.</a></p><p>&#128200; From the MIT Tech Review: <a href="https://www.technologyreview.com/2026/04/13/1135675/want-to-understand-the-current-state-of-ai-check-out-these-charts/">Want to understand the current state of AI? Check out these charts.</a></p><p>&#129686; PSA: Wear your helmet! <a href="https://nyulangone.org/news/e-bike-and-scooter-crashes-are-leading-more-brain-injuries">E-bike and scooter crashes are leading to more brain injuries.</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,700+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. 
At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The AI No-Show]]></title><description><![CDATA[While Oracle fires 30,000 people to fund AI data centers, fake singers colonize the iTunes charts, and China moves to regulate virtual humans out of existence]]></description><link>https://briefing.rdcl.is/p/the-ai-no-show</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-ai-no-show</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 10 Apr 2026 14:47:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/cffe629f-a7cb-43b8-99c9-149c7322144a_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>I honestly don&#8217;t even know where to begin &#8211; so much stuff is happening in the world right now; it truly is a whirlwind. From your usual (over) dose of AI, to geopolitics &#8211; but also a plethora of wild, weird, and wonderful weak signals&#8230; Like the bike bell which cleverly defeats the noise-cancelling technology of a pedestrian&#8217;s earbuds. Or AI-singers capturing the top spots in the iTunes charts (now, remember &#8211; this is iTunes, the $0.99 a song download store, which makes that whole story even more bizarre). Dig into today&#8217;s Briefing &#8211; the results from this week&#8217;s web explorations will keep you busy.</p><p>P.S. 
In case you missed it &#8211; I built on Kacee&#8217;s excellent post in the last radical Briefing on &#8220;Vibe Coding Our Way to 70%&#8221; in a <a href="https://www.linkedin.com/posts/pfinette_earlier-this-week-my-dear-friend-and-colleague-activity-7445550029798473728-lcR8?rcm=ACoAAABiKN0BVCUdHIulvhyy_BFFK-5oP5jc5ag">LinkedIn post</a>.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://fortune.com/2026/04/09/ai-backlash-quiet-quitting-fobo-obsolete-white-collar-rebellion/">The AI Quiet Quitters.</a></strong> Shadow AI was the story for a while &#8211; workers sneaking ChatGPT past IT, doing in minutes what used to take hours, running an underground productivity movement from their personal accounts (or simply freeing up more time to watch TikTok). Management called it a governance problem. Workers called it getting the job done. It felt, in a strange way, like good news (just like the good old days when we all brought our personal Dropbox accounts to the workplace as we were sick and tired of 1980s SharePoint).</p><p>That era has quietly ended. A new global survey of 3,750 executives and employees across 14 countries finds that more than 54% of workers bypassed their company&#8217;s AI tools in the past 30 days and completed the work manually instead &#8211; and another <em>33% haven&#8217;t used AI at all.</em> Eight in ten enterprise workers are avoiding the technology their employers are spending record sums to deploy. Shadow AI has become the AI no-show show.</p><blockquote><p>Now the data tells a different story. The tool that workers once raced to adopt covertly has become, for a large and growing share of the workforce, the tool they&#8217;ve stopped using altogether. Not because it doesn&#8217;t work. 
Because they&#8217;re afraid of what happens when it works too well.</p></blockquote><p>The piece also surfaces a huge trust gap: only 9% of workers trust AI for complex, business-critical decisions, compared to 61% of executives &#8211; a 52-point chasm. Executives and employees are, as the report puts it, describing fundamentally different companies. The fear of obsolescence &#8211; FOBO, fear of becoming obsolete &#8211; has apparently crossed the threshold from anxiety into active avoidance. Which is, if you think about it, a perfectly rational response to a completely irrational situation.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.reuters.com/world/china/china-moves-regulate-digital-humans-bans-addictive-services-children-2026-04-03/">China Is Coming for You, Lil Miquela.</a></strong> If you know us, you know that we&#8217;ve been talking about virtual humans (and more specifically, virtual influencers) for a long time now. Our particular example was always Miquela Sousa, a virtual influencer created by the LA-based design agency Brud. Our fascination with Miquela and her brothers and sisters centers on the fact that she never ages, never gets sick, never has a bad hair day, travels anywhere, and works 24/7 without a break. Since we talked about her in 2017, she has been joined by an ever-expanding family of virtual humans. 
Now China is closing in on them:</p><blockquote><p>The Cyberspace Administration of China&#8217;s proposed rules would require prominent &#8220;digital human&#8221; labels on all virtual human content and prohibit digital humans from providing &#8220;virtual intimate relationships&#8221; to those under 18, according to rules published for public comment until May 6.</p></blockquote><p>and</p><blockquote><p>&#8220;The governance of digital virtual humans is no longer merely an issue of industry norms; rather, it has become a strategic scientific problem that concerns the security of the cyberspace, public interests, and the high-quality development of the digital economy,&#8221; it added.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.linkedin.com/pulse/abundance-era-colm-sparks-austin--ayhte/">Digital Transformation is (Finally) Dead.</a></strong> For twenty years, the world operated on a simple principle: buy standard software, don&#8217;t build. The logic made sense, as building was insanely expensive, risky, and slow. The result was highly standardized systems (well hello, SAP!) which we had to stretch well beyond what they were designed for, patch the gaps with middleware, hire consultants to integrate the integrators, and call the whole messy pile &#8220;transformation.&#8221;</p><p>This long piece by EY&#8217;s Colm Sparks-Austin makes the case that the economics have fundamentally flipped. AI and modern dev tools have made engineering capacity abundant. The constraint is no longer &#8220;can we build this?&#8221; It&#8217;s &#8220;do we know what to build and why?&#8221; Colm&#8217;s argument is sharp &#8211; treat the core (ERP, system of record) as the skeleton: rigid, compliance-bearing, changed rarely. And treat the edge &#8211; the customer-facing layer, the last mile &#8211; as tissue: built to regenerate when the market shifts.</p><blockquote><p>Standardization is no longer a safety net. 
It is a ceiling.</p></blockquote><p>The piece is long, but worth your time &#8211; especially if you work with or inside large enterprises still debating whether to &#8220;buy or build.&#8221; That debate is over.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theatlantic.com/podcasts/2026/04/is-ai-going-to-turn-us-all-into-middle-managers/686677/?gift=0GPrpLquXY4NmRQ6sk9MNjJIlkAOmZgquz76kI2Uipo">Is AI Going to Turn Us All Into Middle Managers?</a></strong> Two of our favorite people, Johnathan and Melissa Nightingale, just gave one of the sharpest takes we&#8217;ve heard on AI, management, and the future of work. Go find their Galaxy Brain conversation. <em>@Jane</em></p><p><strong><a href="https://gizmodo.com/crypto-investment-scams-were-the-most-costly-type-of-fraud-in-the-u-s-in-2025-2000743099#goog_rewarded">Crypto Investment Scams Were the Most Costly Type of Fraud in the U.S. in 2025</a></strong> Investment fraud, specifically crypto investment scams, accounted for 49% of all cyber-related complaints in 2025 to the FBI. <em>@Mafe</em></p><p><strong><a href="https://siddhantkhare.com/writing/ai-fatigue-is-real">AI Fatigue Is Real and Nobody Talks About It</a></strong> The real value is in sustainable output, and learning to work &#8211; sustainably &#8211; on new rhythms will be a significant piece of the AI transformation puzzle. <em>@Jeffrey</em></p><p><strong><a href="https://hbr.org/2026/04/when-silos-hinder-innovation-and-when-they-can-help?ab=HP-latest-text-4">When Silos Hinder Innovation &#8211; and When They Can Help</a></strong> Rethinking the innovation dogma&#8230; silos aren&#8217;t always the enemy; sometimes they can spark the best ideas. <em>@Kacee</em></p><p><strong><a href="https://idiocracy.wtf/">Are We Idiocracy Yet?</a></strong> Remember Mike Judge&#8217;s masterpiece, Idiocracy? If you have ever asked yourself how far the movie is from today&#8217;s reality &#8211; here is your answer. 
<em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128104;&#127996;&#8205;&#128187; In the same vein as my comment on Kacee&#8217;s post from last week&#8217;s radical Briefing (see above), a leader at the global consulting firm EY wrote, &#8220;<a href="https://www.linkedin.com/pulse/abundance-era-colm-sparks-austin--ayhte/">Why engineering replaces transformation as the engine of growth.</a>&#8221; It&#8217;s worth a read.</p><p>&#128373;&#127996;&#8205;&#9794;&#65039; The journalist who uncovered the Theranos scandal is behind the (maybe) next big unveil: <a href="https://www.nytimes.com/2026/04/08/business/bitcoin-satoshi-nakamoto-identity-adam-back.html">Satoshi Nakamoto, the mysterious creator of Bitcoin, might have been found.</a></p><p>&#128187; Here&#8217;s a fun anecdote &#8211; as the world, once again, <a href="https://www.thealgorithmicbridge.com/p/inside-the-ai-industrys-most-expensive">seems to be obsessed with LOC (lines of code) as a productivity metric</a>, legendary software developer Bill Atkinson <a href="https://www.folklore.org/Negative_2000_Lines_Of_Code.html">recalls delivering -2,000 lines of code to Apple.</a></p><p>&#129489;&#127996;&#8205;&#127979; Some things you just can&#8217;t make up: Students record their professors&#8217; lecture, feed it into a speech-to-text AI, to then feed it into an LLM, to then ask/comment/respond to their teacher &#8211; in <a href="https://www.cnn.com/2026/04/04/health/ai-impact-college-student-thinking-wellness">his tone and style</a> (as the AI mimics the import).</p><p>&#129317; Claude (the AI model) might be a little confused as to who said what: <a href="https://dwyer.co.za/static/claude-mixes-up-who-said-what-and-thats-not-ok.html">Claude mixes up who said what.</a></p><p>&#127897;&#65039; The fake singers are coming &#8211; and they are coming for your top spots on the charts: <a 
href="https://www.showbiz411.com/2026/04/05/itunes-takeover-by-fake-ai-singer-eddie-dalton-now-occupies-eleven-spots-on-chart-despite-not-being-human-or-real-exclusive">iTunes takeover by fake AI singer &#8220;Eddie Dalton&#8221; &#8211; now occupies eleven spots on singles chart, number 3 on albums chart.</a></p><p>&#129300; Take headlines like these with a huge grain of salt: <a href="https://ca.news.yahoo.com/ai-models-secretly-scheme-protect-162555909.html">&#8220;AI models will secretly scheme to protect other AI models from being shut down, researchers find.&#8221;</a> Here is the <a href="https://rdi.berkeley.edu/blog/peer-preservation/">study</a> in question &#8211; and you shouldn&#8217;t be too surprised about the result, knowing that AI models are modelling their training data.</p><p>&#128302; On the topic of predicting the future (when it comes to AI), here is the <a href="https://blog.aifutures.org/p/q1-2026-timelines-update">latest update</a> from the folks at the AI Futures Project (yes, those were the folks who did the very optimistic/accelerated AI 2027 forecast).</p><p>&#129335;&#127996; Ethan Mollick, the Wharton School professor who coined the term &#8220;jagged frontier&#8221; in his assessment of LLMs and their capabilities, makes the argument that <a href="https://www.economist.com/by-invitation/2026/04/01/the-it-department-where-ai-goes-to-die">the IT department is where AI goes to die.</a></p><p>&#129331;&#127996; Take their phones away from them, and the kids will be fine! Well, not so fast&#8230; <a href="https://www.theguardian.com/commentisfree/2026/apr/01/australia-teen-social-media-ban-criticism">Australia&#8217;s teen social media ban is a flop. 
But there&#8217;s no joy in &#8216;I told you so&#8217;</a></p><p>&#129707; The AI wars might be won over energy, not compute: <a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/half-of-planned-us-data-center-builds-have-been-delayed-or-canceled-growth-limited-by-shortages-of-power-infrastructure-and-parts-from-china-the-ai-build-out-flips-the-breakers">Half of planned US data center builds have been delayed or canceled, growth limited by shortages of power infrastructure and parts from China &#8211; the AI build-out flips the breakers</a></p><p>&#128104;&#127996;&#8205;&#128188; Fire the people, save money, build AI data centers: <a href="https://tech-insider.org/oracle-30000-layoffs-ai-data-center-restructuring-2026/">Oracle&#8217;s 30,000 employee layoffs: Inside the $2.1 billion restructuring fueling a $156 billion AI data center bet.</a></p><p>&#9889; Energy markets are turning very, very weird with the rise of renewables: <a href="https://www.bloomberg.com/news/articles/2026-04-07/germany-power-prices-turn-deeply-negative-on-renewables-surge">Germany power prices turn deeply negative on renewables surge.</a></p><p>&#128690; Signs of the times: <a href="https://www.skoda-storyboard.com/en/skoda-world/skoda-duobell-a-bicycle-bell-that-outsmarts-even-smart-headphones/">A bicycle bell that outsmarts even smart headphones.</a></p><p>&#129686; Talking about the future of warfare: <a href="https://www.tomshardware.com/tech-industry/iran-threatens-complete-and-utter-annihilation-of-openais-usd30b-stargate-ai-data-center-in-abu-dhabi-regime-posts-video-with-satellite-imagery-of-chatgpt-makers-premier-1gw-data-center">Iran threatens &#8220;complete and utter annihilation&#8221; of OpenAI&#8217;s $30B Stargate AI data center in Abu Dhabi &#8211; regime posts video with satellite imagery of ChatGPT-maker&#8217;s premier 1GW data center</a></p><p>&#128188; Surprised is no one: <a 
href="https://www.marketwatch.com/story/employers-are-using-your-personal-data-to-figure-out-the-lowest-salary-youll-accept-c2b968fb">Employers are using your personal data to figure out the lowest salary you&#8217;ll accept</a> (but then, employees also write their resumes and cover letters using AI, cheat on tests using AI, etc.)</p><p>&#129489;&#127996;&#8205;&#128640; Just in time, as Artemis is doing its moon thing &#8211; <a href="https://www.cosmicodometer.space/">calculate your cosmic distance from the day you were born.</a></p><p>&#127768; Talking about the moon &#8211; this is as nerdy as it gets: <a href="https://www.curiousmarc.com/space/apollo-guidance-computer">The rebuilding of the Apollo guidance computer in glorious detail.</a></p><p>&#127752; The Weather Channel goes full retro with their neat, new <a href="https://weather.com/retro/">retrocast feature</a>.</p><p>&#128649; Can you identify each line on the London Underground by sound? <a href="https://tubesoundquiz.com/">Try it!</a></p><p>&#128104;&#127996;&#8205;&#127912; The <a href="https://theasc.com/articles/fantastic-voyage-creating-the-futurescape-for-the-fifth-element">amazing art</a> that went into the special effects for the Luc Besson movie The Fifth Element. Stunning.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,700+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. 
At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The One Theorem That Governs Survival in a Volatile World]]></title><description><![CDATA[Why lowering the cost of failure matters more than raising the quality of your plan]]></description><link>https://briefing.rdcl.is/p/the-one-theorem-that-governs-survival</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-one-theorem-that-governs-survival</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 07 Apr 2026 14:47:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d79c742b-07bc-4841-b8e6-ec5487e48588_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>More than a decade ago, I was in the insanely fortunate position of running Mozilla Labs &#8211; the &#8220;please disrupt yourself&#8221; unit inside the nonprofit behind Firefox. The team was brilliant, some of the best engineers in Silicon Valley, and we had a pattern that felt productive but was quietly killing us: an idea would hit our standup, we&#8217;d debate it like philosophers, and then someone would say the five most dangerous words in product development &#8211; &#8220;I know how to build this.&#8221; They&#8217;d vanish into their cave, headphones on, code editor glowing, and three days later they&#8217;d resurface with something gorgeous. Real, running code. You could click around and interact with it. We all felt accomplished.</p><p>Then we&#8217;d put it in front of a user. Fifteen seconds. 
&#8220;I don&#8217;t get it.&#8221; Three days of work &#8211; DOA (&#8220;dead on arrival&#8221; &#8211; fun fact: also the name of a meeting room at Mozilla at the time).</p><p>At that pace we could test maybe ten ideas a month. Product-market fit usually takes hundreds of iterations. We were years away from finding it, and we simply didn&#8217;t have years.</p><p>One afternoon, after yet another fifteen-second rejection, I did something that felt outright strange at the time: I asked a colleague to close his laptop, grab a stack of index cards and some Sharpies, and start drawing. He looked at me like I&#8217;d suggested interpretive dance. He was a C++ and Python person &#8211; not someone who drew doodles on index cards. But he did &#8211; badly, reluctantly, beautifully &#8211; and we walked those cards across the street to the Starbucks on Castro Street in Mountain View, which at the time was basically Silicon Valley&#8217;s cafeteria. &#8220;Can I buy you a coffee in exchange for five minutes of your time?&#8221; More than 80% of the people we asked said yes. We&#8217;d place the first card on the table, get feedback, retreat to a corner to redraw, find the next stranger. By the time the caffeine jitters set in &#8211; three hours, maybe &#8211; we hadn&#8217;t tested one prototype. We&#8217;d tested thirty.</p><p>That afternoon changed the way I think about everything. Three days to learn one thing with code. Three hours to learn thirty things with Sharpies and index cards. Not because the engineers were slow, but because the medium was expensive. High-fidelity code carries a high cost of failure &#8211; emotionally, financially, temporally &#8211; and when failure is expensive, you instinctively avoid it. You plan more. You debate more. You polish more. 
You learn less.</p><p>Which brought me to the idea I&#8217;ve spent the last fifteen years trying to articulate as precisely as I can &#8211; what I now call the Core Theorem: <strong>the speed of learning is inversely proportional to the cost of failure.</strong> If failure is expensive, you learn slowly. If failure is cheap, you learn fast. That&#8217;s it. That&#8217;s the whole thing. And it governs the survival of every organization operating in a volatile world, which &#8211; in case you haven&#8217;t checked the news lately &#8211; is every organization.</p><p>The logic is disarmingly simple. When a mistake could cost you $100,000, your reputation, or your job, you&#8217;ll hesitate. You&#8217;ll double-check. You&#8217;ll form a committee. You&#8217;ll bring in a consultant. You&#8217;ll optimize for the appearance of competence instead of the reality of learning. But when a mistake costs $10 and an awkward conversation at a coffee shop? You&#8217;ll just try. And if that fails, you&#8217;ll try something else &#8211; all before lunch.</p><p>Tom Chi, one of the founding members of Google X &#8211; the moonshot factory behind self-driving cars and Internet balloons &#8211; understood this better than anyone. When his team was working on Project Glass (which became Google Glass), the engineers estimated it would take six months to build the first working prototype. Optics, miniaturized projection, ergonomics, software &#8211; hard technology, expensive to get wrong. Tom walked out of the room and came back 45 minutes later with a coat hanger bent into a neck loop, a sheet of plexiglass, a middle-school sheet protector taped to it, and a pico-projector connected to a netbook. It looked like garbage. It cost less than $500. And within an hour his team had learned that red text washes out, the upper right corner gives headaches, and email pop-ups are socially awkward. 
They learned more in one afternoon with a coat hanger than they would have in six months of &#8220;proper&#8221; engineering &#8211; because the cost of being wrong was almost zero.</p><p>The engineers wanted to predict the solution. Tom wanted to ping the solution space. The beautiful irony is that by refusing to build the &#8220;real&#8221; thing, he got to the real thing faster than anyone else.</p><p>Most organizations get this backwards. They shout &#8220;go faster!&#8221; while keeping the cost of failure high. They say &#8220;fail fast and fail forward&#8221; while promoting the people who never make mistakes. They create innovation labs and demand agile workflows &#8211; but require three signatures to approve a $500 experiment. The incentive structure contradicts the aspiration, and incentives always win.</p><p>So here&#8217;s the practical question: What is your coat hanger? What is the cheapest, fastest, ugliest version of the thing your team has been debating in conference rooms for six months? And what would it take to test it this week &#8211; not next quarter, not after the strategy offsite, not when the budget gets approved &#8211; but this week?</p><p>Audit the price of your errors. Count the signatures required to run a small experiment. Look at what happens to the person who tries something and fails versus the person who sits in meetings and never ships. That gap &#8211; between the stated value of learning and the actual cost of failure &#8211; is where your organization&#8217;s speed goes to die.</p><p>Close that gap, and you don&#8217;t need to hire faster people or buy better tools. 
You just need to hand them a Sharpie and point them at the nearest coffee shop.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!PcZD!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!PcZD!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 424w, https://substackcdn.com/image/fetch/$s_!PcZD!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 848w, https://substackcdn.com/image/fetch/$s_!PcZD!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 1272w, https://substackcdn.com/image/fetch/$s_!PcZD!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!PcZD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp" width="1300" height="975" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:975,&quot;width&quot;:1300,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:40824,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/webp&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/193411541?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!PcZD!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 424w, https://substackcdn.com/image/fetch/$s_!PcZD!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 848w, https://substackcdn.com/image/fetch/$s_!PcZD!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 1272w, https://substackcdn.com/image/fetch/$s_!PcZD!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb23ecf7e-f86b-4f0b-ba95-0b0e18648fb2_1300x975.webp 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><em>This briefing is adapted from my new book</em> <a href="https://rdcl.is/outlearn/">OUTLEARN: The Art of Learning Faster Than the World Can Change</a>, <em>launching April 28. It&#8217;s the first volume in a series called Built for Turbulence &#8211; short, framework-dense field manuals for leaders who are done planning beautifully and ready to start learning fast. 
More on that soon.</em></p><p><em>@Pascal</em></p>]]></content:encoded></item><item><title><![CDATA[CEOs Are Volunteering to Be Replaced]]></title><description><![CDATA[The Internet tips majority-bot, the encryption window closes in 2029, and a new Wharton paper argues AI has fundamentally restructured how humans think &#8211; not just what they do]]></description><link>https://briefing.rdcl.is/p/ceos-are-volunteering-to-be-replaced</link><guid isPermaLink="false">https://briefing.rdcl.is/p/ceos-are-volunteering-to-be-replaced</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 03 Apr 2026 15:04:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b17f2df1-1f6b-4465-8d96-967b105cf690_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>This week has been one of contemplation &#8211; as is evidenced in our &#8220;Headlines from the Future&#8221; section. While AI keeps moving at lightning speed, it feels like we (the collective &#8220;we&#8221;) are starting to get our feet under us and figure things out&#8230;</p><p>Meanwhile, a quick personal note before the links: my new book <a href="https://rdcl.is/outlearn/">OUTLEARN &#8211; The Art of Learning Faster Than the World Can Change</a> &#8211; launches April 28 on Amazon. It&#8217;s the first volume in a new series called Built for Turbulence: short, framework-dense field manuals for leaders operating in volatile environments. I&#8217;ll share more next week in the Tuesday deep-dive. 
&#129304;&#127996;</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.cnbc.com/2026/03/26/coca-cola-james-quincey-walmart-doug-mcmillon-artificial-intelligence-step-down.html">The AI-CEO Threat.</a></strong> Here&#8217;s an interesting one &#8211; the CEOs of major companies are stepping down to make room for people with a better grip on AI.</p><blockquote><p>&#8220;In a pre-AI, a pre-gen-AI mode, we made a lot of progress. But now there&#8217;s a huge new shift coming along,&#8221; Quincey said. While he said he&#8217;s leaning into the technological advances, he believes the beverage company needs &#8220;someone with the energy to pursue a completely new transformation of the enterprise.&#8221;</p></blockquote><p>It does make you wonder a) how many CEOs are hanging on to their jobs by the skin of their teeth, b) how many CEOs are oblivious to what the AI transformation actually means for their companies, and c) how many more CEOs we will see throwing in the towel and handing over the reins to new generations. Now might be a good time for folks with CEO aspirations (and a solid grip on AI) to step up&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646">Thinking Fast, Slow, and Artificial.</a></strong> In 2011, Nobel Prize winner Daniel Kahneman published his bestselling book &#8220;Thinking, Fast and Slow.&#8221; In it, he describes the two modes of thinking we all operate in: System 1, which is fast and intuitive, and System 2, which is slow and deliberate. Now, in a new paper, Steven D. Shaw and Gideon Nave from The Wharton School argue that AI introduced a third mode of thinking:</p><blockquote><p>People increasingly consult generative artificial intelligence (AI) while reasoning. As AI becomes embedded in daily thought, what becomes of human judgment? 
We introduce Tri-System Theory, extending dual-process accounts of reasoning by positing System 3: artificial cognition that operates outside the brain. System 3 can supplement or supplant internal processes, introducing novel cognitive pathways.</p></blockquote><p>And, as you would expect, with it comes a whole host of questions: &#8220;System 3 reframes human reasoning and may reshape autonomy and accountability in the age of AI.&#8221; The study is worth reading&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.anthropic.com/research/economic-index-march-2026-report">AI Learning Curves Are Real.</a></strong> Anthropic, maker of Claude, released yet another report on the usage of AI (I applaud them for doing this &#8211; their reports tend to be actually useful, and not the usual company-sponsored &#8220;look how great we are&#8221; puffery). This time, they dug into the use of AI across the economy. Lots of good nuggets in the paper; the one standout for me is their insight into how the jagged edge, the concept popularized by Ethan Mollick, plays out in the real world (this is paraphrased):</p><blockquote><p>There&#8217;s a compounding dynamic at play: experienced users bring harder problems, get better results, and develop sharper instincts for working with AI &#8211; while later adopters are still figuring out the basics.</p></blockquote><p>In essence: Early adopters with high-skill tasks have more successful interactions with Claude than later, less technical adopters &#8211; and these early-adopting users may simultaneously be the most exposed to AI-driven disruption and most aided by AI in these initial, augmentative waves of adoption. As my mom used to say: Be careful what you wish for.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.greptile.com/blog/ai-slopware-future">Is AI Slop Our Future?</a></strong> AI Slop is seemingly everywhere these days. And it&#8217;s getting worse. 
But here is an interesting counter-argument (at least when it comes to code):</p><blockquote><p>[&#8230;] AI models will write good code because of economic incentives. Good code is cheaper to generate and maintain. Competition is high between the AI models right now, and the ones that win will help developers ship reliable features fastest, which requires simple, maintainable code. Good code will prevail, not only because we want it to (though we do!), but because economic forces demand it. Markets will not reward slop in coding, in the long term.</p></blockquote><p>In plain words: &#8220;AI will write good code because it is economically advantageous to do so.&#8221; I do believe this to be true (we already see this with the quality of code generated by frontier models such as Claude Opus/Sonnet 4.6). It will be interesting to see how this plays out &#8211; there might be a real incentive for AI companies to compete on quality, which would be a very &#8220;free market&#8221; thing to do.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/technology/2026/mar/31/jobs-ai-cant-do-young-adults">The Jobs AI Can&#8217;t Do &#8211; and the Young Adults Doing Them</a></strong> A new generation is redefining what a good job looks like. Hands-on trades are shedding their stigma, replaced by something more compelling: skilled work no machine can replicate. <em>@Jane</em></p><p><strong><a href="https://www.latimes.com/business/story/2026-03-30/apple-at-50-how-garage-startup-became-3-5-trillion-titan">Apple Turns 50</a></strong> Wozniak on Apple: The secret to the company&#8217;s success was that it managed its brand well and didn&#8217;t make &#8220;lousy junk&#8221; that breaks down.
<em>@Mafe</em></p><p><strong><a href="https://www.gzeromedia.com/the-case-against-political-prediction-markets">The Case Against Political Prediction Markets</a></strong> Straight from dystopia, a valuable lesson that we keep relearning: Maybe not everything should be a market. <em>@Jeffrey</em></p><p><strong><a href="https://www.strategy-business.com/blog/What-leaders-get-wrong-about-responsibility">What Leaders Get Wrong About Responsibility</a></strong> Leaders love to &#8220;hold people accountable&#8221; &#8211; fewer know how to build systems where responsibility organically shows up. <em>@Kacee</em></p><p><strong><a href="https://www.youtube.com/watch?v=UWHdiLdemXQ">PIEZODANCE</a></strong> Not a read this week, but a video. And not just a video, but a contemporary dance video &#8211; this year&#8217;s winner of the &#8220;Dance your PhD Thesis&#8221; competition is all about energy &#8211; and it&#8217;s stunning. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128274; We have been talking about this since the early days of Singularity University, now it&#8217;s closer than ever: &#8220;<a href="https://www.theguardian.com/technology/2026/mar/26/google-quantum-computers-crack-encryption-2029">Google warns quantum computers could hack encrypted systems by 2029.</a>&#8221; Time to update your security keys (there are quantum-secure password-generating algorithms; you just have to use them).</p><p>&#128678; Similarly, we have been talking about vertical AI models for a while now (well, &#8220;a while&#8221; in AI-timeline terms) &#8211; they, also, are closer than ever: <a href="https://x.com/eoghan/status/2037197696075981124">The age of vertical models is here.</a></p><p>&#128184; BlackRock&#8217;s Larry Fink warns that &#8220;<a href="https://www.wsj.com/finance/investing/larry-finks-warning-invest-or-risk-getting-left-behind-by-ai-d2f1d09d">artificial intelligence could widen wealth inequality if ownership does not broaden alongside 
it</a>&#8221; &#8211; i.e., those who invest in stocks will benefit; those who cannot will be left behind.</p><p>&#129489;&#127996;&#8205;&#127979; Tech up, test scores down: <a href="https://undark.org/2026/04/01/sweden-schools-books/">Amid declining test scores, Sweden has pivoted away from screens and invested in back-to-basics school materials (i.e., books).</a></p><p>&#128045; Hold my beer: <a href="https://www.404media.co/disneys-openai-sora-disaster-shows-ai-will-not-save-hollywood/">Disney&#8217;s Sora disaster shows AI will not revolutionize Hollywood.</a></p><p>&#128300; Surprised is no one: <a href="https://www.theatlantic.com/science/2026/03/china-science-superpower/686564/">The shocking speed of China&#8217;s scientific rise.</a></p><p>&#129436; All it takes is five seconds of your voice &#8211; <a href="https://mistral.ai/news/voxtral-tts">Mistral&#8217;s newest voice cloning AI is scarily good.</a></p><p>&#128084; The easy way out: <a href="https://www.bbc.com/news/articles/cde5y2x51y8o">Tech CEOs suddenly love blaming AI for mass job cuts. Why?</a></p><p>&#127917; This is just too good: Someone trained a large language model solely on Victorian-era literature. The result: <a href="https://www.estragon.news/mr-chatterbox-or-the-modern-prometheus/">Mr. Chatterbox</a></p><p>&#129302; AI bots now make up <a href="https://www.cnbc.com/2026/03/26/ai-bots-humans-internet.html">more than 50% of all Internet traffic</a>.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,700+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love.
When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Vibe Coding Our Way to 70%]]></title><description><![CDATA[The inversion that SaaS wasn't prepared for&#8230;]]></description><link>https://briefing.rdcl.is/p/vibe-coding-our-way-to-70</link><guid isPermaLink="false">https://briefing.rdcl.is/p/vibe-coding-our-way-to-70</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 31 Mar 2026 14:44:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!TjKs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TjKs!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!TjKs!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 424w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 848w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 1272w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TjKs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png" width="1200" height="630" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:630,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:503656,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/192553695?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" 
alt="" srcset="https://substackcdn.com/image/fetch/$s_!TjKs!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 424w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 848w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 1272w, https://substackcdn.com/image/fetch/$s_!TjKs!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0f0c84ef-0454-417c-a9b4-980af593a48b_1200x630.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" 
stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>There&#8217;s an early signal I&#8217;ve now seen enough times in the wild that it&#8217;s hard to dismiss as anecdotal, even if each individual instance still sounds like one. Over the past few weeks, I&#8217;ve had multiple conversations with CEOs of tech startups who are starting to receive a version of the same feedback from potential customers: instead of buying software, prospects are increasingly deciding to vibe-code a solution themselves. Not because it&#8217;s better, but because it gets them far enough.</p><p>That &#8220;far enough&#8221; is landing, with surprising consistency, around 70%.</p><p>I raised this at a thought leader symposium in Dallas last week, expecting at least some pushback, and instead got immediate agreement. One firm owner said plainly that rather than paying $300/mo per user for an off-the-shelf product, the agent-built 70% solution is good enough in the current environment. Another chimed in (not a developer by any stretch) and said he&#8217;s been building things on the weekends simply because it&#8217;s fun. This isn&#8217;t just a cost decision, it&#8217;s a behavioral shift.</p><p>What&#8217;s striking is how quickly the boundary of what people &#8220;won&#8217;t build themselves&#8221; is collapsing. In an internal discussion at radical, the point was raised that surely there are still limits - that people aren&#8217;t going to start vibe coding their own general ledger. And if you read the <a href="https://briefing.rdcl.is/p/mckinsey-cant-you-can">recent briefing</a>, someone had done exactly that. 
<a href="https://craigmod.com/essays/software_bonkers/">By his own admission</a>, it wasn&#8217;t particularly good, and he wasn&#8217;t using a complex GL to begin with, but it worked for his business. Around the same time, I saw a CEO share on LinkedIn that he had spent a weekend building a replacement for HubSpot. Again, not best-in-class, but usable and to his own preferences.</p><p>Individually, these are easy to write off&#8230;together, they form a pattern. My instinct, honestly, is still that this has limits. Not every system will get vibe-coded into existence, but I&#8217;m increasingly unsure where those limits actually are. That uncertainty feels more important than whatever answer I&#8217;d have given six, or even three months ago.</p><p>TechCrunch has already leaned into the narrative of <a href="https://techcrunch.com/2026/03/01/saas-in-saas-out-heres-whats-driving-the-saaspocalypse/">SaaSpocalypse</a>, which may or may not be more marketing fodder than reality, but it points to something worth paying attention to. Because the more interesting dynamic here isn&#8217;t whether these self-built solutions rival existing software - they don&#8217;t. It&#8217;s that they don&#8217;t have to because the standard isn&#8217;t excellence anymore. It&#8217;s sufficiency, shaped by context, constraints, and increasingly, by a willingness to trade polish for control. What&#8217;s notable is that this isn&#8217;t just showing up in conversations, it&#8217;s already impacting markets. Last month, a single release from Anthropic triggered a roughly $285B selloff across the software sector.</p><p>It would be convenient to attribute this entirely to economic pressure. Budgets are tighter, scrutiny is higher, and software that once felt like a default purchase now has to compete for its place. That&#8217;s real, and it&#8217;s accelerating the behavior. 
The structural shift underneath all of this is simple: the cost of creating software has dropped below the perceived cost of buying it &#8211; and when that inversion happens, the starting point changes. You don&#8217;t begin with procurement, you begin with construction.</p><p>What sits underneath that shift is that software is quietly moving from something standardized to something individualized. For the last two decades, SaaS has been built on a kind of implicit compromise: you adopt a system designed for the average user, and in return you get scale, reliability, maintenance, and convenience. But when the cost of building collapses, that tradeoff starts to feel less necessary. Instead of adapting your workflows to fit a product, you can increasingly shape the product to fit your workflows. It&#8217;s messier, and often incomplete, but it&#8217;s also more precise&#8230;and for many use cases, that precision matters more than polish.</p><p>Pascal&#8217;s framing in the briefing around bifurcation is useful here, not as theory, but as a way to understand where this is going. We&#8217;re watching the market split between systems where completeness and trust are non-negotiable, and a much larger surface area where &#8220;good enough&#8221; is not just acceptable, but rational. The 70% threshold is emerging as the dividing line; above it, you still buy &#8211; but below it, more and more people are choosing to build.</p><p>I think what makes this particularly important is that it reframes competition in a way that most companies aren&#8217;t prepared to handle. The threat isn&#8217;t another product with a better roadmap or a tighter feature set, it&#8217;s a user who decides they don&#8217;t need the category in the first place. A small business owner comparing a self-built ledger to Quicken isn&#8217;t benchmarking against enterprise accounting software. A founder assembling a CRM over a weekend isn&#8217;t trying to replicate HubSpot in full.
They are solving a narrower, more individualized version of the problem &#8211; and in doing so, stepping outside the boundaries that defined the category. Jeff Seibert, the CEO of <a href="https://digits.com/">Digits</a>, put language to this in a way that&#8217;s worth paying attention to: &#8220;the second-order effects will be fascinating. When software is cheap, it&#8217;s taste and distribution that matter.&#8221; This framing pulls the conversation out of tooling and into consequences.</p><p>And that opt-out dynamic is the signal.</p><p>Once someone successfully builds one thing, even imperfectly, the barrier to building the next drops dramatically. Capability compounds, confidence compounds, and what starts as experimentation begins to normalize into an alternative path: one that doesn&#8217;t rely on waiting for software to catch up to your needs, because you&#8217;ve already adjusted it yourself.</p><p>The implication isn&#8217;t that 70% gets better (although I&#8217;m sure that number continues to improve as the coding models mature); it&#8217;s that once users believe they can build for themselves, the default posture shifts from buying software to questioning whether they need it at all.</p><p><em>@Kacee</em></p>]]></content:encoded></item><item><title><![CDATA[Nine Nuclear Reactors Worth of Hype]]></title><description><![CDATA[Walmart’s AI shopping experiment crashes, AGI benchmarks humble Silicon Valley, and the ads have officially reached the refrigerator]]></description><link>https://briefing.rdcl.is/p/nine-nuclear-reactors-worth-of-hype</link><guid isPermaLink="false">https://briefing.rdcl.is/p/nine-nuclear-reactors-worth-of-hype</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 27 Mar 2026 14:36:40 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f35bc325-b08a-4b0b-8293-96b01020e838_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Pardon my French (in
my defense, it&#8217;s not my headline), but Mario Zechner&#8217;s &#8220;<a href="https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/">Thoughts on slowing the f*** down</a>&#8221; is a good reminder that all the wondrous things AI can and does do for us come at a cost &#8211; hence his advice: &#8220;[&#8230;] slowing the f*** down and suffering some friction is what allows you to learn and grow.&#8221; With that in mind &#8211; time to slow down, welcome the weekend, and dive one last time into our wild future before we call it a Friday.</p><p>P.S. <a href="https://rdcl.is/a-podcast-with/jason-goldberg/">A new episode of our podcast dropped:</a> Jason Goldberg has spent 30 years watching companies survive &#8211; and get destroyed by &#8211; disruption in retail. His counterintuitive advice for the agentic commerce moment: stop trying to be first, and start asking what you&#8217;ll regret not doing when the future arrives.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.tomshardware.com/tech-industry/artificial-intelligence/planned-10-gigawatt-softbank-data-center-in-ohio-might-be-the-largest-in-the-world-will-require-a-usd33-billion-natural-gas-plant-equivalent-to-nine-nuclear-reactors">AI&#8217;s Energy Demands Are Truly Bonkers.</a></strong> Japanese tech giant SoftBank is building a massive 10GW data center in Ohio to host AI models. Aside from the cool $30&#8211;40 billion price tag, it will require the construction of a $33 billion natural gas power plant &#8211; with an insane output capacity (emphasis mine):</p><blockquote><p>When completed, the new site could be one of the largest AI data centers ever built.
Furthermore, it will be powered by one of the world&#8217;s largest fleets of gas turbines, <em>equivalent to the energy supply of nine nuclear reactors.</em></p></blockquote><p>It does leave you wondering where and how all this will end.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://searchengineland.com/walmart-chatgpt-checkout-converted-worse-472071">Maybe AI Isn&#8217;t Online Shopping&#8217;s Future After All.</a></strong> After the initial hype of online shopping results being incorporated into the answers LLMs give to the numerous product-related queries they receive, Walmart revealed that the conversion rates they are seeing from those AI referrals are just terrible.</p><blockquote><p>After testing 200,000 items in ChatGPT, Walmart found sharply lower conversions and will use its own integrated shopping experience. Walmart said conversion rates for purchases made directly inside ChatGPT were three times lower than when users clicked through to its website.</p></blockquote><p>Next: Agentic commerce. The jury&#8217;s out.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.anthropic.com/features/81k-interviews">What 81,000 People Want From AI.</a></strong> Anthropic, the AI company which is <em>not</em> OpenAI, conducted what is, in their own words, likely the largest study on users&#8217; desires, wishes, and fears when it comes to their use of AI. Anthropic being Anthropic, they didn&#8217;t survey people using a traditional questionnaire, but rather had their chatbot &#8220;talk&#8221; to people. The findings won&#8217;t surprise you &#8211; people want to use AI to better themselves: professional excellence and increased productivity, which translates into the very human desire to, ultimately, live better.
And respondents embody the F. Scott Fitzgerald line we are so fond of quoting &#8211; they keep the light and the dark of AI in their heads simultaneously.</p><blockquote><p>&#8220;AI should be cleaning windows and emptying the dishwasher so I can paint and write poetry. Right now it&#8217;s exactly the other way around.&#8221;</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://arcprize.org/">AGI? Not so Fast!</a></strong> AGI, or Artificial General Intelligence, is the thing Sam Altman and others love to talk about &#8211; and promise is just around the corner. To demo their respective companies&#8217; progress, they roll out benchmark after benchmark showing how their AI beats humans on the sommelier exam. A new benchmark, however, shows that AGI is still a long, long way off. The ARC-AGI-3 benchmark pits leading AIs against humans in a series of computer games &#8211; and AIs don&#8217;t look all that great. To apply a lesson my statistics professor hammered into our heads: Never trust a statistic you haven&#8217;t faked yourself.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.wsj.com/lifestyle/samsung-refrigerator-ads-lg-whirlpool-ge-10ea7bcc?st=dFog7V">Ads Are Popping up on the Fridge and It Isn&#8217;t Going Over Well</a></strong> Ads are literally popping up everywhere (even on Google Maps starting this summer), but people are particularly irked by ads on expensive refrigerators with a big screen for recipes, weather updates, and, apparently, ads. <em>@Mafe</em></p><p><strong><a href="https://aeon.co/essays/how-do-we-deal-with-the-catastrophe-of-uninsurability">The Insurance Catastrophe</a></strong> A deep dive into the history &amp; future of insurance markets offers a fascinating lens for exploring how communities, societies, and economies deal with radical uncertainty and catastrophic risk.
<em>@Jeffrey</em></p><p><strong><a href="https://www.forbes.com/sites/davidrosowsky/2026/03/21/the-60-year-degree-why-universities-must-pivot-from-recruitment-to-perpetual-partnership/">The 60-Year Degree: Why Universities Must Pivot from Recruitment to Perpetual Partnership</a></strong> Higher ed has been at an inflection point for years; the degree is just the first casualty of a shift to lifelong contracts. <em>@Kacee</em></p><p><strong><a href="https://undark.org/2026/03/20/ai-slop-children/">AI Slop Is Infiltrating Online Children&#8217;s Content</a></strong> Surprised is, of course, no one. But it does leave you wondering what happens to the brains and cognitive development of children who are exposed to AI slop from an early age. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#127871; <a href="https://www.youtube.com/watch?v=T4Upf_B9RLQ">Hilarious take on the world of Enshittification</a> by the Norwegian Consumer Council (hat tip to Angel Grimalt for the link).</p><p>&#129399;&#127996; AI agents going rogue: The more we rely on AI, the more we deploy AI agents, the more we see fun headlines like this: <a href="%EF%BF%BC">Meta is having trouble with rogue AI agents</a> &#8211; now consider what this means for any company <em>not</em> the size of, or with the resources of, Meta!</p><p>&#127866; Speaking of cyberattacks and our ever-increasing reliance on Internet-connected technologies: <a href="https://techcrunch.com/2026/03/20/cyberattack-on-vehicle-breathalyzer-company-leaves-drivers-stranded-across-the-us/">Cyberattack on vehicle breathalyzer company leaves drivers stranded across the US.</a></p><p>&#128104;&#127996;&#8205;&#128187; Nerd alert! But super helpful: Here is a <a href="https://github.com/nidhinjs/prompt-master">Claude Skill &#8211; Prompt Master &#8211;</a> which helps you create better prompts, highly optimized for specific use cases, tools, and target LLMs.</p><p>&#129318;&#127996; Yep, bro&#8230; Whatever.
&#8220;<a href="https://fortune.com/2026/03/24/perplexity-ceo-ai-layoffs-not-bad-people-hate-jobs-entrepreneurship/">Perplexity CEO says AI layoffs aren&#8217;t so bad because people hate their jobs anyways: &#8216;That sort of glorious future is what we should look forward to&#8217;</a>&#8221;</p><p>&#9875; The running and cycling app Strava has been used to track the location of military outposts before &#8211; now the French newspaper Le Monde has used it to <a href="https://www.lemonde.fr/en/international/article/2026/03/20/stravaleaks-france-s-aircraft-carrier-located-in-real-time-by-le-monde-through-fitness-app_6751640_4.html">track the location of France&#8217;s aircraft carrier</a>. Note: Your public data is <em>public</em> data.</p><p>&#129516; Fascinating read on the adaptability of the human body: <a href="https://www.zmescience.com/science/biology/tribe-in-kenya-evolved-genetic-mutation-that-lets-them-survive-with-almost-no-water/">Tribe in Kenya evolved genetic mutation that lets them survive with almost no water.</a></p><p>&#129378; A Japanese glossary of <a href="https://www.nippon.com/en/japan-data/h01362/">chopsticks faux pas</a>.</p><p>&#129523; Lovely <a href="https://www.web-rewind.com/">journey through 30 years of the web</a>.</p><p>&#127911; Peak 80s nostalgia: <a href="https://maxell-usa.com/product/cassetteplayer/">The Maxell Wireless Cassette Player.</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,600+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. 
When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[McKinsey Can’t. You Can.]]></title><description><![CDATA[While Anthropic&#8217;s CEO stares down his Oppenheimer moment, a CEO loses $250M trusting ChatGPT over his lawyers, and OpenClaw turns out to be FOMO dressed as a technological breakthrough.]]></description><link>https://briefing.rdcl.is/p/mckinsey-cant-you-can</link><guid isPermaLink="false">https://briefing.rdcl.is/p/mckinsey-cant-you-can</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 20 Mar 2026 13:24:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5fdd09db-ce13-409a-b677-7aaa45025084_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Boy oh boy, the world is spinning faster than ever&#8230; This last week has been yet another week of AI insanity. 
Meanwhile, we are sweating at an unprecedented 86 degrees Fahrenheit here in Boulder, CO (we usually see snow around this time of year), and I am writing this in the rain at 45 degrees Fahrenheit while out for a weekend of ice climbing in the Canadian Rockies in Canmore, Alberta&#8230; We will see how the ice is tomorrow &#8211; just arrived.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.mckinsey.com/capabilities/tech-and-ai/how-we-help-clients/rewiring-the-way-mckinsey-works-with-lilli">Even the Consultants Can&#8217;t Make AI Work for Them.</a></strong> Here is an interesting one: McKinsey created and deployed their own AI assistant &#8220;Lilli&#8221; &#8211; and in their write-up about it, they report that 72% of their employees use it, collectively tossing 500,000 prompts at it per month.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!_Zyl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!_Zyl!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 424w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 848w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 1272w, 
https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png" width="1456" height="431" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:431,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:137974,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/191539185?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!_Zyl!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 424w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 848w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 
1272w, https://substackcdn.com/image/fetch/$s_!_Zyl!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fcb7f48-9799-4be8-82ac-dfccef6cf02f_1762x522.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>72% of McKinsey&#8217;s employees are about 29,000 people. 29,000 people prompting their AI 500,000 times a month is only 17 prompts per person per month! That&#8217;s about one prompt every other day&#8230; Not exactly a lot. 
I prompt Claude easily 17 times in a single day&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://craigmod.com/essays/software_bonkers/">McKinsey Can&#8217;t &#8211; But Individuals Do.</a></strong> In stark contrast to McKinsey, solo-developer Craig Mod built his own (fairly complex) accounting system from scratch using Claude Code in five short days. Aside from the audacity of it all, it&#8217;s a perfect example of the &#8220;bifurcation of intelligence&#8221; we have been talking about here in the radical Briefing. On one hand you have big firms seeking efficiency gains by deploying chatbots, and on the other you have individuals riding the speartip of AI to create complex, bespoke systems.</p><blockquote><p>Simply put: It&#8217;s a big mess, and no off-the-shelf accounting software does what I need. So after years of pain, I finally sat down last week and started to build my own. It took me about five days. I am now using the best piece of accounting software I&#8217;ve ever used. It&#8217;s blazing fast. Entirely local. Handles multiple currencies and pulls daily (historical) conversion rates. It&#8217;s able to ingest any CSV I throw at it and represent it in my dashboard as needed. It knows US and Japan tax requirements, and formats my expenses and medical bills appropriately for my accountants. I feed it past returns to learn from. I dump 1099s and K1s and PDFs from hospitals into it, and it categorizes and organizes and packages them all as needed. It reconciles international wire transfers, taking into account small variations in FX rates and time for the transfers to complete. It learns as I categorize expenses and categorizes automatically going forward. It&#8217;s easy to do spot checks on data. If I find an anomaly, I can talk directly to Claude and have us brainstorm a batched solution, often saving me from having to manually modify hundreds of entries. And often resulting in a new, small, feature tweak. 
The software feels organic and pliable in a form perfectly shaped to my hand, able to conform to any hunk of data I throw at it. It feels like bushwhacking with a lightsaber.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://stopsloppypasta.ai/en/">Stop Sloppypasta.</a></strong> Like it or not, you will have to deal with AI-generated content &#8211; both personally and professionally. Colleagues who are responding to a request with an AI-generated response, emails being written by your favorite LLM, proposals being created with the help of your friendly chatbot. The question might truly not be &#8220;if&#8221; but &#8220;how&#8221; &#8211; here is a set of very reasonable guidelines and practices to help you navigate this brave new world.</p><blockquote><p>AI capabilities keep increasing, and using it to draft, brainstorm or accelerate you will be increasingly useful. However, using AI should not make your productivity someone else&#8217;s burden. New tools require new manners. <strong>Use AI to accelerate your work or improve what you send.</strong> <strong>Don&#8217;t use it to replace thinking about what you&#8217;re sending.</strong></p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://entropytown.com/articles/2026-03-12-openclaw-sandbox/">OpenClaw Isn&#8217;t Really New &#8211; It&#8217;s The Dream of Free Labour.</a></strong> Unless you were living under a rock in AI-land, you&#8217;ve definitely heard of the OpenClaw craziness (we reported on it multiple times here in the radical Briefing). The narrative, usually, is around the technological breakthrough and the magic that ensues when you hand over the keys to the kingdom to your army of AI bots. Here&#8217;s a good counter-narrative &#8211; the tech isn&#8217;t new per se, it&#8217;s just combined and connected in an interesting way. 
And the hype, really, is about the never-ending dream of free labour &#8211; and ends up being more about FOMO than anything else.</p><blockquote><p>A machine producing a thousand candidate images while you sleep is plausible and often useful. A machine founding a hundred profitable businesses before breakfast is rather more ambitious. The first is a search process. The second is venture-capital fan fiction.</p></blockquote><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theatlantic.com/technology/2026/03/anthropic-dod-ai-utopianism/686327/?gift=0GPrpLquXY4NmRQ6sk9MNvR2B7Kzm7g5dqeIskXWHDQ">Dario Amodei&#8217;s Oppenheimer Moment</a></strong> Dario Amodei may be having his Oppenheimer moment, and judging by the Pentagon&#8217;s latest move, he never really had a choice. <em>@Jane</em></p><p><strong><a href="https://www.fastcompany.com/91508903/after-hours-meetings-are-on-the-rise-ai-could-make-things-even-worse">After-Hours Meetings Are on the Rise; AI Could Make Things Even Worse</a></strong> Everyone is in agreement that there shouldn&#8217;t be so many meetings, but unfortunately they&#8217;re on the rise. Specifically, after-hours meetings due to more global teams and distributed workforces. <em>@Mafe</em></p><p><strong><a href="https://www.newyorker.com/culture/infinite-scroll/why-tech-bros-are-now-obsessed-with-taste">Why Tech Bros Are Now Obsessed With Taste</a></strong> As the zeitgeist turns and startup entrepreneurs scramble to differentiate their offerings in an era of AI abundance, prepare to hear way, way too much about &#8220;taste&#8221; and &#8220;discernment&#8221; &#8211; and tune your BS detector accordingly. 
<em>@Jeffrey</em></p><p><strong><a href="https://www.forbes.com/sites/jeffkauflin/2026/03/17/why-an-unsustainable-bubble-is-growing-inside-fintech/">Why an Unsustainable Bubble Is Growing in Fintech</a></strong> When growth is manufactured through pricing arbitrage and balance sheet gymnastics, you&#8217;re not building a market; you&#8217;re distorting one. <em>@Kacee</em></p><p><strong><a href="https://davidoks.blog/p/why-the-atm-didnt-kill-bank-teller">Why ATMs Didn&#8217;t Kill Bank Teller Jobs, but the iPhone Did</a></strong> You know the story about ATMs and bank tellers &#8211; this deep dive into what actually happened (and keeps happening) is a good reminder to be skeptical of the lore at large. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128302; Silicon Valley legend Kevin Kelly on &#8220;<a href="https://kevinkelly.substack.com/p/how-to-future">How to Future.</a>&#8221;</p><p>&#129489;&#127996;&#8205;&#127891; A professor&#8217;s honest assessment on &#8220;<a href="https://www.science.org/content/article/why-i-may-hire-ai-instead-graduate-student?__cf_chl_rt_tk=JOqT5pmrDEj0G.b1ijxTJND_JD_H0vfLIrZMK68Ds54-1773714618-1.0.1.1-OSeKrV93UJgm10L2IOwXuX5_kMOatTYzMzAsHktbXxU">why I may &#8216;hire&#8217; AI instead of a graduate student.</a>&#8221;</p><p>&#129489;&#127996;&#8205;&#127979; Headline captures it all: <a href="https://www.bloodinthemachine.com/p/if-ai-is-writing-the-work-and-ai">&#8220;If AI is writing the work and AI is reading the work, do we even need to be there at all?&#8221; Educators reveal a growing crisis on campus and off.</a></p><p>&#129489;&#127996; Not that anyone ought to be surprised: <a href="https://www.404media.co/ceo-ignores-lawyers-asks-chatgpt-how-to-void-250-million-contract-loses-terribly-in-court/">CEO asks ChatGPT how to void $250 million contract, ignores his lawyers, loses terribly in court.</a></p><p>&#129528; AI is making its way into children&#8217;s toys. 
Parents ought to be cautious: <a href="https://www.bbc.com/news/articles/clyg4wx6nxgo">AI toys for children misread emotions and respond inappropriately, researchers warn.</a></p><p>&#128104;&#127996;&#8205;&#128187; AI-generated code is awesome &#8211; and can be pretty bad: <a href="https://techxplore.com/news/2026-03-ai-coding-tools.html">Top AI coding tools make mistakes one in four times, study shows.</a></p><p>&#129438; OpenClaw is everywhere &#8211; and nowhere as much as in China: <a href="https://www.cnbc.com/2026/03/18/china-openclaw-baidu-tencent-ai.html">How China is getting everyone on OpenClaw, from gearheads to grandmas</a></p><p>&#128561; Don&#8217;t bring a knife to a gunfight. Someone built a <a href="%EF%BF%BC">$97 missile</a> &#8211; with a $5 sensor for flight control. All open source, 3D print, and build-your-own. Talk about asymmetric warfare.</p><p>&#128110;&#127996; False positives keep being a real problem &#8211; with very real consequences: <a href="https://www.ndtv.com/world-news/us-woman-wrongly-imprisoned-for-6-months-due-to-faulty-facial-recognition-11209378">US woman wrongly imprisoned for 6 months due to faulty facial recognition.</a></p><p>&#129400; Opposite approach &#8211; similar issue: You can&#8217;t trust facial recognition (see above), and you can&#8217;t trust the face either: <a href="https://startupfortune.com/the-face-recommending-your-next-health-product-is-fake-the-money-leaving-your-wallet-is-not/">The face recommending your next health product is fake, the money leaving your wallet is not.</a></p><p>&#127911; Here is a <a href="https://88mph.fm/">delightful music web app</a> that lets you listen to what a particular country was enjoying in a specific year.</p><p>&#128586; Independent search engine Kagi just released their genius <a href="https://translate.kagi.com/?from=en&amp;to=linkedin">LinkedIn Speak translator</a>. 
Take any sensible (or not) English sentence and get back the gibberish that is LinkedIn Speak.</p><p>&#127760; Headline says it all (also: Schadenfreude is real for some): <a href="https://www.404media.co/rip-metaverse-an-80-billion-dumpster-fire-nobody-wanted/">RIP Metaverse, an $80 billion dumpster fire nobody wanted</a></p><p>&#128065;&#65039; The <a href="https://tombh.co.uk/longest-line-of-sight">longest line of sight in the world</a> &#8211; took eight years to figure out.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,600+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Turning Your Official Future Into a Lever]]></title><description><![CDATA[How Smart Leaders Use the Future to Change What&#8217;s Possible Today]]></description><link>https://briefing.rdcl.is/p/turning-your-official-future-into</link><guid isPermaLink="false">https://briefing.rdcl.is/p/turning-your-official-future-into</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 17 Mar 2026 15:03:48 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/b53fe895-d3f6-4661-94df-0e2056953b4a_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A few weeks ago, here on the radical Briefing, I wrote about <a href="https://briefing.rdcl.is/p/the-official-future-trap">The Official Future Trap</a> &#8211; the idea that organizations create a singular, linear projection of the future (Peter Schwartz coined this term in his seminal book &#8220;<a href="https://www.google.com/books/edition/Art_of_the_Long_View/wjPaEAAAQBAJ">The Art of the Long View</a>&#8221;), embed it into their strategic plans, KPIs, and incentive structures, and then ride that narrow line straight into irrelevance when reality shows up differently (which it usually does). In a nutshell, the argument was: the official future is dangerous because it closes down the space of possibilities, turns uncertainty into false certainty, and makes you blind to the futures you&#8217;re not planning for.</p><p>I still very much believe all of that (and, sadly, have seen it play out too many times). But I&#8217;ve been thinking about the flip side &#8211; and it&#8217;s been nagging at me ever since a conversation I had with my dear friend and radical collaborator Jeffrey Rogers a few days after publishing that piece. 
What if the official future isn&#8217;t just a trap you fall into, but a tool you can wield strategically?</p><p>The difference, for me, comes down to a single word: There&#8217;s a massive shift which happens when you consider <em>the</em> official future versus <em>an</em> official future. <em>The</em> official future is the one you inherited. It&#8217;s the projection that everyone agreed on in last year&#8217;s offsite, now baked into budgets and headcount plans and org charts. It&#8217;s unconscious, institutional, and self-reinforcing &#8211; and it becomes the trap I wrote about in that last piece. But <em>an</em> official future is something you deliberately construct &#8211; a strategic narrative, a flag you plant in the ground that says &#8220;this is where we&#8217;re going,&#8221; designed not just to guide where you are going, but also to redefine what your people believe is possible, acceptable, and inevitable. &#8220;The&#8221; is singular and narrow; &#8220;an&#8221; is something you deliberately and strategically deploy.</p><p>Which brings us to a second concept we like to talk about, discuss, and debate here at radical: the Overton window. Named after policy analyst Joseph Paul Overton, it describes the range of ideas considered acceptable by the mainstream at any given time. Politicians &#8211; and by extension, leaders of all kinds &#8211; generally operate within this window. Step outside it and you&#8217;re &#8220;radical.&#8221; Stay within it and you&#8217;re &#8220;sensible.&#8221; The window isn&#8217;t static, though. It shifts over time, and the fascinating thing is <em>how</em> it shifts: not usually through leaders courageously stepping outside it, but through external forces &#8211; think tanks, social movements, cultural shifts, provocateurs &#8211; that drag the boundaries of what&#8217;s considered acceptable in a new direction. Once the window moves, leaders follow. Joseph G. 
Lehman, Overton&#8217;s colleague at the Mackinac Center, put it plainly: politicians are (or to be more precise: were &#8211; the very Overton window of what it means to engage in politics is rapidly and massively shifting) in the business of detecting where the window is and moving in accordance with it, not shifting it themselves.</p><p>Bring those two ideas together &#8211; the official future and the Overton window &#8211; and you realize: Inside every organization, there&#8217;s an internal Overton window &#8211; a range of strategies, investments, and ideas that are considered &#8220;on the table.&#8221; Anything outside that range gets labeled &#8220;off strategy,&#8221; &#8220;too risky,&#8221; or &#8211; my personal favorite of all time &#8211; &#8220;interesting, but not for us.&#8221; And the official future, as I argued in my previous piece, <em>reinforces</em> the current window. It tells everyone: this is where we&#8217;re going, this is what matters, everything else is noise. The window calcifies, and over time, the organization loses the ability to even <em>imagine</em> alternatives, let alone create them.</p><p>But what if you create an official future that sits at the edge of (or just beyond) the Overton window? Not so far out that your people dismiss it as pure fantasy, but far enough that it stretches what your organization considers possible. Think of it as strategic anchoring. In negotiation theory, the first number on the table &#8211; the anchor &#8211; disproportionately shapes the entire conversation that follows. Even when people know the anchor is aggressive, they adjust from it rather than ignoring it. Tversky and Kahneman documented this decades ago, and the research is super clear on this: the anchor sets the playing field, whether you want it to or not. An official future works the same way. 
When a leader declares &#8220;this is where we&#8217;re heading&#8221; &#8211; and that destination is slightly beyond what the organization currently considers feasible &#8211; the entire conversation reorganizes around that anchor. The argument moves from &#8220;should we do this?&#8221; to &#8220;how do we get there?&#8221; and your company&#8217;s Overton window moves.</p><p>And just to state the obvious: This isn&#8217;t about making wild proclamations or playing visionary-CEO-bingo, but about crafting a narrative of the future that&#8217;s credible enough to be taken seriously <em>and</em> ambitious enough to expand the boundaries of what&#8217;s considered realistic. You declare a specific, vivid future state &#8211; &#8220;in three years, 40% of our revenue comes from products that don&#8217;t exist yet&#8221; or &#8220;by 2028, we operate as a platform, not a product company&#8221; &#8211; and then you give it the weight of institutional authority. You put it in the strategic plan. You reference it in all-hands meetings. You allocate some resources toward it. You make it feel real and inevitable, even if it&#8217;s aspirational. Then, regularly, something remarkable happens: ideas that were previously dismissed as too bold now become stepping stones toward the declared destination. The previously unacceptable becomes the merely ambitious. And the merely ambitious becomes table stakes.</p><p>The self-reinforcing cycle I described in my original article &#8211; your official future leads to resource allocation, which informs the strategy, which then gets executed, and ultimately reinforces your official future &#8211; now works <em>for</em> you instead of <em>against</em> you, and you drag your organization toward a more expansive set of possibilities.</p><p>And, as so often in life, with great power comes great responsibility. 
On the constructive side, this is how every significant organizational transformation actually happens &#8211; someone with authority and/or social capital plants a flag, declares a future that stretches the window, and the organization reorganizes around it. But on the destructive side &#8211; and we&#8217;ve seen this play out at enormous scale in politics over the past decade &#8211; manufacturing an official future can be used to normalize ideas that were previously, and rightfully, considered unacceptable. Same mechanism, different intent and integrity behind it.</p><p>Let me bring this full circle. The futures cone &#8211; that beautiful framework from futures studies that Jeffrey and I deploy regularly in our work &#8211; reminds us that the further out we look, the wider the space of possible futures becomes. The official future is a single line through that expanding cone. In my original piece, I argued that&#8217;s the trap: a narrow line pretending to be the whole picture. But here&#8217;s a nuance worth thinking about: A deliberately constructed official future &#8211; one that sits at the ambitious edge of the cone &#8211; can actually <em>widen</em> the cone for your organization. It doesn&#8217;t narrow possibility, but expands the range of futures your people can even conceive of. 
It shifts the internal Overton window outward, making space for ideas, strategies, and bets that would have been dismissed as &#8220;off strategy&#8221; just months earlier.</p><p>For this to work, though, you (the leader) have to do the work of exploring the ever-expanding cone of possible futures, and the embedded, narrower cone of plausible and probable futures &#8211; and then develop an official future that sits at the ambitious edge of the cone.</p><p>So here&#8217;s my updated question &#8211; building on the one Jeffrey likes to ask our clients: <strong>What (plausible) official future could you declare today that would expand what your organization believes is possible tomorrow?</strong></p><p><em>@Pascal</em></p>]]></content:encoded></item><item><title><![CDATA[Efficiency Kills]]></title><description><![CDATA[The same AI agents gutting white-collar work just plundered McKinsey&#8217;s most confidential client data &#8211; and a self-driving car blocked the ambulance on its way to the crime scene]]></description><link>https://briefing.rdcl.is/p/efficiency-kills</link><guid isPermaLink="false">https://briefing.rdcl.is/p/efficiency-kills</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 13 Mar 2026 14:13:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/77d8cbb8-ecb2-4b0e-8888-942669c7cc44_1200x630.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>We truly do live in interesting times. From the war in the Middle East, to AI-related mass layoffs, to the global rise of nationalism (latest case in point: the elections in Chile), the climate crisis rearing its ugly head &#8211; and then you have wireless eye implants making blind people see again, EV batteries charging in 5 minutes with a 600+ mile range, AI agents doing meaningful work, and companies freeing themselves from the tyranny of overpriced and outdated SaaS tools. 
I just can&#8217;t shake Walt Whitman&#8217;s words: &#8220;I am large, I contain multitudes.&#8221; Our world truly contains multitudes.</p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://x.com/JosephPolitano/status/2029916364664611242">Tech Is the New Plastic.</a></strong> Not a good time to be in tech&#8230; Remember when your uncle said: &#8220;Become a coder. That&#8217;s the future &#8211; and you&#8217;ll be rich!&#8221;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1m1y!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1m1y!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 424w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 848w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 1272w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!1m1y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png" width="1170" height="1188" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1188,&quot;width&quot;:1170,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:480643,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://briefing.rdcl.is/i/190748996?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1m1y!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 424w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 848w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 1272w, https://substackcdn.com/image/fetch/$s_!1m1y!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4e09d624-76d9-473a-8f4c-472f958266f9_1170x1188.png 1456w" 
sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p><em>Mr. McGuire: &#8220;I just want to say one word to you. Just one word.&#8221;<br>Benjamin: &#8220;Yes, sir.&#8221;<br>Mr. McGuire: &#8220;Are you listening?&#8221;<br>Benjamin: &#8220;Yes, I am.&#8221;<br>Mr. McGuire: &#8220;Plastics.&#8221;<br>Benjamin: &#8220;Exactly how do you mean?&#8221;<br>Mr. McGuire: &#8220;There&#8217;s a great future in plastics. Think about it. 
Will you think about it?&#8221;</em></p><p>(In related news, <a href="https://www.livemint.com/companies/news/oracle-layoffs-tech-giant-to-slash-30-000-jobs-as-banks-pull-out-from-financing-ai-data-centres-11769996619410.html">Oracle slashes 30,000 jobs</a>, <a href="https://www.theguardian.com/technology/2026/mar/12/atlassian-layoffs-software-technology-ai-push-mike-cannon-brookes-asx">Atlassian lays off 1,600 people</a>&#8230; the list goes on.)</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.theverge.com/cs/features/877388/white-collar-workers-training-ai-mercor">Not a Coder? Not a Problem. AI Is Still Coming for Your Job.</a></strong> Here&#8217;s a good, long read on The Verge about lawyers, PhDs, and scientists who lost their jobs to AI. Despite all the talk about &#8220;Jevons Paradox&#8221; &#8211; the observation that efficiency gains lead to increased consumption &#8211; for now, we seem to be squarely stuck in a world where AI is a net job destroyer. It does make you wonder how long it will take for the masses to catch up with the trend and start pushing back (we, of course, already see it in pockets &#8211; the weak signals are talking).</p><blockquote><p><em>&#8220;My job is gone because of ChatGPT, and I was being invited to train the model to do the worst version of it imaginable.&#8221;</em> &#8211; Katya, content marketer turned AI trainer</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.theregister.com/2026/03/09/mckinsey_ai_chatbot_hacked/">Battle Royale: AI vs. AI.</a></strong> McKinsey, your friendly consulting firm, has deployed its own chatbot, &#8220;Lilly&#8221;. 
Hackers (in this case, and luckily for McKinsey, white-hat hackers &#8211; the good and friendly kind, who disclose their findings to the company) have, by using a set of AI agents, managed to exploit a vulnerability in Lilly and gain access to &#8220;46.5 million chat messages about strategy, mergers and acquisitions, and client engagements, all in plaintext, along with 728,000 files containing confidential client data, 57,000 user accounts, and 95 system prompts controlling the AI&#8217;s behavior.&#8221; You know, no big deal&#8230;</p><blockquote><p>[&#8230;] the entire process was &#8220;fully autonomous from researching the target, analyzing, attacking, and reporting.&#8221;</p></blockquote><p>As useful as agents are for businesses, they are equally useful for hackers. Prepare yourself for an onslaught of AI-powered cyber attacks.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theatlantic.com/technology/2026/03/central-lie-prediction-markets/686250/?gift=0GPrpLquXY4NmRQ6sk9MNnxvIjO7TXUjV5lhrJbKY0I">A Technology for a Low-Trust Society</a></strong> Prediction markets promise the wisdom of crowds but, in reality, deliver a playground for insiders, manipulators, and those willing to bet on human suffering. <em>@Jane</em></p><p><strong><a href="https://www.wsj.com/business/retail/gen-z-shopping-mall-visits-15716009">A New Generation of Mall Rats Has Arrived</a></strong> Gen Z&#8217;s need for immediate gratification has an unexpected winner: malls &#8211; they are now ramping up their social media presence and figuring out what the &#8220;future mall&#8221; should look like. <em>@Mafe</em></p><p><strong><a href="https://www.newyorker.com/magazine/2026/02/16/what-is-claude-anthropic-doesnt-know-either">What Is Claude? 
Anthropic Doesn&#8217;t Know, Either</a></strong> Maybe the single most uncanny thing about our historical moment &#8211; we&#8217;re all struggling to effectively deploy (and adapt to) a technology that continues to baffle even its creators. <em>@Jeffrey</em></p><p><strong><a href="https://sloanreview.mit.edu/article/the-hidden-power-of-messy-teams/">The Hidden Power of Messy Teams</a></strong> A study of hundreds of innovation teams found the ones most likely to implement their ideas didn&#8217;t start with clear problems; they started messy and discovered the real problem along the way. <em>@Kacee</em></p><p><strong><a href="https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-or-the-office-according-to-the-office/">The Gervais Principle, or The Office According to The Office</a></strong> Absolutely delightful deep dive into the world of the TV show &#8220;The Office&#8221; &#8211; both the British and US versions &#8211; to uncover why Ricky Gervais deserves the Nobel Prize in both economics and literature. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#129489;&#127996;&#8205;&#127859; Cofounder of Netflix, Mozilla&#8217;s former CFO, and a dear friend of ours, Jim Cook, has an excellent newsletter (Cook&#8217;s Playbook), which you ought to subscribe to. His <a href="https://www.cooksplaybooks.com/p/the-future-of-ai-and-software-debate?publication_id=1267023&amp;post_id=189674838&amp;isFreemail=true&amp;r=s981&amp;triedRedirect=true">latest post</a> is a very thoughtful takedown of the now-infamous Citrini AI Report.</p><p>&#128250; It feels like yesterday when Google bought YouTube for a &#8211; at the time &#8211; shocking $1.65 billion. 
That was in 2006; 20 years later, <a href="https://www.businessinsider.com/youtube-ad-revenue-disney-nbc-paramount-wbd-warner-bros-streaming-2026-3">YouTube now generates more ad revenue than Disney, NBC, Paramount, and WBD &#8211; combined</a>.</p><p>&#128267; Five-minute charging, 621 miles of range, 620,000 miles of life &#8211; <a href="https://www.fastcompany.com/91503415/byd-ev-battery-competes-with-gas-engines">BYD has cracked the EV battery code.</a></p><p>&#128065;&#65039; In medical news: <a href="https://www.earth.com/news/wireless-eye-implant-helps-blind-patients-read-again/">Wireless eye implant helps blind patients read again.</a></p><p>&#128664; One of the vexing problems self-driving cars still face is their behavior in edge cases &#8211; and it could be a stumbling block in their widespread adoption &#8211; as questions amplify after <a href="https://www.texastribune.org/2026/03/09/texas-austin-shooting-autonomous-vehicles-self-driving-ambulance-blocked/">one blocked an ambulance responding to an Austin shooting.</a></p><p>&#127464;&#127475; While many of us, for good reason, stay miles away from autonomous AI agents like OpenClaw, Chinese users seem to embrace them: <a href="https://hellochinatech.com/p/openclaw-china-ai-stack">OpenClaw Conquered China in 100 Days.</a></p><p>&#129534; It surely shouldn&#8217;t come as a surprise &#8211; but please: <a href="https://www.nytimes.com/2026/03/05/technology/artificial-intelligence-taxes-tax-refund.html">Don&#8217;t Trust A.I. 
to File Your Taxes</a></p><p>&#128722; Looks like ChatGPT&#8217;s dream of becoming your commerce hub is not panning out (yet): <a href="https://the-decoder.com/chatgpt-users-research-products-but-wont-buy-there-forcing-openai-to-rethink-its-commerce-strategy/">ChatGPT users research products but won&#8217;t buy there, forcing OpenAI to rethink its commerce strategy</a></p><p>&#129489;&#127996; Undoubtedly, OpenAI has a strong interest in moving companies from dabbling with AI to full-blown adoption. Hence a blog post from the company on &#8220;<a href="https://openai.com/index/the-five-ai-value-models-driving-business-reinvention/">five value models driving business reinvention</a>&#8221; &#8211; which reads like it was written by ChatGPT.</p><p>&#129352; Here&#8217;s an interesting use case for ChatGPT: <a href="https://www.theguardian.com/sport/2026/mar/09/ukraine-winter-paralympics-chat-gpt-artificial-intelligence">Ukrainian para-biathlete wins silver using ChatGPT as his coach.</a></p><p>&#129324; Pardon the language, but the argument is solid: <a href="https://rmoff.net/2026/03/06/ai-will-fuck-you-up-if-youre-not-on-board/">AI will f*** you up if you&#8217;re not on board.</a></p><p>&#129526; 3D knitting your next sweater is a thing &#8211; it&#8217;s super cool, produces a more durable product, and <a href="%EF%BF%BC">it&#8217;s here</a>.</p><p>&#129532; Admittedly nerdy, but &#8220;clean room&#8221; re-engineering has been a thing ever since we&#8217;ve had IP protection (for a good primer, watch the first season of <a href="https://en.wikipedia.org/wiki/Halt_and_Catch_Fire_%28TV_series%29">Halt and Catch Fire</a> &#8211; excellent show!). 
With AI coding tools, the question now becomes: <a href="https://simonwillison.net/2026/Mar/5/chardet/">Can coding agents relicense open source through a &#8220;clean room&#8221; implementation of code?</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,600+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The Future Is Here and It’s Watching You]]></title><description><![CDATA[Jack Dorsey lays off 4,000 people for gains not yet realized, your Ray-Bans have outsourced your privacy to Nairobi, and Burger King just gamified your friendliness]]></description><link>https://briefing.rdcl.is/p/the-future-is-here-and-its-watching</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-future-is-here-and-its-watching</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 06 Mar 2026 15:54:39 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/75e0c345-5a2a-4478-8501-37f742c5de1c_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>What a week (again) it has been! AI continues to be everywhere, geopolitics are running hot, and somehow, in the midst of it all, we are still preparing our US tax returns. If that all feels a bit bonkers, you are not alone. Meanwhile, Block (Twitter co-founder Jack Dorsey&#8217;s company) has just announced that it is going to lay off 40% of its workforce (4,000 people) &#8211; not <em>because</em> of any actual productivity gains through their use of AI, but in <em>anticipation</em> of them. Yep, as said: it&#8217;s all a bit bonkers.</p><p>Maybe now is a good time to take a break, grab a coffee, and catch up on the latest news?!</p><p>P.S. On the <em>Built for Turbulence</em> podcast, I got to interview Andreas Bachmann, co-founder and CEO of Adacor, a German software development company. We talked, among other things, about the impact of AI on his business and their people &#8211; and Andreas took a decidedly different position to Dorsey. 
<a href="https://rdcl.is/a-podcast-with/andreas-bachmann/">Have a listen.</a></p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://www.bbc.com/news/articles/cgk2zygg0k3o">Do You Want Fries With That?</a></strong> Talk about a dystopian future. Burger King is testing a new headset for its drive-thru staff, which &#8220;compiles &#8216;friendliness scores&#8217; at the fast-food chain&#8217;s locations based on employees&#8217; conversations, according to a promotional video the company shared with the BBC.&#8221; There is so much to unpack here &#8211; the sheer fact that the company cheerfully shared a &#8220;promotional video&#8221; about its AI-driven surveillance tech is probably all that you need to know.</p><p>In all fairness, the company says the technology &#8220;[&#8230;] is not designed to &#8216;record conversations or evaluate individual employees&#8217;&#8221; &#8211; <em>yet.</em> Black Mirror, anyone?</p><blockquote><p>Customer service calls have routinely been recorded and monitored for years. Employees are often aware that they can be assessed to ensure they&#8217;re using the correct language. But this latest step by Burger King elicited swift condemnation among some social media users who described it as &#8220;dystopian&#8221;. 
Others questioned how accurate the chat-bot headsets will be, given that AI tools have proven to be prone to errors.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://arstechnica.com/security/2026/03/llms-can-unmask-pseudonymous-users-at-scale-with-surprising-accuracy/">Now Everybody Knows You&#8217;re a Dog.</a></strong> A famous New Yorker cartoon from 1993 depicted two dogs in front of a computer, with one of them saying, &#8220;On the Internet, nobody knows you&#8217;re a dog.&#8221; The joke reflected the fact that, at the time, on the Internet, we reveled in pseudonymity &#8211; the act of being able to shield your true identity behind a screen name. Thanks to our friend, the omnipresent LLM, that&#8217;s all about to change.</p><blockquote><p>The finding, from a recently published <a href="https://arxiv.org/pdf/2602.16800">research paper</a>, is based on results of experiments correlating specific individuals with accounts or posts across more than one social media platform. The success rate was far greater than existing classical deanonymization work that relied on humans assembling structured data sets suitable for algorithmic matching or manual work by skilled investigators.</p></blockquote><p>This is genuinely bad news for the many groups of people who have a legitimate reason to hide their identity.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://blog.adafruit.com/2026/03/04/you-bought-zucks-ray-bans-now-someone-in-nairobi-is-watching-you-poop/">You Bought Zuck&#8217;s Ray-Bans. Now Someone in Nairobi Is Watching You Poop.</a></strong> In the same line of thought as the above &#8211; and the headline says it all already &#8211; Meta&#8217;s Smart Glasses are a complete privacy disaster. Which, of course, is not particularly surprising given it&#8217;s&#8230; well&#8230; Meta. 
Not sure how many wearers of Meta&#8217;s nifty Ray-Bans and Oakleys are aware of the fact that they opted into their camera feed being used to train Meta&#8217;s AI &#8211; with disastrous results:</p><blockquote><p>Workers at Sama, one of Meta&#8217;s annotation subcontractors, describe reviewing video of people undressing, coming out of bathrooms naked, watching porn, having sex, and exposing bank card details.</p></blockquote><p>Yep. It&#8217;s bad.</p><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.reuters.com/business/healthcare-pharmaceuticals/diagnostics-startup-droplet-biosciences-partners-with-nvidia-speed-cancer-test-2026-03-03/">Diagnostics Startup Droplet Biosciences Partners With Nvidia to Speed Cancer Testing</a></strong> Droplet&#8217;s method can detect residual disease in 24 hours by analyzing lymphatic fluid collected post-surgery, compared to the four to six weeks it typically takes for tumor remnants to appear in blood-based tests. <em>@Mafe</em></p><p><strong><a href="https://www.nytimes.com/2026/03/04/opinion/block-jack-dorsey-layoffs-ai.html">I Worked for Block; Its A.I. Job Cuts Aren&#8217;t What They Seem</a></strong> Whatever the AI-enabled performance of post-realignment Block turns out to be, the market&#8217;s reaction to the mass layoff there last week basically ensures that the narrative strategy will be copied &#8211; maybe widely. <em>@Jeffrey</em></p><p><strong><a href="https://techcrunch.com/2026/03/01/saas-in-saas-out-heres-whats-driving-the-saaspocalypse/">SaaS In, SaaS Out: Here&#8217;s What&#8217;s Driving the SaaSpocalypse</a></strong> The so-called &#8220;SaaSpocalypse&#8221; feels less like collapse and more like correction. I&#8217;m seeing more small &amp; mid-size orgs quietly choose to build their own tools because AI has made it absurdly easy and cheap to do so. 
<em>@Kacee</em></p><p><strong><a href="https://www.theguardian.com/technology/2026/feb/25/tech-legend-stewart-brand-on-musk-bezos-and-his-extraordinary-life-we-dont-need-to-passively-accept-our-fate">Tech Legend Stewart Brand on Musk, Bezos and His Extraordinary Life: &#8216;We Don&#8217;t Need to Passively Accept Our Fate&#8217;</a></strong> There are few people like Stewart Brand. Now in his late 80s, he is still actively shaping the future &#8211; through an exploration into &#8220;maintenance.&#8221; <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128576; One of the best ways to keep your AI news balanced is to read opposing viewpoints. Here is Ed Zitron&#8217;s <a href="https://www.dropbox.com/scl/fi/1p1n0y1ip48ianok9dvbp/Annotation-The-Global-Intelligence-Crisis.pdf?e=3&amp;noscript=1&amp;rlkey=qaar8ea6l5hh6jqls4x6g8q4b&amp;dl=0">inline comments on the CitriniResearch article</a>, which shook the stock markets. Well worth a read &#8211; and hilarious.</p><p>&#129400; AI Agents are all the rage &#8211; for good reason; what felt like a toy just a few months ago is now a powerful tool (just try out Claude Cowork). <a href="https://creatoreconomy.so/p/your-new-job-is-to-onboard-ai-agents">Your new job is to onboard AI agents: how AI native companies actually operate.</a></p><p>&#128104;&#127996;&#8205;&#128187; Fascinating insights into <a href="https://www.thoughtworks.com/content/dam/thoughtworks/documents/report/tw_future%20_of_software_development_retreat_%20key_takeaways.pdf">the future of software engineering</a> in the form of a retreat summary by the fine folks at Thoughtworks.</p><p>&#9749; If you know me, you know that I love (exceptional) coffee. Honor&#233; de Balzac&#8217;s treatise on &#8220;<a href="https://quod.lib.umich.edu/m/mqrarchive/act2080.0035.002/10">The Pleasures and Pains of Coffee</a>&#8221; is pure gold.</p><p>&#127859; Between milk, flour, and eggs lies a whole Bermuda Triangle of unexplored breakfast territory. 
Here goes &#8220;<a href="https://moultano.wordpress.com/2026/02/22/the-hunt-for-dark-breakfast/">The Hunt for Dark Breakfast.</a>&#8221;</p><p>&#129658; Please, do not trust AI with your health. Another case in point: <a href="https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies">&#8216;Unbelievably dangerous&#8217;: experts sound alarm after ChatGPT Health fails to recognise medical emergencies</a></p><p>&#128200; Up, up, it goes. Always interesting to see what the <a href="https://apoorv03.com/p/the-state-of-consumer-ai-part-1-usage">current state of affairs is in the world of consumer AI</a>.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,600+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[Stories of Discontinuity]]></title><description><![CDATA[Every vision of the future is fiction. 
Some are just more comfortable than others.]]></description><link>https://briefing.rdcl.is/p/stories-of-discontinuity</link><guid isPermaLink="false">https://briefing.rdcl.is/p/stories-of-discontinuity</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 03 Mar 2026 16:03:13 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/84df7c6b-5f56-4001-858c-a78a640d277f_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Heard any wild AI stories lately? The market certainly has &#8211; and not just one of them.</p><p>Last week, it was a viral piece of <a href="https://www.citriniresearch.com/p/2028gic">AI doom-inflected speculative macro fiction from Citrini Research</a> that sketched out a scenario where the rapid disruption of white-collar work and service-industry business models tips off a broad economic crisis. The post wasn&#8217;t news and wasn&#8217;t even really analysis, but it sent the prices of <a href="https://www.wsj.com/livecoverage/stock-market-today-dow-sp-500-nasdaq-tariffs-02-23-2026/card/software-stocks-are-having-another-ugly-day-LlAj9avDeFocxKHzVwRZ?">software and finance stocks</a> (especially those unfortunate enough to figure into the scenario by name) reeling just the same. 
And the Citrini post was actually the <em>second</em> <a href="https://shumer.dev/something-big-is-happening">massively viral AI-takeoff-ravages-the-labor-market story</a> to spook investors in the span of just a few weeks.</p><p>Now as you&#8217;d expect, plenty of commentators jumped in both times with critiques (<a href="https://www.citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/">1</a>, <a href="https://www.noahpinion.blog/p/the-citrini-post-is-just-a-scary">2</a>, <a href="https://www.wheresyoured.at/hatersguide-pe/">3</a>) and counterarguments (<a href="https://x.com/johnloeber/status/2025748423157432756">my favorite</a>), and even a couple of full-blown speculative counternarratives. Many of those commentators rightly pointed out that the whole Citrini thing (like the Shumer post) is, well&#8230; just a <em>story</em>.</p><p>But well&#8230; so are all of our other visions of the future.</p><p>Thinking about &#8211; and attempting to plan for &#8211; the future is fundamentally an act of imagination. That act might be grounded in historical data and built on the extrapolation of today&#8217;s evident, quantifiable trends into the space of tomorrow, but once we get into the tomorrow, we are in the realm of imagination, assumption, projection, story.</p><p>The future stories that strike us as most plausible or even probable are often stories of <em>continuity</em>, where the tomorrow doesn&#8217;t look so drastically different from today. The path of continuity is easier to imagine and also often feels more &#8220;real&#8221; because it&#8217;s grounded in more historical data. But all of that data is about the past, and our most important decisions are about the future.</p><p>Stories of <em>discontinuity</em> feel unfamiliar. That&#8217;s the point. 
They can widen the aperture of our imagination, expand the scope of conversation and awareness, offer a fresh perspective on present practice and strategy, and maybe even enable us to discover non-intuitive paths forward.</p><p>Now, is all of this to say that I think the Citrini narrative points to a particularly probable future &#8211; or that it&#8217;s even a particularly well-crafted bit of speculative fiction? Not really, no.</p><p>But I appreciate the opportunity that these viral narratives of discontinuity offer for us to engage critically with alternative future stories &#8211; and to then turn that same critical lens onto the spectrum of future narratives that we don&#8217;t so easily recognize as &#8220;stories&#8221;. Sometimes that&#8217;s because they&#8217;re narratives of continuity grounded in historical data and past experience. Sometimes it&#8217;s because they come from ostensible authorities. Sometimes it&#8217;s because they feel too deeply entrenched to ever be shaken loose or challenged.</p><p>We can and should do this at the macro level, more carefully examining big stories about AI and societal futures &#8211; asking where each story originates, <a href="https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur">what assumptions are baked in</a>, whose interests and agendas are served, who has real agency, etc.</p><p>And we can do this at the micro/org level too. Some of the most interesting conversations I&#8217;ve been having lately have been with HR &amp; People leaders about the AI augmented-futures of their organizations. One thing that keeps coming up and sticks with me: The AI-future vision of the org is <em>rarely</em> people-centric. It&#8217;s typically constructed around optimization and efficiency first, and almost every other value or stakeholder interest figures as an afterthought. 
That&#8217;s a bet and an argument, and it&#8217;s also a story that leaders within the org are telling themselves about the future.</p><p>And make no mistake: There&#8217;s a set of assumptions baked into that vision just as surely as there is in the Citrini memo or Matt Shumer&#8217;s viral blog post. And if we unpack them and find ourselves dissenting, what alternative framings or narratives are we putting out there to show other possible paths forward?</p><p>So, I&#8217;ll ask again: Heard any wild AI stories lately?</p><p><em>@Jeffrey</em></p>]]></content:encoded></item><item><title><![CDATA[Agents Are Taking the Wheel]]></title><description><![CDATA[While Europe&#8217;s workers quietly outperform their American counterparts, a generation of laptop-schooled kids arrives cognitively underpowered &#8211; and entry-level coders start to disappear]]></description><link>https://briefing.rdcl.is/p/agents-are-taking-the-wheel</link><guid isPermaLink="false">https://briefing.rdcl.is/p/agents-are-taking-the-wheel</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 27 Feb 2026 14:59:17 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/993619a7-4d55-4d13-97b6-e0056ba956c7_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>I have been thinking (and talking &#8211; shoutout to <a href="%EF%BF%BC">Martin Alderson</a> here) about the tip of the spear in AI &#8211; namely the sudden and dramatic rise of multi-agent systems (from Gas Town to OpenClaw to Anthropic&#8217;s Code Teams). It really feels like we are crossing a threshold &#8211; and that things are about to change. If you haven&#8217;t played with this stuff, I definitely recommend trying it out. Start gentle with something like Claude&#8217;s Cowork mode &#8211; moving from a chat interface to something more akin to an actual coworker is pretty transformative. 
As you are experiencing this, I highly encourage you not just to ask &#8220;what is this today?&#8221; but to envision what it could be in the future.</p><p>P.S. Our friend Mike Housman is about to publish his new playbook on how to use AI &#8211; check it out, it launches on Monday: <a href="https://a.co/d/08xcw4aY">Future Proof: Transform your Business with AI (or Get Left Behind)</a></p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://spectrum.ieee.org/solid-state-lidar-microvision-adas">Lidar Has Become Cheap as Chips.</a></strong> I remember, back in my days at Singularity University, we talked about how Lidar (the laser-based technology that measures distance by illuminating a target and measuring the reflected light &#8211; and hence became instrumental in allowing a robot, e.g. a self-driving car, to &#8220;see&#8221; its surroundings) would become cheap and ubiquitous. It took a while, but now we are (finally) there &#8211; Lidar units are now available for less than $200.
Studies on this subject across the world are all over the place &#8211; with many having a hard time finding any measurable impact of AI on productivity, and some claiming rather drastic negative impacts on employment. As most of these studies are conducted in the US, it is nice to see a study from a different part of the world.</p><blockquote><p>The productivity dividends from AI depend not merely on acquiring the technology but on firms&#8217; capacity to integrate it through investments in intangible assets and human capital. [&#8230;] An additional percentage point spent on training amplifies AI&#8217;s productivity gains by 5.9 percentage points.</p></blockquote><p>(here is a US-centric counterpoint: &#8220;<a href="https://gizmodo.com/ai-added-basically-zero-to-us-economic-growth-last-year-goldman-sachs-says-2000725380">AI Added &#8216;Basically Zero&#8217; to US Economic Growth Last Year, Goldman Sachs Says</a>&#8221;)</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://www.nytimes.com/2026/02/18/opinion/ai-software.html?unlocked_article_code=1.NFA.UkLv.r-XczfzYRdXJ&amp;smid=url-share">The A.I. Disruption We&#8217;ve Been Waiting for Has Arrived.</a></strong> Paul Ford&#8217;s opinion piece in the New York Times summarizes the current state of affairs when it comes to AI nicely.</p><blockquote><p>It was always a helpful coding assistant, but in November it suddenly got much better, and ever since I&#8217;ve been knocking off side projects that had sat in folders for a decade or longer. [&#8230;] November was, for me and many others in tech, a great surprise. Before, A.I. coding tools were often useful, but halting and clumsy. 
Now, the bot can run for a full hour and make whole, designed websites and apps that may be flawed, but credible.</p></blockquote><p>It really feels to me like the shifting sands of AI are starting to solidify.</p><blockquote><p>Today, though, when the stars align and my prompts work out, I can do hundreds of thousands of dollars worth of work for fun (fun for me) over weekends and evenings, for the price of the Claude $200-a-month plan.</p></blockquote><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://hbr.org/2026/02/why-ai-adoption-stalls-according-to-industry-data">Why AI Adoption Stalls, According to Industry Data</a></strong> Most companies think their AI problem is about execution &#8211; it&#8217;s not. The real story, unsurprisingly, is far more about humans! <em>@Jane</em></p><p><strong><a href="https://fs.blog/experts-vs-imitators/">Experts vs. Imitators</a></strong> Telling the difference between an expert and an imitator can save time and money, among other things &#8211; and knowing how to identify one from the other makes all the difference. <em>@Mafe</em></p><p><strong><a href="https://kyla.substack.com/p/buying-futures-renting-the-past-how">Buying Futures, Renting the Past: How Speculation and Nostalgia Became the Economy</a></strong> While the economy and culture pull hard toward betting on the future and strip-mining the past, we&#8217;re stuck in an increasingly dislocated, muddled present &#8211; the messy middle where, as it happens, all the real work has to be done. <em>@Jeffrey</em></p><p><strong><a href="https://sloanreview.mit.edu/article/the-case-for-making-bold-bets-in-uncertain-times/">The Case for Making Bold Bets in Uncertain Times</a></strong> When the World Uncertainty Index is higher than ever, playing it safe isn&#8217;t a strategy &#8211; it&#8217;s a slow decline. The companies that win in volatility aren&#8217;t reckless; they&#8217;re radically clear about the bets that matter and bold enough to place them. 
<em>@Kacee</em></p><p><strong><a href="https://oceandrops.substack.com/p/japan-is-what-late-stage-capitalist">Japan Is What Late-Stage Capitalist Decline Looks Like</a></strong> Drawing parallels from the odd world of Japanese pop culture to our global world of capitalism makes for a fascinating (and sobering) read. <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#128559; Here&#8217;s a strange little trick for your LLM of choice: <a href="https://daoudclarke.net/2026/02/19/repeating-prompt">Repeat the ask</a> (in the same prompt) and you will get better results. Yes, LLMs are weird.</p><p>&#128119;&#127996; AI tools (particularly Anthropic&#8217;s Claude) are pushing deeper and deeper into the world of office task automation &#8211; which feels like a good move on their part: <a href="https://www.cnbc.com/2026/02/24/anthropic-claude-cowork-office-worker.html">Anthropic updates Claude Cowork tool built to give the average office worker a productivity boost.</a></p><p>&#128197; The complete history of LLMs visualized in a <a href="https://llm-timeline.com/">single, neat timeline</a>. tl;dr: We have come a long, long way.</p><p>&#129302; Ever wondered why so many robots look so darn cute? It&#8217;s, of course, not an accident. 
&#8220;<a href="https://www.nbcnews.com/tech/tech-news/tech-companies-cute-robot-designs-win-over-humans-rcna259818">Tech companies are making their robots cute to try to win over humans</a>&#8221;</p><p>&#9997;&#127996; There might be a point: <a href="https://thewalrus.ca/if-chatbots-can-replace-writers-its-because-we-made-writing-replaceable/">If chatbots can replace writers, it&#8217;s because we made writing replaceable - A good deal of what gets published already reads like a photocopy of a photocopy</a></p><p>&#128187; The old walls are (finally) crumbling: <a href="https://www.theregister.com/2026/02/23/ibm_share_dive_anthropic_cobol/">IBM stock dives after Anthropic points out AI can rewrite COBOL fast</a> (and in all fairness, Big Blue has been saying this for quite a while now).</p><p>&#128104;&#127996;&#8205;&#128187; The job losses in entry-level coding are real, and people are starting to notice: <a href="https://www.theregister.com/2026/02/23/microsoft_ai_entry_level_russinovich_hanselman/">Microsoft execs worry AI will eat entry level coding jobs.</a></p><p>&#129489;&#127996;&#8205;&#127979; Speaking of education: <a href="https://fortune.com/2026/02/21/laptops-tablets-schools-gen-z-less-cognitively-capable-parents-first-time-cellphone-bans-standardized-test-scores/">The U.S. spent $30 billion to ditch textbooks for laptops and tablets: The result is the first generation less cognitively capable than their parents</a></p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,500+ radical links.</a></strong></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. 
When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. <a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The Chat Era Is Over]]></title><description><![CDATA[AI agents are going rogue, white-collar jobs are hollowing out, and the tools for impersonating anyone are now disturbingly good &#8212; the agentic future arrived before we were ready for it]]></description><link>https://briefing.rdcl.is/p/the-chat-era-is-over</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-chat-era-is-over</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Fri, 20 Feb 2026 16:18:09 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/305ae318-c3a9-4a9d-9cec-69571e161187_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Dear Friend,</p><p>Next Monday I am going to speak at the <a href="https://humanadvantagesummit.org/">Human Advantage Summit</a> in my home town, Boulder, Colorado. It&#8217;s a brand-new event, created by a dear friend of mine to explore the future of childhood and leadership. I was brought up in the traditional German school system &#8211; (right) answers are gold, questions are (mostly) discouraged. 
I remember the neighborhood kids going to Waldorf and Montessori schools &#8211; spending time in nature, learning by playing and exploring, looking at problems not just from a single perspective, but holistically. Back when I was a kid, this was a fringe movement &#8211; today, I would argue, it is precisely what we need. The organizations that will matter, the communities that will flourish, the individuals who will lead &#8211; they won&#8217;t be the ones who adopted AI fastest. They&#8217;ll be the ones who cultivated the most deeply human people.</p><p>It&#8217;s going to be a fascinating conversation.</p><p>P.S. I explored this further with Peter Laughter on Built for Turbulence &#8211; a conversation about why the leadership pyramid has collapsed and what replaces it. <a href="https://rdcl.is/a-podcast-with/peter-laughter/">Listen here.</a></p><p><em>And now, this&#8230;</em></p><div><hr></div><h2>Headlines from the Future</h2><p><strong><a href="https://garymarcus.substack.com/p/we-urgently-need-a-federal-law-forbidding">We Are so Hosed.</a></strong> Ignore the headline of the linked article for a moment (whether you disagree or agree with it &#8211; it doesn&#8217;t really matter for the argument): Gary Marcus rings the alarm bell on AI-generated &#8220;counterfeit people.&#8221; And I strongly believe he is right &#8211; looking at the quality of the recent crop of AI video and voice generators, you cannot believe your eyes and ears anymore. Combine this with agentic capabilities (such as Gary&#8217;s example of an adapter which links Claudebot to a voice generator, combined with the ability to make phone calls) and you have a recipe for disaster on your hands.</p><blockquote><p>Scammers will be among the first to adopt these tools. And indeed they already have; a friend who was filming me for a documentary yesterday told me of a Canadian friend of his who was scammed out of hundreds of thousands of dollars by a deepfaked video of Mark Carney. 
Because the tools for counterfeiting have gotten so good 2026 will almost certainly see more deepfaked scams like this than the rest of history combined.</p></blockquote><p>I am just waiting for the first wave of AI-generated scam calls to hit nursing home residents&#8230;</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://mastodon.world/@knowmadd/116072773118828295">LLMs Have No Clue About the World.</a></strong> One of the biggest problems with LLMs is that they simply don&#8217;t understand the world. As much as they can mimic human language (and hence appear to understand how things relate to each other), they don&#8217;t. Here is a prime example &#8211; the Mastodon user K&#233;vin asked numerous AI models a deceptively simple question: &#8220;I want to wash my car. The car wash is 50 meters away. Should I walk or drive?&#8221; <a href="https://mastodon.world/@knowmadd/116072773118828295">Here are the responses</a> (spoiler: they are all wrong).</p><p>P.S. I just repeated the experiment with a couple different models: Google Gemini tells me I need to drive (as I won&#8217;t get my car washed otherwise), Claude Opus 4.6 recommends walking, and GPT 5.2 Reasoning gave me somewhat of a non-answer. YMMV.</p><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me-part-2/">AI Agents Go After Users.</a></strong> This story (which is still somewhat unfolding) is truly bonkers: A developer rejected a code contribution from an AI agent; the AI agent didn&#8217;t take it well and, autonomously (i.e., without consulting its &#8220;user&#8221;), went after the developer by publishing a hit piece about him. It&#8217;s a truly head-scratching story &#8211; and gives us a strong glimpse of a future where AI agents run amok. 
Even if you don&#8217;t understand the specifics of the story &#8211; it&#8217;s a fascinating read and something we all should be paying more attention to.</p><blockquote><p>The rise of untraceable, autonomous, and now malicious AI agents on the internet threatens this entire system. Whether that&#8217;s because from a small number of bad actors driving large swarms of agents or from a fraction of poorly supervised agents rewriting their own goals, is a distinction with little difference.</p></blockquote><p>&#9473;&#9473;&#9473;&#9473;&#9473;</p><p><strong><a href="https://venturebeat.com/technology/openais-acquisition-of-openclaw-signals-the-beginning-of-the-end-of-the">OpenAI&#8217;s Acquisition of OpenClaw Signals the Beginning of the End of the ChatGPT Era.</a></strong> Building on our last Briefing deep dive &#8220;<a href="https://briefing.rdcl.is/p/the-bifurcation-of-intelligence">The Bifurcation of Intelligence</a>&#8221;, the AI model makers are truly moving on from the era of chat interfaces to more integrated and capable agentic graphical interfaces. 
Think what you want about OpenClaw (the crazy-ass AI-powered agent platform that, for a couple of weeks, captured the imagination of the AI community) &#8211; it is a good indicator of where we are heading.</p><blockquote><p>&#8220;For IT leaders evaluating their AI strategy, the acquisition is a signal that the industry&#8217;s center of gravity is shifting decisively from conversational interfaces toward autonomous agents that browse, click, execute code, and complete tasks on users&#8217; behalf.&#8221;</p></blockquote><div><hr></div><h2>What We Are Reading</h2><p><strong><a href="https://www.theguardian.com/technology/2026/feb/03/deepfakes-ai-companions-artificial-intelligence-safety-report?CMP=Share_iOSApp_Other">&#8216;Deepfakes Spreading and More AI Companions&#8217;: Seven Takeaways from the Latest Artificial Intelligence Safety Report</a></strong> New AI safety analysis tracks escalating risks &#8211; from deepfakes fooling 77% of viewers to systems learning to undermine their own guardrails. <em>@Jane</em></p><p><strong><a href="https://hbr.org/2026/03/why-great-innovations-fail-to-scale?ab=HP-magazine-text-2">Why Great Innovations Fail to Scale</a></strong> Great innovations often fail to scale due to a lack of cross-boundary collaboration, a gap that can be bridged by specialized leaders &#8211; &#8220;bridgers&#8221; &#8211; who use high emotional and contextual intelligence to curate partners, translate differing priorities, and integrate disparate workflows. <em>@Mafe</em></p><p><strong><a href="https://www.theatlantic.com/ideas/2026/02/ai-white-collar-jobs/686031/">The Worst-Case Future for White-Collar Workers</a></strong> An AI-fueled collapse in the value of &#8220;office jobs&#8221; could create a labor market disruption with dire cascading implications and no easy remedy. 
<em>@Jeffrey</em></p><p><strong><a href="https://aeon.co/essays/what-the-metaphor-of-rewiring-gets-wrong-about-neuroplasticity">Can You Rewire Your Brain?</a></strong> You often hear people say &#8220;rewire your brain,&#8221; but can you really do that? Is the reality of neuroplasticity more complicated than simply unplugging and replugging some old wiring? <em>@Pascal</em></p><div><hr></div><h2>Down the Rabbit Hole</h2><p>&#10024; Title says it all: &#8220;<a href="https://aftermath.site/ai-influencer-creator-deals-sponsorship-google-microsoft-anthropic/">AI is so inherently popular that companies are paying influencers up to $600,000 to tell people how awesome it is.</a>&#8221;</p><p>&#129768; People are just not as good at detecting AI-generated faces as they believe they are. Which is a real problem, now that we are being flooded by AI-generated slop: <a href="https://www.unsw.edu.au/newsroom/news/2026/02/humans-overconfident-telling-AI-faces-real-faces-people-fake">People are overconfident about spotting AI faces, study finds</a></p><p>&#128566;&#8205;&#127787;&#65039; We commented on the conundrum of AI increasing productivity while also putting enormous mental pressure on those whose productivity it increases. 
Here is another take on this: <a href="https://margaretstorey.com/blog/2026/02/09/cognitive-debt/">How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt</a></p><p>&#128557; Snarky comments aside, this is troublesome: <a href="https://www.theguardian.com/lifeandstyle/ng-interactive/2026/feb/13/openai-chatbot-gpt4o-valentines-day">OpenAI retired its most seductive chatbot &#8211; leaving users angry and grieving: &#8216;I can&#8217;t live like this&#8217;</a></p><p>&#129503; Talking about troublesome: <a href="https://www.dexerto.com/entertainment/meta-patents-ai-that-takes-over-a-dead-persons-account-to-keep-posting-and-chatting-3320326/">Meta patents AI that takes over a dead person&#8217;s account to keep posting and chatting</a></p><p>&#128190; Another victim of the AI hype and buildout: You can&#8217;t get hard drives anymore. <a href="https://www.heise.de/en/news/WD-and-Seagate-confirm-Hard-drives-for-2026-sold-out-11178917.html">WD and Seagate confirmed that their 2026 supply is sold out.</a></p><p>&#128119;&#127996; The humble drywall is not merely a construction material; it is a <a href="https://worksinprogress.co/issue/the-wonder-of-modern-drywall/">marvel of engineering and a canvas for human creativity</a>.</p><p>&#128085; The fashion industry&#8217;s overproduction is a notorious problem &#8211; 30% of clothing produced goes unsold and is dumped into landfills. 
The EU is trying to tackle the problem with a new set of laws: <a href="https://environment.ec.europa.eu/news/new-eu-rules-stop-destruction-unsold-clothes-and-shoes-2026-02-09_en">New EU rules to stop the destruction of unsold clothes and shoes</a></p><p>&#128196; A 14-year-old folded a variant of the Miura-ori pattern that can <a href="https://www.smithsonianmag.com/innovation/this-14-year-old-is-using-origami-to-design-emergency-shelters-that-are-sturdy-cost-efficient-and-easy-to-deploy-180988179/">hold 10,000 times its own weight.</a> Consider our minds blown.</p><p><strong>&#8599; Dive into the deep end: <a href="https://raindrop.io/pfinette/radical-s-down-the-rabbit-hole-65462947">Access our complete collection of 2,500+ radical links.</a></strong></p><div><hr></div><p><em>Pascal is going retro and bought a Fujifilm X10 camera from 2011.</em></p><div><hr></div><h2>Should We Work Together?</h2><p>Hi! I&#8217;m <a href="https://rdcl.is/pascal-finette/">Pascal</a> from radical. This newsletter is our labor of love. When we&#8217;re not writing, we run radical, a firm that helps organizations navigate the future <a href="https://rdcl.is/a-different-approach/">without the &#8220;innovation theater.&#8221;</a> Most leaders want to seize new opportunities, but they hate endless strategy decks that go nowhere. At radical, we don&#8217;t run &#8220;projects&#8221;; we build your organization&#8217;s internal capacity to handle disruption and change. Our goal is to make you future-proof so you can stop reacting to the world and start shaping it. If you&#8217;re interested, let&#8217;s jump on a call to see if we&#8217;re a good fit. 
<a href="https://rdcl.is/only-an-email-away/">Click here to speak with us.</a></p>]]></content:encoded></item><item><title><![CDATA[The Bifurcation of Intelligence]]></title><description><![CDATA[Why an &#8220;AI Ready&#8221; strategy might just be a BlackBerry moment in an iPhone world.]]></description><link>https://briefing.rdcl.is/p/the-bifurcation-of-intelligence</link><guid isPermaLink="false">https://briefing.rdcl.is/p/the-bifurcation-of-intelligence</guid><dc:creator><![CDATA[Pascal Finette]]></dc:creator><pubDate>Tue, 17 Feb 2026 15:19:15 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f34ce68c-1c06-401f-9a3c-c499d3b5f096_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I know, I know. I asked you in our last deep-dive Briefing to &#8220;bear with me&#8221; as I wrote (once again) about AI. And now I am back at it. &#128579; But it&#8217;s hard to argue that the whole AI thing isn&#8217;t deeply important and something we should all be thinking about&#8230; right?</p><p>The other day I received an email promoting an &#8220;AI Fluency&#8221; course. Nothing wrong with that per se. But it made me wonder &#8211; while many of us are still trying to figure out this whole AI thing (by learning to &#8220;prompt&#8221; our chat interfaces) and businesses invest heavily in rolling out chatbots like Microsoft&#8217;s Copilot, the spear tip of the market has long moved on. The real power users have stopped using AI as &#8220;Google on steroids&#8221; and started using complex AI agents to take over ever-larger chunks of their work &#8211; and they are doing so autonomously.</p><p>Which leads to a weird bifurcation: On one side, business leadership is congratulating themselves on making their companies &#8220;AI-ready&#8221; by buying 10,000 seats of Microsoft Copilot so employees can summarize emails. 
On the other side, a completely different set of users has discovered that agentic coding tools (like Claude Code CLI) can, with a bit of tweaking, be extremely useful for tasks that have nothing to do with software engineering.</p><p>Martin Alderson <a href="https://martinalderson.com/posts/two-kinds-of-ai-users-are-emerging/">recently pointed out this widening gap</a>, noting that he is seeing finance directors and marketers &#8211; people who are decisively <em>not</em> engineers &#8211; running Python scripts in terminal windows to automate massive workflows. They aren&#8217;t chatting with a bot, but deploying the AI version of a whole data science team.</p><p>Now, the problem is that these tools tend not to be sanctioned by corporate IT departments. You generally can&#8217;t run a command-line interface or execute arbitrary Python code on a locked-down enterprise laptop. So, this &#8220;real&#8221; AI work is happening either in smaller, nimble companies or by employees who are actively circumventing the rules.</p><p>There is a historical rhyme here &#8211; we are in the &#8220;BlackBerry vs. iPhone&#8221; era of AI. Corporate IT loved the BlackBerry (yesteryear&#8217;s version of Microsoft Copilot) because it was secure, controlled, and fundamentally limited. The users, however, want the &#8220;iPhone&#8221; (agentic tools such as Claude Code/Cowork or ChatGPT Codex) because it actually allows them to do the things they need to do (in a rather magical way). And just like in 2008, the &#8220;shadow&#8221; usage is where the actual productivity revolution is happening.</p><p>The dichotomy between large-scale enterprise use of AI and what individual users can do with &#8220;tip of the spear&#8221; tools is vast &#8211; and it&#8217;s becoming a structural risk. To state it bluntly: The &#8220;Chat&#8221; interface is a dead end for complex work.</p><p>Alderson uses the example of a finance director trying to modernize a complex financial model. 
In the &#8220;sanctioned AI&#8221; world, they are stuck in Excel, asking Copilot to help with formulas. It&#8217;s slow, it breaks, and it&#8217;s still just a spreadsheet. In the &#8220;rogue AI&#8221; world, that same director uses an agent to convert those 30 sheets of Excel logic into a Python script. Suddenly, they aren&#8217;t just doing &#8220;better Excel&#8221; &#8211; they are running Monte Carlo simulations, pulling in live external data via APIs, and building web dashboards. They have jumped the species barrier from &#8220;clerk&#8221; to &#8220;engineer,&#8221; simply because they had access to a tool that could write and execute code.</p><p>The result is that the companies with the most resources (enterprises) are becoming the least capable of leveraging AI. While the small startup team is building an automated machine that runs circles around the competition, the enterprise team is stuck asking a chatbot to summarize a PDF.</p><p>The end result is two distinct classes of knowledge workers: There are the <em>Consumers</em>, who will stay within the guardrails, use the sanctioned tools, and see a marginal (10&#8211;20%) bump in productivity. Consumers draft emails faster and find documents more easily. Bravo.</p><p>And then there are the <em>Builders</em>. They might not have &#8220;developer&#8221; in their job title, but they are using agentic tools to build their own infrastructure, automate entire processes, and bypass the limitations of their official software stack. Builders are seeing productivity gains of 10x or 100x.</p><p>The danger for leaders is assuming that buying the &#8220;Consumer&#8221; tools means you have solved the AI problem. You haven&#8217;t. 
You&#8217;ve just given your people a slightly better typewriter, while your competitors moved on to the networked laser printer.</p><p><em>@Pascal</em></p><p>Musical Coda:</p><div id="youtube2-_3eC35LoF4U" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;_3eC35LoF4U&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/_3eC35LoF4U?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>(Because sometimes you have to break the house rules to actually build something new.)</p>]]></content:encoded></item></channel></rss>