The AI Godsend Paradox
Why the 1% improvement rule is changing everything, coding is now a fifteen-minute task, and we face the uncomfortable reality that AI only works if you already somewhat know the answer.
Bear with me as we talk about AI once more. Here on the radical Briefing, we have been talking a lot – perhaps too much, but since AI dominates the headlines, so be it 🤷🏼 – about AI: what it is and what it isn’t, and how it might or might not affect us, our work, and our organizations.
My own stance on AI (and more specifically, LLMs or GenAI) is constantly evolving – as is my personal use of the technology. I do believe the ground has started to shift recently, though. In the last 6–9 months, we have seen the usual flurry of LLM updates – Google launching Gemini 3 Pro, Anthropic launching Claude Opus 4.5, and, of course, OpenAI launching ChatGPT 5.2 in its various incarnations. This, in itself, is nothing new (nor particularly newsworthy), as each iteration of LLMs has been, for a while now, just a tad better than the last. Gone are the days of dramatic leaps, such as the jump from ChatGPT being a party trick (but otherwise useless) to ChatGPT 3 being a somewhat useful tool. But in the process, which seems to follow the rule of “1% improvements,” we seem to have crossed thresholds that make the current generation of LLMs rather useful. It suggests we are, indeed, navigating shifting ground – at least at the level of personal use.
When I wrote Disrupt Disruption at the beginning of 2022, ChatGPT hadn’t launched yet. Ask any author and they will tell you that books aren’t written; they are rewritten. That certainly was true for my book – it took me only two weeks of focused writing to get the first draft done (granted, I did two years of research and organizing beforehand), but another seven months to get it through all the rounds of editing. In the end, I had four editors working on the book – a process I somehow deeply enjoyed (maybe I am a glutton for punishment).
Fast forward to today. Right at this moment, I am knee-deep in writing my next book, “OUTLEARN – The Art of Learning Faster Than the World Can Change.” The process is the same: two-plus years of research and organizing, followed by a four-week sprint of focused writing. But this time, instead of employing four editors, AI is doing most of the heavy lifting. I have created highly customized prompts for developmental editing (the step where we look for holes in logic, argument strength, content clarity, etc.), trained them on my previous work (hold that specific thought; we will come back to it), and paired them with very specific instructions aligned with the book’s content. I also use equally customized prompts for line editing (style and voice) and copyediting (spelling, grammar, the works). No human editor was needed until I got to the beta draft stage – which is where we are now. I have a whole bunch of humans looking at the book for feedback, suggestions, and catching the odd AI slip-up. And yes, it’s devastating for human copy editors.
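To give you a sense of the mechanics (with the obvious caveat that my actual prompts are far more elaborate and tuned to the book): a developmental-editing pass boils down to handing a model a draft chapter, a corpus of my prior writing, and a very specific system prompt. Here is a minimal sketch using the Anthropic Python SDK; the prompt text, file names, and model identifier are placeholders, not my real setup.

```python
# Minimal sketch of a developmental-editing pass with a custom prompt.
# The system prompt, file names, and model name are placeholders,
# not the actual prompts or setup described in the text.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Prior writing the model should learn the author's voice and standards from.
style_corpus = open("previous_work_excerpts.md", encoding="utf-8").read()
chapter = open("draft_chapter_03.md", encoding="utf-8").read()

system_prompt = (
    "You are a developmental editor. Evaluate the draft for holes in logic, "
    "argument strength, and content clarity. Judge it against the author's "
    "voice as evidenced in the reference excerpts. Return a numbered list of "
    "issues, each with the offending passage and a concrete suggestion."
)

response = client.messages.create(
    model="claude-opus-4-5",  # placeholder model identifier
    max_tokens=4000,
    system=system_prompt,
    messages=[{
        "role": "user",
        "content": (
            "Reference excerpts from my previous books:\n\n"
            f"{style_corpus}\n\n---\n\nDraft chapter to edit:\n\n{chapter}"
        ),
    }],
)

print(response.content[0].text)
```

The line-editing and copyediting passes follow the same pattern, just with different instructions.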
And, of course, there is software development. The other day, I wanted to extract all the links we ever published in the radical Briefing and import them into Raindrop to create a searchable archive for the community. It took me a whopping 15 minutes (I counted) to export the Briefings from Substack, ask Claude to write a Python script to extract the links, and import them into Raindrop. The archive is here.
Prior to Claude coding for me, this would easily have taken at least a full day. I am a decent but by no means great coder; because I code very infrequently, I need to look things up all the time.
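The script itself is nothing fancy. Here is a rough sketch of the shape of the task (not the actual script Claude wrote): walk the Substack export, collect every outbound link, and write a CSV that Raindrop can import. The folder layout and CSV columns are assumptions.

```python
# Rough sketch of the link-extraction step: walk the Substack HTML export,
# pull every outbound link, and write a CSV for import into Raindrop.
# The export location and CSV columns are assumptions, not the exact script.
import csv
from pathlib import Path
from urllib.parse import urlparse

from bs4 import BeautifulSoup  # pip install beautifulsoup4

EXPORT_DIR = Path("substack_export/posts")  # assumed location of exported posts
OUTPUT_CSV = Path("briefing_links.csv")

links = {}
for html_file in sorted(EXPORT_DIR.glob("*.html")):
    soup = BeautifulSoup(html_file.read_text(encoding="utf-8"), "html.parser")
    for anchor in soup.find_all("a", href=True):
        url = anchor["href"]
        # Keep only external http(s) links; skip Substack-internal ones.
        if urlparse(url).scheme in ("http", "https") and "substack.com" not in url:
            links.setdefault(url, anchor.get_text(strip=True) or url)

with OUTPUT_CSV.open("w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "title"])  # columns Raindrop's CSV import can map
    for url, title in links.items():
        writer.writerow([url, title])

print(f"Extracted {len(links)} unique links from {EXPORT_DIR}")
```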
And then there is Google. I don’t know about you, but I now use Google solely to find specific webpages or to use one of its shortcuts (e.g., converting a currency). And the only reason I use Google for converting currencies is that it’s currently faster than asking an AI. The moment AI gets faster (and some smaller models tuned for speed are already fast enough for most of my use cases), Google becomes merely a link directory – not the “answer machine” it used to be before we had AI.
All of this is to say: I use AI every day and for all sorts of things. It’s my go-to tool. But my personal use cases also highlight an interesting paradox. As useful as AI is, it requires the human using it to have a good-to-excellent understanding of the subject matter. I am confident that AI would fail me as an editor if I didn’t give it oodles of prior writing to learn from, as well as highly specific instructions, which require a very solid understanding of what I am trying to achieve. And I have the massive benefit of having gone through the editing process before and knowing what I am looking for. The same goes for coding – three-plus decades of programming have taught me what to ask for and how to assess the results. And I know not to trust AI blindly. When using AI as a massively improved search engine, I habitually use not just one AI but at least two, often three; the results tend to be vastly different depending on which AI I use. It truly is a world of “human in the loop.”
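The cross-checking habit is trivial to script, too. A minimal sketch, assuming the Anthropic and OpenAI Python SDKs and placeholder model names: send the same question to two models and put the answers side by side for the human in the loop to judge.

```python
# Sketch of the "ask more than one AI" habit: send the same question to two
# models and print both answers side by side for a human to compare.
# Model names are placeholders; the point is the comparison, not the models.
import anthropic
from openai import OpenAI

question = "What are the main arguments for and against remote-first teams?"

claude = anthropic.Anthropic().messages.create(
    model="claude-opus-4-5",  # placeholder
    max_tokens=800,
    messages=[{"role": "user", "content": question}],
)

gpt = OpenAI().chat.completions.create(
    model="gpt-5.2",  # placeholder
    messages=[{"role": "user", "content": question}],
)

print("=== Claude ===\n", claude.content[0].text)
print("\n=== ChatGPT ===\n", gpt.choices[0].message.content)
# The human in the loop reads both and decides what, if anything, to trust.
```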
All of which makes me wonder: Is AI (at least in its current form) a godsend for people like me – measurably increasing my productivity – but (mostly) a failure for broad, generally applicable use cases? I doubt a generalized prompt does a good enough job of editing any book for any author. We know that coding assistants, in the hands of novices, introduce inefficiencies, bugs, and security vulnerabilities. And, of course, the internet is awash with stories of AI hallucinating and making stuff up – which well-intentioned but ill-informed people then take as gospel. I guess time will tell. Until then, the best advice I can give you (as of today) is to seriously dig into AI as an individual. If you are using it as a business tool, focus on use cases where you have reason to believe that you can generalize enough to make AI work.
@Pascal
Cue the musical coda:


The expertise dependency paradox is spot on. AI ends up being a productivity multiplier for people who already know what good looks like, but can generate confidently wrong outputs for novices. Saw this firsthand when junior devs started using Copilot and introduced vulns they didn't even recognize as problems.