Curiosity Killed the AI Skill Gap
From DeepSeek’s $6M disruption to protein-optimizing LLMs, this week’s radical insights reveal why boldness – not technical expertise – will define tomorrow’s AI leaders.
Dear Friend –
As the (alleged) Chinese proverb goes: May you live in interesting times. We surely do – even though there is no evidence that this is actually a Chinese proverb. But Lei Jun, Founder & CEO of the Chinese consumer electronics powerhouse Xiaomi, did say: Even a pig can fly if it sits in the right spot during a whirlwind.
With that… Here is your weekend read.
Headlines from the Future
Will Curiosity Be What Sets People Apart in an Age of AI ↗
Jack Clark, the co-founder of Anthropic, on humans and AI:
You might think this is a good thing. Certainly, it's very useful. But beneath all of this I have a sense of lurking horror - AI systems have got so useful that the thing that will set humans apart from one another is not specific hard-won skills for utilizing AI systems, but rather just having a high level of curiosity and agency.
In other words, in the era where these AI systems are true 'everything machines', people will out-compete one another by being increasingly bold and agentic (pun intended!) in how they use these systems, rather than in developing specific technical skills to interface with the systems.
We should all intuitively understand that none of this will be fair. Curiosity and the mindset of being curious and trying a lot of stuff is neither evenly distributed or generally nurtured. Therefore, I'm coming around to the idea that one of the greatest risks lying ahead of us will be the social disruptions that arrive when the new winners of the AI revolution are made - and the winners will be those people who have exercised a whole bunch of curiosity with the AI systems available to them.
Read the whole thing. It’s good.
—//—
Large Language Model Is Secretly a Protein Sequence Optimizer ↗
We have mentioned this here before: general-purpose LLMs are surprisingly good at specialized tasks. A new research paper shows this in the case of protein sequence optimization.
We demonstrate large language models (LLMs), despite being trained on massive texts, are secretly protein sequence optimizers. […] In this paper, we demonstrate LLMs themselves can already optimize protein fitness on-the-fly without further fine-tuning.
Beyond the impact within this specialized domain, the result points to a further weakening of the moats that specialized AI companies might hope to hold.
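For the curious, here is a minimal, hypothetical sketch of the general idea (not the paper's actual pipeline): an off-the-shelf LLM proposes sequence variants inside a simple evolutionary loop, an external fitness oracle scores them, and the best candidate survives each round. The `call_llm` and `fitness` functions below are toy stand-ins.

```python
# Hypothetical sketch: LLM-in-the-loop protein sequence optimization.
# `call_llm` and `fitness` are placeholders, not a real API or assay.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def fitness(seq: str) -> float:
    """Toy fitness oracle (in practice: a lab assay or learned predictor)."""
    return sum(1.0 for aa in seq if aa in "AILMFWV") / len(seq)

def call_llm(prompt: str) -> str:
    """Stand-in for an LLM call; here it simply mutates one random position."""
    seq = prompt.rsplit(":", 1)[-1].strip()
    pos = random.randrange(len(seq))
    return seq[:pos] + random.choice(AMINO_ACIDS) + seq[pos + 1:]

def optimize(start: str, rounds: int = 50, children: int = 8) -> str:
    """Ask the 'LLM' for variants each round and greedily keep the fittest."""
    best = start
    for _ in range(rounds):
        prompt = f"Propose a higher-fitness variant of this protein sequence: {best}"
        candidates = [call_llm(prompt) for _ in range(children)]
        best = max(candidates + [best], key=fitness)
    return best

if __name__ == "__main__":
    seed = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
    best = optimize(seed)
    print(best, round(fitness(best), 3))
```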
—//—
Streetlight vs. Floodlight Effects Determine AI-Based Discovery ↗
There is a lot of excitement around using tailored AI models for tasks such as drug discovery and the development of new materials. It turns out that Ethan Mollick's Jagged Frontier of AI use and application applies here too. As Matt Clancy points out in his deep dive "Prediction Technologies and Innovation":
We can imagine Kim (2023)’s technology is like a lonely streetlight, only illuminating protein structures that are near to others we already know, while Toner-Rodgers’ technology is a gigantic set of floodlights that illuminate a whole field.
In summary, the streetlight effect leads to a concentration of research efforts on well-trodden paths, while the floodlight effect can promote exploration of more novel and diverse areas. Thus, the former leads to sustaining innovation (at best), while the latter can lead to breakthrough innovation.
—//—
There Is No Moat in AI Models ↗
By now, you surely have heard about the Chinese DeepSeek R1 model – a model that cost a mere $6M to train (on only 2,000 NVIDIA chips) and is as good as OpenAI’s o1 model (which cost at least 20 times more to train).
It’s a massive problem for the massively overhyped (and overpriced) US-based AI juggernauts – and the market is catching up. This is NVIDIA right now:
[Chart: NVIDIA stock performance]
If there was ever any doubt: the moats around AI models are crumbling…
Here is a good read on the topic: The Short Case for Nvidia Stock
What We Are Reading
🔭 How to Successfully Navigate Long-Horizon Technology There's no way around it; you must methodically monitor the gradual evolution of "slow-cooking" technologies in order to seize narrow windows of breakthrough opportunity. @Jane
🤑 Why the Rich Love Digital Shoplifting Yikes. Digital shoplifting seems to be a thing, even among people who can afford the items they buy. Fifty-five percent of Gen Z and forty-nine percent of Millennials earning more than $100,000 a year said they have done so in the past year. @Mafe
🏃 Silicon Valley Is Raving About a Made-in-China AI Model With OpenAI and Co. under pressure from DeepSeek, the whole situation might best be described by a former OpenAI executive: "Resource constraints often fuel creativity." What this means for the development, deployment, and geopolitical regulation of AI systems is hard to predict. @Julian
🏛️ OpenAI Launches ChatGPT Gov for US Government Agencies ChatGPT Gov allows government agencies, as customers, to feed “non-public, sensitive information” into OpenAI’s models while operating within their own secure hosting environments. @Pedro
🔮 Why Steve Wozniak Dismissed PCs and Steve Jobs Didn't History can be a cruel mistress, and even our most brilliant minds get it wrong sometimes. @Pascal
Some Fun Stuff
🏰 Why not rebuild a medieval castle from scratch – using the tools and methods of yesteryear?