Beyond the Algorithm: Human Potential in an AI-Dominated World
As AI development shows signs of slowing, is genetic engineering our next frontier? Plus: what university students are really doing with Claude
Dear Friend –
What a week! If you weren’t yet convinced that we increasingly live in a world of heightened uncertainty, you surely feel it now. This is a good moment to remind ourselves of Jeff Bezos’ insight: we should ask not only what is going to change, but also what is not going to change – because the latter is what we build our business strategies on.
P.S. In case you were wondering what Pascal sounds like speaking German – here is yours truly on Gero Hesse’s excellent SAATKORN podcast, speaking German. 🙃
Headlines from the Future
How University Students Use Claude ↗
Anthropic, the maker of the Claude foundational AI model, just released their fairly in-depth report on the use of their LLM by university students. Outside of the expected (“Students primarily use AI systems for creating (using information to learn something new) and analyzing (taking apart the known and identifying relationships), such as creating coding projects or analyzing law concepts”), the report admits that:
There are legitimate worries that AI systems may provide a crutch for students, stifling the development of foundational skills needed to support higher-order thinking. An inverted pyramid, after all, can topple over.
and
As students delegate higher-order cognitive tasks to AI systems, fundamental questions arise: How do we ensure students still develop foundational cognitive and meta-cognitive skills? How do we redefine assessment and cheating policies in an AI-enabled world?
These are very legitimate concerns – especially in a world that requires humans to be ever more at the top of their game to keep competing with the very tool they use to outsource their learning.
—//—
How to Make Superbabies ↗
Maybe our best bet against the AI apocalypse is to create better humans… With all the attention focused on AI, synthetic biology and genetic engineering are getting much less love – and yet the field is moving fast:
Working in the field of genetics is a bizarre experience. No one seems to be interested in the most interesting applications of their research. […] We’ve spent the better part of the last two decades unravelling exactly how the human genome works and which specific letter changes in our DNA affect things like diabetes risk or college graduation rates. Our knowledge has advanced to the point where, if we had a safe and reliable means of modifying genes in embryos, we could literally create superbabies.
Here’s what that means in practical terms:
Already with just 300 edits and a million genomes with matching IQ scores, we could make someone with a higher predisposition towards genius than anyone that has ever lived. […] Diabetes, inflammatory bowel disease, psoriasis, Alzheimer’s, and multiple sclerosis can be virtually eliminated with less than a dozen changes to the genome.
And yet, we might be running out of time…
Given the current rate of improvement of AI, I would give a greater than 50% chance of AI having taken over the world before the first generation of superbabies grows up.
—//—
When it Comes to AI, Now Might be the Time to Build ↗
Many people, myself included, didn’t try to build a product around a language model because by the time you had finished assembling a business-specific dataset, a larger generalist model would have been released that was as good at your business tasks as your smaller specialized model.
This being said – with advances in generalist models slowing down (see our post from earlier today), now might be the time to build:
If your business idea isn’t in these domains, now is the time to start building your business-specific dataset. The potential increase in generalist models’ skills will no longer be a threat.
—//—
Recent AI Model Progress Feels Mostly Like Bullshit ↗
Dean Valentine, co-founder of Zeropath, on using LLMs to conduct security penetration testing (which might serve as a good test case for LLMs’ ability to “generalize outside of the narrow software engineering domain”):
Since 3.5-sonnet, we have been monitoring AI model announcements, and trying pretty much every major new release that claims some sort of improvement. Unexpectedly by me, aside from a minor bump with 3.6 and an even smaller bump with 3.7, literally none of the new models we’ve tried have made a significant difference on either our internal benchmarks or in our developers’ ability to find new bugs. This includes the new test-time OpenAI models.
And on the reasons why AI models (and their makers) are falling short of their advertised claims:
AI lab founders believe they are in a civilizational competition for control of the entire future lightcone, and will be made Dictator of the Universe if they succeed. Accusing these founders of engaging in fraud to further these purposes is quite reasonable. Even if you are starting with an unusually high opinion of tech moguls, you should not expect them to be honest sources on the performance of their own models in this race. There are very powerful short term incentives to exaggerate capabilities or selectively disclose favorable capabilities results, if you can get away with it.
—//—
OpenAI’s Copyright Problem ↗
This doesn’t come as a surprise, nor is it entirely new – but seeing it laid out in such stark light makes you wonder…
OpenAI (and likely everyone else) has a serious copyright problem. It makes me wonder when the copyright holders will start to actually fight back:
It’s a reminder that LLMs of this type and size all train on copyrighted material.
Click through to the article from Otakar G. Hubschmann to see some of his findings – if you are not yet convinced that AI is trained on oodles of copyrighted material, his examples should settle the question.
What We Are Reading
🧑🏼‍🏫 The Alarming Reason Why Smart People Can’t Solve Simple Problems Anymore How algorithmic thinking is creating a generation that waits for instructions rather than seeks solutions. @Jane
⭐ How to Stand Out at Work Without Seeming Like a Narcissist There’s a correct way to stand out and many other ways that make you look arrogant and self-serving. @Mafe
🦋 The Transformational Power Of Communal Dreaming A deep and philosophical weekend read on the value of dreaming and shared dreams in a world of markets, branding, stasis, and uncertainty. @Jeffrey
⚖️ Balancing Digital Safety and Innovation In an era where digital innovation outpaces safeguards, the most forward-thinking companies are embedding safety by design into product development. Building safer products from the start doesn’t slow innovation—it supercharges it by aligning with user trust. @Kacee
🧬 Biohacking in the Office A wonderful sketch of what the modern office might look like. While it may inspire some absurd ideas, it certainly lets you reflect on the corporate commitment to well-being. @Julian
🤖 How People Are Really Using Gen AI in 2025 Interesting insight: AI is actually being used for therapy/companionship, organizing life, finding purpose, enhanced learning, and generating code, among other things, in 2025. @Pedro
🧠 The Psychology Behind Why Children Are Hooked on Minecraft Ever wondered why exactly Minecraft seems to be such an addictive game for kids? The game brilliantly taps into and combines multiple deeply ingrained mechanisms in our brains to get kids hooked. @Pascal
Rabbit Hole Recommendations
Happy Distractions
👴🏼 What an amazing story: Middle-Aged Man Trading Cards Go Viral in Rural Japan Town.
🇯🇵 Stunning photos from a bygone era - Photographs of old Japan.
Studies of how university students use Claude are great. The anecdotes can be scarier.
https://x.com/freganmitts/status/1828796730634330593
https://x.com/gregisenberg/status/1869202002783207622