Google Leveled the Playing Field. AI Won't.
OpenAI + Walmart's deal, the 250-document backdoor, influencer obsolescence, and why Meta wants 5x faster coding
Dear Friend,
Let me state it plainly: Small and medium-sized e-commerce businesses are going to have a very difficult time. If you follow developments in the e-commerce world, you will have noticed that OpenAI announced a deep shopping integration into ChatGPT. This follows the lead of Perplexity, which has had a lighter form of integration for some time. Shortly after, Walmart, the world’s largest retailer, announced an integration partnership with OpenAI. If you have ever used a Large Language Model (LLM) to research a product you want to buy, you know the experience is dramatically better than navigating Google Search, product review websites, and user-generated content on forums like Reddit. This means the step from asking “What is the best electric shaver?” (a query I recently researched) to “Cool, buy this for me” is a very short one.
In the old world – the world of Google searches and product review websites – you would encounter ads (both keyword-driven and banner ads) showing options from retailers ranging from Walmart to Pete’s Shaver Shop. One of the most revolutionary features of Google Search advertising was that it leveled the playing field, allowing Pete’s Shaver Shop to compete with Walmart. I doubt that we will see the same leveling with AI. The OpenAIs of this world, acting as new gatekeepers to your purchases, will make deals with those who can pay the most, meaning the biggest retailers, not Pete.
What is Pete (and everyone else) to do? I honestly don’t know, but I do know that we must pay very close attention to this, adjust our strategy in real-time, and become creative with other or new forms of customer acquisition.
P.S. If this topic interests you (perhaps you operate in this space), follow my former boss and friend, Scot Wingo. He writes the undisputed best newsletter on this topic: Retailgentic.
And now, this...
Headlines from the Future
Once upon the time… There were people called “influencers”. Influencers made their money by pouring their heart and soul into creating compelling content for social media platforms, which in turn generated views that the influencer could monetize through marketing. Or so the story goes.
With the advent of Sora 2, OpenAI’s new video-generating model, creating weird (and weirdly compelling) content has become possible for pretty much anyone. While social media stars like MrBeast still spend large amounts of money and effort creating their video content, anyone can now do the equivalent using Sora 2. And influencers are freaking out:
“When AI videos are just as good as normal videos, I wonder what that will do to YouTube and how it will impact the millions of creators currently making content for a living… scary times”
Meanwhile, the platforms are pushing back (in the end, they simply don’t care what their users are watching, as long as they are doing so on their platform):
Mosseri pushed back a bit at that idea, noting that most creators won’t be using AI technology to reproduce what MrBeast has historically done, with his huge sets and elaborate productions; instead, it will allow creators to do more and make better content.
↗ Instagram head Adam Mosseri pushes back on MrBeast’s AI fears but admits society will have to adjust
━━━━━
How a tiny dataset can backdoor any LLM. Another day, another LLM vulnerability: The team at Anthropic (the folks behind Claude) showed that a small number of samples is all it takes to poison an LLM of any size.
As few as 250 malicious documents can produce a ‘backdoor’ vulnerability in a large language model—regardless of model size or training data volume. […] Even though our larger models are trained on significantly more clean data, the attack success rate remains constant across model sizes.
What this means in practical terms is that large language models can be fairly easily backdoored; all it takes is a small stash of malicious documents in the training set. As AI companies are gobbling up data left, right, and center, it is close to impossible to ensure training data isn’t tainted.
↗ A small number of samples can poison LLMs of any size
━━━━━
Online Learning Is Messing With Students’ Heads. A new study in Nature on the impact of online learning on university students found that while it could boost goal-oriented behavior, it negatively affected students’ sense of agency, proactivity, and interpersonal skills.
The study suggests that online learning environments present unique challenges for student identity formation and motivation, highlighting the need for targeted support strategies in digital learning contexts. If you have kids, it might be worth digging in… ↗ College students’ identity differences in offline and online learning environment and their effects on achievement motivation
━━━━━
A Tale of Two Cities. On one hand, there are significant investments in AI in the US, particularly in the development of data centers. On the other hand, investment in traditional industries like manufacturing is lackluster.
“A gulf is opening up in the heart of American business as two industries championed as central to the country’s future — manufacturing and artificial intelligence — appear to be heading in different directions.”
As AI-focused investments create significantly fewer jobs than investments in traditional industries, they could drastically impact the composition of the U.S. economy and its workforce.
↗ Two industries were supposed to drive America’s future. One is booming, the other slumping.
━━━━━
The Other City. Intel’s former CEO Pat Gelsinger on AI:
“We’re displacing all of the internet and the service provider industry as we think about it today — we have a long way to go.”
━━━━━
Forget the 10x coder – all hail the 5x developer. Remember the “10x coder”? Software developers who were so good and top of their game that their productivity was 10x better than the average coder. It is, of course, BS. But Meta is trying again – telling their engineers that they better begin using AI to be “only” 5-times as productive as before. And Meta being Meta, they focus these superhuman engineers’ attention on the only thing that ever mattered: the Metaverse.
“Metaverse AI4P: Think 5X, not 5%,” the message, posted by Vishal Shah, Meta’s VP of Metaverse, said (AI4P is AI for Productivity). The idea is that programmers should be using AI to work five times more efficiently than they are currently working—not just using it to go 5 percent more efficiently.
↗ Meta Tells Workers Building Metaverse to Use AI to ‘Go 5x Faster’
━━━━━
The AI is now trolling us. This is getting ridiculous (or ridiculously funny) – I asked Google’s Gemini AI to create an image for a slide in one of our workshops using the following prompt:
create a fictional movie poster for a movie called “We are in a world where…” - the movie is about envisioning the future.
This is what it came up with:
Solid. But not the style I was gunning for. So I asked Gemini to alter the image with the following prompt:
make this movie poster look more like a poster for a cartoon movie from the 1940s
And this is what I got in response:
🤔
I guess there is a reason why Google calls their image-generating AI “Nano Banana”…
━━━━━
What We Are Reading
🦾 AI Couldn’t Picture a Woman Like Me - Until Now Why AI image generators kept giving this one-armed swimmer two arms @Jane
🌉 How To Be A Great Coach Even When You’re Busy To make the most of your time: Use a bridging structure to connect your guidance to the employee’s own thinking and experience (ask a question, give guidance, then ask how they will implement it). @Mafe
🤖 A New Animism Demystification, in every sense, might be an undervalued key to a successful social transition into our AI-enabled future. @Jeffrey
💡 Why Leaders Fail When They Choose Comfort Over Clarity Really liked this one, especially the equal focus on accountability and kindness. It reminds me a lot of Alex Dorr’s work on ditching drama in leadership. @Kacee
🙈 We Choose Ignorance As We Age – Even When Knowledge Is More Useful Most of us (perhaps all of us) avoid information that makes us feel bad about ourselves. This behavior begins early in life and has far-reaching consequences. @Pascal
Down the Rabbit Hole
🛒 PSA: Prime Day is (mostly) marketing, much less a good deal (for you, anyway): I tracked Amazon’s Prime Day prices. We’ve been played.
🖱️ Even the Pope doesn’t like clickbait: You won’t believe what degrading practice the pope just condemned
🥇 Still thinking about buying that AI pendant? Someone did the work for you: Which AI Companion Actually Works? I Tested Them All. ($ Paywall’ed)
🏗️ The World Trade Center Under Construction Through Fascinating Photos, 1966-1979
🏠 Fascinating data visualization: The World’s 2.75B Buildings
🤔 The real AI risk is ‘meh’ technology that takes jobs and annoys us all
🪰 Thousands of flies keep landing on North Sea oil rigs then taking off a few hours later – here’s why
Pascal is fighting his Sono speaker system – it is remarkable how unstable this whole thing is…
But let's be serious. The "egalitarian Internet" trope has always been a myth. At least after the several months of consumer Internet use it took before Netscape IPO'ed in 1995.
Money always puts its thumb on the scales. That was true with the 1996 Internet as it is today. Just don't mistake the timing gap for original intentions. Even the age of SEO tail-chasing was laden with consulting services sharks and financial influence to weight the Page rankings.