Future Friday: The Great Career Playbook Rewrite of 2025
As AI reshapes industries, sitting on the sidelines isn’t an option — but the new rules of success aren’t what you’d expect
Dear Friend –
We don’t often talk or write about ourselves. I find it much more meaningful and useful to discuss what’s going on in the world, share our analysis and thoughts, and the occasional fun thing we stumble upon during our web excursions. Today, though, I am insanely excited to share that our dear friend and long-time collaborator Kacee Johnson has joined us to dig deep into the rapidly evolving world of FinTech innovation. Kacee has had a rich career in the field: she was named one of Accounting Today’s Top 100 Most Influential People in Accounting, a Top 25 Thought Leader by CPA Practice Advisor, and one of the Most Powerful Women in Accounting – I guess you get the idea that she is just 100% amazing! Starting next week, you will see Kacee’s analysis and brilliance here in the newsletter and, of course, in our client programs. Fun times!!
And now… this:
Headlines from the Future
Career Advice in 2025 ↗
Although this blog post by Will Larson is written from the perspective of, and for, software developers, his insights into the impact of AI on careers (for individuals and companies alike) ring true across the spectrum:
The technology transition to Foundational models / LLMs as a core product and development tool is causing many senior leaders’ hard-earned playbooks to be invalidated. Many companies that were stable, durable market leaders are now in tenuous positions because foundational models threaten to erode their advantage. Whether or not their advantage is truly eroded is uncertain, but it is clear that usefully adopting foundational models into a product requires more than simply shoving an OpenAI/Anthropic API call in somewhere.
In our sessions, we often open with the observation that “we are trying to solve new world problems with old world thinking.” In Will’s words, our playbooks become rapidly obsolete, and in many cases, we haven’t developed new ones quite yet.
Sitting out this transition, when we are relearning how to develop software, feels like a high risk proposition. Your well-honed skills in team development are already devalued today relative to three years ago, and now your other skills are at risk of being devalued as well.
And as this world is moving at a frenzied pace, the above seems to be doubly true. As someone else recently wrote: Now might be the worst time to take a sabbatical.
—//—
AI Search Has A Citation Problem ↗
A damning study from Columbia University, analyzing AI search engines’ accuracy and their ability to cite sources:
Collectively, they provided incorrect answers to more than 60 percent of queries.
“More than 60% of queries” is pretty abysmal. It gets worse:
Most of the tools we tested presented inaccurate answers with alarming confidence, rarely using qualifying phrases such as ‘it appears,’ ‘it’s possible,’ ‘might,’ etc., or acknowledging knowledge gaps.
On top of this, AI search engines also clearly have indexed material they were not supposed to (or more precisely, allowed to) access.
Perplexity Pro was the worst offender in this regard, correctly identifying nearly a third of the ninety excerpts from articles it should not have had access to.
It’s bad. Here is the study.
—//—
Finding Signal in the Noise: Machine Learning and the Markets ↗
Fascinating conversation with Young Cho, head of research at the financial analysis firm Jane Street, on the challenges of using machine learning and LLMs in the context of financial data.
Machine learning in a financial context is just really, really hard... you can think of machine learning in finance as similar to building an LLM or text modeling, except that instead of having, let’s say, one unit of data, you have 100 units of data. That sounds great. However, you have one unit of useful data and 99 units of garbage, and you do not know what the useful data is and you do not know what the garbage or noise is.
Link to podcast and transcript.
—//—
Tell Your Kids to Learn to Code ↗
Quoting Andrew Ng (who knows a thing or two about coding, AI and the future):
Some people today are discouraging others from learning programming on the grounds AI will automate it. This advice will be seen as some of the worst career advice ever given. I disagree with the Turing Award and Nobel prize winner who wrote, “It is far more likely that the programming occupation will become extinct [...] than that it will become all-powerful. More and more, computers will program themselves.” Statements discouraging people from learning to code are harmful!
In the 1960s, when programming moved from punchcards (where a programmer had to laboriously make holes in physical cards to write code character by character) to keyboards with terminals, programming became easier. And that made it a better time than before to begin programming. Yet it was in this era that Nobel laureate Herb Simon wrote the words quoted in the first paragraph. Today’s arguments not to learn to code continue to echo his comment.
As coding becomes easier, more people should code, not fewer!
—//—
Wondering What to Automate With AI? Wonder No More! ↗
Ever wonder if you’re doing manual tasks that AI could handle for you? Try this:
1. Jot down everything you need to do today.
2. Upload the list to ChatGPT and use this prompt:
“Analyze these tasks and categorize them: (1) AI can do this, (2) AI can assist, (3) Delegate, (4) I should do this myself. Explain why for each.”
3. Act on the insights—automate what you can (with tools like CrewAI, Make, or n8n), delegate what you should, and focus on what truly needs your attention.
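If you’d rather script this than paste into the ChatGPT interface, the steps above can be sketched in a few lines of Python. This is a minimal illustration, not a production workflow: only the prompt assembly runs with the standard library, and the API call at the end (the `openai` client and the model name) is shown commented out as one possible way to send it.

```python
# Sketch of the "categorize my to-do list" workflow described above.
# Building the prompt needs only the standard library; the actual
# model call (commented out) is illustrative.

CATEGORIES = [
    "AI can do this",
    "AI can assist",
    "Delegate",
    "I should do this myself",
]

def build_prompt(tasks: list[str]) -> str:
    """Assemble the categorization prompt for a list of tasks."""
    numbered = "\n".join(f"{i}. {t}" for i, t in enumerate(tasks, 1))
    labels = ", ".join(f"({i}) {c}" for i, c in enumerate(CATEGORIES, 1))
    return (
        f"Analyze these tasks and categorize them: {labels}. "
        f"Explain why for each.\n\nTasks:\n{numbered}"
    )

# Example to-do list (placeholder tasks).
tasks = ["Reconcile expense reports", "Draft the quarterly client letter"]
prompt = build_prompt(tasks)
print(prompt)

# To actually send it (requires the openai package and an API key):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o",  # model name is an example, not a recommendation
#     messages=[{"role": "user", "content": prompt}],
# )
# print(reply.choices[0].message.content)
```

The same prompt works just as well pasted straight into ChatGPT with your list attached; scripting it only pays off once you run it daily.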
Work smarter, not harder. Let AI take some of your most miserable work off your plate!
(via The Neuron)
What We Are Reading
🚀 The AI Revolution Won’t Wait for Permission Slips: Companies that approach AI strategically—balancing excitement about its possibilities with careful management of its risks—will outperform those that treat AI only as a tech project. @Jane
🤖 Digital Therapists Get Stressed Too, Study Finds: A new study reveals that ChatGPT shows signs of anxiety when users share traumatic narratives about crimes and war, for example, which may hinder its effectiveness in therapeutic settings. @Mafe
🇨🇳 China’s AI Frenzy: DeepSeek Is Already Everywhere — Cars, Phones, Even Hospitals: The massive rollout of DeepSeek integration offers a global opportunity for learning, comparing AI policy frameworks, and maybe even glimpsing the future. @Jeffrey
📈 10 Charts That Capture How the World Is Changing: With tons of insights to unpack behind each chart, this is a fantastic curation of 10 strong signals around us. It covers a breadth of areas, ranging from tech hiring in NYC to the surge in sales of Guinness driven by an online trend. You’ll certainly find something interesting! @Julian
🦾 Drones Will Do Some Schlepping for Sherpas on Mount Everest: Finally, a solid use case for drones: In the upcoming Everest season, drones will carry loads as heavy as 35 pounds from base camp to Camp I, normally a 7-hour trek on foot, in just 15 minutes. They will transport food and oxygen cylinders, and even help pick up waste. @Pedro
🎟️ What We Lose When Our Memories Exist Entirely in Our Phones: In a world where our concert, sports, and event tickets live and die in our phones, one sports fan’s quest for a tangible memory reveals a deeper truth: sometimes the most meaningful souvenirs aren’t stored in our camera rolls, but in the physical tokens we choose to keep. @Pascal
Some Fun Stuff
🐇 Bunnies in honey. Whatever this is, it is mesmerizing.
😎 Discover the best day to catch the sunrise or sunset for any given location.
🎨 Doodle with Gemini – make AI your drawing partner and turn yourself into the next Picasso (maybe).
Though I did appreciate Simon Wardley’s take yesterday on the limits of “vibe coding by the masses”: https://www.linkedin.com/posts/simonwardley_conversations-from-the-future-x-i-activity-7308467018154864640-fsyB/
(If you don’t know Simon Wardley, you need to!)