FinTech’s AI Moment: What’s Real, What’s Hype, and Why You Should Pay Attention
When Holding Back Is Moving Forward: The Ethical AI Paradox
There’s something fascinating, and a little disorienting, about watching an industry you’ve been around for years suddenly start talking like it discovered the future overnight.
That’s what it’s felt like in FinTech lately. AI is everywhere. Virtual assistants, algorithmic advisors, underwriting engines, compliance bots — you name it. The pitch decks are polished, and the demos are slick. And one thing I keep hearing over and over again: it’s transformative.
But I can’t help but ask: is this real transformation? Or just a prettier front end on the same back-office processes?
I’m okay with AI being an improvement of existing processes, but improvement isn’t transformation. Because here’s what’s also true: behind some of those AI claims are humans (sometimes entire teams) quietly powering what we’re calling “intelligent automation.” In some cases, those humans are halfway around the world, working behind the scenes to simulate what the model should be able to do.
We had a conversation recently with the CTO of a Fortune 1000 software company who said something that’s really stuck with me. He was talking about their models and how they’re intentionally throttling their AI deployment because users need to be able to trust their systems of record. They are intentionally only deploying about 10% of what’s possible. It’s a wild thing to think about: We’ve built these models that are unbelievably powerful, but we’re holding them back out of caution. Out of ethics. Out of a desire to protect people from a system they might not fully understand. Because the truth is, AI isn’t a binary switch; it’s a spectrum. And somewhere along that spectrum, we have to decide: Are we building with integrity, or just building fast?
Fortunately, as with this team and others, there is real innovation happening in FinTech. Take Experian’s AI assistant, which has won awards for cutting modeling cycles from months to days, even hours. Or Tiger Brokers in China, who integrated DeepSeek’s large model into their chatbot to improve trading analysis. Or BNY Mellon, which is quietly scaling its OpenAI partnership to power Eliza, its internal AI platform.
These aren’t just proofs of concept. They’re signs of what’s possible when AI is implemented thoughtfully and at scale. But even with the promise, there are warnings we can’t ignore. The Bank of England recently cautioned that too many traders leaning on AI could lead to herd behavior: everyone reacting the same way, at the same time, based on the same signals. That’s not just a bug in the system; that’s a systemic risk.
At this point you might ask: if I’m not in the FinTech industry, why does this matter to me? Financial operations are the backbone of every business, regardless of industry. From payroll and invoicing to forecasting and fraud detection, companies rely on FinTech tools to run smoothly, and AI is making those processes faster, smarter, and more reliable, impacting decisions, efficiency, and growth at every level. It’s also quietly reshaping how every consumer saves, borrows, invests, and transacts. From faster loan approvals to smarter fraud prevention and personalized financial advice, its ripple effects touch every wallet.
FinTech’s relationship with AI is still being defined. And like any relationship, it’ll take time, trust, and honesty. As AI continues to permeate the FinTech space, critical questions emerge:
Are we witnessing genuine innovation, or are some solutions merely rebranded traditional methods?
How can companies ensure transparency and build trust with users?
What measures are necessary to balance rapid advancement with ethical considerations?
The intersection of AI and FinTech holds immense promise, but realizing its full potential requires introspection, transparency, and a commitment to ethical practices.
As we continue accelerating toward an AI-powered future, this conversation lingers: What does it mean when a Fortune 1000 CTO says they’re only deploying 10% of their AI’s capabilities — on purpose? In light of Martec’s Law, which reminds us that technology changes exponentially while organizations change logarithmically, the real question becomes: If our technology is advancing faster than our people, processes, and trust can keep up — how do we know when it’s time to push forward versus hold back?
I’m hopeful, but also curious. Maybe even a little protective of an industry I’ve seen evolve over the years. Because in the end, real innovation doesn’t just change what we build; it changes why we build it. I believe AI has the power to reshape FinTech, and in many ways, it already is. But real innovation requires more than capability. It demands clarity and accountability.
@Kacee
Thoughtful article from a thoughtful source.
A lot of this is older than you credit.
The mechanical Turk trick of a human back end propping up “AI” is older than Nate: Amazon’s “Just Walk Out” technology relied on it too (https://www.theguardian.com/commentisfree/2024/apr/10/amazon-ai-cashier-less-shops-humans-technology). And back in 2013, mobile apps that could scan a photo and identify what was in it were all the rage among angel and early-stage VC investors.
A kind of souped-up Hotdog/Not-Hotdog (https://www.youtube.com/watch?v=vIci3C4JkL0) for e-commerce. I heard multiple insider stories of startup CEOs who hired armies of temp staff in India to view the submitted images, search the Internet, and message back their guesses.
And in FinTech specifically, credit card fraud detection has been run by AI/ML for at least a decade, so AI has long been embedded in the industry. The challenge is to pull back on Maslow's Hammer and appreciate context. Running a service that requires absolute precision or ethical nuance -- e.g., something that displays a balance or decides whether to award a customer a loan -- is very different from running a generative one that suggests wedding anniversary gift ideas.
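To make the fraud-detection point concrete, here is a minimal, purely illustrative sketch of the simplest idea behind such checks: flagging a transaction that deviates sharply from a customer's spending history. This is my own toy example, not any bank's actual system; the function name and the 3-sigma threshold are assumptions, and production systems use far richer features and trained models.

```python
# Toy anomaly check: is a new transaction amount far outside the
# customer's historical spending pattern? (Illustrative only.)
from statistics import mean, stdev

def is_anomalous(history, amount, threshold=3.0):
    """Return True if `amount` is more than `threshold` standard
    deviations from the mean of the customer's past amounts."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No historical variation: flag anything that differs at all.
        return amount != mu
    return abs(amount - mu) / sigma > threshold

# A customer who usually spends around $40 per transaction:
past = [42.0, 38.5, 45.2, 40.1, 39.9, 41.7, 43.3]
print(is_anomalous(past, 41.0))    # typical purchase
print(is_anomalous(past, 2500.0))  # sudden large outlier
```

Real systems replace this single-feature z-score with models over merchant category, geography, velocity, and device signals, but the decade-old core idea is the same: score deviation from learned behavior.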
Hearing that they're deploying only 10% of their AI capabilities honestly sounds too high. With AI, the cost of going from idea to implementation is plummeting. Building things is no longer the scarce part; the scarcity is in ideation, experimentation, and the deliberate selection of the best ideas. Get any focus group together to come up with 10 ideas, and only 1 or 2 (~10-20%) are worth anything.
Vet those out further with deeper analysis, data validation, acceptance testing, and customer service training and the figure should be much lower.