A conversation about Sam Altman’s comments to the Federal Reserve, why document fraud is so hard to detect, and how agentic systems promise a new era of fraud prevention.
In this episode of Good Question, host Brianna Valleskey sits down with Inscribe AI co-founders Ronan and Conor.
Together, they unpack OpenAI CEO Sam Altman’s warning of an “impending significant fraud crisis” and what it means for financial services, risk teams, and the future of trust.
Ronan noted that Altman’s statement echoed what risk teams have already been seeing: AI has increased the volume, speed, and sophistication of fraud.
It’s not just Photoshop edits anymore. Fraudsters are now generating entirely fabricated documents and transaction data.
That means more losses, heavier workloads for operations teams, and poorer customer experiences.
“It’s the beginning of a new era,” Ronan explained. “Whether we call it a crisis is up for debate — but more leaders need to start ringing the alarm bells.”
Conor shared a striking statistic: Inscribe has seen a 200% increase in AI-generated fraud detections in recent months.
These fakes often slip past even manual review teams before being caught, leaving behind only subtle telltale markers.
These markers provide an initial line of defense, but as fraudsters improve, detectors will need to evolve just as quickly.
The episode also traced the evolution of Inscribe’s approach to fraud detection.
Instead of a one-to-one approach (one detector per fraud type), fraud reasoning enables a one-to-many model capable of catching both known and unseen fraud patterns.
“Fraud reasoning means we can finally tackle the long tail of fraud — the kinds we’ve never seen before,” Conor explained.
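The one-to-one versus one-to-many distinction can be sketched in miniature. Nothing below is Inscribe’s actual implementation: the detector names, document fields, and toy consistency check are all hypothetical stand-ins, with a simple rule-based pass standing in for an LLM reasoning step.

```python
# Illustrative only: contrasting a one-to-one detector registry with a
# one-to-many reasoning-style consistency check. All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_type: str
    fields: dict = field(default_factory=dict)

# One-to-one: a hand-written detector per known fraud type.
KNOWN_DETECTORS = {
    "template_reuse": lambda d: d.fields.get("template_id") in {"T-1001", "T-1002"},
    "font_mismatch": lambda d: d.fields.get("font_count", 1) > 3,
}

def one_to_one(doc: Document) -> list[str]:
    """Flags only the fraud types we explicitly coded for."""
    return [name for name, check in KNOWN_DETECTORS.items() if check(doc)]

# One-to-many: a single generic pass that scores internal consistency,
# so it can flag patterns no individual detector was written for.
def one_to_many(doc: Document) -> bool:
    """Toy consistency check standing in for an LLM reasoning pass."""
    inconsistencies = 0
    if doc.fields.get("issue_date", "") > doc.fields.get("statement_date", ""):
        inconsistencies += 1  # issued after the period it claims to cover
    if doc.fields.get("balance", 0) < 0 and doc.doc_type == "pay_stub":
        inconsistencies += 1  # a pay stub should not show a negative balance
    return inconsistencies > 0

doc = Document("pay_stub", {"issue_date": "2025-03-01",
                            "statement_date": "2025-02-01",
                            "balance": -50})
print(one_to_one(doc))   # [] -- no hard-coded rule fires
print(one_to_many(doc))  # True -- the generic consistency pass flags it
```

The point of the sketch: the fabricated document sails past every purpose-built detector, but a single reasoning-style check catches it, which is the “long tail” advantage Conor describes.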
Documents remain among the hardest artifacts of trust to defend. They vary widely, mix visual and textual signals, and often require contextual reasoning.
An LLM-powered agent can pose the kinds of contextual questions a trained reviewer would.
By embedding reasoning in document authentication, fraud teams can protect one of the most important (and most abused) trust signals in financial services.
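As a rough illustration of what embedding reasoning in document authentication could look like, here is a hypothetical prompt builder. The question list and prompt shape are assumptions for the sake of example, not an actual Inscribe interface.

```python
# Hypothetical sketch: the contextual questions a reasoning agent might
# pose about a submitted document before an LLM call is made.
def build_authentication_prompt(doc_text: str) -> str:
    questions = [
        "Do the dates in this document follow a plausible sequence?",
        "Are fonts, layout, and branding consistent with the stated issuer?",
        "Do the totals and line items add up?",
        "Does the language match how this institution normally writes?",
    ]
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        "You are reviewing a financial document for authenticity.\n"
        "Answer each question with evidence from the text:\n"
        f"{numbered}\n\n"
        f"Document:\n{doc_text}"
    )

prompt = build_authentication_prompt("ACME Bank statement ...")
print(prompt.splitlines()[0])  # You are reviewing a financial document for authenticity.
```

The design choice worth noting is that the questions mix visual signals (fonts, branding), arithmetic (totals), and contextual judgment (issuer’s typical language), mirroring the point that documents require all three kinds of reasoning at once.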
Agentic systems are already making breakthroughs outside of fraud. Physicists recently used AI not to label data, but to discover new physical laws from messy, real-world inputs.
Bri drew a parallel: fraud is messy, nonlinear, and constantly evolving. As in science, reasoning systems can uncover patterns no one has explicitly trained them to find. That is a powerful shift for fraud detection.
The episode closed with a real-world case study: John Dixon, former head of tax at EY, forged six declarations of trust and a loan agreement during bankruptcy proceedings in 2004. He simply downloaded templates online and added a law firm’s branding, and the scheme was eventually exposed by its unusual language and suspicious timing.
Imagine, the hosts pointed out, how much more convincing such forgeries would be with today’s generative AI tools.
This episode made one thing clear: fraud prevention is entering a new era. It’s not just about faster detection. It’s about smarter reasoning.
The future belongs to teams that embrace AI not as a replacement, but as a partner. Agentic fraud reasoning systems can spot patterns, adapt to new threats, and protect trust in ways that were impossible before.
“We’re embarking on a new way of building fraud detection systems; ones that can catch emergent patterns we’ve never seen before.”
👀 Watch the full episode on YouTube or visit the Good Question playlist.
🎧 Listen on Spotify, Apple Podcasts, or right here on this page.
📢 Know a fraud fighter whose story should be featured? Let us know at info@inscribe.ai.
Start your free trial to catch more fraud, faster.