How to prepare for the age of GenAI-enabled fraud

March 22, 2024

  • Brianna Valleskey
    Head of Marketing

Fraud, credit, and compliance teams have the most important function in the most important industry in the world. They work diligently behind the scenes to keep the gears of financial institutions turning smoothly — and make billions of decisions per year worth trillions of dollars.

And today, they face the challenge presented by Generative Artificial Intelligence (GenAI), a tool transforming day-to-day operations for fraudsters and fraud fighters alike.

Because fraudsters are always evolving their techniques, the solution needed to fight fraud must be deeply technical: it requires the most innovative machine learning models and ongoing improvement to stay effective, rather than a once-and-done fix.

Risk teams need to embrace the power of AI and partner with vendors that specialize in staying ahead of fraudsters as new fraud vectors emerge.

Why GenAI poses a threat to financial services 

Financial services, real estate, e-commerce, and other industries should be concerned about the potential misuse of GenAI for fraud for several reasons:

First, GenAI enables fraudsters to create highly sophisticated fraudulent materials, such as synthetic identities, fake documents, and realistic-looking transaction data. These materials can be difficult to distinguish from genuine ones, making it easier for fraudsters to deceive financial institutions.

Second, GenAI's automation capabilities allow fraudsters to scale their fraudulent activities more easily. They can generate large volumes of fake identities, documents, or transactions in a short amount of time, increasing the overall volume of fraudulent attempts financial institutions must handle.

Finally, financial institutions are subject to various regulations and compliance requirements related to fraud prevention and customer protection. The emergence of GenAI for fraud adds complexity to compliance efforts, as institutions must adapt their fraud prevention strategies to address this new threat landscape.

The evolving landscape of fraud, fueled by advancements in technology like GenAI, underscores the need for financial services firms to continuously enhance their fraud detection and prevention measures to mitigate risks and protect their customers and assets.

How can fraudsters use GenAI? 

Fraudsters can potentially use generative AI in several ways to perpetrate fraudulent activities. Here are some examples: 

  • Creating fake identities: Generative AI can be used to generate realistic-looking fake identities, including names, addresses, Social Security numbers, and even photographs. These fake identities can then be used to open bank accounts, apply for loans, or commit identity theft.
  • Generating fake documents: Generative AI can be used to create fake documents such as IDs, passports, utility bills, and pay stubs. These fake documents can be used to support fraudulent activities such as loan applications, insurance claims, or employment verification.
  • Generating fake transactions: Fraudsters can use generative AI to generate synthetic transaction data that mimics real transactions. This synthetic data can be used to inflate sales figures, manipulate financial statements, or commit insurance fraud.
  • Creating synthetic images/video for phishing: Generative AI can be used to create synthetic images or videos of real people, which can then be used in phishing attacks. For example, a fraudster could use generative AI to create a fake video of a CEO instructing employees to transfer funds to a fraudulent account.
  • Generating fake reviews: Generative AI can be used to generate fake online reviews for products or services. These fake reviews can be used to manipulate consumer perceptions and deceive potential customers.
  • Manipulating biometric data: Generative AI can also be used to manipulate biometric data such as fingerprints or facial recognition data. This could be used to spoof biometric authentication systems or create fake identities.
  • Creating synthetic voice recordings: Generative AI can generate synthetic voice recordings that sound like real people. These synthetic voices can be used in voice phishing (vishing) attacks to impersonate individuals or organizations.

The use of generative AI for fraudulent purposes is illegal and unethical. As generative AI technology advances, it's crucial for organizations and regulatory bodies to develop robust safeguards and detection mechanisms to combat fraud and protect individuals' data and privacy.

How Inscribe is responding to GenAI

We’re often asked whether our technology can defend against LLMs and AI-generated documents. The answer is yes.

Our detectors are built to find anomalies in documents based on their context. We have this context because of the huge volume and variety of documents we process every month (think: millions), and it evolves over time as document formats and styles are updated. But this context isn't publicly available, so rogue LLMs will lack it and won't adapt as it changes. This means the documents fraudsters create are likely to contain anomalies detectable by Inscribe.
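
To make the general idea concrete, here is a minimal, hypothetical sketch of context-based anomaly detection (an illustration, not Inscribe's actual system): fit a model on features extracted from a large corpus of genuine documents, then flag incoming documents whose features fall outside that learned distribution. The feature names and values below are assumptions for illustration only.

```python
# A minimal sketch of context-based anomaly detection (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in features per document (e.g., font count, average kerning,
# metadata-field count, pixel-noise score). A production system would use
# far richer, learned representations.
genuine_docs = rng.normal(loc=[4.0, 0.12, 10.0, 0.02], scale=0.5, size=(5000, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(genuine_docs)  # learn what "normal" document context looks like

suspect = np.array([[9.0, 0.45, 3.0, 0.30]])  # features of an incoming document
print(detector.predict(suspect))  # -1 = anomalous, 1 = consistent with corpus
```

The key point the sketch captures: the model's notion of "normal" comes from a private corpus, so a generator without access to that corpus has no way to know which feature combinations will look anomalous.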

Also, we suspect that many of the examples shown in social media posts and online forums were generated with some human intervention, which our template and copycat-image detectors are built to catch. Purely AI-generated documents are not realistic just yet.

However, we acknowledge that these models will improve, and we plan to combat this new and evolving fraud technique.

Currently, the models being used to generate fake documents are imperfect. They have several telltale signs, such as bad spelling, made-up words, and numbers that don't add up. While we could build a model to identify these anomalies and flag the documents as AI-generated, we know the rogue models will keep improving over time.
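
As a simple illustration of one such telltale check (a hypothetical sketch, not one of our detectors), consider verifying that extracted line items actually sum to a document's stated total. The field names and values are assumptions for the example.

```python
# A minimal sketch of a "numbers that don't add up" check (illustrative only).
def amounts_consistent(line_items: list[float], stated_total: float,
                       tolerance: float = 0.01) -> bool:
    """Return True if the line items add up to the document's stated total."""
    return abs(sum(line_items) - stated_total) <= tolerance

# Example: a generated pay stub whose earnings don't reconcile is suspicious.
earnings = [1250.00, 310.50, 75.25]           # extracted line items
print(amounts_consistent(earnings, 1635.75))  # True  (1250 + 310.50 + 75.25)
print(amounts_consistent(earnings, 1700.00))  # False -> flag for review
```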

We expect that the outputs of GenAI models will be indistinguishable from real documents in the future. That means effectively combating this fraud risk requires incorporating the broader context of the document into decision-making, which we plan to achieve in the same way we always have: using the latest advancements in deep learning and LLMs.

Partner with a trusted AI vendor to scale safely 

According to a survey taken during our webinar last month, 84% of risk leaders plan to use LLMs — but most don't know where to start. Inscribe is here to help. We are uniquely positioned to bring AI to risk teams at scale.

Our team of experienced AI/ML engineers and data scientists works hand-in-hand with risk teams at leading financial services companies, deploying safe and usable AI models to protect against increasingly sophisticated fraudsters.

Inscribe's performance consistently improves, and we continue to push the boundaries of what's technically possible, ensuring our customers always have state-of-the-art risk management. 

Want to learn more? Find out how our Risk Intelligence platform can help you defend against the fraud tactics of today by requesting a demo with our team.
