
Cyber Security

A New Generation of Fake Documents Is Outrunning Conventional Anti-Fraud Capabilities

Look to artificial intelligence to detect and manage risks emerging from the technology’s dark side.

Friday, August 2, 2024

By Joe Lemonnier


In a short amount of time, developments in generative AI have leapt forward. Today there are numerous forms: large language models (LLMs), diffusion models covering language and image generation, and now multimodal AIs. There are proprietary versions – ChatGPT, Claude, etc. – and a growing number of open-source models, such as Meta's Llama.

Many of us are adopting this new technology to automate our work. Unfortunately, so are criminals, who take open-source AI, strip out its guardrails and train it for specific tasks.

The next generation of fake documents is here, and we should all be prepared.

Document Generation Today

While it is not yet possible to generate a complete document from a single prompt, that capability is coming. Current models struggle with consistency: if one element of a generated document is off, editing that single errant element is nearly impossible. The whole document usually has to be regenerated, which introduces new inconsistencies and slows the process.

However, what GenAI can currently do is create individual parts of a document, and we see this in both ID and non-ID documents. A portrait, a signature or a background – even the texture of crumpled paper for a bank statement or utility bill – can be generated as a separate layer, with the layers then assembled into the final image. This gives the document an authentic feel.

Three Threat Factors

Three factors make generative AI an imminent threat: quality, speed and scale.

The quality of available fakes is already past the point of being detected by the naked eye.

The speed of creating a fake document is also hitting an inflection point. Whereas this kind of quality might have taken weeks in the pre-digital world, and days in the pre-AI world, it now takes only seconds.

Joe Lemonnier: Real, leaked data an imminent threat.

Regarding scale: if detecting one document of this caliber would test even the best-trained fraud fighter, services like OnlyFake allow documents to be created in batches of hundreds from a simple upload of an Excel sheet of personal data – completely changing the risk profile.

Just as every company is harnessing the power of AI and LLMs, criminals are doing the same. Fraud automation is more accessible than at any other time, enabling the creation of bots and scripts that manipulate and submit documents concurrently. The quality is too great for the human eye to catch, and the speed and scale at which documents are produced mean that companies not backed by their own AI protections are wholly outmatched.

LLMs Supercharged by Data Leaks

Right now, text is easy to generate using AI. The big threat we see coming down the line is not the generation of fake information, but that AIs can now be fed the contents of the data leaks of the last decade, with that real data then inserted into AI-generated documents.

Of course, data leaks don’t usually include visuals of the actual ID or non-ID documents, but their content is all that is needed to populate a document from scratch: a revolution in synthetic ID fraud.

This frustrates a company's ability to spot a fake when it relies on current tactics. The traditional way of verifying documentation is to take a Know Your Customer (KYC) document, extract its data, and check that data against a third-party database – a credit bureau, Companies House, etc.

The problem is that the data is now real, coming directly from a leaked database, so cross-checking provides no security. With database checks no longer reliable, companies must look at the document itself to determine whether it is genuine. This poses a very real and imminent threat to any company handling KYC documents today.
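The weakness of the traditional cross-check can be sketched in a few lines. This is a minimal, hypothetical illustration – the names, fields and in-memory "bureau" stand in for a real credit-bureau or registry lookup – showing why a document forged from leaked, genuine data sails through a data-only check:

```python
# Simulated third-party database (e.g. credit bureau records).
# In reality this would be an external API call, not an in-memory set.
BUREAU_RECORDS = {
    ("Jane Doe", "1985-03-14", "AB123456C"),
}

def extract_fields(document: dict) -> tuple:
    """Stand-in for OCR/field extraction from a KYC document."""
    return (document["name"], document["dob"], document["id_number"])

def database_check(document: dict) -> bool:
    """Traditional check: do the extracted fields match a bureau record?"""
    return extract_fields(document) in BUREAU_RECORDS

# A document forged from leaked -- but genuine -- personal data:
forged_doc = {"name": "Jane Doe", "dob": "1985-03-14", "id_number": "AB123456C"}

# The cross-check passes, because the data itself is real. The fraud can
# only be caught by inspecting the document image, not its contents.
print(database_check(forged_doc))  # True
```

The point is that nothing in this workflow ever examines the document as an artifact – only its data – which is exactly the gap leaked databases exploit.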

Who Is Most Threatened?

Making this particularly dangerous is that this new kind of AI-powered synthetic fraud technology is being used by third-party fraudsters. It’s not a case of an individual tinkering with their personal bank statements to enable them to take out a larger loan. This is the type of systematic fraud which is most damaging and keeps organizations up at night.

Of course, the type of damage depends on the industry. We've witnessed this activity a lot in payment firms and banks. There is great value in opening numerous accounts, as it's a way for any nefarious outfit to receive funding for its fraudulent activities. Criminals can circulate money across multiple accounts they control, raising their credit ratings to obtain larger loans, then taking all the money and disappearing in a coordinated bust-out.

It’s also possible to sell the use of these accounts to other criminal organizations seeking to launder money.

There is no money mule to catch – the police can’t question the person responsible, as they are effectively a ghost. And it’s a numbers game – not every attempt to open a bank account has to be successful. The speed and scale of AI-powered document generation allows criminals to iterate, working out the best approach to break through a bank’s defenses.

AI Good versus Evil

Generally speaking, around 2% of documents submitted to our customers show some form of "serial fraud," meaning they come from template farms or are artificially generated. Over a single weekend, one customer encountered 68 instances of OnlyFake being used to submit documentation – and that is only one of the many GenAI platforms available.

Even the best-trained team of investigators would struggle under that onslaught. The good news is that you don't need to be a high-tech company to leverage the available tools. Specialist anti-fraud AI, such as document forensics, can be integrated via API to protect automated workflows. It can also be used manually, by dragging and dropping suspect documents into a browser interface – giving front-line fraud fighters bionic eyes to detect fraud in real time.
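Wiring such a forensics check into an automated workflow might look like the following sketch. The `forensics_verdict` function, its score field and the threshold are all hypothetical stand-ins for a vendor's actual API (no real product's endpoints or response format is implied here):

```python
def forensics_verdict(document_bytes: bytes) -> dict:
    """Hypothetical stand-in for a document-forensics API call.

    In production this would POST document_bytes to the vendor's
    endpoint and return its JSON verdict; stubbed here for illustration.
    """
    return {"score": 0.97, "indicators": ["generated_background"]}

def route_document(document_bytes: bytes, threshold: float = 0.8) -> str:
    """Auto-reject high-risk documents; pass the rest to standard KYC."""
    verdict = forensics_verdict(document_bytes)
    if verdict["score"] >= threshold:
        return "reject"        # likely generated or tampered
    return "continue_kyc"      # proceed with normal data checks

print(route_document(b"%PDF-1.7 ..."))
```

The design choice worth noting is the ordering: the forensics check runs before any data cross-check, so documents built from leaked real data are screened on their visual and structural properties rather than waved through on matching fields.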

But the arms race is on and there is no going back. To fight this battle against high-scale, high-velocity generative AI fraud, you need good AI to back you up.

 

Joe Lemonnier is product marketing director at Resistant AI. The company enhances automated financial risk and compliance systems with its document, transaction and identity forensics products.





© 2024 Global Association of Risk Professionals