Cyber Security

Beyond Deepfakes: Synthetic Fraud’s Next Alarming Evolution

Artificial intelligence powers potentially more pervasive shallowfakes – and holds a key to solutions.

Friday, September 30, 2022

By Martin Rehak

“Deepfakes,” near-perfect synthetic still images or video footage created to impersonate an individual, have gained recent notoriety. They are a type of synthetic fraud in which pieces of fabricated information are packaged to replicate a legitimate identity, for example to deceive creditors or insurers.

The financial services and insurance industries are rightly taking notice of the deepfake threat, but this type of impersonation tends to be reserved for targeted attacks that promise significant gain for the criminal. At the same time, we are seeing a rising incidence of a simpler variant: “shallowfakes.”

Both deepfakes and shallowfakes entered the fraud landscape by leveraging technology that first emerged from social media experimentation with open-source face-swapping tools.

Deep versus Shallow

Where the two diverge is in the level of skill and technology required to execute a successful attack.

Deepfakes, created with sophisticated AI, are costly and complex to generate; shallowfakes can be produced with far simpler manipulation, using basic photo or video editing software accessible to most people. “Shallow” does not mean the resulting documents or IDs are less sophisticated or less likely to succeed in a fraud attack. The distinction lies in the techniques used: shallowfakes do not use deep learning in their creation.

Although shallowfakes are less complex to build than deepfakes, which may suggest a lesser degree of danger, their potential to do damage should not be underestimated.

The threshold to commit fraud has been lowered, and the threat actors have changed from dedicated criminal groups to low-skilled everyday individuals. This makes shallowfakes more pervasive and easily scalable for direct fraud threats – adding to the challenges already facing ill-equipped organizations.

Insurance Industry Exposure

Fraud in the insurance industry costs U.S. consumers at least $308 billion a year, making fraud a significant concern for those consumers and their insurers alike. Opportunities for criminals to perpetrate fraud increased during the recent COVID-19 pandemic, adding to the challenges faced by insurers in the fight to protect legitimate customers from fraudsters.

Insurance fraud manifests itself in different ways: deliberately misstating information to obtain better coverage terms, filing claims for items that were never owned, or inflating the value of a claim. To improve shallowfake detection, insurance providers need to recognize the signs of tampered or synthesized documents and identities.

Falsified documents in fraudulent claims include forged state driver’s licenses with fake addresses (the most common offenders, particularly when crafting false identities) and invoices or bank statements that are easily manipulated with text editors. The same documents can be reused across many insurers, with slight alterations to the name, address or other data to avoid detection. Businesses that rely on automated customer approval processes are especially susceptible to these types of fraud.
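One practical way to catch such recycled forgeries is near-duplicate detection: comparing an incoming document's text against previously seen submissions and flagging pairs that differ only in a few fields. A minimal sketch using Python's standard library (the sample invoices and the 0.9 threshold are illustrative assumptions, not any vendor's actual method):

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] of how much two document texts overlap."""
    return SequenceMatcher(None, a, b).ratio()

def flag_recycled(new_doc: str, seen_docs: list, threshold: float = 0.9) -> bool:
    """Flag a document nearly identical to one seen before, e.g. the
    same forged invoice resubmitted with only the name changed."""
    return any(similarity(new_doc, old) >= threshold for old in seen_docs)

# Illustrative documents: the same invoice, with only the name altered.
seen = ["Invoice 1042: John Smith, 12 Oak St, water damage repair, $4,800"]
new = "Invoice 1042: Jane Smith, 12 Oak St, water damage repair, $4,800"
print(flag_recycled(new, seen))  # → True
```

Real systems would normalize extracted text first (OCR, field parsing) and use hashing to scale the comparison across millions of documents, but the core idea is the same.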

Fighting Back

Human ingenuity always designs new solutions in response to the latest and emerging threats, and advances in AI technology have proven critical in detecting shallowfakes. The ability of AI algorithms to detect anomalies within datasets, combined with faster validation speeds, empowers insurance providers with the tools necessary to sort the vast sets of data they collect in the course of daily business.
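As a toy illustration of anomaly detection on such data (the claim amounts and the multiplier below are invented for the example; production systems use far richer models), a robust statistical test flags values that sit far from the rest of a batch:

```python
import statistics

def flag_outliers(amounts, k=10.0):
    """Flag amounts whose distance from the median exceeds k times the
    median absolute deviation (MAD) -- a simple, robust outlier test."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    return [a for a in amounts if abs(a - med) > k * mad]

# Illustrative claim amounts: one inflated claim among routine ones.
claims = [1200, 950, 1100, 1300, 1050, 980, 45000]
print(flag_outliers(claims))  # → [45000]
```

The median-based test is used here rather than a mean/standard-deviation z-score because a single extreme claim drags the mean and inflates the standard deviation, masking the very outlier being sought.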

AI-powered “document forensics” performs deep, full scans of any type of document involved in insurance underwriting and claims management, detecting manipulations or inconsistencies imperceptible to the human eye.
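One of the simpler signals such forensics can examine is document metadata. The sketch below is a toy heuristic, not any vendor's actual pipeline, and the tool names are illustrative: it scans a PDF's raw bytes for producer/creator fields suggesting the file passed through an editing tool rather than a scanner or a bank's statement generator:

```python
import re

# Illustrative list of editing tools whose presence in PDF metadata
# may warrant review; real forensics weighs many more signals.
SUSPICIOUS_PRODUCERS = [b"Photoshop", b"GIMP", b"Microsoft Word"]

def editing_traces(pdf_bytes: bytes) -> list:
    """Return producer/creator strings in the raw PDF metadata
    that match known editing tools."""
    fields = re.findall(rb"/(?:Producer|Creator)\s*\(([^)]*)\)", pdf_bytes)
    return [f for f in fields if any(tool in f for tool in SUSPICIOUS_PRODUCERS)]

# Toy PDF fragment containing a metadata dictionary.
fragment = b"%PDF-1.4 ... /Creator (Adobe Photoshop 23.0) /Producer (Acme Scan) ..."
print(editing_traces(fragment))  # → [b'Adobe Photoshop 23.0']
```

Metadata can of course be stripped or spoofed, which is why such checks are combined with pixel-level and content-consistency analysis in practice.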

Beyond conducting forensic examinations of documents at a scale humans cannot match, AI can also reduce the workload expected of insurance analysts. By learning from past claims investigations, AI models can be tuned automatically to further increase the precision of the decisions they are responsible for.
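That feedback loop can be sketched very simply: given past cases with known outcomes, a detector's alert threshold can be re-tuned to maximize precision, so analysts see fewer false alarms. The scoring function and data below are invented for illustration:

```python
def tune_threshold(scored_cases, candidates=(0.5, 0.6, 0.7, 0.8, 0.9)):
    """Pick the candidate alert threshold with the best precision on past
    investigated cases, given as (risk_score, was_actually_fraud) pairs."""
    def precision(t):
        flagged = [fraud for score, fraud in scored_cases if score >= t]
        return sum(flagged) / len(flagged) if flagged else 0.0
    return max(candidates, key=precision)

# Past investigations: model risk scores vs. confirmed outcomes (illustrative).
history = [(0.95, True), (0.85, True), (0.75, False),
           (0.65, False), (0.55, True), (0.40, False)]
print(tune_threshold(history))  # → 0.8
```

In deployment this trade-off would also weigh recall (missed fraud), but the principle — letting confirmed case outcomes recalibrate the model — is what allows precision to improve automatically over time.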

More and more insurers are coming around to the use of AI to combat various types of fraud while simultaneously delivering efficiency savings within their analyst pool. The alternative is sticking with legacy systems that are unequipped to handle the volume and scale of claims that insurance providers are faced with today.

Recognize the Threat

Prevention technologies, which assess and validate documents at the point of creation or capture, are another solution touted to stop shallowfakes from being created at all. Here, the responsibility falls on software providers to build guardrails into their products to ensure users cannot exploit their tools for unscrupulous purposes such as manipulating documents. While these efforts are part of the solution, guardrails are not the whole answer; insurers still need to defend themselves proactively with internal detection tools.

Insurance companies need to first recognize the threat shallowfakes pose in order to devise their course of action and employ effective automated fraud technologies to monitor for suspicious behavior.

Shallowfakes are but another example of innovation by criminals in their quest to commit financial crime. All entities need to be aware that the fight against fraud is a constant case of evolution.

Shallowfakes are undeniably a threat to the credibility and accuracy of insurers’ business, and to the security of their operations. As new products come to market and technology becomes more accessible, shallowfakes will become more prevalent as finance and identity become intrinsically linked online.

But perhaps the greatest risk associated with shallowfakes is the inertia of companies that carry on with business as usual. It is easy to downplay the risk and the sophistication of shallowfake techniques, but with a little creativity and the right technology in place, companies can come out ahead and leave the fraudsters empty-handed.

Martin Rehak is CEO and founder of Resistant AI, an artificial intelligence/machine learning company specializing in identity forensics and financial crime prevention.




© 2022 Global Association of Risk Professionals