
AI Adoption in Risk and Compliance: Revolution or Evolution?

Written by Cristian deRitis | November 14, 2025

For years, artificial intelligence has held the promise of revolutionizing risk and compliance, a bold claim that has sparked extensive debate at industry conferences. With adoption rates now accelerating, we finally have sufficient evidence to assess whether the impact is genuine or merely marketing rhetoric.

The verdict: It’s evolution, not revolution. Instead of completely overhauling existing systems, AI is driving targeted operational enhancements, with risk professionals guiding the way to ensure AI’s use is safe, ethical, and effective.

Limited Effect Despite Growing Adoption

A recent Moody’s study revealed that 53% of risk and compliance professionals are actively using or trialing AI solutions – a dramatic leap from 30% just two years ago. This acceleration reflects not only technological advancement but also a fundamental shift in how financial institutions approach regulatory challenges and risk management.

Consider fraud detection, where AI was expected to deliver breakthrough results. Early adopters report genuine improvements in pattern recognition and anomaly detection, yet few survey respondents have experienced the radical transformation vendors promised.

At this point in the development cycle, it is clear that AI excels at augmenting existing processes but struggles to fundamentally reimagine them. While the technology identifies suspicious transactions faster and more accurately than manual review, it hasn’t eliminated the need for human investigation and judgment.

The Know Your Customer (KYC) and customer due diligence (CDD) space tells a similar story. While many cite meaningful impact from AI, the benefits cluster around efficiency gains rather than groundbreaking capabilities. AI accelerates document verification and entity matching, reducing processing time from days to hours. Yet, complex cases such as those involving politically exposed persons, beneficial ownership structures, or ambiguities in sanctions screening still require human judgment and expertise.

Where Reality Exceeds Expectations

Large language models (LLMs) represent one area where reality has outpaced initial skepticism. Their rapid, widespread adoption has exceeded expectations, particularly in processing unstructured data such as emails, contracts, and regulatory documents – a long-standing challenge in compliance operations.


The dramatic shift in organizational stance is telling. Just two years ago, few companies were using LLMs. Today, most actively encourage their use, suggesting that the practical benefits outweigh the theoretical risks and initial skepticism. Organizations report that LLMs excel at tasks that previously consumed significant analyst time, including initial document review, regulatory change analysis, and report drafting.

The strong correlation between data quality and AI success confirms what practitioners have long suspected: Sophisticated algorithms cannot overcome poor data foundations. Organizations with mature data infrastructure report far higher rates of successful adoption than those with fragmented systems.

For all of the algorithmic sophistication that machine learning models bring, the old "garbage in, garbage out" principle still rings true. Without well-organized data, AI investments are largely a waste of resources.

Where Hype Persists

Survey results showed several persistent gaps between the promise and the delivery of AI solutions:

Full automation remains elusive. Despite vendor claims, complete automation of compliance processes hasn't materialized. The near-universal belief that human oversight remains essential suggests the industry recognizes AI's limitations. Rather than replacing compliance officers, AI redistributes their work toward exception handling and complex decision-making.

ROI uncertainty continues. Many organizations still don't measure AI performance systematically, meaning implementations proceed on faith rather than evidence. Without clear metrics, both successes and failures become anecdotal rather than empirical.

The “black box" problem persists. Widespread concerns about AI transparency indicate that model interpretability remains unresolved. Regulators and risk managers struggle to accept systems whose decision-making processes remain opaque, limiting deployment in high-stakes areas.

Sector Disparities Reveal Structural Realities

The dramatic variation in adoption rates across sectors demonstrates how institutional factors shape outcomes. Fintech firms, unencumbered by legacy infrastructure and cultural inertia, are rapidly implementing AI. Traditional banks, managing decades-old core systems and stringent regulatory requirements, face structural barriers that are difficult to overcome.

This disparity suggests that the AI "revolution" will proceed at vastly different speeds across industries and sectors. What might appear to be a transformative technology in agile fintech companies may seem merely evolutionary in established institutions. Both perspectives reflect legitimate experiences rather than misperceptions.

Regulatory Reality

The fragmented global regulatory landscape adds another dimension to the hype-versus-reality debate. While most practitioners support AI-specific regulation, the current patchwork – ranging from the comprehensive EU-wide framework embodied in the AI Act, to state-level rules in the U.S., to country-specific approaches in Asia – creates implementation complexity that vendors rarely acknowledge.

Regulatory uncertainty can introduce practical limitations that sharply restrict AI’s theoretical capabilities. For example, a solution that works in one jurisdiction may violate requirements in another, forcing multinational organizations toward lowest-common-denominator implementations that sacrifice innovation for compliance certainty.

Expertise Gaps

The persistent lack of internal expertise cited in surveys reveals an uncomfortable truth about AI adoption: The technology has advanced faster than the human capacity to implement it effectively. This skills gap means many organizations are deploying AI suboptimally, achieving only a fraction of its potential benefits.

Moreover, the expertise shortage extends beyond pure technical capabilities. Effective AI implementation requires professionals who understand algorithmic capabilities, regulatory requirements, and organizational design. This combination remains exceedingly rare. Until this talent gap is closed, the AI revolution will remain incremental.

Evolution Disguised as Revolution

Most organizations expect widespread adoption within three years, but the trajectory is more evolutionary than revolutionary. Practitioners anticipate that AI will change their roles rather than eliminate them entirely. The changes will be more subtle than anticipated, with risk managers shifting toward strategic responsibilities, technical collaboration, exception handling, and AI supervision.

This evolution mirrors previous technological shifts in the financial services industry. Just as spreadsheets didn’t eliminate accountants but changed the way they worked, AI will reshape rather than replace compliance functions. For those willing and able to adapt, the future will be different but bright.

Parting Thoughts

The Moody’s survey suggests AI in risk and compliance exists somewhere between hype and revolution. The technology delivers genuine benefits, including improved detection rates, faster processing, and enhanced scalability, but falls short of the immediate, radical transformation some vendors have promised.

Risk managers’ actions echo the survey results. While most practitioners see significant advantages in AI adoption, their measured implementation approach and persistent concerns about over-reliance suggest healthy skepticism. They recognize AI as a powerful tool that requires careful deployment, rather than a magical solution to all their compliance challenges.

The evolution of risk and compliance will emerge through incremental improvements rather than dramatic disruption. As organizations build expertise, refine use cases, and develop governance frameworks, AI's contribution will steadily grow. But the human element, including judgment, ethics, and accountability, won’t be replaced anytime soon.

The real question isn't whether or when AI will revolutionize risk and compliance, but how practitioners will navigate between technological possibilities and operational realities. How can we best extract value while managing new risks? In this journey lies the true transformation: Not the wholesale automation of compliance, but its evolution into a hybrid discipline that combines human expertise and judgment with algorithmic capabilities.

Risk management is at a crucial stage where doing nothing about AI isn't really an option anymore. At the same time, rushing into AI adoption without proper preparation can lead to serious problems. Successful leaders will find a way to balance embracing innovation with the careful, skeptical approach that effective risk management requires, guiding their teams and companies through this exciting yet challenging time.

 

Cristian deRitis is Managing Director and Deputy Chief Economist at Moody's Analytics. As the head of econometric model research and development, he specializes in analyzing current and future economic conditions, scenario design, consumer credit markets, and housing. In addition to his published research, Cristian is a co-host of the popular Inside Economics Podcast. He can be reached at cristian.deritis@moodys.com.