For decades, the mantra of risk practitioners has been that better predictive power equals better risk management. The EU AI Act, however, turns this assumption upside down. Indeed, your most accurate models might now be your greatest liabilities.
Cristian deRitis
Picture this: You're two years into developing what could be the most sophisticated credit decisioning model your institution has ever deployed. The algorithm is remarkable, predicting defaults with 30% greater accuracy than your current system. You and your team have poured hundreds of hours into the effort. Then, just three weeks before going live, your risk committee shelves the project: the model's opacity and limited explainability rendered it non-compliant with the EU AI Act.
The European Union has taken the lead in strictly regulating the use of artificial intelligence, but its new rules are a wake-up call for risk managers across the globe, as other countries are likely to follow with their own regimes. To be clear, this isn't just an incremental increase in compliance requirements; it's a major shift with far-reaching implications for how institutions build and deploy AI models.
Let’s now explore the impact of new AI regulation and discuss ways risk managers can minimize regulatory risk exposure while still garnering all of the benefits that more accurate AI-driven models have to offer.
The EU AI Act categorizes AI systems into different risk levels, from minimal risk to unacceptable risk. It prohibits certain AI applications deemed too dangerous, such as social scoring systems and AI that exploits the vulnerabilities of specific groups. High-risk AI systems, such as those used in critical infrastructure, education or law enforcement, face strict requirements, including risk assessments, data governance and human oversight.
High-risk applications for financial institutions include credit decisioning, fraud detection and automated underwriting. AI models covering these tasks face stringent requirements, including continuous risk assessment, comprehensive data governance, human oversight and exhaustive technical documentation.
Defending model performance purely on statistical grounds is no longer sufficient. We must be able to explain not just what our AI models predict but also how they “think,” why they think this way, and whether this thinking introduces unfair bias.
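For illustration, consider one simple explainability check a validation team might run: permutation feature importance, which measures how much a model's out-of-sample accuracy degrades when each input is scrambled. The feature names and synthetic data below are purely hypothetical, and the Act does not prescribe any particular technique; this is a minimal sketch of the kind of "why does the model think this?" evidence a validator might assemble.

```python
# A minimal sketch of one common explainability check: permutation feature
# importance on a hypothetical credit-default classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for borrower features and default outcomes.
X, y = make_classification(n_samples=5000, n_features=6, random_state=0)
feature_names = ["utilization", "dti", "tenure", "inquiries", "income", "age_of_file"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in held-out accuracy:
# large drops flag the features the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name:>12}: {imp:.4f}")
```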
For financial institutions, AI regulations will primarily affect their ability to model credit risk, market risk and operational risk in the following ways:
Credit Risk. Alternative machine-learning modeling strategies that opened new avenues in risk management in recent years will now require fairness assessments beyond traditional demographic analysis. Every feature selection, hyperparameter choice and training dataset will become a potential regulatory decision that needs to be examined and defended. Model risk assessments that once focused purely on statistical separation of credit events must now address questions of algorithmic fairness.
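As a concrete illustration, a minimal fairness screen might compare approval rates across customer segments. The adverse impact ratio sketched below, along with the 0.8 rule-of-thumb threshold borrowed from U.S. employment practice, is an illustrative assumption, not a threshold the Act itself specifies.

```python
# A hedged sketch of one simple fairness screen: the adverse impact ratio
# (approval rate for a protected group divided by the rate for the
# reference group). Group labels and data are hypothetical.
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, group: np.ndarray,
                         protected, reference) -> float:
    """Ratio of approval rates between two groups; ~1.0 suggests parity."""
    rate_p = approved[group == protected].mean()
    rate_r = approved[group == reference].mean()
    return rate_p / rate_r

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=10_000)          # hypothetical segments
approved = rng.random(10_000) < np.where(group == "A", 0.62, 0.55)

air = adverse_impact_ratio(approved, group, protected="B", reference="A")
print(f"Adverse impact ratio: {air:.2f}")
if air < 0.8:   # common rule-of-thumb screen, not a statutory EU threshold
    print("Flag for review: approval rates diverge materially across groups.")
```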
Market Risk. Historically, given their limited impact on individual customer credit decisions, high-frequency trading and automated market-making models and systems were excluded from intense regulatory oversight. But the EU AI Act demands human oversight in AI systems that are designed to run at lightning speeds. This requirement amounts to more than just adding a legal disclaimer or a human acknowledgment step: systems will need to be re-architected to introduce meaningful human oversight and intervention, without diminishing the competitive advantage offered by these hyper-optimized systems.
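One possible pattern, sketched below under purely illustrative assumptions, is an oversight gate: routine orders flow through automatically, while outsized ones are held for a human decision. Real systems would need far richer controls, but the sketch shows the shape of the re-architecture.

```python
# A pure-Python sketch of one way to retrofit human oversight onto an
# automated trading pipeline: orders below a risk threshold flow straight
# through, while outsized orders are queued for a human decision.
# The threshold and queue mechanics are illustrative assumptions.
from dataclasses import dataclass, field
from queue import Queue

@dataclass
class Order:
    symbol: str
    notional: float

@dataclass
class OversightGate:
    auto_limit: float                      # max notional executed untouched
    review_queue: Queue = field(default_factory=Queue)

    def submit(self, order: Order) -> str:
        if order.notional <= self.auto_limit:
            return f"EXECUTED {order.symbol} {order.notional:,.0f}"
        self.review_queue.put(order)       # a human analyst approves/rejects
        return f"HELD FOR REVIEW {order.symbol} {order.notional:,.0f}"

gate = OversightGate(auto_limit=1_000_000)
print(gate.submit(Order("EURUSD", 250_000)))    # auto-executed
print(gate.submit(Order("EURUSD", 5_000_000)))  # routed to a human
```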
Operational Risk. Fraud detection systems now must be scrutinized not just for their effectiveness but also for their fairness across demographic groups. False positives that disproportionately affect protected groups may become compliance violations. Authentication systems and transaction monitoring will require comprehensive bias assessment and regular disparate impact analysis.
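A minimal sketch of such a disparate impact check, under hypothetical segment labels and an illustrative tolerance, might compare false positive rates across groups:

```python
# Compare false positive rates (legitimate customers wrongly flagged as
# fraudulent) across demographic segments. Data, segment labels and the
# tolerance are illustrative assumptions, not statutory values.
import numpy as np

def false_positive_rate(flagged: np.ndarray, is_fraud: np.ndarray) -> float:
    legit = ~is_fraud
    return flagged[legit].mean()

rng = np.random.default_rng(1)
segment = rng.choice(["X", "Y"], size=50_000)
is_fraud = rng.random(50_000) < 0.01
# Hypothetical model output with a higher flag rate for segment Y.
flagged = rng.random(50_000) < np.where(segment == "Y", 0.06, 0.04)
flagged |= is_fraud  # assume the model catches actual fraud

fpr = {s: false_positive_rate(flagged[segment == s], is_fraud[segment == s])
       for s in ("X", "Y")}
print(fpr)
if abs(fpr["X"] - fpr["Y"]) > 0.01:  # illustrative tolerance
    print("Flag for review: false positives fall unevenly across segments.")
```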
The EU AI Act's treatment of general-purpose AI is particularly challenging for firms that have integrated foundational AI models into their day-to-day processes, such as document analysis, regulatory reporting, synthetic data generation and model development.
Compliance and governance obligations for these tools cannot be outsourced. Firms will therefore need to subject these processes to the same governance standards as internally developed systems, which may in turn require additional communication and transparency from general-purpose AI model vendors.
Synthetic data generation techniques used in AI model development present another thorny challenge. These approaches evolved to improve performance, enhance privacy and confidentiality, and accelerate the development of risk models. Under the Act, however, synthetic data generation itself qualifies as a high-risk activity, requiring extensive validation.
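As one illustration of what such validation might involve, the sketch below runs a two-sample Kolmogorov-Smirnov test comparing each synthetic feature's distribution against its real counterpart. The data and the p-value cutoff are illustrative assumptions, and a production validation suite would include many more checks.

```python
# A hedged sketch of one basic validation step for synthetic training data:
# a two-sample KS test per feature, real vs. synthetic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
real = {"income": rng.lognormal(10.5, 0.4, 5_000),
        "utilization": rng.beta(2, 5, 5_000)}
# Stand-in for a generator's output; a real pipeline would load its own data.
synthetic = {"income": rng.lognormal(10.5, 0.45, 5_000),
             "utilization": rng.beta(2, 5, 5_000)}

for feature in real:
    stat, p_value = ks_2samp(real[feature], synthetic[feature])
    verdict = "OK" if p_value > 0.01 else "distributional drift, investigate"
    print(f"{feature:>12}: KS={stat:.3f}, p={p_value:.3f} -> {verdict}")
```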
The financial impact of the Act extends beyond direct compliance costs. High-risk models could have shorter lives, as models based on near real-time data require more frequent redevelopment. Development costs will increase as explainability, fairness assessment and documentation requirements expand project scope. These additional requirements could offset some of the cost savings that AI-assisted models are intended to deliver.
What’s more, these pressures will force difficult prioritization decisions. Marginal models that were viable under old cost structures may no longer make economic sense. Given ongoing compliance and monitoring costs, risk managers and modelers will need to be judicious in choosing which new AI models to develop and deploy.
EU AI Act prohibitions on unacceptable risk systems took effect in February 2025. General-purpose AI model requirements, meanwhile, go into effect in August 2025, with full compliance for high-risk systems starting in August 2026. For financial institutions, this leaves at most two or three development cycles to transform their entire AI governance frameworks.
As daunting as this sounds, all is not doom and gloom. The Act provides the business case for the investments in responsible AI and governance enhancements that risk managers have sought for years. While competitors struggle with compliance, forward-thinking institutions that master AI governance first will not just avoid penalties but gain decisive advantages in an increasingly AI-driven marketplace.
Although the regulations add compliance costs and complexity, they also provide clearer guidelines for AI development, potentially reducing regulatory uncertainty and supporting responsible innovation in areas like personalized financial services and risk management.
Institutions that operate entirely outside of Europe ignore the EU AI Act at their peril. The move to regulate AI is global, with the EU establishing a de facto international standard.
Countries such as the UK, Canada, and Singapore are developing similar frameworks. While U.S. regulators have taken a more laissez-faire approach to AI to encourage innovation, calls for governance are increasing and will undoubtedly lead to increased oversight and regulation in the future.
This regulatory movement creates opportunity. Non-EU businesses that move toward EU AI Act compliance will have a head start when similar regulations eventually arrive in their countries.
The Act does not just change compliance — it redefines competitive advantage. As the financial industry moves towards more commoditized, AI-driven services, trust will become the ultimate differentiator.
Institutions showing responsible and transparent use of AI will enjoy enhanced customer confidence, regulatory credibility and stakeholder assurance. These intangibles will eventually translate into tangible business value through improved customer acquisition and retention, as well as enhanced regulatory relationships.
As challenging as compliance with the EU AI Act may sound in the short term, there need not be an inevitable or permanent trade-off between innovation and compliance. With strategic foresight and robust governance structures, institutions can satisfy regulatory requirements while still advancing their AI capabilities.
The EU AI Act marks the start of an era where good governance and competitive advantage are intertwined. Institutions that recognize and respond to this trend first will not only survive the regulatory transition but thrive in the era of AI-powered financial services. The question is not whether to adapt, but whether you will lead and shape the transformation or follow and struggle to catch up.
The path forward is not to abandon innovation but to embrace a framework in which strong governance and technological advancement are mutually reinforcing.
Cristian deRitis is Managing Director and Deputy Chief Economist at Moody's Analytics. As the head of econometric model research and development, he specializes in analyzing current and future economic conditions, scenario design, consumer credit markets and housing. In addition to his published research, Cristian is a co-host of the popular Inside Economics Podcast. He can be reached at cristian.deritis@moodys.com.