
From Black Boxes to Boardrooms: How Banks Must Govern Artificial Intelligence

February 20, 2026 | 4 minutes reading time | By Arun Maheshwari

“AI is no longer an innovation story. It is a governance story.”

For decades, financial institutions operated under a clear division of responsibility: humans made decisions; models provided analytical support. That boundary has now collapsed.

Across banking, payments, capital markets and insurance, artificial intelligence systems are no longer advisory tools. They now:

  • Approve or decline customers
  • Trigger fraud interdictions
  • Block sanctions payments
  • Freeze accounts
  • File suspicious activity reports
  • Allocate capital
  • Execute trades
  • Automate regulatory surveillance

At scale, these decisions carry direct financial, regulatory and societal consequences. A biased model can create systemic financial exclusion. A poorly governed fraud engine can cause mass customer harm. A hallucinating generative AI system can fabricate regulatory submissions. A weak sanctions model can expose institutions to significant enforcement risk.

In this new paradigm, algorithms are not merely analytical assets. They are regulated decision-makers.

AI Is Now a Regulated Model

Supervisors globally have made it clear that AI does not sit outside the perimeter of prudential regulation.

Arun Maheshwari: A “best practice” architecture.

In the United States, supervisory guidance under SR 11-7 is now applied in practice to machine learning models used in credit underwriting, fraud detection, transaction monitoring, and sanctions screening. The Office of the Comptroller of the Currency, Federal Reserve, and FDIC routinely review AI models under model risk examinations.

In the United Kingdom, the Prudential Regulation Authority’s SS1/23 model risk framework explicitly includes machine learning and advanced analytics, requiring firms to demonstrate explainability, performance stability, governance, and independent validation.

The European Central Bank’s Targeted Review of Internal Models (TRIM) similarly captures AI-driven risk models, while the EU AI Act introduces direct legal obligations around algorithmic transparency, fairness and control.

From a supervisory perspective, the principle is simple: If a model influences a regulated decision, it is itself regulated.

The Boardroom Blind Spot

Despite the rapid deployment of AI, most bank boards remain structurally underprepared for algorithmic governance.

Boards are well-versed in capital adequacy, liquidity stress testing, credit concentrations and financial crime risk. However, few directors are equipped to challenge how an AI model was trained, whether its features introduce bias, how it behaves under economic stress, whether its outputs can be explained to regulators, or what happens when it fails.

This creates a dangerous asymmetry. AI systems now sit at the core of institutional decision-making, while board oversight remains anchored in pre-AI risk frameworks.

The result is an emerging governance blind spot that will eventually surface through regulatory findings, enforcement actions or public failures.

Reframing AI as an Enterprise Risk Class

The first step toward effective governance is conceptual. AI must be treated as an enterprise risk class, not a technology asset.

AI introduces distinct and material risk categories:

  • Model risk from instability, overfitting and concept drift
  • Operational risk from automation failures and control breakdowns
  • Compliance risk from regulatory breaches and reporting errors
  • Conduct risk from unfair or biased outcomes
  • Reputational risk from algorithmic scandals

These risks cut across business lines and jurisdictions. They do not belong solely to data science teams or innovation labs. They fall squarely within the remit of the chief risk officer (CRO), the chief compliance officer (CCO) and the board risk committee.

In practice, this means AI governance must be embedded within the same enterprise frameworks that govern capital, credit, liquidity and financial crime.

A Governance Framework for Enterprise AI

Leading institutions are converging around a five-pillar governance architecture that reflects emerging regulatory expectations and supervisory best practice.

1. Board-Level Accountability

AI oversight should reside with the board risk committee rather than the technology committee. Boards should approve the institution’s AI risk appetite, set boundaries on automated decisioning, review material AI deployments, receive periodic AI risk reporting, and review model incidents and failures.

2. Model Risk Integration

AI models should be fully integrated into the model risk lifecycle, including inventory and tiering, independent validation, performance monitoring, outcome testing, stress testing and periodic revalidation. This applies equally to compliance models, fraud engines, sanctions screening platforms, credit underwriting systems and trading algorithms.
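
To make the inventory and tiering step concrete, the sketch below shows what a single model-inventory record might look like in code. It is illustrative only: the field names, tiers and dates are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class ModelTier(Enum):
    TIER_1 = "high materiality"    # e.g., credit decisioning, sanctions screening
    TIER_2 = "medium materiality"
    TIER_3 = "low materiality"

@dataclass
class ModelInventoryRecord:
    """One entry in an enterprise model inventory (illustrative fields only)."""
    model_id: str
    owner: str                       # accountable business owner, not the build team
    use_case: str                    # e.g., "retail credit underwriting"
    tier: ModelTier                  # drives validation depth and revalidation frequency
    last_validated: date             # date of last independent validation
    next_revalidation: date          # periodic revalidation driven by tier
    monitoring_metrics: list[str] = field(default_factory=list)

# Example: a fraud engine tiered as high materiality
fraud_engine = ModelInventoryRecord(
    model_id="FRD-014",
    owner="Head of Fraud Risk",
    use_case="card transaction fraud detection",
    tier=ModelTier.TIER_1,
    last_validated=date(2025, 9, 30),
    next_revalidation=date(2026, 9, 30),
    monitoring_metrics=["precision", "recall", "population stability index"],
)
```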

3. Explainability and Auditability

Black-box models are incompatible with regulatory accountability. Institutions must be able to explain why a customer was declined, why a transaction was flagged, why a payment was blocked, and why an alert was escalated. This requires explainability layers, challenger models, and outcome-testing frameworks that translate algorithmic logic into regulator-ready narratives.
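
As a minimal sketch of that translation step, the function below turns per-feature contribution scores into plain-language decline reasons. It assumes attributions are already available, whether from a scorecard, post-hoc attribution values or a challenger analysis; the feature names and wording are illustrative assumptions.

```python
def top_decline_reasons(contributions: dict[str, float], n: int = 3) -> list[str]:
    """Translate per-feature contribution scores into plain-language decline reasons.

    `contributions` maps feature names to their signed contribution to the
    decision score (negative values push toward decline). How those scores are
    produced is an institutional choice; this sketch handles only the
    translation into a regulator-ready narrative.
    """
    adverse = sorted(contributions.items(), key=lambda kv: kv[1])[:n]
    return [f"{feature} reduced the applicant's score by {abs(score):.2f} points"
            for feature, score in adverse if score < 0]

# Illustrative attributions for a declined credit application
example = {
    "debt_to_income_ratio": -0.42,
    "recent_delinquencies": -0.31,
    "credit_utilization": -0.18,
    "tenure_with_bank": 0.05,
}
for reason in top_decline_reasons(example):
    print(reason)
```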

4. Human-in-the-Loop Controls

No AI system should operate without human accountability. This includes override mechanisms, escalation thresholds, kill switches, decision review processes and exception management. AI should accelerate human judgment, not replace it.
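
A simple sketch of how such controls might gate an automated decision is shown below. The thresholds, the kill-switch flag and the routing rules are assumptions for illustration, not recommended settings.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # "approve", "decline", or "refer"
    reviewed_by: str   # "model" or the reviewing analyst

# Illustrative control parameters; real values would come from the approved risk appetite
KILL_SWITCH_ENGAGED = False     # global disable set by the model owner or risk function
AUTO_DECISION_CONFIDENCE = 0.95
ESCALATION_AMOUNT = 250_000     # payments above this always go to a human

def decide(model_score: float, confidence: float, amount: float) -> Decision:
    """Route a model recommendation through human-in-the-loop controls."""
    if KILL_SWITCH_ENGAGED:
        return Decision("refer", "analyst")   # model disabled: every case goes to manual review
    if amount >= ESCALATION_AMOUNT or confidence < AUTO_DECISION_CONFIDENCE:
        return Decision("refer", "analyst")   # escalation threshold breached
    return Decision("approve" if model_score >= 0.5 else "decline", "model")  # automated path
```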

5. Continuous Monitoring and Containment

Unlike traditional models, AI systems can degrade rapidly and silently: customer behavior shifts, fraud typologies evolve, economic regimes change, and sanctions programs expand. AI governance therefore requires drift detection, bias monitoring, outcome surveillance, performance thresholds and automated alerts.
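
One widely used drift screen is the population stability index (PSI), which compares the score distribution seen in production against the distribution observed at validation. The sketch below is illustrative; the binning choice and the alert threshold are assumptions an institution would calibrate for itself.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference and a current score distribution.

    A common drift screen: values above roughly 0.25 are often treated as a
    material shift warranting investigation (the cut-off is convention, not rule).
    """
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf                     # cover the full range
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)                        # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Example: compare production scores against the validation baseline
rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 10_000)     # score distribution at validation
current = rng.beta(2.5, 4, 10_000)    # drifted distribution in production
print(f"PSI = {population_stability_index(baseline, current):.3f}")
```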

The Rise of the Algorithmic Risk Officer

As AI reshapes financial services, it is also reshaping risk leadership.

A new class of executive is emerging, one who understands not only capital and credit, but also data, models and algorithms. The modern CRO must be as fluent in machine learning as in Basel III. The modern CCO must understand model governance as deeply as regulatory policy.

Institutions that fail to evolve their risk leadership will find themselves governed by algorithms they do not fully understand.

The Governance Imperative

AI is no longer an innovation story. It is a governance story.

It is about accountability, transparency, control and trust. It is about whether financial institutions can harness the power of algorithms without surrendering their regulatory obligations. It is about whether boards can govern systems that operate at machine speed and global scale.

The institutions that succeed will be those that move AI from black boxes to boardrooms, embedding algorithmic decision-making within the core architecture of enterprise risk management.

In the age of algorithmic finance, governance is not optional. It is existential.


Arun Maheshwari is a senior risk and compliance professional with over 17 years of experience across quantitative finance, model risk management, financial crime compliance, credit risk and operational risk. He currently serves as Head of Model Risk Control, Legal and Compliance, at Morgan Stanley, where he oversees the governance and monitoring of models and manages the risks associated with them.

The views expressed in the article above are those of the author and do not necessarily reflect those of his employer or any affiliated institutions.
