AI Regulation by Geography: How Jurisdictional Differences Have Emerged

July 12, 2024 | By Donald McElligott

Whether the varied approaches and emphases are seen as confusing or as healthy experimentation, financial firms must take their compliance obligations seriously.

There’s no doubt that artificial intelligence has proven to be groundbreaking. Several decades ago, the widespread use of AI still seemed like a far-fetched concept. Now, it’s more common than not to talk to a customer service chatbot, get “recommended for you” ads when online shopping, or use GPS to detect traffic patterns.

While it has delivered tremendous benefits, AI remains a polarizing topic, in part because the technology is still evolving. A Pew Research Center study found that in 2023, 52% of Americans were more concerned than excited about the integration of AI into daily life.

Financial regulators appear to share that ambivalence. Cautiously optimistic yet thoughtfully critical, regulators across the board have made clear that they are watching the shifting industry and are preparing to ensure that firms use AI securely.

Learning the AI Ropes

Given AI’s novelty and continuing evolution, firm guidelines are still taking shape. Regulators are defining their expectations, drawing broad outlines, and devising best practices to manage potential risks and establish standards that enable firms to harness AI’s abilities.

Through regulatory discussions and frameworks that are being created and revised, governing agencies around the world are joining the conversation and collaborating to build a shared understanding of the evolving technology.

Global Relay’s Donald McElligott: A mixed bag across jurisdictions.

By building on collective insights and research across the field, regulators and governments appear to be aligning on how to design and implement comprehensive parameters that push firms to think critically about how they deploy AI models. To do so, they must anticipate challenges, embrace advancements, and stay alert.

A similar theme of evaluation and exploration runs through all financial jurisdictions, though certain regulators are opting for a more assertive, hands-on approach, while others are delivering guidance at a higher level so as not to stifle innovation.

The U.S. Approach Takes Shape

We have seen intensified action from the U.S. in navigating the opportunities and obstacles that AI presents. Regulators like the Commodity Futures Trading Commission and Securities and Exchange Commission have acknowledged the unparalleled ways that firms are using machine learning technology to analyze information and streamline business operations.

Through a succession of speeches, statements, and roundtable discussions, U.S. financial regulators have sought to understand how the technology works in order to establish parameters that safeguard market integrity and consumer protection.

The CFTC has declared that it is “technology neutral” and is focusing on AI’s evolution – particularly in relation to fairness, transparency, safety, security, and explainability. The regulator has held multiple meetings with its Technology Advisory Committee to exchange ideas about how different regulatory bodies are handling AI use, evaluating its benefits, and advising on threat areas.

During an advisory committee AI Day on May 2, Federal Reserve System Chief Innovation Officer Sunayna Tuteja spoke about how the agency is advancing responsible innovation, underscoring that in addition to minimizing risks, it is important to interrogate AI and consider how it can reshape the industry:

“Are we looking at this new technology in the context of solving gnarly problems? Are we designing meaningful optionality and solutions that can help us level up the institution for the present and future?”

NIST Frameworks

Another critical piece of guidance is the National Institute of Standards and Technology (NIST) Cybersecurity Framework, which was created to bring clarity to concepts like information security, risk, and trustworthiness, and is now accompanied by an AI Risk Management Framework. NIST states that a trustworthy AI system should be valid and reliable; safe; secure and resilient; privacy-enhanced; interpretable and explainable; fair, with harmful bias managed; and transparent and accountable.

During the AI Day, NIST AI Advisor Elham Tabassi explained that instead of taking a prescriptive approach, its framework is risk-based and puts emphasis on outcomes:

“In order to be able to improve the trustworthiness of the AI system – the safety, the security and the privacy – you need to know what they are . . . and how to measure them.”

Similarly, the SEC has been vocal about its view on AI integration, highlighting the impact it can have at both a micro and macro level. Last year, it proposed rules aimed at managing conflicts of interest arising from firms’ use of predictive data analytics in investor interactions. Although Chair Gary Gensler has described AI as “the most transformative technology of our time,” he cautioned that AI’s ability to accumulate data to make predictions could lead to herding behavior, as firms come to rely on the same base models. At the same time, Gensler acknowledged, the SEC itself has leveraged machine learning (ML), deep learning, and data review to oversee and surveil markets.

More broadly, the Biden-Harris Administration issued an October 2023 executive order on AI development and use that laid the groundwork for governance. Housed within NIST, a new Artificial Intelligence Safety Institute will focus initially on the executive order’s priorities.

Canada: Distinct Expectations

Canada’s Office of the Superintendent of Financial Institutions (OSFI) is developing distinct expectations around model risk management in relation to AI and ML. Following a consultation period, OSFI revised Guideline E-23 to reflect advancing technologies, adding aspects of AI and ML to the definition of a model.

The OSFI defined model risk as “the risk of adverse financial, operational and/or reputational consequences arising from flaws or limitations in the design, development, implementation and/or use of a model.” Alongside this description, the agency outlined the range of situations that could bring about model risk, such as inappropriate specifications or flawed hypotheses.

Per the revised guideline, financial institutions are expected to maintain a robust model risk management framework by monitoring, testing, risk-reviewing, and troubleshooting AI systems to remain proactive in the face of technological evolution.

As opposed to other jurisdictions that are seeking to maximize the potential of AI while controlling risk areas, OSFI appears to take a warier view, homing in on the repercussions of misuse, as summarized in a report on an industry forum:

“Often the focus is on the ‘mean’ of the outcome distribution to justify use and improve the validity of AI applications; however, risk thinkers must assess the ‘tails’ of those same distributions, with their peripheral vision and creative minds, to be able to mitigate any unforeseen, disastrous consequences.”

U.K.: Principles and Outcomes

Though the Financial Conduct Authority and Prudential Regulation Authority have released statements recognizing AI’s growth and possible challenges, the consensus is to take a hands-off approach so as to foster competitiveness and support innovation.

The U.K. government’s pro-innovation strategy is to remain forward-thinking about transformative technologies. In response, the FCA declared that, as a “technology-agnostic, principles-based and outcomes-focused regulator,” it will accept the integration of AI into markets but will take a closer look at the risks to ensure that its main regulatory objectives are not violated.

Instead of focusing on AI specifically, the FCA is considering technology and data overall and has stated that the tools for AI management are contained within existing guidance:

“Many risks related to AI are not necessarily unique to AI itself and can therefore be mitigated within existing legislative and/or regulatory frameworks. Under our outcomes-based approach, we already have a number of frameworks in place which are relevant to firms’ safe use of AI.”

Similarly, the PRA’s main objective is to ensure safety and soundness when monitoring firm operations. Secondary objectives aim to facilitate effective competition between firms and the overall competitiveness of the U.K. economy. The PRA responded to the U.K. government’s pro-innovation position by aligning itself with the following objectives:

  1. Safety, security and robustness: Risks should be continuously identified, addressed and managed.
  2. Transparency and explainability: The PRA and FCA do not define machine learning’s interpretability or explainability, but expect regulated banks to do so.
  3. Fairness: AI models should not violate individual or organizational legal rights, discriminate unfairly against individuals, or bring about unjust market outcomes.
  4. Accountability and governance: Governance measures could be utilized to oversee AI models and ensure accountability, such as those covered in the Senior Managers and Certification Regime or the Model Risk Management framework (SS1/23).

In addition, the PRA and FCA will continue to run surveys compiling industry responses to ML in U.K. financial services to ensure regulatory practices remain up to date.

The European Union Acts

The most noteworthy move from European regulators – and in AI governance worldwide – has been the EU AI Act, which was passed in March 2024. The act takes a “risk-based approach” to AI governance, prohibiting certain practices while balancing innovation.

Organizations will have to comply with heightened requirements where their use of AI models is considered high-risk, such as AI-based creditworthiness assessment. These guidelines have been implemented to enhance safety and ensure that fundamental rights and ethics are preserved.

The act also addresses large language models and generative AI models, obliging organizations that are utilizing them to self-assess and mitigate systemic risks, conduct model evaluations, and remain mindful of cybersecurity requirements.

The European Central Bank has also recognized AI’s ability to make supervisory processes more efficient. ECB Supervisory Board member Elizabeth McCaul summarized the bank’s future-facing outlook:

“The role of ECB Banking Supervision is to ensure that banks remain safe and sound. It is not for us to dictate which business models banks adopt . . . What we can do . . . is draw on the power of AI to decipher data, understand risks and speed up processes, freeing up more time for human analysis . . . in an increasingly complex world.”

A(I) Matter of Time

These matters go hand in hand with other key focus areas, such as cybersecurity and operational resilience, which have likewise risen to the top of the regulatory agenda as risks become more palpable in a transforming industry.

Past the stages of “if” and “when,” it is evident that AI will only become further ingrained in the way financial firms operate. Since it is here to stay, regulators must determine how to actively manage shifting risks while allowing firms to harness new technologies and optimize the way they conduct business.

AI innovation has already had a significant impact on finance and will continue to transform the industry. As regulators establish more clearly defined frameworks, it is important that firms approach new technologies with a compliance-first attitude.

 

Donald McElligott is vice president, compliance supervision, at communication compliance solutions provider Global Relay. With over 25 years of IT experience in the regulation and compliance arena, he has deep product expertise on compliance supervision within Global Relay Archive. Before Global Relay – which recently released its Compliant Communications 2024 report – McElligott worked for CA on Data Minder, formerly known as Orchestria, where he led the technical sales team.

This article is adapted from a previously published blog post.

Topics: Regulation & Compliance
