Disruptive Technologies
Friday, July 26, 2024
By Jeffrey Kutler
“Too critical to fail.”
Is that what regulators mean – a phrase more precise semantically, if not quantifiably – when they regard a bank as too big to be allowed to fail? Might “too critical” widen the policy lens to encompass not only the biggest banks and the nonbank institutions whose collapse would jeopardize market and economic stability, but also the operations and technologies on which those entities increasingly depend?
Bring artificial intelligence, cloud computing, cybersecurity, technology interdependencies and the ubiquity of “Big Tech” into the conversation, and the issues and implications reach beyond the domain of any given regulatory agency, not to mention geographical borders.
These complexities, however, are not lost on authorities tracking the effects of digital transformation and disruption on market concentration and competitiveness – and potentially on global financial stability.
The massive, global operational disruptions of July 19, caused by a faulty CrowdStrike cybersecurity software upgrade to Microsoft Windows systems, underscored the challenges and vulnerabilities of technological interconnectedness that are on the regulators’ minds.
Concerns were aired, for instance, at the end of June in the Bank for International Settlements Annual Economic Report: “The widespread use of AI applications in the financial sector” brings new challenges pertaining to “cybersecurity and operational resilience as well as financial stability. The reliance on AI heightens concerns about cyberattacks, which regularly feature among the top worries in the financial industry.”
BIS’s Agustín Carstens: “New forms of systemic risk.”
Also touching on risks arising from “when AI tools are provided by Big Techs” and from reliance on “just a few providers of AI models, which increases third-party dependency risks,” the BIS was building on several years of multilateral-agency “literature” on the Big Tech/fintech nexus. Publications include Big Tech in Finance: Market Developments and Potential Financial Stability Implications (Financial Stability Board, 2019); and Big Tech Interdependencies – a Key Policy Blind Spot (BIS Financial Stability Institute, 2022). [See also How to Regulate Big Tech? The BIS Has Some Ideas (GARP, 2021)]
In a February 2023 speech, “Big Techs in Finance: Forging a New Regulatory Path,” at a conference on that subject in Basel, Switzerland, BIS general manager Agustín Carstens outlined “new challenges for policymakers” in the areas of data governance, “concentration dynamics” tied to Big Tech-related network effects, and “important financial stability considerations which fall squarely within the mandates of central banks and financial regulators.” Among the last: Big Techs’ potential systemic importance.
Widespread dependency on critical services from Big Techs, given their tendency toward market concentration, “is forming single points of failure and hence creating new forms of systemic risk at the technology services level,” Carstens said, pointing particularly at their cloud offerings.
“As a consequence, disruptions in the operations of one Big Tech could have a substantial impact on the financial system,” the BIS executive warned. “In other words, greater operational risks can translate into greater financial stability risks, especially when critical services are highly concentrated.” The concerns “are aggravated by shortcomings in the current regulatory approach, which is not fully fit for purpose to deal with the unique set of challenges arising from Big Techs’ entry into financial services.”
A June 2021 presentation by International Monetary Fund economist Tobias Adrian was thus neither the first nor the last to address Big Tech in Financial Services. Adrian, then and still the IMF’s financial counsellor and director of the Monetary and Capital Markets Department, who presides over the fund’s twice-yearly Global Financial Stability Reports, laid out the logic of “too critical to fail.”
IMF’s Tobias Adrian: Risk to markets, consumers and financial stability.
The range of cloud users – from the biggest banks and asset managers down to small startups – “is wide, yet there is a strong dependency on only a few critical providers,” Adrian noted. A Bank of England survey indicated more than 70% of banks and 80% of insurers were then relying on just two cloud providers for infrastructure as a service (IaaS). Two Big Tech entities had a combined 52% of global cloud-services market share, while four garnered more than two-thirds.
“This concentration highlights the reliance of the financial sector on the services provided by Big Tech,” Adrian asserted. “Ultimately, failure of even one of these firms, or failure of a service, could create a significant event in financial services, with a negative impact on markets, consumers and financial stability. The importance of these services” led him to the conclusion that in at least some respects, Big Techs may already have been too critical to fail.
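How such concentration is typically measured may help frame Adrian’s point. The sketch below computes a Herfindahl-Hirschman Index (HHI) – the sum of squared percentage market shares that competition authorities use to grade concentration – from hypothetical per-firm shares chosen only to be roughly consistent with the figures he cited:

    # Illustrative sketch only. The per-firm shares below are hypothetical,
    # chosen to be roughly consistent with the figures Adrian cited
    # (top two providers ~52% combined, top four above two-thirds).
    shares = {"Provider A": 32, "Provider B": 20, "Provider C": 9, "Provider D": 8}

    def hhi(market_shares):
        # Herfindahl-Hirschman Index: the sum of squared percentage shares.
        # Omitting the fragmented remainder of the market understates the
        # true index, so the result is a lower bound.
        return sum(s ** 2 for s in market_shares.values())

    print(hhi(shares))  # 1,569 with these hypothetical shares

Even as a lower bound, an index around 1,570 sits within the range that U.S. merger guidelines have treated as concentrated.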
Back in 2021, Amazon Web Services, Google Cloud, IBM and Microsoft Azure were ramping up sales and partnership strategies targeting banks, capital market operators and infrastructures, and vendors to the industry. Those bore fruit in the form of numerous customer relationships and/or strategic alliances. Nasdaq, Goldman Sachs, S&P Global and London’s Aquis Exchange linked up with AWS, for example. Citadel Securities, CME Group, Deutsche Börse Group and MSCI aligned with Google, London Stock Exchange Group with Microsoft.
[Microsoft’s Azure had an outage on July 18 that provided a cautionary glimpse into cloud dependence and resilience. Besides grounding some U.S. airlines, it disabled London Stock Exchange news feeds; securities trading was not affected. That occurred the day before the far more serious meltdown of Microsoft Windows, affecting an estimated 8.5 million devices worldwide and attributed to the CrowdStrike cybersecurity software update.]
Early last year, State Street Corp. chose both AWS and Microsoft “as strategic providers of cloud and infrastructure solutions in connection with its multi-year technology transformation journey.”
This past February, Bank of New York Mellon Corp. announced a “global alliance” of its data and analytics platform with Microsoft Azure. BNY followed in May with a deal to deliver a cloud-native “transformative data operating model” for Abu Dhabi-based alternative investment manager Lunate Capital.
“Big Tech in Finance” author Igor Pejic
Modernizing its historically siloed and hardware-centric data storage “in a relatively short period of time,” Depository Trust & Clearing Corp. “has transitioned to data being largely stored on the cloud via cloud providers such as Amazon, Google or Snowflake,” DTCC chief IT architect Neelesh Prabhu wrote in an article last year.
Finance is being transformed by – and is reliant on – Big Tech innovations and core competencies, technologist and trend-watcher Igor Pejic, author of Big Tech in Finance and The New Frontier newsletter, observed in a July 16 Z/Yen Group FS Club webinar. In view of the cloud and outsourcing prevalence of Amazon, Alphabet (Google) and Microsoft, and payment initiatives by the likes of Amazon, Apple, Google, Meta (Facebook) and China’s Alipay, Pejic believes that licensing and systemically important designations should be in the cards.
Prompted by the popularity of Amazon, Apple, Google and PayPal services, the Financial Conduct Authority and Payment Systems Regulator are jointly soliciting public comments through September 13 on Big Tech and digital wallets. Says the U.K. agencies’ call for information: “Digital wallets may affect financial resilience and systemic risk within the financial system – for instance, if they were to suffer an operational failure or outage. We would like to understand both the potential benefits and the risks arising either now or in the future.”
“Potential impact of market concentration in cloud service offerings on the financial sector’s resilience” was one of six financial services sector “thematic challenges” listed in the December 2023 annual report of the Financial Stability Oversight Council, the multi-agency U.S. monitor of systemic risk and arbiter of the “systemically important” designation that tightens supervisory reins on the biggest banks.
The Department of the Treasury, which houses the FSOC, launched a Cloud Executive Steering Group. One of the group’s workstreams was to define what the sector views as cloud concentration risk. New “resources on effective practices for cloud adoption” were published on July 17.
The BIS Annual Economic Report homed in on market concentration and third-party dependency risks in AI modeling, where Big Techs are important providers of technology tools.
“Market concentration arises from the centrality of data and the vast costs of developing and implementing data-hungry models,” according to the report. “Heavy up-front investment is required to build data storage facilities, hire and train staff, gather and clean data and develop or refine algorithms.
“However, once the infrastructure is in place, the cost of adding each extra unit of data is negligible. This centrality leads to so-called data gravity: Companies that already have an edge in collecting, storing and analyzing data can provide better-trained AI tools, whose use creates ever more data over time.” The upshot is “that only a few companies provide cutting-edge LLMs [large language models]. Any failure among or cyberattack on these providers, or their models, poses risks to financial institutions relying on them.”
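The report’s cost logic can be put in stylized arithmetic: with a heavy fixed cost and a negligible marginal cost per data unit, the average cost per unit collapses toward the marginal cost as scale grows, handing data-rich incumbents an ever-widening advantage. All figures in this sketch are invented for illustration, not drawn from the report:

    # Stylized arithmetic for the BIS "data gravity" argument; numbers are invented.
    FIXED_COST = 500_000_000  # up-front spend: storage facilities, staff, model development
    MARGINAL_COST = 0.001     # per-unit cost of extra data, "negligible" per the report

    def average_cost(units):
        # Average cost per data unit at a given scale.
        return (FIXED_COST + MARGINAL_COST * units) / units

    for units in (10**9, 10**11, 10**13):
        print(f"{units:>18,} units -> ${average_cost(units):.6f} per unit")

At a billion units the hypothetical average cost is about $0.50 per unit; at ten trillion it is barely above the $0.001 marginal cost – which is why a handful of firms operating at the largest scale can undercut any entrant.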
AI bias, privacy and security are consumer trust issues, “especially in high-stakes areas such as banking and public policy and when AI tools are provided by Big Techs,” according to the BIS Annual Economic Report.
“The existing regulatory framework was not formulated with closely connected digital platform ecosystems in mind and may miss the risks arising from interdependencies,” said the 2022 Financial Stability Institute paper by Juan Carlos Crisanto, Johannes Ehrentraud, Marcos Fabian and Amélie Monteil.
Big Techs certainly do not go scot-free. As public companies they have governance, reporting and disclosure obligations, which the U.S. Securities and Exchange Commission recently extended into the realm of cybersecurity incidents.
SEC Chair Gary Gensler has said that a regulatory system geared toward individual-firm oversight could fail to detect nascent signs of contagion or crisis triggered by AI, or the risk of “many institutions relying on the same underlying base model or underlying data aggregator.”
In a similar vein, model risk management was baked into banking supervision and due diligence for years before the advent of generative AI and large language models. But now the FSOC – of which Gensler is a member – cautions that the use of AI can introduce “safety-and-soundness risks like cyber and model risks.”
Third-party risk management is another supervisory staple. The Basel Committee on Banking Supervision has issued a consultative document, Principles for the Sound Management of Third-Party Risk, saying in its July 9 announcement that amid “ongoing digitalization . . . banks have become increasingly reliant on third parties for services that they had not previously undertaken. This increased reliance on third parties beyond the scope of traditional outsourcing, coupled with the expansion of supply chains and rising concentration risks, has necessitated an update to the 2005 Joint Forum paper Outsourcing in Financial Services, specifically for the banking sector.”
Antitrust and competition authorities, and their power to sue for breakups or other remedies, cause some angst for Big Techs. In a July 23 joint statement on “AI competition risks,” the U.S. Federal Trade Commission, the Department of Justice’s Antitrust Division, and U.K. and European Union counterparts committed to safeguarding against unfair competition and deceptive practices. They said principles such as fair dealing, interoperability and choice would help enable competition and foster innovation.
Data protection and privacy laws in the EU, as well as the AI Act, Digital Markets Act and Digital Operational Resilience Act, could compound the scrutiny on Big Tech and big data.
U.S. Senators Elizabeth Warren of Massachusetts, Peter Welch of Vermont and Ron Wyden of Oregon in a letter this month urged top Federal Trade Commission and Justice Department officials to crack down on “undue consolidation in the emerging generative artificial intelligence industry.”
Flagging “potential harms to consumers, innovation and national security,” the Democratic lawmakers expressed support for current “investigations into the investments, partnerships and dominance of firms like Microsoft, Google, Amazon and Nvidia as a first step.” They called for “swift and resolute enforcement action against any company that engages in anticompetitive practices, at each layer of the generative AI industry stack.”
But such measures differ from, and are blunter instruments than, activity-based financial regulation.
Some activities of diversified companies may fall under banking, payment or insurance regulation, but these “sectoral frameworks were not designed to mitigate the risks created by interdependencies inherent in Big Tech business models,” the Financial Stability Institute report said. Addressing risks arising from the resulting blind spots “may require the development of specific entity-based rules for Big Tech operations in the financial sector.”
“Without a doubt, a regulatory rethink is warranted,” Carstens said in 2023. Seeking to strike the right balance between benefits and risks, “We at the BIS have argued for some time now that we have to go one step further and regulate Big Techs directly. More concretely, we need to consider how best to complement existing activity-based rules under sectoral regulations with group-wide entity-based requirements that would allow authorities to address financial stability risks emerging from the interactions between the different financial and commercial activities that Big Techs perform.”
One possible approach, which Carstens termed “restriction,” would bar Big Techs from regulated financial activities. This would preserve the traditional separation of banking and commerce – but also deprive the market of “the numerous benefits” of the Big Techs’ services.
An alternative, “segregation,” would bundle a Big Tech’s financial services within a holding company, ring-fenced from the parent’s nonfinancial activities and having to meet prudential and other requirements. Although “conceptually simple,” the segregation approach would prevent realization of synergies, economies of scale and cross-sector data insights, Carstens said. “In all likelihood, this would lead at least some Big Techs to exit financial services altogether.”
A third way, “inclusion,” would have Big Techs with significant financial activities face group-wide governance, conduct and operational resilience requirements “and, only when appropriate, financial soundness. This is because most Big Tech risks are not strictly related to their financial soundness, but their data-driven business model,” Carstens explained.
Google’s David Weller: Regulate AI uses, not the science.
Segregation and inclusion “are to some extent mutually compatible, and in practice a combination of both may be desirable,” he contended. “Such a holistic approach could combine a prudential sub-consolidation of the financial part of a Big Tech group (as under the segregation approach) with group-wide requirements on governance, conduct of business and operational resilience (as under the inclusion approach). Importantly, it would avoid efficiency losses in the use of data that (too) tight ring-fencing measures could cause.”
Commenting on the desirability of balancing innovation and regulation, David Weller, senior director of emerging tech, competitiveness and sustainability policy at Google, suggested such principles as taking a risk-based approach and regulating uses of AI “where it meets the world, not the underlying science and technology.”
Privacy and competition rules would apply differently to, say, a GPS map service than to “regulated sectors,” Weller said during a Brookings Institution panel discussion. And international alignment is necessary to avoid ending up with “100 different approaches . . . and policy cacophony.”
Weller quoted a mantra at Google, that “AI is too important not to regulate, and too important not to regulate well.”
Kent Walker, Google and Alphabet president of global affairs, spoke at the Aspen Security Forum on the day the CrowdStrike crisis hit, as reported in the New York Times: “We are optimistic that AI is actually allowing us to make significant – not transformative yet, but significant – progress in being able to identify vulnerabilities, patch holes, improve the quality of coding.”
A congressional proposal in 2022, aimed at strengthening cybersecurity standards and practices across the U.S. economy, would have taken a page from the financial sector by authorizing the designation of SIEs, or systemically important critical-infrastructure entities.
The provision failed to make the cut in that year’s National Defense Authorization Act. But even without that codification, the Cybersecurity and Infrastructure Security Agency (CISA) maintains a roll of 16 critical infrastructure sectors, including financial services and information technology, promoting best practices and information sharing. The sectors are also subject to an incident reporting law and to presidential executive orders in such areas as cybersecurity and artificial intelligence.
Adam Conner of CAP
With an eye on the existing regulatory patchwork, and not expecting comprehensive legislation to emerge soon from Congress, the liberal-leaning Center for American Progress (CAP) released a report in June on statutory authorities that are applicable to regulating AI. Produced jointly with Governing for Impact, the report in one of its chapters – credited to Todd Phillips of Georgia State University and CAP vice president for technology policy Adam Conner – mapped out the latitude currently available to financial regulatory agencies.
Two of the recommendations:
-- Designate major providers of AI services to financial institutions as systemically important if they reach an adoption level that creates vulnerability . . . The FSOC should monitor which AI systems are relied on by significant players in the markets and consider designating them as systemically important if their failure could threaten the stability of the U.S. financial system.
-- Designate the cloud service providers to those firms designated as systemically important . . . This is not a new idea; members of Congress and advocacy organizations have previously called for such designation. However, the rise of AI gives this proposal new urgency.
“I’m glad the report showcases the systemic financial risk issues that can crash the whole economy,” Rohit Chopra, director of the Consumer Financial Protection Bureau, said in a “fireside chat” at CAP.
CFPB Director Rohit Chopra
Cloud services and foundational AI models come from “the same guys,” he added, and “the cloud and AI [could] create huge vulnerabilities that we need to address.”
Chopra was also “glad that the report mentions that there are existing tools on the books to potentially designate [the cloud giants] as systemically important utilities . . . We have to be constantly thinking about the ways in which there can be herding effects. When there are only a couple of underlying formulas or models, does that lead the entire market to herd in one direction and maybe crash and burn a lot in its way?”
The FSOC, of which Chopra is a member, “should be more than just a book report club . . . writing about risk, but doing something about it as well.”
Regarded as pro-consumer and critical of big business, Chopra – formerly of the Federal Trade Commission and currently also a Federal Deposit Insurance Corp. director – stressed, “There is no fancy-technology exemption from our civil rights and fair lending laws.”