
States of Confusion: Despite Much Discussion – and Hundreds of Legislative Proposals – AI Policy Lacks Clarity

Regulatory risk and uncertainty complicate the planning and implementation of high-priority technology initiatives.

Thursday, November 28, 2024

By Jeffrey Kutler and David Weldon


On both the supply and demand sides of artificial intelligence, there is consensus that a technology so powerful and transformative needs governing principles and ethics, operational guardrails and sound risk management. Yet corporate leaders must try to chart their courses while legal clarity and certainty are still lacking.

The open regulatory and governance questions may be contributing to some mixed messaging in executive ranks. In a Boston Consulting Group multi-industry survey of 1,400 senior executives, 81% ranked AI and generative AI as a top-three tech priority for 2024.

Although recognizing the need for increased investment, “too many organizations are slow to embrace the revolution,” BCG concluded; 66% were ambivalent or dissatisfied with their AI and GenAI progress, and of those, 62% cited a shortage of talent and skills, 47% unclear investment priorities and 42% the absence of a strategy for responsible AI.

“Excitement over this technology is palpable, and early pilots are compelling,” said a 2023 McKinsey & Co. report on GenAI. “But a full realization of the technology’s benefits will take time, and leaders in business and society still have considerable challenges to address” in risk management, reskilling the workforce and rethinking core business processes.

Nation and the States

The four Biden administration years did not bring definitive federal AI legislation. A National AI Advisory Committee was empaneled; an October 2023 Executive Order on Safe, Secure and Trustworthy Artificial Intelligence sought to promote innovation, competition and protection of citizen rights; and the National Institute of Standards and Technology produced an AI Risk Management Framework.

Anu Bradford, Columbia Law School

There is a pattern in AI reminiscent of data protection and privacy. The European Union’s General Data Protection Regulation (GDPR), which took effect in 2018, provided a template for state-level action in the U.S., led by the California Consumer Privacy Act (CCPA). The EU AI Act, finalized in 2024 and touted as “the world’s first comprehensive AI law,” had numerous proposed counterparts in the U.S., notably the California bill known as SB-1047, which Governor Gavin Newsom, wary of overly restricting “a technology still in its infancy,” vetoed in September.

Europe has proven adept at promulgating laws and, in effect, exporting them to influence technology policy and business practices worldwide, says Columbia Law School professor Anu Bradford, author of The Brussels Effect: How the European Union Rules the World (2020) and of Digital Empires: The Global Battle to Regulate Technology (2023). That power differentiates the EU from – and can complement or conflict with – contrasting U.S. and China models.

“Europe is now widely seen as the pioneer of the toughest laws against tech,” said a New York Times article reviewing Margrethe Vestager’s 10 years as EU competition commissioner. “U.S. regulators have in recent years followed Europe by bringing antitrust lawsuits against Google, Apple, Meta and Amazon. Regulators in South Korea, Australia, Brazil, Canada and elsewhere are also taking on the tech giants.”

As Bradford explained in a recent Z/Yen Group webinar, the EU chose a “third way” between the U.S. model, whose open-market, “techno-libertarian” regulatory approach was deemed “too permissive” and insufficiently consumer-protective, and China’s state-driven pursuit of “tech superpower” status, implementing and exporting infrastructure, surveillance and other strategic technologies.

“The U.S. is at an inflection point,” Mark Kennedy, a former congressman and university president who is now director of the Wilson Center’s Wahba Institute for Strategic Competition, said in a November 25 speech at a Boston Global Forum AI World Society conference. “Depending on our response, we will either retain our edge in AI and other technologies that underwrites our economic leadership and our military superiority, or surrender it to an ascendant China at great risk to our future prosperity and national security.”

Kennedy recommended “balanced regulations that encourage innovation while addressing risks like algorithmic bias and misinformation. Overly restrictive regulations risk allowing [China] to gain the lead, giving autocracies an edge over democracies.”

Proliferation of Bills

In California, Governor Newsom’s stated willingness to work with “the legislature, federal partners, technology experts, ethicists and academia to find the appropriate path forward” may ultimately lead to enactment of a “balanced” AI law. But it doesn’t yet amount to much in the way of legal certainty on a national scale.

Ricardo Baeza-Yates, Northeastern University

The California debate is the tip of an iceberg. In 2024 legislative sessions, some 800 AI bills were introduced in 45 states, according to Ricardo Baeza-Yates, director of research at Northeastern University’s Institute for Experiential AI. Only Arkansas was without an AI bill, and four other states’ legislatures did not have a session this year.

Of those bills, 16% passed, 40% failed and 44% – approximately 350 – remained pending.

“At the federal level,” Baeza-Yates added, “since February 2023 there have been 141 AI bills filed” – 55% in the Senate, 45% in the House of Representatives.

With the preponderance of attention on California, some argue that the veto could lead to better and safer AI policies. American Enterprise Institute (AEI) nonresident senior fellow John Bailey emphasized in an October article the need for more research and collaborative discussion toward the goal of responsible AI development. He identified “an urgent need to build policymakers’ understanding of these fast-evolving technologies to ensure smarter legislation and better-informed regulation.”

Bailey considered “an important shift in the AI safety conversation” to be encouraging: “The once-dominant narrative of existential and catastrophic risks is being replaced with a more measured and nuanced dialogue.”

Shane Tews, also of the AEI, contended in a November 11 article that European Union statutes have their share of complexities. The GDPR, AI Act and Digital Markets Act (DMA) are “overlapping and occasionally conflicting mandates [that] present significant compliance challenges for organizations,” Tews wrote.

Shane Tews, American Enterprise Institute

While the GDPR imposes strict data collection requirements to protect personal data and privacy rights, the AI Act “often requires broader data collection to ensure a representative dataset across demographics, sometimes requiring gathering sensitive data (like race or gender),” and “the DMA pushes large tech platforms to share their data with competitors to promote market fairness.

“These three frameworks, each with distinct goals, create a challenging business compliance environment,” Tews asserted.

Clare Walsh, director of education at the Institute of Analytics, raised a question about how the GDPR and the AI Act apply to generative AI. “Traditionally, under international data protection laws, judges have not been afraid to use ‘algorithmic disgorgement,’ which requires companies to delete an algorithm that they have trained on illegally obtained data,” Dr. Walsh noted in a Risk Management Magazine article. The lack of rulings on GenAI “is extraordinary considering how many companies have already embedded the technology into their business model.”

Alarmed by Deepfakes

What makes for successful AI legislation?

“A thorough answer is complicated,” Baeza-Yates said. Forty-one percent of the state bills were related to deepfakes, a highly specific though hot-button concern, and 19% of those were approved. Fewer than 15% of other AI bills won approval.

The researcher commented that in the U.S. Senate, more than half of the bills were proposed in the Committee on Commerce, Science and Transportation. Two House committees – Energy and Commerce; and Science, Space and Technology – accounted for nearly half of that chamber’s bills. AI’s impacts on industry might be seen as a common denominator.

Responding in August to a U.S. Treasury request for information (RFI) on AI – an outgrowth of the 2023 presidential executive order – the Republican majority on the House Financial Services Committee supported “a principles-based regulatory approach that can accommodate rapid technological changes more effectively. We caution against horizontal, cross-economy approaches that broadly regulate the use of AI. The government should take a sectoral approach that ensures primary regulators, who understand their respective markets and AI use cases within those markets, retain the regulatory authority to proceed in a technology-neutral manner.

“AI presents an unprecedented opportunity to transform the financial services sector. Committee Republicans are committed to fostering an environment where AI can thrive while protecting consumers and maintaining market integrity.”

California State Senator Scott Wiener

A bipartisan bill in the House and Senate, the Unleashing AI Innovation in Financial Services Act (H.R. 9309, S. 4951), was part of a bigger package that, according to Senator Mike Rounds, Republican of South Dakota, sprang from an AI working group and “helps the United States make strides toward unleashing AI innovation and resulting opportunities, from national defense to health care research to financial services.”

How Expert Are Legislators?

California lawmakers Scott Wiener, the state senator who championed SB-1047, and Assembly Member Rebecca Bauer-Kahan, who was behind bills signed into law on AI image exploitation, the legal definition of AI, and related privacy protections, certainly went to school on the technology and its intricacies.

By and large, however, elected officials’ knowledge of the technology varies, observed Ian P. Moloney, senior vice president and head of policy and regulatory affairs at the American Fintech Council (AFC).

“Some come in with a pretty significant amount of sophistication, because they either engage with AI tools or they’re knowledgeable about the underlying technology,” he said. Others need “additional education.”

“What concerns me the most is when [lawmakers] are rushed, do not adequately understand the technology, and don’t take that risk-based approach,” Moloney added. They need to recognize “the nuances that come with the actual deployment of that technology, and how it’s being used in an industry such as financial services,” to avoid hindering innovation.

To veteran Washington lawyer Thomas Vartanian, executive director of the Financial Technology & Cybersecurity Center, “most legislators have only very rudimentary understanding of AI – not just the underlying technology and science, but even more importantly, the scale and depth of the risks that are being created by new forms of technology.”

Vartanian, whose most recent book is The Unhackable Internet, is also wary of those who “see technology and artificial intelligence as a path to becoming a billionaire,” whose entrepreneurial priority is not necessarily “safety and security of people. It is getting the product out as quickly as possible and making as much money as possible.

“Those are the people talking to Congress, and those are the people who are providing campaign contributions,” Vartanian continued. “That needs to be balanced by regulators and experts in government who understand the risks of AI, and the scope of the problems that are being created.”

Being Pragmatic

Assuming a continuation of AI legal uncertainty, what are banking or financial industry executives to do? Passively wait and see? Actively lobby or support trade associations’ participation in policy discussions? Anticipate that EU or California precedents will prevail?

Ian Moloney, American Fintech Council

“Responsible fintech and financial services executives should collaborate with regulators to establish clear guidelines that both protect consumers and allow for continued AI innovation,” AFC’s Moloney said. “We help our members take advantage of RFI periods and submit comment letters. When there are not clear guidelines for policy implementation, consumers are the ones who end up feeling the impact in the form of reduced choices and fewer innovative new products.”

Some of the biggest challenges come when policies are based on legacy technologies, as opposed to recent innovations, Moloney added.

If fintech providers must adapt to meet obsolete standards, “this ends up causing disruptions to the tens of millions of American consumers using responsible fintech products for everyday access to critical financial services. AFC advocates for a pragmatic approach to regulation that creates sound public policy and settled expectations in the market.”




