
Disruptive Technologies

For When Artificial Intelligence Goes Awry: Incident Response Plans

A boutique law firm develops a framework akin to cyber incident response plans but attuned to AI's inherent characteristics and risks

Friday, July 31, 2020

By Katherine Heires


Many organizations are experimenting with, if not purposefully deploying, artificial intelligence. But what happens when those systems or the decision models they power go haywire, decay, or are compromised? Or when they cause errors, discriminatory outcomes such as biased credit decisions, or results that do not meet compliance or “explainability” standards?

Such problems and their resolution are not merely technological, but rather require insights and expertise from risk managers, compliance professionals and legal counsel, says Andrew Burt, co‐founder and managing partner of bnh.ai, a boutique Washington, D.C.-based law firm focused on AI and analytics.

Burt, a former policy adviser to the head of the Federal Bureau of Investigation's Cyber Division, points out that 1,000 incidents, instances of either AI failure or attacks on AI models, are recorded in the Partnership on AI's database.

That number is expected to climb, in part because many AI-powered models were trained on data that predates the COVID-19 pandemic, producing flawed predictions and conclusions that can, in turn, expose firms to legal liabilities.

Burt and bnh.ai co‐founder Patrick Hall, principal scientist at the firm, set out to address the risk with a six‐part AI Incident Response plan. Open for public inspection and collaboration on GitHub, the plan, Burt says, can be a template for ensuring proper risk management of industry-specific implementations of AI and machine learning.

Tailored Solution

“The more important AI is to your organization, the more important it is to know what to do when it's misbehaving, generating liabilities, or real harms,” says Burt, whose work and research in the field awakened him to the legal exposure and to the need for dedicated incident response planning.

Another resource Burt suggests, which his firm and the Future of Privacy Forum released in June, is Ten Questions on AI Risk: Gauging the Liabilities of Artificial Intelligence Within Your Organization.

“At the heart of this exercise is having a holistic picture of what can go wrong,” says bnh.ai co‐founder Andrew Burt.

The AI Incident Response plan is structured similarly to the cyber incident response plans that many financial firms currently employ.

“There is a level of maturity in finance that differs from other sectors, both in terms of understanding of the risk and legal implications of AI incidents and the role that both risk managers and lawyers need to play in developing these plans,” Burt says.

He explains that an incident plan specific to the nature and tendencies of artificial intelligence and machine learning is necessary because of three factors: AI models can decay over time; AI systems are more complex than traditional software; and predictive models, being probabilistic, can be wrong, and their errors can worsen rapidly.
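Model decay in particular lends itself to routine monitoring. The snippet below is a minimal, hypothetical sketch, not part of the bnh.ai plan, of how a team might flag possible decay by comparing recent model outputs against a baseline using a population stability index; the data and the 0.25 alert threshold are illustrative assumptions.

```python
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """Compare two score distributions; larger values suggest more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    new_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Clip empty bins to avoid division by zero and log(0)
    base_pct = np.clip(base_pct, 1e-6, None)
    new_pct = np.clip(new_pct, 1e-6, None)
    return float(np.sum((new_pct - base_pct) * np.log(new_pct / base_pct)))

# Hypothetical usage: scores captured at deployment vs. scores from this week
baseline_scores = np.random.beta(2, 5, size=10_000)  # stand-in for historical outputs
recent_scores = np.random.beta(2, 3, size=2_000)     # stand-in for current outputs

psi = population_stability_index(baseline_scores, recent_scores)
if psi > 0.25:  # a commonly cited rule-of-thumb threshold for a significant shift
    print(f"PSI={psi:.3f}: possible model decay -- escalate for incident review")
else:
    print(f"PSI={psi:.3f}: score distributions look stable")
```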

Unique Properties

“When AI technology displays security vulnerabilities, causes privacy harms or discriminatory behavior, the effects of the problem can scale up very quickly and be magnified,” Burt says. Taking all these issues into account, he adds, “AI is a high‐risk technology.”

He warns that while cyber plans might address the possibility of outside attacks that may or may not harm artificial intelligence operations, they are not designed to address incidents that relate to AI's inherent properties. There can be root causes other than outside attacks.

The AI incident plan consists of a short overview checklist and separate checklists for each of the six stages of incident response: preparation, identification, containment, eradication, recovery, and lessons learned.
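The actual checklists are available in the firm's GitHub repository. Purely as an illustration, and with placeholder tasks that are not the plan's contents, a team might track progress through the six stages with a lightweight structure like this:

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    tasks: list[str]
    completed: set[str] = field(default_factory=set)

    def complete(self, task: str) -> None:
        if task not in self.tasks:
            raise ValueError(f"Unknown task for stage {self.name!r}: {task}")
        self.completed.add(task)

    @property
    def done(self) -> bool:
        return set(self.tasks) <= self.completed

# Placeholder tasks -- illustrative only, not the bnh.ai checklist items
plan = [
    Stage("preparation", ["inventory models", "assign incident roles"]),
    Stage("identification", ["confirm the incident", "classify severity"]),
    Stage("containment", ["limit model exposure", "notify risk and legal"]),
    Stage("eradication", ["remove or retrain the faulty model"]),
    Stage("recovery", ["redeploy with added monitoring"]),
    Stage("lessons learned", ["document root cause", "update the plan"]),
]

plan[0].complete("inventory models")
print([(stage.name, stage.done) for stage in plan])
```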

“Risk and compliance practitioners have a strong role to play” in understanding regulatory violations and how to address them, says co‐founder Patrick Hall.

While the specifics of how individual incident plans play out may differ, one constant, says Burt, is the need for teamwork among technologists, lawyers, compliance and risk officers.

“Technologists understand how the technology works but don't understand where all the liabilities lie,” he says. “On their own, they will not do a sufficient job” in resolving an AI incident.

He sees risk managers as central to the creation and execution of an AI incident plan, as they “are among the few personnel who are best positioned to mediate between the technology folks and lawyers.” He adds that “at the heart of this exercise is having a holistic picture of what can go wrong and getting ahead of it. Risk managers are in a good position to help with that.”

Risk and Compliance Alignment

Ultimately, Burt says, an AI incident plan simply has to be aligned with an overall risk management framework.

According to Hall, “Risk and compliance practitioners have a strong role to play in containment, in understanding which regulations may have been violated and how to address violations.” In the aftermath of an incident, “their role is to make sure risk and compliance are first‐order considerations in future machine learning endeavors.”

The law firm founders are in agreement on at least one other point: Don't wait until an incident happens to start on a plan to deal with it. The last thing you want to do is try to decide who does what, what policy to employ and what steps to take while smack in the middle of a crisis.

Katherine Heires is a freelance business journalist and founder of MediaKat llc.



