

To Maximize Artificial Intelligence’s Value, We Have to Let It Do Its Thing. What Could Possibly Go Wrong?

Technology subject to real-world laws, contracts and company policies needs elements of human judgment, subject-matter expertise and legal counsel

Wednesday, November 23, 2022

By Alan Brill


There is no question that artificial intelligence is a vital and fast-growing field that many businesses are exploring and implementing. From evaluating potential business deals to defending networks against attacks, companies large and small are moving in the direction of AI. Whether they are custom-building applications or using cloud-based AI services, the potential advantages of AI make it worth the attention it receives.

AI is not perfect, though. It makes real-world errors. It can be designed incorrectly and can violate laws. It can be thought of as objective but can turn out to be highly biased.

Real-World Examples of Problems

Looking first at safety, autonomous vehicles have failed to notice emergency vehicles stopped in the “fast” lanes of highways and slammed into them. One AI-based autonomous vehicle system was not designed to always stop at stop signs, but rather roll through them at up to 5.8 miles per hour – when the law requires an actual full stop.

AI systems can showcase bias, such as when a major corporation had to abandon an AI-supported system doing initial evaluations of applicant resumes when it was discovered that the system was discriminating against women applicants. Additionally, a system used by law enforcement agencies that compares images taken during crimes to libraries of digital photographs to identify perpetrators turned out to be racially biased based on the data used for training the system.

There are also instances when AI systems deployed in the business world have experienced challenges.

For example, an AI-powered transaction monitoring system watching for fraud in payment card activity wasn’t initially tuned to expect growth in online (versus in-person) card use at the start of the COVID pandemic. As a result, it flagged more valid transactions than usual, bogging down operations until the system’s parameters were updated.


Additionally, a real estate valuation system resulted in a reported write-down of $300 million when the system’s AI component responsible for predicting property valuation wasn’t able to cope with rapid changes in the marketplace and over-valued properties.

At the military level, naval Close-In Weapon Systems (CIWS, a radar/gun system that can fire up to 3,000 rounds per minute against a target) have had a few incidents. In one, the bridge of a U.S. Navy ship in the line of fire was hit by CIWS rounds, resulting in two deaths. In another case, dating back to 1996, a U.S. Navy A-6 Intruder attack aircraft towing a target was engaged and shot down by a Japanese navy destroyer’s CIWS during a multinational exercise. The A-6 crew ejected and was safely recovered.

Human Involvement

Whether the issue with an AI system is a violation of laws and regulations or simply a wrong decision, it should be evident that assuming “if it’s AI, it has to be OK” is not a viable position. Throughout the development and implementation of AI systems, there needs to be some level of human oversight.

This is not to say that it’s appropriate, for example, for a cybersecurity system to require human approval before acting – that would be detrimental when it comes to defending a network against hackers with advanced technology. But having a process for oversight during the design, testing and operation of AI systems should be mandatory. The oversight must be appropriate, effective and carried out according to plan.

Unfortunately, while AI continues to grow in importance and impact, making sure that AI systems are appropriately supervised is not happening. According to a report published by McKinsey Analytics in December 2021, The State of AI in 2021, most companies using AI don’t have human oversight mechanisms in place. That should scare everyone.

The need for human oversight is not just the view of experts in the field. In October 2022, the White House Office of Science and Technology Policy published a white paper (meaning that it is not official U.S. policy), The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, which makes the point that having human oversight is vital.

The European Union agrees. Its AI strategy points out the need for “human-centric AI” to benefit all people and to engender the public’s trust. It seems likely that the European Parliament will consider regulations relating to AI.

A Blueprint for Oversight

While governments are developing policies related to the explosive growth of AI and the need for appropriate controls over this technology, systems are being developed and placed into live operation. While solutions are never 100% effective, and gazing into the crystal ball of future legislation and regulation is uncertain at best, there are operating principles that make sense and should be considered for implementation.

There are four categories of personnel who should be involved during the design and development phase of AI systems.

First, there is a need for specialists in the subject matter that the AI function is expected to perform. It is likely that the AI system was built by AI technology experts (potentially external to the company) who lack certain background knowledge on the subject. For example, the people building an AI-based job applicant scoring system might not have relevant experience in the field of human resources.

Second, attorneys should be involved during the design and development phase. AI systems are often thought of as operating in “cyberspace” – particularly those that are cloud-based. But cyberspace is nothing more than a convenient concept. Systems operate in the real world, in real-world locations, and are therefore subject to real laws, regulations, contractual requirements and company policies. For example, a U.S.-based system must comply with the state and federal laws relating to the system’s functionality.

Systems used for loan scoring or personnel decision-making must not discriminate against individuals or groups based on characteristics protected by law, such as race, religion or gender identity. Vehicle self-driving systems must obey traffic laws, which can differ based on where the vehicle is operating, leading to the potential need for geolocation to set the right rules in place for a particular vehicle at a particular time. Law firms have dedicated units of attorneys who specialize in cyber-related functions, and in-house law departments may also have one or more individuals with such expertise who can be assigned as part of an AI project oversight team.

Third, cybersecurity specialists should be present during design and development. Experience shows that essentially all systems are subject to targeting by hackers, other criminals and nation-state actors. Not all attacks are high-tech; some are carried out through social engineering, involving emails, messages, chats or other forms of interpersonal communication. A cybersecurity specialist can assess a system and render an opinion on its risks and the necessary countermeasures. By taking into account the range of expected attack types – which must be updated constantly as threats evolve – the right security features and controls can be baked into the system from the start.

Finally, individuals with experience in compliance testing should also be involved in this process. Compliance officers know that there can be gaps between what system documentation says is happening in a system and what is actually happening. Sometimes, the controls and operations of a system are more aspirational than actual. Over time, people can take “shortcuts” to ease their workloads or “improve” the system, but they don’t document those changes, so there is no reliable documentation of how the system is actually being controlled.

Absolute Limits, Guardrails . . .

When it comes to controls to limit exposure to AI-related problems, there are three categories of limitations that should be considered.

The first is “absolute limits.” These provide assurance that the AI system won’t go completely outside of reasonable control limits. One example would be a requirement that the AI system be built and implemented to comply with applicable laws. For instance, an autonomous driving system that allows a turn in a direction prohibited by signage (e.g., no left turn) would be an immediate problem; the system should be designed so that it cannot violate that law.

Another example is a system designed to carry out payments to foreign entities by determining the most advantageous exchange rates in real time, but which ignores the possibility that the payee is on a sanctions list. In the U.S. alone, the Office of Foreign Assets Control publishes at least six lists in addition to the well-known Specially Designated Nationals (SDN) list; the non-SDN lists are collectively known as the “Non-SDN Consolidated Sanctions List.” Other countries, such as the United Kingdom, publish their own lists, as does the United Nations (the Security Council Consolidated List). Ignoring relevant sanctions laws cannot be excused by saying, “It was an AI system – it didn’t know about the lists.”
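To make the sanctions point concrete, the following is a minimal sketch (in Python) of what such an absolute limit could look like: a hard stop that blocks any AI-selected payment whose payee appears on a sanctions list, no matter how favorable the exchange rate. The list loader and entity name are illustrative assumptions; a real implementation would consume the official OFAC, UK and UN data feeds and use fuzzy name matching.

    # Hypothetical sketch of an "absolute limit" on an AI payments system.
    # The loader below is a placeholder; in practice it would parse the SDN and
    # Non-SDN Consolidated Sanctions List files and other applicable lists.

    def load_consolidated_sanctions() -> set[str]:
        return {"EXAMPLE SANCTIONED ENTITY LTD"}  # placeholder entry only

    SANCTIONED_PARTIES = load_consolidated_sanctions()

    def release_payment(payee_name: str, amount: float, currency: str) -> bool:
        """Hard stop: no payment is released if the payee is sanctioned."""
        if payee_name.strip().upper() in SANCTIONED_PARTIES:
            raise PermissionError(
                f"Payment to '{payee_name}' blocked: payee appears on a sanctions list."
            )
        # ... otherwise proceed with the AI-optimized FX execution ...
        return True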

Organizations may also have rules that are promulgated in-house. For example, a company may direct that a specific portion of purchases be made from minority-owned, woman-owned or veteran-owned organizations. If the AI system isn’t designed to know this, the company’s policies may be ignored.

A second category of controls can be characterized as “guardrails.” There are instances when a transaction may be appropriate but is unusual enough to require human intervention or approval. Some fund transfers, while generally routine, may have characteristics that are suspicious and should be reviewed. For example, a payment to a vendor that suddenly changed its wire transfer information to a bank in another country might fall into this category. It could be legitimate, but because of the risk of fraud, it needs to be reviewed. Essentially, this is a warning level at which going beyond a defined boundary requires a human to approve or reject the action.
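As a sketch of how such a guardrail might be wired in, the fragment below (an illustrative assumption, not any particular product’s API) routes a transfer to a human reviewer whenever the vendor’s receiving bank details differ from what is on file, instead of letting the AI execute it automatically.

    # Hypothetical "guardrail": unusual but possibly legitimate transfers are
    # held for human approval rather than blocked or auto-executed.

    from dataclasses import dataclass

    @dataclass
    class WireInstruction:
        vendor_id: str
        bank_country: str
        account_number: str

    def route_transfer(requested: WireInstruction, on_file: WireInstruction) -> str:
        changed = (requested.bank_country != on_file.bank_country
                   or requested.account_number != on_file.account_number)
        if changed:
            return "HOLD_FOR_HUMAN_REVIEW"  # a person approves or rejects
        return "AUTO_APPROVE"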

. . . and Out-of-Lane Warnings

The third category can be called “out-of-lane warnings.” Many new vehicles now have an out-of-lane warning system, which relies on optical sensors to “see” lane markings and sets off an audio, visual or haptic alarm when the vehicle begins to drift out of its lane.

In the case of AI systems, this would represent an action that is within the permitted boundary but is unusual enough to require special logging and forwarding to a person or unit responsible for oversight. This is designed to provide early warnings about potential problems.

An example might be an AI-powered resume evaluation system that detects that the proportion of top-ranked candidates is skewed along certain characteristics – such as gender or gender identity – and issues a notification. A system whose applicant pool is split 50-50 by gender, but in which only 15% of top-ranked applicants identify as female, might be operating correctly from the system’s viewpoint, but the result is unusual enough to warrant a proactive notification so that someone reviews the system’s operations and operating parameters.
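A minimal sketch of such an out-of-lane check follows; the 15-percentage-point threshold and the use of a log-based notification are illustrative assumptions, not a recommendation for any particular system.

    # Hypothetical "out-of-lane warning" for a resume-ranking system: the run is
    # not blocked, but oversight is notified when the gender mix of top-ranked
    # candidates drifts well away from that of the applicant pool.

    import logging

    def check_ranking_skew(applicant_share_female: float,
                           top_share_female: float,
                           threshold: float = 0.15) -> None:
        drift = abs(applicant_share_female - top_share_female)
        if drift > threshold:
            logging.warning(
                "Out-of-lane: %.0f%% of applicants vs. %.0f%% of top-ranked "
                "candidates identify as female; forwarding for oversight review.",
                applicant_share_female * 100, top_share_female * 100)

    # The article's example: a 50-50 applicant pool but only 15% of top-ranked
    # candidates identifying as female would trigger the warning.
    check_ranking_skew(0.50, 0.15)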

AI Systems and Evidence

Another motivator for involving counsel in the development of AI systems concerns the concept of evidence. For an AI system, this involves recording a transaction but may also involve preservation of sensor data and sufficient information on the state of the system to provide an indication of why the system acted as it did.

So, if an AI autonomous driving system fails to recognize a situation and a traffic accident ensues, the evidence from the vehicle’s sensors and the basis on which the vehicle made decisions would be extremely relevant in any subsequent litigation or regulatory action. If the question is asked, “Why did the system do what it did?” the correct response should never be, “No idea, the system modifies rules in real time but we don’t know the exact state of those rules when the incident occurred.”

Experience suggests that AI systems often capture substantial information during system testing and installation, but that detailed logging is turned off when the system is in full operation, often on the basis that having enough data storage would increase costs and possibly impact performance. One remedy that has been suggested is to adapt the concept of the “black box” used in the aviation industry. Aviation black boxes (there are usually two of them) record flight, control, engine and other parameters of flight dynamics, as well as cockpit voice/sound recordings. But these recordings are only retained for a period of minutes or hours before the data is overwritten by new data. The rationale is that if an accident occurs, the recorders will have sufficient data to support the incident investigation.

A similar concept might work well in the AI world, but what is recorded and how long it is retained must be identified and documented for each system.
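As a rough illustration of that idea, the sketch below keeps a fixed-size rolling window of recent decisions and the inputs behind them, overwriting the oldest entries, and freezes the window to durable storage when an incident is declared. The capacity, record fields and file format are assumptions made for illustration; as noted above, what is recorded and how long it is retained must be defined per system.

    # Hypothetical "black box" recorder for an AI system's decisions.

    import json
    import time
    from collections import deque

    class DecisionRecorder:
        def __init__(self, capacity: int = 10_000):
            # Oldest records are overwritten automatically, as in a flight recorder.
            self._buffer = deque(maxlen=capacity)

        def record(self, inputs: dict, model_state: dict, action: str) -> None:
            self._buffer.append({
                "timestamp": time.time(),
                "inputs": inputs,            # e.g., sensor readings
                "model_state": model_state,  # e.g., active rule/parameter versions
                "action": action,
            })

        def freeze(self, path: str) -> None:
            """On an incident, persist the current window for investigators."""
            with open(path, "w") as f:
                json.dump(list(self._buffer), f)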

Conclusion

AI systems will continue to grow in popularity and capabilities, but their capabilities are not unlimited and they should not be thought of as completely autonomous. Some form of oversight is always needed.

Making that happen may require the skills of subject matter experts, legal counsel, cybersecurity specialists and compliance officers. Simply building a system, or using one in a software-as-a-service model, does not mean that you can ignore potential problems. Working with your own experts in these fields can help ensure that your AI systems operate appropriately and do not place you in a position where you may face litigation or regulatory sanctions.

 

Alan Brill is Senior Managing Director in the Kroll Cyber Risk Practice.




