Artificial Intelligence Poses Difficult Questions for Corporate Boards. It Can Also Improve How They Work.

As with any technological breakthrough, AI carries risks along with payoffs. Both sides of that ledger can benefit from AI-driven governance, advocates say.

Friday, January 19, 2024

By David Weldon


Technology governance and risk management have long been tough subjects in corporate boardrooms predominantly populated by non-technologists. Artificial intelligence, and particularly the rocketing popularity of generative AI (GenAI) over the last 12 to 15 months, has raised new alarms about that competency gap and the urgency of closing it.

But even as technology and digitization have been incorporated into board training and education programs, including those of organizations like the Corporate Governance Institute and the National Association of Corporate Directors, AI has begun to emerge not just as another strategic challenge for businesses and board-level oversight, but also as an aid to governance itself.

According to Gartner director analyst Lauren Kornutick, GenAI is being programmed into governance, risk and compliance software at such a pace that GRC tools without these capabilities will quickly become obsolete.

That is what it will take “to remain viable solutions in the marketplace,” says Kornutick, who focuses on compliance risk, technology and analytics in Gartner’s Legal & Compliance group. Its research finds that “51% of GRC vendors either already have AI capabilities, which may include GenAI; they will continue to invest in the AI and machine learning domain; or they have plans of adopting AI in the next three years.”

The application of AI to governance – enterprise-wide and/or for board purposes, and including management of AI systems and their risks – has been described thematically by analytics leader SAS as “AI to govern AI.”

Marion Lewis, CEO, Govenda

In other examples, IBM has put forward its watsonx AI and Data Platform for, among other functions, model risk governance for generative AI; NuEnergy.ai of Canada designed AI governance education and guidance into its Machine Trust Platform (MTP) dashboards for board oversight; and Govenda touted its director portal as “the first AI created for governance management.”

Govenda’s Gabii was designed to “ensure quick access to data and good communication among executives and directors, prioritization of board activities, and improved decision-making,” said company co-founder and CEO Marion Lewis. She believes nothing less than “the sustainability of board governance” is at stake, and AI-driven software must therefore be incorporated into board processes.

Stress Tests and Warnings

Friso van der Oord, senior vice president, content, with the National Association of Corporate Directors, sees AI assisting and advancing the quality of governance by, for example, streamlining board-meeting preparation through synthesis of vast amounts of data, and helping directors absorb and give feedback on strategic plans and their underlying assumptions.

Friso van der Oord of the NACD

“As the business environment becomes increasingly dynamic, companies are moving away from fixed, multi-year strategic plans and adopting a more continuous approach to strategy development, using scenario planning to assess whether existing plans need to pivot,” van der Oord observes. “In the coming years, AI can become another instrument to help boards and management teams stress-test the validity of key assumptions and spot early warning signals in the external environment that may undermine their current strategic direction.”

A possibility not yet fully explored, he adds, is relying on AI to improve and accelerate boards’ internal and external audit reviews, along with financial reports and disclosures.

Anticipating Questions

Board-level decision-making is gradually being transformed from opinion-based to data-driven, and AI can minimize the information latency, according to Patrick Bangert, senior vice president of data, analytics and AI at technology consulting company Searce.

“Generative AI can help trawl through and synthesize libraries of documents to summarize information and look for anomalies,” Bangert states. Questions such as “What actions have led to this result?” or “Where are we profitable versus not, and why?” can be anticipated, instead of raised in hindsight, with attendant delays in interpreting results.

Efficiency in managing large quantities of data can impact quality as well, through discovery of errors and unproductive or nefarious activity in the data trail.
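The anomaly screening Bangert alludes to can be sketched in miniature. The following is an illustrative example, not any vendor's actual method: it flags entries in a hypothetical expense trail whose amounts deviate sharply from the norm, the simplest form of the error and nefarious-activity discovery described above. Production GRC tools use far richer models.

```python
# Minimal statistical anomaly screen over a (hypothetical) expense trail.
from statistics import mean, stdev

def flag_anomalies(records, threshold=3.0):
    """Return records whose 'amount' lies more than `threshold`
    standard deviations from the mean amount."""
    amounts = [r["amount"] for r in records]
    mu, sigma = mean(amounts), stdev(amounts)
    return [r for r in records if abs(r["amount"] - mu) > threshold * sigma]

# Fifty routine entries plus one suspicious outlier.
trail = [{"id": i, "amount": 100 + i} for i in range(50)]
trail.append({"id": 99, "amount": 25_000})

print([r["id"] for r in flag_anomalies(trail)])  # only the outlier is flagged
```

A simple z-score test like this is easy to audit, which matters in a governance setting; real deployments typically layer on learned models for subtler patterns.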

Humans in the Loop

“AI is also effective at contextual learning from large knowledge bases, which can help compliance officers get reliable answers to their questions much more quickly,” says Terisa Roberts, global solutions lead for risk modeling and decisioning at SAS.

Vrushali Sawant, data scientist and AI expert in the SAS Data Ethics Practice, says GenAI can help to prepare code for risk analytics, reducing dependence on human experts to analyze complex datasets. Information transmitted dynamically to senior stakeholders can be more effective than static reports.

That said, humans still must be in the loop to watch for hallucinations or other AI-byproduct inaccuracies.

Terisa Roberts of SAS

In the context of ethics and responsible innovation, says Roberts, “With new tools, we can detect biases against various groups of people. We can track accuracies as well as model failures such as false positives, false negatives, or hallucinations. There is also an increased focus on getting an ethical review from the use case, over the data, to the model itself.”
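The failure-tracking Roberts describes can be illustrated with a small, hedged sketch: comparing false-positive and false-negative rates across two hypothetical demographic groups, a basic way to surface potential bias. The data and group names are invented for illustration.

```python
# Compare per-group error rates to surface potential model bias.
def error_rates(labels, preds):
    """Return (false_positive_rate, false_negative_rate)."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    neg = labels.count(0) or 1  # guard against empty classes
    pos = labels.count(1) or 1
    return fp / neg, fn / pos

# Hypothetical labels and model predictions for two groups.
group_a = ([0, 0, 1, 1, 0, 1], [0, 1, 1, 1, 0, 0])
group_b = ([0, 0, 1, 1, 0, 1], [0, 0, 1, 1, 0, 1])

for name, (y, p) in [("A", group_a), ("B", group_b)]:
    fpr, fnr = error_rates(y, p)
    print(f"group {name}: FPR={fpr:.2f}  FNR={fnr:.2f}")
```

A gap between the groups' rates, as in this toy data, is the kind of signal an ethical review would then investigate against the use case, the data and the model itself.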

“We observe stronger integration between governance tools and those used for the development, deployment and usage of AI systems to aid governance tasks through automated capabilities, like automated documentation, dynamic retraining of AI systems within compliance parameters, and the ability to summarize information ready for board-level decision making,” Roberts continues.

Aligning the Governance Framework

Strategic oversight and accountability, supported by policies and controls, are key to an effective governance framework, but any gaps in AI oversight at the board level, and in alignment with company objectives and values, have to be addressed.

“Organizations have in place model risk management frameworks that are often extended to handle the additional model risks introduced by AI,” SAS’s Roberts points out. But she adds that the scope is broader than models alone: “Not all AI systems are models, and not all models use AI.”

Issues around model transparency, explainability and reliability, along with algorithmic bias, cyber and privacy risks, and the reputational damage that can ensue from failures, exist apart from, and predate, the newest generations of AI.

Patrick Bangert of Searce

“All the central risks remain, but the automation potential of AI scales those risks because less human time is in the loop,” Searce’s Bangert asserts. “This makes it easier to exploit loopholes, and harder to find and plug them. New security strategies must therefore also be increasingly automated and be operated at scale.”

Learning to Trust

Accompanying GenAI is “the ambiguity about the data used to train the large language models,” Roberts says. “This can call into question the reliability of the results and opens the possibility of hallucinated outputs. The management and mitigation of these new types of risks are uncharted territory for many organizations.”

“The technology is largely unproven, and it appears vendors reacted quickly” to the GenAI hype, says Gartner’s Kornutick. “As such, it will take time for these new features to mature and for customers to deploy them and determine which ones are truly useful.”

With that in mind, risk professionals need to think about what is and is not acceptable in terms of standard governance, and what checks and balances need to be in place over new AI activity.

“You need to make sure that AI is working within what’s acceptable to your stakeholders,” says NuEnergy.ai co-founder and CEO Niraj Bhargava. “High-impact versus low-impact use cases can have different guardrails. So you have to have an informed roadmap of what is acceptable. You monitor it and modify it if things drift outside of what’s acceptable.”
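The monitor-and-modify loop Bhargava describes can be sketched as a guardrail check: compare a model metric against an acceptable range and flag it when it drifts outside. This is an illustrative assumption about how such monitoring might be wired up, not any vendor's actual implementation; the metric names and bounds are hypothetical.

```python
# Guardrail check: flag a metric that drifts outside its acceptable range.
def check_guardrail(metric_name, value, lower, upper):
    """Return a status record; 'breach' means the metric has drifted
    outside the range deemed acceptable for this use case."""
    ok = lower <= value <= upper
    return {"metric": metric_name, "value": value,
            "status": "ok" if ok else "breach"}

# Hypothetical guardrails: a high-impact use case tolerates at most
# a 5% false-positive rate.
readings = [
    check_guardrail("false_positive_rate", 0.04, 0.0, 0.05),
    check_guardrail("false_positive_rate", 0.09, 0.0, 0.05),
]
print([r["status"] for r in readings])
```

In practice such checks would run continuously, with breaches routed to risk owners who decide whether to retrain the model or tighten the guardrail.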



© 2024 Global Association of Risk Professionals