
AI and Governance Gaps, from the Boardroom on Down

Written by David Weldon | November 14, 2025

Artificial intelligence’s evolving impacts and implications create conundrums not only for economists, policymakers and business strategists, but also for the occupants of corporate boardrooms.

Taking seriously their oversight responsibilities, and following advice to become conversant with AI and other advanced technologies, many directors have worked to better understand and challenge their organizations’ approaches and strategies. But across the business landscape, attention to this increasingly critical aspect of governance is uneven.

In the National Society of Compliance Professionals (NSCP) and ACA Group 2024 AI Benchmarking Survey, 32% of responding firms had established an AI committee or governance group; 12% of those using AI said they had an AI risk management framework; and 18% had established a formal testing program for AI tools.

More recently, Deloitte experts observed a need for board members “to up their AI IQ.” A survey showed two-thirds of board members and executives had limited-to-no knowledge of or experience with AI. Although 31% deemed their organizations not ready to deploy AI, that was improved from 41% in October 2024. More than 50% believed their pace of AI adoption should be accelerated.

According to security and compliance vendor Vanta’s 2025 State of Trust report, released October 29, governance and controls are not keeping pace with the technology and its risks. Khush Kashyap, Vanta’s senior director, governance, risk and compliance, told Cybersecurity Dive that “while AI is clearly viewed as a force multiplier for productivity, organizations haven’t yet built the governance structures, guardrails or incident response playbooks to match the speed of adoption.”

In a survey from data management and reporting platform provider Workiva, 65% of finance, sustainability, audit and risk professionals said no “AI governance/security policies” were in place.

AI-related resources or policies in place, based on responses to the Workiva 2025 Global Practitioner Survey.

Such concerns rise somewhat differently, but in parallel, to the level of the world’s central banks. An October Bank for International Settlements report stressed that “despite AI’s significant potential to enhance policymaking, the effective use of gen[erative] AI requires a number of challenges to be addressed. These range from data governance (e.g., the use of internal versus external data) to investing in human capital and information technology infrastructure.

“A key lesson is that collaboration and the sharing of experiences emerge as important avenues for central banks, in particular by exploiting economies of scale and reducing the demands on IT infrastructure and human capital.”

Get Beneath the Surface

“We’re seeing widespread interest in using AI across the financial sector, yet there’s a clear disconnect when it comes to establishing the necessary safeguards,” Lisa Crossley, then NSCP’s executive director, said last year. “Our survey shows that while many firms recognize the potential of AI, they lack the frameworks to manage it responsibly. This gap not only exposes firms to regulatory scrutiny, but also underscores the importance of building robust AI governance protocols as usage continues to grow.”

Cary Grigg of UHY Consulting

A surface-level understanding of AI is not sufficient for effective governance and controls, asserts UHY Consulting director G. Cary Grigg. “Knowledge has to be continuously refreshed as AI is evolving rapidly. What was cutting-edge six months ago is now viewed as old news. The conceptual knowledge of how AI works will aid in understanding both present and emerging risks and opportunities.”

“They don’t need to be engineers, but they need fluency in use cases, risks and governance implications” – enough understanding, as with cybersecurity or financial instruments, “to ask the right questions, challenge assumptions, and avoid blind trust in black-box AI,” states Srikrishnan Ganesan, CEO of professional services platform provider Rocketlane.

Corporate directors may not need deep technical knowledge, but they do need a strategic grasp of what AI is, where and how it is being used, and its potential for harm, says Terisa Roberts, global solutions lead for risk modeling and decisioning at SAS.

Joe Pearce of RecordPoint

“This begins with understanding the broad capabilities of generative AI – systems that create text, images or code at scale; and agentic AI – autonomous decision-making agents that act on enterprise data or processes,” relates Joe Pearce, head of product at RecordPoint, which introduced an AI governance tool called RexCommand.

“These technologies can reshape business models, supply-chain operations, and risk exposure,” Pearce adds, “but they also blur the line between human and machine accountability.”

Vendor and compliance risks can arise from deployed software, products with embedded AI features, and third-party providers.

Framework for Trust

Pearce enumerates six pillars of AI trust to anchor directors’ understanding: accountability, AI policy, risk and compliance operations, AI-ready data, AI development, and AI deployment.

That is a framework for ensuring AI is transparent, reliable and compliant with evolving regulations such as the EU AI Act, Pearce says.

In view of risks such as model bias and shadow AI – unvetted tools that may be put to use by employees or partners – the pillars help ensure and protect risk reporting, enterprise value, reputation, and an organization’s standing with its regulators.

Training from the Top

According to Deloitte’s 2024 report Governance of AI: A Critical Imperative for Today’s Boards, boards were making strides (see graph below). In this poll, 45% said AI was not yet on board agendas; 46% were either concerned about or not satisfied with board time spent on AI topics; and 44% saw a need to accelerate adoption.

SAS’s Roberts notes that regulators stress the need for a top-down AI risk management culture, and AI literacy must extend from the board level on down to ensure a common language, culture and understanding of the risks.

What boards are doing to enhance AI fluency, from Deloitte AI/genAI board governance survey, June 2024.

The most effective way to develop directors’ AI literacy is through hands-on engagement with technology, Pearce says. That implies exposure to the likes of ChatGPT, Claude or Google’s agentic tool Opal.

But the broader context is at least as important at a time of what Pearce considers the “peak of inflated expectations.” Failed pilots and disappointing returns on investment are common, so sober analysis may have to trump unbridled optimism.

Many firms are now running training programs that help board members better understand AI mechanics – how AI systems are built, where risks can emerge and how those risks are managed. Roberts sees dashboards as one potential, practical aid in directors’ oversight and engagement, a way to improve visibility into both opportunities and vulnerabilities.

Regulation and the Three Lines

Pearce points out that AI policies in highly regulated sectors are cautious or restrictive to begin with, yet “compliance gaps are glaring,” as surveys show some 40% of employees bypass or disregard those rules by using ChatGPT and other tools for a business purpose.

Terisa Roberts of SAS

“From discussions with risk leaders, two central issues emerge,” Pearce says. “First, many organizations have not yet established formal AI use policies, leaving employees without clear guidance on safe engagement. Second, most lack a comprehensive inventory of AI systems in play, whether developed internally, embedded in vendor solutions, or deployed informally by staff.”

This “shadow AI” represents an unmanaged risk vector, which can undermine regulatory compliance and operational and cyber resilience, Pearce continues.

For boards, the message is clear: AI risk is not theoretical. It is reshaping the scope of enterprise exposure, demanding that risk professionals evolve quickly from passive observers into proactive stewards of AI governance.

Roberts recommends that boards understand the three lines of defense – the business, the risk management function, and internal audit – and how each “interacts with the others, and how AI governance assures that this powerful technology is used effectively, precisely and responsibly.

“Effective AI governance starts with strategic oversight and clear accountability, backed by strong policies and controls. Boards should understand how these elements work together to ensure AI aligns with organizational values, mitigates emerging risks like bias and explainability, and supports responsible innovation.”