
Disruptive Technologies

Risk Management and Generative AI: A Matter of Urgency

Identifying safeguards as risks come into view.

Friday, March 15, 2024

By Jim Wetekamp


Only 9% of companies believe they are adequately prepared to manage the risks of generative AI.

Organizations are still figuring out what generative AI safeguards are needed. In fact, only 17% of organizations have formally trained or briefed their entire company on generative AI risks. But it’s in organizations’ best interests to take control of AI governance and risk management – and soon.

Over half of workers currently using AI at work are doing so without their employer’s approval. Employees crave the efficiency that these tools deliver. In many cases, workers aren’t waiting for employers to figure out their policies before using these tools at work, which opens organizations up to risk. Another 32% of employees expect to incorporate generative AI into their workflow soon, which suggests employee adoption will continue regardless of company oversight.

Now is the time to put guardrails in place to ensure your company can use the emerging technology as a value and efficiency driver instead of fearing it as a source of risk.

There are several steps companies can take now to enable staff to embrace generative AI while protecting the business from potential dangers.

Data Privacy and Security

Sixty-five percent of companies say data privacy and cybersecurity issues are a top concern with generative AI tools. These tools often gather sensitive information like IP addresses and browsing activity, which could lead to the identification of individuals. This can cause considerable damage if the data is mishandled or included in a data breach.

The rise of deepfake technology raises additional concerns given its power to create lifelike images and voices of people without their consent.

AI tools also enable criminals to create more sophisticated phishing emails and malware and accelerate cyberattacks. Regularly assess your cybersecurity strategy to ensure it stays on pace with the AI landscape and prioritizes robust data privacy measures such as data anonymization and encryption.
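As an illustration of the data-privacy measures mentioned above, here is a minimal, hypothetical sketch of redacting common PII patterns before a prompt leaves the organization. The patterns and function name are assumptions for illustration; a production system would rely on a vetted PII-detection library rather than ad hoc regular expressions.

```python
import re

# Illustrative redaction patterns; real deployments need far more
# coverage (names, account numbers, addresses, etc.).
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with placeholder tokens so sensitive
    data never reaches an external generative AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the ticket from jane.doe@example.com at 192.168.0.12."
print(redact(prompt))
# → Summarize the ticket from [EMAIL] at [IP_ADDRESS].
```

Redaction at the boundary complements, rather than replaces, the anonymization and encryption controls applied to stored data.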

Riskonnect’s Jim Wetekamp: Understand the tools and decide the margin of error.

Inaccuracies and Misinformation

Some generative AI tools are programmed to respond to a prompt even when there is not enough information or content available to provide an accurate answer. In these instances, the AI algorithm makes up an answer but still responds with a voice of certainty, which means you can’t take what you see at face value.

If the user of the tool doesn’t keep this in mind and actively check the validity of the response, they can end up basing decisions on inaccurate information, which 60% of companies say is a top generative AI concern. This can lead to reputational issues and other consequences.

Developers of AI algorithms can set how many answers are made up or whether answers are made up at all. Understand how the tools your company is using are developed and decide the margin of error your organization is willing to accept. It’s often best to set the margin to zero so the model doesn’t make up answers at all. No information is better than misleading information.

Also, prioritize generative AI tools that cite sources so users can easily verify the accuracy and reliability of the information provided. Make sure your AI policies set the expectation that users always fact check any AI-generated response before passing along the information or making decisions based on it.
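One way to operationalize the no-sources-no-answer expectation above is a simple policy gate. This is a hypothetical sketch: the response structure and field names are assumptions, since real tools expose citations in tool-specific formats.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIResponse:
    # Hypothetical shape of a tool's answer plus its cited sources.
    text: str
    sources: list = field(default_factory=list)

def usable_answer(response: AIResponse) -> Optional[str]:
    """Enforce 'no information is better than misleading information':
    discard any answer that carries nothing the user can verify."""
    if not response.sources:
        return None  # push the user to research rather than trust
    return response.text

cited = AIResponse("Basel III sets a 3% minimum leverage ratio.",
                   sources=["https://www.bis.org/bcbs/basel3.htm"])
uncited = AIResponse("The regulation definitely changed last week.")
print(usable_answer(cited) is not None, usable_answer(uncited))
# → True None
```

A gate like this does not replace human fact-checking; it simply guarantees there is always a citation to check.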

Bias and Ethics

AI models can be inherently biased or not fully inclusive of the groups they serve. The algorithms leverage historical data to inform decisions and generate answers and content. This can create issues because what was acceptable in 1970, 1990, 2010 or 2018 is different from what is acceptable today. For instance, if an AI model uses historical data from decades past to make contemporary decisions, such as who qualifies for a loan, it might inadvertently reflect discriminatory practices.

Thoroughly assess the AI models you are using. Know how the models are programmed and the calibration mechanisms. Actively question and understand the underlying training datasets. Make sure the data is recent and reflective of today’s societal standards.
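Parts of the dataset review described above can be automated. Below is a minimal sketch, assuming each training record carries a collection year; the field name, age threshold and reference year are illustrative assumptions, not a standard.

```python
from datetime import date

def flag_stale_records(records, max_age_years=5, today_year=None):
    """Return records older than the acceptable window — candidates
    for removal or re-weighting before model training."""
    cutoff = (today_year or date.today().year) - max_age_years
    return [r for r in records if r["year"] < cutoff]

training_data = [
    {"id": 1, "year": 1992},  # decades-old lending decisions
    {"id": 2, "year": 2023},
]
stale = flag_stale_records(training_data, today_year=2024)
print([r["id"] for r in stale])
# → [1]
```

A similar pass can tally representation across demographic groups, so gaps in the training population are visible before the model ships.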

It's important to continuously monitor the outputs of generative AI and ensure responses are still relevant from an ethical and moral point of view – and in line with your organization’s policies and culture.

Copyright and Intellectual Property

Thirty-four percent of companies are concerned about copyright and intellectual property (IP) risks related to generative AI. The models can be trained on legally protected materials and produce content that resembles existing works, which leads to potential copyright, trademark or patent infringement if users don’t give proper credit. Courts are wrestling over how to apply IP laws to AI-produced content, but it will take a while to sort out.

Protect your IP. Train employees to think critically about the information they are putting into AI tools. Know which employees and partners have access to your IP and sensitive information and set clear guidelines for what materials and data are off limits for inputting into AI models.

Make sure that employees are also mindful of how they use AI-generated content commercially to avoid copyright infringement. Stay up to date and inform staff of any legal developments around using AI-produced content.

A New Era of Risk

The emergence of generative AI presents both new challenges and opportunities. The efficiencies generative AI delivers enable workers to focus on high-value work.

The technology is expected to add $4.4 trillion in value to the global economy annually. Reaping these benefits requires companies to get a handle on the risks of AI and govern its use effectively. Start now to protect and advance your business.

 

Jim Wetekamp is chief executive officer of Riskonnect, a leading provider of integrated risk management software. He is a recognized expert on enterprise risk, supply chain and third-party risk management.





© 2024 Global Association of Risk Professionals