Coming Up Short: Risk Management Deficiencies of the UN’s Interim Report on AI

The United Nations’ proposed governance of artificial intelligence is welcome, but it lacks essential elements of risk management. A more structured and comprehensive approach is therefore needed.

Friday, February 16, 2024

By Alessandro Mauro


The United Nations wants to maximize the opportunities created by artificial intelligence (AI) and minimize its risks. However, the suggestions made toward this end by the recently created UN AI Advisory Body do not meet this objective – particularly with respect to containing and diminishing the threats that this innovative, constantly evolving technology presents.

The Body is now seeking feedback from the AI community for its December 2023 interim report, “Governing AI for Humanity,” which examines the risks of AI from the perspective of “technical characteristics, human-machine interaction, and vulnerability.”

AI risk assessment, however, is not thoroughly addressed in that report. Moreover, the report does not take advantage of standards and guidelines already developed by experienced risk management associations, and the 37-member Body lacks financial risk management expertise.

Asking for critiques of the interim report from the broader AI community is a step in the right direction. The deadline for comments is March 31, 2024, and anyone can submit feedback through an online form.

Before offering feedback on steps the Body can take to improve its final report on AI governance, let’s first discuss the goal of the Body and the parts of risk management the interim report actually does address.

Risks and Preliminary Recommendations

In October 2023, the UN Secretary-General, António Guterres, launched what was described as a “high-level, multi-stakeholder” AI Advisory Body. The idea was to create a global, multi-disciplinary “conversation on the governance of AI” that could potentially maximize the benefits of the technology to humanity while simultaneously containing and diminishing its risks.

The Body recognizes the need to identify, classify and address AI risks, as part of an effort to overcome technical, political and social challenges. To meet this goal, it has created seven institutional functions for AI governance.



The seven institutional functions are a shared responsibility of international organizations, governments and the private sector. Two of the seven explicitly address risk.

Function 3 calls for the development and harmonization of “standards, safety, and risk management frameworks.” The interim report states that there is currently a lack of harmonization and alignment in risk management frameworks, and the Body aspires to play a critical role “in bringing states together, developing common socio-technical standards, and ensuring legal and technical interoperability.”

Function 6 examines how firms can monitor risks, report incidents and coordinate an emergency response. “AI-enabled cyber tools increase the risk of attacks on critical infrastructure,” the report states. “AI can be used to power lethal autonomous weapons, which could pose a risk to international humanitarian law and other norms. … Bots can rapidly disseminate harmful information.” The risks escalate, the report warns, all the way to the concrete possibility of rogue AI.

Shaping the Final Report: How to Improve Risk Management

There are several references to risk in the interim report. In addition to the previously mentioned AI governance functions, the report also cites 15 sub-functions – including three related to risk management.

Sub-function 3 covers risk classification and envisions further research and analysis to be conducted to “assess existing and upcoming AI models on a risk scale of untenable, high-level, mid-level, and low-to-no risks.” Sub-function 7 requests the “participation of all stakeholder groups and all countries and regions in collective governance and risk management.” Lastly, sub-function 11 calls for policy harmonization and norm alignment, with the objective of “surfacing best practices for norms and rules, including for risk mitigation and economic growth.”

However, for the UN’s final report on AI governance to be based on organized, organic and modern risk management, the Body should consider leveraging frameworks and standards that have previously been developed by international risk organizations.

For example, the International Organization for Standardization (ISO) has developed a helpful set of risk management guidelines called ISO 31000:2018. That document identifies the principles, framework and process for the management of any source of risk.

Risk assessment, which is only briefly referred to in the UN’s interim report on AI governance, is a central part of the ISO’s risk process (see Figure 2). Risk treatment, the phase in which possible risk responses are analyzed and selected, can only be determined after a thorough risk assessment.



Risk identification, risk analysis and risk evaluation are the core components of risk assessment, according to the ISO. When addressing corporate risk management, specifically, identifying and classifying the risk sources is a vital first step. If a risk source is neglected, the rest of the process is limited and flawed.

It is reasonable to expect AI risk sources to be numerous. Consequently, it is necessary to develop a structured risk taxonomy for AI.
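To make the idea concrete, a structured taxonomy can be sketched as a simple hierarchy of risk sources, each mapped to the four-level scale quoted from sub-function 3. The category and risk-source names below are illustrative assumptions, not the Body’s classification:

```python
# Illustrative sketch of a structured AI risk taxonomy. The four risk levels
# are the scale quoted in the interim report; the categories and risk sources
# are hypothetical examples, not an official UN taxonomy.

RISK_LEVELS = ["untenable", "high-level", "mid-level", "low-to-no"]

# Hypothetical risk sources grouped under the report's three risk perspectives
taxonomy = {
    "technical characteristics": ["model hallucination", "adversarial manipulation"],
    "human-machine interaction": ["automation bias", "over-reliance on model output"],
    "vulnerability": ["critical-infrastructure exposure", "disinformation at scale"],
}

def classify(risk_source: str, level: str) -> dict:
    """Attach a risk level to an identified risk source."""
    if level not in RISK_LEVELS:
        raise ValueError(f"unknown risk level: {level}")
    return {"source": risk_source, "level": level}

record = classify("model hallucination", "mid-level")
```

The point of such a structure is the discipline it enforces: every risk source must be identified and placed somewhere before analysis and evaluation begin, so neglected sources become visible gaps rather than silent omissions.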

It is also crucial to examine the ways in which AI-related risks should be monitored, recorded and communicated. Indeed, at this early stage of AI’s evolution, accountability must be established. To achieve this, the Body may want to turn to the Committee of Sponsoring Organizations of the Treadway Commission (COSO) for inspiration.

COSO has created a comprehensive process for proper enterprise risk management (ERM), and a COSO-based AI risk strategy for ERM would ensure the assignment of responsibilities for each of the five components cited in Figure 3. One reason to think that COSO’s ERM framework can be adapted for AI governance is that it has already been successfully applied to other risk management tools – including, for example, the GARP Sustainability and Climate Risk (SCR) certification.



The AI risk appetite and risk tolerance of governments, groups, society and other relevant stakeholders must, of course, be factored into comprehensive ERM. These, too, should be addressed in the UN’s final report on AI governance.

To determine risk appetite and tolerance targets, suitable key risk indicators (KRIs) should be selected. KRIs could, for example, alert financial institutions when AI seems to be getting excessively close to independently hacking other machines and potentially launching cyberattacks.
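A minimal sketch of how a KRI works in practice: a monitored metric is compared against an appetite threshold and a higher tolerance threshold, and breaching either triggers an escalating response. The metric and threshold values below are hypothetical, chosen only to illustrate the mechanism:

```python
# Minimal KRI sketch: compare a monitored metric against risk-appetite and
# risk-tolerance thresholds. The metric and threshold values are hypothetical.

def kri_status(value: float, appetite: float, tolerance: float) -> str:
    """Return an alert level for a key risk indicator reading.

    appetite  -- the risk level the organization is willing to accept
    tolerance -- the hard limit beyond which escalation is mandatory
                 (tolerance is set above appetite)
    """
    if value >= tolerance:
        return "escalate"   # tolerance breached: trigger emergency response
    if value >= appetite:
        return "warn"       # appetite breached: heightened monitoring
    return "ok"

# e.g., count of unauthorized outbound connection attempts by an AI system
print(kri_status(3, appetite=5, tolerance=10))   # ok
print(kri_status(7, appetite=5, tolerance=10))   # warn
print(kri_status(12, appetite=5, tolerance=10))  # escalate
```

The two-threshold design mirrors the appetite/tolerance distinction in ERM: appetite breaches call for attention, while tolerance breaches demand the kind of incident reporting and emergency response described under Function 6.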

The Need for More Risk Management Expertise

In its call for experts for the AI Advisory Body late last year, the UN developed a list of more than 1,800 nominees, seeking a broad range of perspectives. Eventually, the Body was whittled down to two co-chairs and 37 members, with an emphasis on gender balance and geographic diversity.

Current members have “deep experience across government, business, the technology community, civil society and academia,” according to the UN’s website. However, based on the published biographies, only two members have specific risk experience. Jaan Tallinn is an expert in existential and catastrophic risk, while Ian Bremmer is well-versed in political risk.

One of the five AI Body working groups is labeled “Risks and Challenges,” but its composition is not disclosed. The Body states it reviewed, among others, the functions performed by existing institutions of governance with a technological dimension, including FATF, FSB, IAEA, ICANN, ICAO, ILO, IMO, IPCC, ITU, SWIFT and UNOOSA. However, no mention is made of consulting with leading risk-centric organizations.

The good news, as we’ve mentioned, is that the Body is currently seeking feedback from everyone in the AI community. It’s therefore critical for financial risk managers and risk management associations (like COSO and GARP) to provide expert comments before the March 31 deadline. Hopefully, this feedback will help the UN Advisory Body create a more structured and comprehensive risk management approach to the governance of AI.

The Body is expected to release its final report on AI governance by mid-2024.

Alessandro Mauro (FRM, SCR) is a risk management professional specializing in building and managing risk functions in trading companies. He serves as director for the GARP chapter in Geneva, Switzerland. His areas of expertise are commodity markets, financial derivatives, climate risk, CTRM software, and project management.


© 2024 Global Association of Risk Professionals