
Disruptive Technologies

Managing the Risk of Large Language Models Like ChatGPT

The fast-moving new technology requires risk assessments and policies that impose necessary control without stifling innovation

Friday, June 9, 2023

By Aaron Pinnick


Large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard are becoming increasingly popular among individuals and firms looking to take advantage of the incredible power and efficiency they offer for language-based tasks. These tools ingest substantial amounts of text from various sources – including information users enter into the tool – learn from that text, and then generate human-like responses to user prompts.

Because these models are trained on enormous volumes of text and contain billions of parameters, they can be used for a wide range of tasks, from drafting a simple email to writing complex programming code.

ACA’s Aaron Pinnick: Risk-awareness in responding to unique opportunities.

However, as the March 2023 exposure of ChatGPT users’ chat histories demonstrated, these tools’ retention of user inputs creates potential privacy and security risks for firms. And given the novelty of these tools and the excitement around them, employees are less likely to be aware of, and think through, the potential risks before using them for business purposes.
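
To make the data flow concrete, the sketch below shows what a request to a hosted chat model can look like. It is an illustrative Python example only, using OpenAI’s public chat completions endpoint with a placeholder prompt; the key point is that the full prompt, including anything an employee pastes into it, is transmitted to the provider’s servers, where it may be retained and used to improve the model.

import os
import requests

# Illustrative only: a minimal request to a hosted chat model (here, OpenAI's
# public chat completions endpoint). Everything placed in the prompt -- including
# any client or company data an employee pastes in -- is sent to the provider.
API_URL = "https://api.openai.com/v1/chat/completions"

prompt = "Summarize these client meeting notes: ..."  # placeholder text

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])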

To mitigate these risks, cybersecurity leaders should take several steps, including:

  1. Assess the risk LLMs create. 
  2. Update the firm’s acceptable use policy.
  3. Provide employees with training and communications on LLMs. 

Assessing the Risk 

Before taking a stance on whether or when LLMs will be permitted for business purposes, a firm should understand the risks these tools pose to the organization. While this assessment will vary from firm to firm, the risk centers primarily on how employees choose to use LLMs and the information they include in them.  

The following potential risks should be considered:

  • Privacy Risk – The most common risk LLMs present is that employees will enter sensitive information into the tool (e.g., client names and information), which is then exposed to the public. This risk will be high for most firms, as the novelty of LLMs means employees are likely experimenting with the tools and may not exercise necessary caution when entering information. If this information is exposed, it may create reputational harm for the firm and regulatory risk for companies in specific industries or jurisdictions. Even absent a leak, entering certain types of data (e.g., protected health information) into an unapproved third-party tool may itself constitute a privacy violation in some jurisdictions.
  • Intellectual Property Risk – Since LLMs are designed to learn from the inputs users provide, any proprietary or non-public information included in a prompt may be retained by the tool and incorporated into future responses to other users. Even if proprietary information isn’t directly leaked, an LLM could be prompted to respond as if it were an employee at a particular company and, based on what it has learned from past interactions with that company’s employees, return non-public information to a user. Through this process, individuals could gain insight into competitors’ strategic direction or obtain non-public information about the products and services of companies whose employees use an LLM.
  • Third-Party Risk – Since a core feature of LLMs is their ability to generate large amounts of text quickly, individuals may try to use an LLM as a shortcut when creating client deliverables. Firms should confirm with key vendors whether LLMs are used to create any work product or advice the firm receives. If a third party is using LLMs, the firm should understand what company information is entered into the tool, as well as how deliverables are screened for quality and accuracy before delivery. By the same token, employees using LLMs in their work for clients may be violating the letter or spirit of agreements with those clients.
  • Risks Related to the Quality of the Output – Individuals often use LLMs to help generate ideas or first drafts of documents or code. But despite the impressive performance of many LLMs, they make mistakes: OpenAI itself warns that ChatGPT may produce incorrect information. These risks can be mitigated by having an expert review materials created by an LLM to ensure they are accurate. Without sufficient oversight and controls around how LLM-generated content is used in final work products, however, incorrect information may be shared with internal or external stakeholders, leading to flawed decisions and compliance issues.

It is important to note that LLMs pose a broader risk for cybersecurity executives, as cybercriminals can easily use these tools to create compelling dialogue, phishing email language, and code to improve the effectiveness of cyberattacks. Cybersecurity leaders should be aware of this threat, and ensure that the firm’s policies, procedures, and employee training take this into account.  

Update Acceptable Use Policy 

Firms should review their Acceptable Use Policies (AUPs) and ensure they specify when and how employees may use LLMs on company devices and for business purposes.

Firms may take several different approaches towards building an AUP for LLMs based on their risk tolerance and on the opportunities these tools present. These options include: 

  • A Total Ban on LLMs – The most conservative approach to LLMs is to simply block the likes of ChatGPT and Bard on company devices, and to update the AUP to make clear that employees’ use of these tools is not acceptable for any reason.
    While a ban may be appropriate for firms that handle highly confidential client or company data, it may be difficult to maintain as the number of available LLMs grows, and as LLMs are increasingly built into products like Microsoft Teams and other productivity tools. For most firms, the effort of keeping pace with that growth is unlikely to be worth it, particularly since a ban also forecloses the efficiency benefits LLMs offer.
  • Restricted Use of LLMs – Firms that are willing to accept some risk from LLMs in exchange for the potential efficiency gains can allow for their use under certain conditions. These could include: 
    • For business purposes only if no sensitive, proprietary or confidential information is included in LLM prompts.
    • For business purposes only with approval from specific individuals (e.g., chief information security officer, business unit head, general counsel). 
    • Only for certain low-risk business activities (e.g., for help writing marketing copy about features and services that are publicly available on the firm’s website).  
    • LLMs cannot be used to create client-facing work products, or to generate guidance for clients.
    • LLM use is allowed for business purposes, but records must be kept of the prompts and outputs from the tool.

      For most firms, some combination of the above clauses and restrictions should help mitigate LLM-related risks without stifling the tools’ innovative potential; a brief sketch of how prompt screening and record-keeping might be implemented appears after this list.
  • Reasonableness Standard – Firms that see the greatest potential in LLMs and are willing to accept the highest level of risk may allow employees to use their best judgment when working with these tools. Those adopting this approach can take a page from their existing training and policies on social media usage to promote good judgment around what information is and isn’t appropriate for LLMs. Employees should be reminded that information entered into LLMs should not be assumed to be private or secure, and nothing that would cause reputational harm to the employee, the firm, or the firm’s clients should be entered into an LLM. 
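
For firms that route LLM access through an internal tool or gateway, some of the clauses above can be partially enforced in code. The following is a minimal Python sketch, assuming a hypothetical internal gateway; the screening patterns and audit-log destination are placeholders that a real deployment would replace with the firm’s own data-classification rules and logging infrastructure.

import datetime
import json
import re

# Illustrative sketch of controls a restricted-use policy might pair with an
# internal LLM gateway. Patterns and the log destination are placeholders.
SENSITIVE_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",                               # U.S. Social Security numbers
    r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b",  # email addresses
    r"(?i)\b(confidential|internal only)\b",                 # classification markers
]

def screen_prompt(prompt: str) -> None:
    # Reject prompts that appear to contain restricted information.
    for pattern in SENSITIVE_PATTERNS:
        if re.search(pattern, prompt):
            raise ValueError("Prompt appears to contain restricted information.")

def log_interaction(user: str, prompt: str, output: str, path: str = "llm_audit.log") -> None:
    # Keep a record of prompts and outputs, as the acceptable use policy may require.
    record = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "user": user,
        "prompt": prompt,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

A gateway built along these lines would call screen_prompt before forwarding a request to the LLM and log_interaction after a response is received, giving the firm both a preventive control and an audit trail.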

Employee Training and Communications 

Since the core risk posed by LLMs is rooted in employee behavior, it is critical to provide clear guidance on whether and when these tools may be used for business purposes. Because LLMs are a hot topic, employees have likely already begun experimenting with them at work or on their personal devices, so cybersecurity leaders shouldn’t wait to make these updates.

Firms should take the following steps: 

  • Don’t Wait to Communicate – Even if the firm hasn’t settled on a final AUP, it is critical that employees think carefully about what information they enter into an LLM. Senior leaders should immediately begin notifying employees of the risks these tools pose and remind them of basic standards, such as never entering client information into an LLM.
  • Update Training – Once the firm has settled on its acceptable use standards, the firm’s cybersecurity training should be updated to include guidance on how employees can follow the AUP. This will include ensuring employees are aware of the policy, providing them with clear examples of what is and is not appropriate, and ensuring that employees understand the risks associated with violating the AUP. 
  • Reinforce the Policy – As with all behavioral risks, employees will need to be reminded of the AUP’s provisions on LLMs. Cybersecurity leaders should integrate reminders about appropriate and inappropriate uses of LLMs into their employee communications calendar to help keep the risk front of mind. Adding an interstitial page that employees must click through to access LLMs on the web creates another opportunity for policy reminders.

Conclusion 

Tools like ChatGPT and Bard present firms with a unique opportunity to create efficiencies in their workforce. When used properly, they can automate a wide range of time-consuming and labor-intensive writing tasks, freeing up time for employees to focus on higher-value work. But like all new technologies, LLMs pose additional risks from misuse.  

The good news for cybersecurity leaders is that they likely have experience managing employee-centered risk, and there likely isn’t a need for radical new approaches to dealing with it. Cybersecurity leaders should immediately assess the risk that LLMs pose to their firm and develop policies, training and communication to guide employee behavior. Because of the rapidly evolving nature of LLMs, that guidance may need to be reviewed and updated more frequently, but the program’s approach to this risk should be straightforward.

 

Aaron Pinnick is the Manager of Thought Leadership for ACA’s Aponix Program. In this role, he creates research to ensure clients receive the latest and most critical information they need to manage risk and ESG responsibilities. Before joining ACA Group, he was a Managing Analyst for Ballast Research, providing government affairs leaders with insights into their reputation with policymakers; and a research director for Gartner’s Compliance and Ethics program, creating research and best-practice guidance for compliance leaders at some of the world’s largest companies. Pinnick holds a master’s degree in sociology from Texas A&M University and a bachelor’s in sociology from Minot State University.

A version of the above article was previously published on the ACA website.



