Understanding and Managing the Risks of Generative Artificial Intelligence

Lisa Ritter, CPA, CFE, CITP, Partner & Justin Ruiz-Velasco, Staff Auditor

The National Institute of Standards and Technology (NIST) has issued guidance, NIST AI 600-1, to assist in understanding and managing the risks of Generative Artificial Intelligence (GAI). GAI is a type of artificial intelligence (AI) that uses machine learning to create new content, such as text, images, music, audio, or video. GAI poses risks that may arise at any stage of its lifecycle, including design, development, deployment, operation, and decommissioning. Risks may stem from both the GAI models themselves and from human behavior.

Risks may include, but are not limited to:

  • Confabulation: GAI may produce false content that appears plausible and authoritative. Because responses are delivered with apparent confidence, users may believe and act upon false information.
  • Dangerous, Violent, or Hateful Content: GAI makes it easier to produce and access violent and dangerous content, and controlling public exposure to this content is difficult. This can encourage dangerous or violent behavior by those who receive or use the content.
  • Data Privacy: Personally identifiable information and sensitive data may be leaked or used without authorization. Outputs may also confabulate personal details, incorrectly inferring information that was never in the system’s training data.
  • Harmful Bias or Homogenization: The outputs of GAI may be influenced by historical, societal, and systemic biases, which could impact decision making.
  • Human-AI Configuration: Humans may come to view GAI as having human qualities, leading to over-reliance on, or automation bias toward, its outputs. Those outputs may appear to be of the same quality as other sources of information even when they contain generated falsehoods.
  • Information Integrity: GAI outputs may contain false information, and GAI systems may not be able to distinguish correct from incorrect information. GAI can also generate images and propaganda that push the public toward believing false claims.
  • Information Security: GAI creates another target for cyberattacks, which may lead to the unauthorized collection of sensitive data. Cyberattacks may include inputting prompts to generate outputs consisting of sensitive information or poisoning training data sets to produce manipulated outputs.
  • Intellectual Property: GAI may produce or replicate copyrighted content. Legal discussions around this risk are still taking place.
  • Obscene, Degrading, and/or Abusive Content: GAI can be used to create harmful content targeting real people. Deepfakes, highly realistic synthetic images of real individuals, can, for example, falsely depict those individuals partaking in illegal activities.
  • Value Chain and Component Integration: It may be impossible to determine where the data and components that feed a GAI system originated, which can create problems for anyone receiving and relying on its outputs.

To manage these risks, organizations should implement controls such as:

  • Documenting, understanding, and managing legal and regulatory requirements involving AI.
  • Integrating characteristics of trustworthy AI into organizational policies, processes, procedures, and practices.
  • Regularly monitoring the risk management process. This includes defining organizational roles and responsibilities surrounding reviews and their frequency. Employees who were not involved in the development of a GAI system should be involved in regular assessments, and measures for evaluating systems should be documented.
  • Creating mechanisms to inventory AI systems.
  • Putting policies in place to foster a critical thinking mindset when developing and using AI. Practices should be implemented to test AI and identify incidents. If failures or incidents occur, contingency policies should be in place for handling those incidents.
  • Creating policies, procedures, and practices for collecting and integrating internal and external feedback to evaluate any individual and societal impacts related to risks.
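One of the controls above, a mechanism to inventory AI systems, can be as simple as a structured register of each system, its owner, its lifecycle stage, and the risks identified against it. The following sketch is purely illustrative (the record fields and example data are assumptions, not part of NIST AI 600-1), but shows how such an inventory could support the regular, documented reviews the guidance calls for:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI-system inventory (field names are illustrative)."""
    name: str
    owner: str                    # organizational role responsible for the system
    lifecycle_stage: str          # e.g. "design", "deployment", "decommissioned"
    risks_identified: list = field(default_factory=list)
    last_review: str = ""         # date of the most recent independent assessment

# A minimal inventory with one example system (data is invented for illustration).
inventory = [
    AISystemRecord(
        name="document-summarizer",
        owner="Internal Audit",
        lifecycle_stage="deployment",
        risks_identified=["confabulation", "data privacy"],
        last_review="2024-06-30",
    )
]

# Example query: list deployed systems with a data-privacy risk on record,
# so they can be prioritized for the next assessment cycle.
needs_review = [
    r.name
    for r in inventory
    if r.lifecycle_stage == "deployment" and "data privacy" in r.risks_identified
]
print(needs_review)  # ['document-summarizer']
```

However the inventory is implemented, the key points from the guidance are that every system is recorded, ownership and review dates are explicit, and assessments involve employees who were not part of the system's development.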

Some risks are still unknown, and others may be difficult to evaluate or estimate. Creating procedures to respond to risks as they are identified will help limit their negative effects.

If you have any questions regarding the implementation of policies and procedures for managing the use and risks of AI, please contact a member of your audit team.
