Working Group #1: Risk Management for Generative AI - SteveKommrusch/CSU_AISafetySecurity GitHub Wiki

NIST overview

  • Develop a companion resource to the AI Risk Management Framework (AI RMF) for generative AI.

  • Develop minimum risk management guidance geared toward federal agencies.

  • Operationalize the AI RMF.

Materials

Proposals

NIST document open for public comment until June 2nd

  • NIST AI 600-1: AI RMF Generative AI Profile. This document helps organizations identify unique risks posed by generative AI and proposes risk management actions that best align with their goals and priorities. Developed over the past year, drawing on input from the more than 2,500 members of the NIST generative AI public working group, the guidance centers on a list of 12 risks and more than 400 actions that developers can take to manage them.

  • Ideas for comments:

  1. Existential risk to humanity should be added to the list of 12 risks. While some of the listed risks could eventually contribute to existential risk, omitting it as a separate, top-level risk increases the chance that major elements of existential risk will be missed. It is too important to be considered only indirectly.

  2. Either this document should be expanded to cover AI beyond Generative AI, or a separate document should be created to address risks associated with other AI technologies. While the EO refers specifically to Generative AI in several places, Section 4.4(ii)(A) of the EO, for instance, is broader, stating that the required report "assesses the ways in which AI can increase biosecurity risks, including risks from generative AI models trained on biological data, and makes recommendations on how to mitigate these risks;" To facilitate the study required by that section of the EO, recommendations cannot be limited to Generative AI but must consider AI more broadly.