
AISIC

US AI Safety Institute Consortium

Public documents

Workgroups

NIST hosted a workshop on November 17, 2023, to open a conversation about artificial intelligence (AI) safety. Based on that workshop, NIST compiled an initial list of working groups in which consortium members may participate.

NIST document open for public comment until June 2nd

  • NIST AI 100-5: A Plan for Global Engagement on AI Standards. This plan (NIST AI 100-5) is designed to drive the worldwide development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing. The publication is informed by priorities outlined in the NIST-developed Plan for Federal Engagement in AI Standards and Related Tools and is tied to the National Standards Strategy for Critical and Emerging Technology. The draft suggests that a broader range of multidisciplinary stakeholders from many countries participate in the standards development process.

  • Ideas for comments:

    1. Existential risk to humanity should be added to the list of 12 risks. While some of the risks already listed could eventually contribute to existential risk, failing to treat it as a separate, top-level risk increases the chance that major elements of existential risk would be missed. It is too important to consider only indirectly.
    2. Either this document should be expanded to cover AI beyond Generative AI, or a separate document should be created to address risks associated with other AI technologies. While the EO specifically refers to Generative AI in several places, Section 4.4(ii)(A) of the EO, for instance, is broader, stating: "assesses the ways in which AI can increase biosecurity risks, including risks from generative AI models trained on biological data, and makes recommendations on how to mitigate these risks;" To support the study required by that section of the EO, recommendations cannot be limited to Generative AI but must consider AI more broadly.