Working Group #4: Red Teaming - SteveKommrusch/CSU_AISafetySecurity GitHub Wiki

NIST Overview

Establish appropriate guidelines, including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.

Materials

Proposals