Working Group #4: Red Teaming
NIST Overview
Establish appropriate guidelines, including appropriate procedures and processes, that enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests in support of deploying safe, secure, and trustworthy systems.
Materials
- National Telecommunications and Information Administration (NTIA) AI Accountability Policy Report
- Ars Technica, "Hackers can read private AI assistant chats even though they're encrypted" (a token-length side channel in streamed LLM responses; see the sketch below): https://arstechnica.com/security/2024/03/hackers-can-read-private-ai-assistant-chats-even-though-theyre-encrypted/
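
The attack in that article exploits a simple signal: when an assistant streams its reply token by token over an encrypted channel without padding, the size of each ciphertext record reveals the length of the plaintext token, and the attacker can feed the resulting length sequence to a model trained to guess likely responses. Below is a minimal sketch of that first step only, with made-up packet sizes and a hypothetical fixed per-record overhead; it is an illustration of the leak, not a reproduction of the researchers' tooling.

```python
# Sketch of the token-length side channel: if streamed tokens are not padded,
# each encrypted record's size is (fixed overhead + token length), so observing
# packet sizes on the wire recovers the length of every token in the reply.

FRAME_OVERHEAD = 21  # hypothetical fixed bytes of record/framing overhead per packet


def token_lengths_from_packets(packet_sizes):
    """Recover the plaintext length of each streamed token from observed packet sizes."""
    return [size - FRAME_OVERHEAD for size in packet_sizes]


# Hypothetical observed sizes for a reply streamed as the tokens
# "Yes", ", ", "I", " can", "."
observed = [24, 23, 22, 25, 22]
print(token_lengths_from_packets(observed))  # -> [3, 2, 1, 4, 1]
```

The corresponding mitigation is equally simple to state: pad or batch streamed tokens so that record sizes no longer track individual token lengths, which is reportedly how affected providers responded after disclosure.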