# Working Group #2: Synthetic Content

*SteveKommrusch/CSU_AISafetySecurity GitHub Wiki*
## NIST Overview
Identify the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques, for:

- authenticating content and tracking its provenance;
- labeling synthetic content, such as by using watermarking;
- detecting synthetic content;
- preventing generative AI from producing child sexual abuse material or producing non-consensual intimate imagery of real individuals;
- testing software used for the above purposes; and
- auditing and maintaining synthetic content.
## Materials
## Proposals
### NIST document open for public review until June 2nd
- **NIST AI 100-4: Reducing Risks Posed by Synthetic Content.** This publication informs, and is complementary to, a separate report on understanding the provenance and detection of synthetic content that Section 4.5(a) of the AI Executive Order tasks NIST with providing to the White House. NIST AI 100-4 lays out methods for detecting, authenticating, and labeling synthetic content, including digital watermarking and metadata recording, in which information indicating the origin or history of content, such as an image or sound recording, is embedded in that content to assist in verifying its authenticity. Each section of the report begins with an overview of an approach, outlines current methods for using it, and concludes with areas where NIST experts recommend further research.
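To make the watermarking idea above concrete, here is a minimal sketch of least-significant-bit (LSB) embedding, one of the simplest ways to hide provenance information inside content. This is purely illustrative and is *not* a method endorsed by NIST AI 100-4: plain LSB marks do not survive compression or editing, which is exactly why the report calls for further research on robust schemes. The `embed`/`extract` helpers and the raw-pixel-list representation are assumptions made for this sketch.

```python
def embed(pixels: list[int], message: bytes) -> list[int]:
    """Hide a length-prefixed message in the low bit of each pixel byte."""
    payload = len(message).to_bytes(4, "big") + message
    # Flatten the payload into individual bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    out = pixels[:]  # leave the original cover untouched
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out


def extract(pixels: list[int]) -> bytes:
    """Recover the length-prefixed message from the pixel low bits."""
    def read_bytes(start: int, n: int) -> bytes:
        bits = [p & 1 for p in pixels[start * 8:(start + n) * 8]]
        return bytes(
            int("".join(map(str, bits[i:i + 8])), 2)
            for i in range(0, n * 8, 8)
        )
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(4, length)


# Example: label an image with a (hypothetical) provenance string.
cover = [200] * 4096                      # stand-in for grayscale pixel bytes
marked = embed(cover, b"src: camera-X")   # visually near-identical to cover
print(extract(marked))                    # recovers b"src: camera-X"
```

The embedded label is invisible to casual inspection (each pixel changes by at most 1), but any lossy re-encoding destroys it, which is why production provenance systems pair watermarks with signed metadata rather than relying on either alone.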
## Ideas for comments: