
In brief

The US Artificial Intelligence Safety Institute (AISI), housed within the National Institute of Standards and Technology (NIST), announced on 20 November 2024 the release of its first synthetic content guidance report, NIST AI 100-4, Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency ("NIST AI 100-4"). "Synthetic content" is defined in President Biden's Executive Order on Safe, Secure, and Trustworthy AI ("EO 14110") as "information, such as images, videos, audio clips, and text, that has been significantly altered or generated by algorithms, including by AI."


NIST AI 100-4 examines existing standards, tools, methods and practices, as well as the potential development of further science-backed standards and techniques, to help manage and reduce risks related to synthetic content by: 1) recording and revealing the provenance of content, including its source and the history of changes made to it; 2) providing tools to label and identify AI-generated content; and 3) mitigating the production and dissemination of AI-generated child sexual abuse material ("AIG-CSAM") and non-consensual intimate imagery ("AIG-NCII") of real individuals. The report reflects public feedback and consultations with diverse stakeholders who responded to NIST's Request for Information of 21 December 2023. Although compliance is voluntary, NIST AI 100-4 is expected to inform industry best practices for managing synthetic content risks.

Click here to read the full alert.

Author

Adam Aft helps global companies navigate the complex issues regarding intellectual property, data, and technology in M&A and technology transactions. He leads the Firm's North America Technology Transactions group and co-leads the group globally. Adam also served as a law clerk to the Honorable Leslie H. Southwick of the US Court of Appeals for the Fifth Circuit and the Honorable Theresa L. Springmann of the US District Court for the Northern District of Indiana.

Author

Keo McKenzie is a partner in Baker McKenzie's Intellectual Property and Technology Practice Group (IPTech), based in the Firm’s DC office.
Keo has a neuroscience degree from the University of Cambridge and is dual-qualified as a lawyer in the US and UK, bringing an international perspective to her practice. She is Co-Chair of the California Lawyers Association AI Steering Committee.

Author

Cristina G. Messerschmidt is an associate in the Privacy and Security practice group based in Chicago, advising global organizations on privacy and data security compliance requirements, as well as data security incident response.

Author

Mercedes graduated from Maryland Carey Law, where, along with her J.D., she received the International Association of Privacy Professionals Westin Scholar Award. During law school, Mercedes interned at the White House Office of Science and Technology Policy. She was invited by the White House Office of the National Cyber Director to speak at its inaugural "Women in Cyber" global event.