The US Artificial Intelligence Safety Institute (AISI), housed within the National Institute of Standards and Technology (NIST), announced on 20 November 2024 the release of its first synthetic content guidance report, NIST AI 100-4, Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency. “Synthetic content” is defined in President Biden’s Executive Order on Safe, Secure, and Trustworthy AI as “information, such as images, videos, audio clips, and text, that has been significantly altered or generated by algorithms, including by AI.”
Singapore and the European Union (EU) have formalized their collaboration on Artificial Intelligence (AI) safety with the establishment of a new Administrative Arrangement (AA). This arrangement aims to enhance cooperation in promoting technological innovation and the development and responsible use of safe, trustworthy, and human-centric AI. The AA was signed by Mr Joseph Leong, Permanent Secretary of the Ministry of Digital Development and Information of Singapore, and Mr Roberto Viola, Director-General of the Directorate-General for Communications Networks, Content and Technology of the European Commission.
On 12 November 2024, the US Department of Justice Antitrust Division updated its Evaluation of Corporate Compliance Programs in Criminal Antitrust Investigations (ECCP). The additions include guidance such as using “managers at all levels” to “set the tone from the middle” by “demonstrating to employees the importance of compliance,” establishing policies that account for the use of “ephemeral messaging or non-company methods of communication,” applying “data analytics tools in . . . compliance and monitoring,” and involving compliance personnel in “the deployment of AI and other technologies to assess the risks they may pose.” Additionally, the ECCP now addresses its application to civil investigations.
EU Regulation 2024/1689 on Artificial Intelligence aims to introduce strict rules for the design, implementation, and placing on the market of Artificial Intelligence systems, applying both to providers established in the European Union and to providers established outside it.
As AI capabilities and applications continue to advance, the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework has emerged as a vital tool for organizations to responsibly develop and use AI systems.
On 23 September 2024, the US Department of Justice Criminal Division issued an updated version of its Evaluation of Corporate Compliance Programs document. DOJ uses the Evaluation Guidance to assess the adequacy of compliance programs in place at companies subject to its criminal enforcement activities. DOJ has updated the Evaluation Guidance periodically since its release in 2017 to align with evolving DOJ policies, priorities, and compliance best practices. This latest iteration reflects current DOJ investigation and enforcement priorities and the increasing relevance of artificial intelligence and other emerging technologies to companies, their compliance programs, and DOJ’s enforcement efforts. DOJ also updated the Evaluation Guidance to encourage companies to: 1) incorporate a lessons-learned approach; 2) focus on compliance due diligence and integration in acquisitions; and 3) properly incentivize internal reporting of wrongdoing.
On 17 September 2024, within the framework of the National Program for Transparency and Protection of Personal Data in the Use of Artificial Intelligence, the Agency for Access to Public Information published the preliminary version of the “Guide for Public and Private Entities on Transparency and Personal Data Protection for Responsible Artificial Intelligence”.
On 1 September 2024, the Saudi Data and AI Authority (SDAIA) published the Regulation on Personal Data Transfer Outside the Kingdom (“Data Transfer Regulations”), which amended the previous Transfer Regulations under the Personal Data Protection Law issued by Royal Decree No. (M/19) dated 9/2/1443 AH and amended by Royal Decree No. (M/148) dated 5/9/1444 AH (“PDPL”). SDAIA also published additional information on Standard Contractual Clauses and Binding Common Rules, two of the appropriate safeguards for transferring data outside of the Kingdom, as well as a number of PDPL-related rules and guidelines. A summary of our initial takeaways can be found below.
The Cyber Security Agency of Singapore (CSA) has recently released Guidelines on Securing AI Systems (“Guidelines”) and a Companion Guide on Securing AI Systems (“Companion Guide”).
The Guidelines advocate for a “secure by design” and “secure by default” approach, addressing both existing cybersecurity threats and emerging risks, such as adversarial machine learning. The aim is to provide system owners with principles for raising awareness and implementing security controls throughout the AI lifecycle.
The Companion Guide is an open-collaboration resource, and while not mandatory, it offers guidance on useful measures and controls informed by industry best practices, academic insights and resources such as the MITRE ATLAS database and OWASP Top 10 for Machine Learning and Generative AI.