The Dubai Centre for Artificial Intelligence has launched a new accreditation known as the Dubai AI Seal (“Seal”), a mark of approval for companies’ AI solutions. The Seal is aimed at companies licensed in the Emirate of Dubai that provide AI-related products and services. The launch of the scheme aligns with the Dubai Universal Blueprint for Artificial Intelligence, a government policy that serves as a roadmap for accelerating AI adoption in Dubai.
The EU AI Act introduces a comprehensive legal framework for companies dealing with AI systems in the EU. From 2 February 2025, companies subject to the regulation must take steps to promote AI literacy and to ensure that no prohibited AI practices are used. Non-compliance could lead to substantial fines.
On 20 January 2025, the first day of his second term, President Trump revoked Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“Biden Order”), signed by President Biden in October 2023. In doing so, President Trump fulfilled a campaign pledge to roll back the Biden Order, which the 2024 Republican platform described as a “dangerous” measure. Then, on 23 January 2025, President Trump issued his own Executive Order on AI, entitled Removing Barriers to American Leadership in Artificial Intelligence.
On 17 December 2024, the Bipartisan House Task Force on Artificial Intelligence released a report on “guiding principles, forward-looking recommendations, and policy proposals to ensure America continues to lead the world in responsible AI innovation.” The report focuses on 15 key areas, including intellectual property, data privacy, healthcare, and federal preemption of state law. These principles, recommendations, and policy proposals are meant to serve as a tool rather than the final word on AI; it is therefore anticipated that legislators will draw on the report when crafting future AI policy.
The US Artificial Intelligence Safety Institute (AISI), housed within the National Institute of Standards and Technology (NIST), announced on 20 November 2024 the release of its first synthetic content guidance report, NIST AI 100-4, Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency. “Synthetic content” is defined in President Biden’s Executive Order on Safe, Secure, and Trustworthy AI as “information, such as images, videos, audio clips, and text, that has been significantly altered or generated by algorithms, including by AI.”
Singapore and the European Union (EU) have formalized their collaboration on Artificial Intelligence (AI) safety with the establishment of a new Administrative Arrangement (AA). This arrangement aims to enhance cooperation in promoting technological innovation and the development and responsible use of safe, trustworthy, and human-centric AI. The AA was signed by Mr Joseph Leong, Permanent Secretary of the Ministry of Digital Development and Information of Singapore, and Mr Roberto Viola, Director-General of the Directorate-General for Communications Networks, Content and Technology of the European Commission.
On 12 November 2024, the US Department of Justice Antitrust Division updated its Evaluation of Corporate Compliance Programs in Criminal Antitrust Investigations (ECCP). The additions include guidance such as using “managers at all levels” to “set the tone from the middle” by “demonstrating to employees the importance of compliance,” establishing policies that account for the use of “ephemeral messaging or non-company methods of communication,” applying “data analytics tools in . . . compliance and monitoring,” and involving compliance personnel in “the deployment of AI and other technologies to assess the risks they may pose.” Additionally, the ECCP now addresses its application to civil investigations.
EU Regulation 2024/1689 on Artificial Intelligence aims to introduce strict rules for the design, implementation, and placing on the market of artificial intelligence systems, applying both to providers established in the European Union and to providers established outside it.
As AI capabilities and applications continue to advance, the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework has emerged as a vital tool for organizations to responsibly develop and use AI systems.
On 23 September 2024, the US Department of Justice Criminal Division issued an updated version of its Evaluation of Corporate Compliance Programs document (“Evaluation Guidance”). DOJ uses the Evaluation Guidance to assess the adequacy of compliance programs in place at companies subject to its criminal enforcement activities. DOJ has updated the Evaluation Guidance periodically since its release in 2017 to align with evolving DOJ policies, priorities, and compliance best practices. This latest iteration reflects current DOJ investigation and enforcement priorities and the increasing relevance of artificial intelligence and other emerging technologies to companies, their compliance programs, and DOJ’s enforcement efforts. DOJ also updated the Evaluation Guidance to encourage companies to: 1) incorporate a lessons-learned approach; 2) focus on compliance due diligence and integration in acquisitions; and 3) properly incentivize internal reporting of wrongdoing.