On 30 October 2023, President Biden issued a 63-page Executive Order defining the trajectory of artificial intelligence adoption, governance, and usage within the United States government. The Executive Order outlines eight guiding principles and priorities for US federal agencies to adhere to as they adopt, govern, and use AI. While safety and security are predictably high on the list, so too is a desire to make America a leader in the AI industry, including in AI development by the federal government. While executive orders are not statutes or regulations and do not require confirmation by Congress, they are binding and can have the force of law, usually based on existing statutory powers.
Instructions to federal agencies and impact on non-governmental entities
The Order directs a majority of federal agencies to address AI’s specific implications for their sectors, setting varied timelines ranging from 30 to 365 days for each applicable agency to implement specific requirements set forth in the Order.
The actions required of the federal agencies will impact non-government entities in a number of ways, because agencies will seek to impose contractual obligations implementing provisions of the Order or will invoke statutory powers under the Defense Production Act for national defense and the protection of critical infrastructure. These impacts include:
- Introducing reporting and other obligations for technology providers (both foundation model providers and IaaS providers).
- Adding requirements for entities that work with the federal government in a contracting capacity.
- Influencing overall AI policy development.
Notable within the Order are new reporting requirements for models trained using substantial computing power (i.e., models whose training compute exceeds thresholds set in the Order, or as those thresholds are re-defined by the Secretary of Commerce). Also, building on existing cyber-related sanctions measures, the Commerce Department is to propose extensive reporting requirements for IaaS providers whenever a foreign person transacts with them to train large AI models that could be used in malicious cyber-enabled activity (proposed rules are due within 90 days), as well as identity verification requirements for foreign persons obtaining IaaS accounts through foreign resellers (proposed rules are due within 180 days). Another key provision, with a quick 90-day turnaround, directs the Secretary of State and the Secretary of Homeland Security to streamline visa processes for skilled AI professionals, students, and researchers. The Order also opens the possibility of additional intellectual property protections related to AI: it directs the US Patent and Trademark Office to provide AI guidance and the US Copyright Office to recommend additional protection for works produced using AI.
The Order’s eight key directives
- New Standards for AI Safety and Security:
  - Developers of powerful AI systems will now be required to share their safety test results and other pertinent information with the US government. Companies developing foundation models that could pose a risk to national security, the economy, or public health and safety must notify the federal government when training the model and must share the results of certain prescribed safety tests.
  - Within the next 270 days, the National Institute of Standards and Technology will set rigorous standards for extensive testing to ensure safety before public release, and the Department of Homeland Security will apply those standards to critical infrastructure sectors and establish an AI Safety and Security Board.
  - Agencies that fund life-science projects will establish standards for biological synthesis screening.
  - The Department of Commerce will develop guidance for content authentication to protect Americans from AI-enabled fraud and deception, particularly in relation to official government communications.
  - The National Security Council and the White House Chief of Staff will work together to develop a National Security Memorandum to ensure that the US military and intelligence community use AI safely, ethically, and effectively in their missions.
  - The Departments of Energy and Homeland Security will address AI risks to critical infrastructure as well as chemical, biological, radiological, nuclear, and cybersecurity risks. The Order requires secure and responsible development of the AI hardware and software that will be used by critical infrastructure. On the heels of the Critical Infrastructure Risk Management Cybersecurity Improvement Act and major supply chain vulnerabilities, safeguarding the nation’s critical infrastructure is a top priority. Also, similar to Biden’s 2021 Executive Order 14028, CISA has been tasked with evaluating how AI can disrupt critical infrastructure by creating vulnerabilities and increasing the risk of cyberattacks and business interruption.
- Protecting Americans’ Privacy: The Order calls on Congress to pass bipartisan data privacy legislation with a focus on protecting children, strengthening privacy-preserving research and technologies, evaluating how agencies collect and use commercially available information, strengthening privacy guidance for federal agencies, and developing guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques.
- Advancing Equity and Civil Rights: The Order recognizes that irresponsible uses of AI can lead to and deepen discrimination, bias, and other abuses in criminal justice, healthcare, employment, and housing and directs actions to mitigate such risks.
- Standing Up for Consumers, Patients, and Students: The Order recognizes that while AI can bring significant benefits to consumers (for example, by making products better, cheaper, and more widely available), it can also risk injuring, misleading, or otherwise harming Americans. The Order directs the Department of Health and Human Services to establish a safety program to receive reports of harmful or unsafe healthcare practices involving AI.
- Supporting American Workers: The Order directs the development of principles and best practices to mitigate the harms and maximize the benefits of AI for workers, including AI-driven job displacement and disempowerment, upskilling for workers, labor standards, employee privacy, and wage and hour considerations.
- Promoting American Innovation and Competition: The Order introduces measures to catalyze AI research across the US by piloting a National AI Research Resource and expanding grants for AI research in specific areas, including healthcare and climate change. The Order further aims to attract and retain foreign national AI talent in the US by modernizing or expanding existing immigration programs. The Order encourages the Federal Trade Commission to exercise its authority to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harm that may be enabled by the use of AI.
- Advancing American Leadership Abroad: The Order seeks to accelerate the development and implementation of AI standards with international partners and promote the safe, responsible, and rights-affirming development and deployment of AI abroad to solve global challenges.
- Ensuring Responsible and Effective Government Use of AI: Federal agencies will receive guidance on AI use, procurement, and deployment. AI systems can speed up the procurement process and, if utilized correctly, could conceivably reduce the number of bid protests. The Order aims to address the possibility of bias in this process.
Baker McKenzie’s recognized leaders in AI are supporting multinational companies with strategic guidance for responsible and compliant AI development and deployment. Our industry experts with backgrounds in data privacy, intellectual property, cybersecurity, trade compliance, and employment can meet you at any stage of your AI journey to unpack the latest trends in legislative and regulatory proposals and the corresponding legal risks and considerations for your organization. Please contact a member of our team for more.
We thank Melissa Allchin, Maurice Bellan, Caroline Burnett, Cynthia Cole, Alex Crowley, Lothar Determann, Susan Eandi, Paul Evans, Jacqueline Gerson, Brian Hengesbaugh, Teisha Johnson, Mackenzie Martin, Cristina Messerschmidt, Teresa Michaud, Bradford Newman, Justine Phillips, Sara Pitt, Alison Stafford Powell, Elizabeth Roper, Robin Samuel, Jonathan Tam and Cyrus R. Vance Jr. for their contribution to this alert.