In brief

On 21 March 2024, the United Nations General Assembly adopted Resolution A/78/L.49 on “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development”. This is the first resolution ever adopted by the United Nations (UN) on the matter of artificial intelligence (AI) and therefore a milestone in its governance. Although the resolution has no immediate binding effect, its content will further guide the regulatory development of AI technologies at the national and international levels in the years to come, marking another step in the “race to AI regulation”.

This Client Alert provides crucial insights into the content and importance of this resolution and highlights the implications for businesses in the ongoing regulatory development of AI.


Contents

  1. The first UN resolution on Artificial Intelligence
  2. Centrality of Human Rights Compliance
  3. Implications for companies and the private sector
  4. Current implications and outlook

The first UN resolution on Artificial Intelligence

In recent months, we have seen a plethora of developments in the international governance of AI. After long and intense debates, the European Parliament adopted the AI Act this March. Moreover, the Council of Europe is currently drafting a Convention on Artificial Intelligence. The United Nations Security Council also discussed AI in an open meeting in July 2023, highlighting its peace and security implications, and has since returned to the topic in its recurring discussions of cybersecurity threats (most recently in early April 2024). Likewise, the United Nations Open-Ended Working Group on Security of and in the Use of Information and Communications Technologies 2021-2025, which has already adopted significant guidelines on the application of international law in cyberspace, has recently touched upon AI-related matters. Furthermore, UNESCO has issued its Recommendation on the Ethics of AI. Although the discussions in the Security Council, UNESCO and the Open-Ended Working Group are still at an early stage, these developments demonstrate States' growing awareness of the relevance of, and need for, international regulation of AI.

21 March 2024 marks a milestone in this process. The United Nations General Assembly adopted resolution A/78/L.49 unanimously. The draft was co-sponsored by 125 States, with the United States taking a lead role. As we reported in an earlier Client Alert, the Biden administration is currently pushing for stronger governance of AI at both the national and international levels. The General Assembly's resolution shows significant convergence with the Biden administration's Executive Order on AI from last October.

The resolution prominently acknowledges the potential of AI systems to accelerate global development and to help achieve the Sustainable Development Goals of the 2030 Agenda. At the same time, it recognizes the risks associated with the improper or malicious use of AI systems and their detrimental impact on human rights. The resolution particularly highlights the risks associated with biased data, which can reinforce inequalities and discrimination. It therefore urges international cooperation and a global consensus on the future development and implementation of safe AI systems, and encourages further cooperation, research and technology sharing among stakeholders. To this end, the resolution envisions regulatory developments at both the national and the international level. This shows that the community of states is currently preparing for a significant push in AI governance at different levels and is eager to streamline these developments.

This endeavor, however, sits uneasily with the competing interests of different groups of states. Whereas the United States has traditionally embraced the opportunities associated with AI, European states are known for their stricter data privacy regulations, which create friction particularly with large language models that draw on vast amounts of data for training purposes. At the same time, African states are concerned with questions of inclusion in the development of, and access to, AI technologies. The resolution addresses all of these concerns, highlighting the potential of AI, the necessity of regulating it in adherence to data protection standards, and the importance of inclusive development and access. Going forward, however, these competing interests could stand in the way of a comprehensive framework at the universal level and lead to significant divergences between competing national regulations.

Centrality of Human Rights Compliance

The resolution recurrently refers to international law and international human rights as the central pillar for the regulation of AI. Given the limited consensus on the specifics of AI regulation, the community of states falls back on the general principles of international law, and particularly human rights law, for orientation. It firmly affirms that any regulation of AI shall recognize and give full effect to human rights, acknowledging that AI may impact, among other international human rights, the right to life, the right to privacy and the right to freedom of expression.

In this vein, operative paragraph 13 defines the main goal of the United Nations system in relation to the governance of AI as the development of a global framework consistent with international law. AI developers must therefore integrate an international law assessment into the development of their AI technologies from the outset, particularly in the defense sector. A central problem for developers, however, is the current indeterminate nature of international law on AI. In this emerging field, the precise obligations and restrictions remain vague. The resolution therefore calls upon states to promote the development of regulatory frameworks at the national level and to further specify the international legal obligations. Operative paragraph 6 – the longest and most detailed clause of the resolution – deals exclusively with these governance questions.

International law, and particularly international human rights law, is therefore emerging as a central pillar in regulating AI. The General Assembly’s resolution highlights that the UN will remain active in this field. Therefore, we can expect further developments, including through other UN organs and organizations, such as the Security Council and UNESCO.

Implications for companies and the private sector

The UNGA resolution also explicitly addresses companies and their human rights obligations, recognizing that private industry is the main driver of the development of AI going forward. In its operative paragraph 9, the resolution stipulates:

“Encourages the private sector to adhere to applicable international and domestic laws and act in line with the United Nations Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework; acknowledges the importance of more inclusive and equitable access to the benefits of safe, secure and trustworthy artificial intelligence systems; and recognizes the need for increased collaboration, including between and within the public and private sectors and civil society, academia and research institutions and technical communities, to provide and promote fair, open, inclusive and non-discriminatory business environment, economic and commercial activities, competitive ecosystems and marketplaces across the life cycle of safe, secure and trustworthy artificial intelligence; as well as encourages Member States to develop policies and regulations to promote competition in safe, secure and trustworthy artificial intelligence systems and related technologies, including by supporting and enabling new opportunities for small businesses and entrepreneurs and technical talent, and enabling fair competition in the artificial intelligence marketplace, through critical investment, especially for developing countries;”

Given the private sector's central role in both the development of AI and the protection of human rights, the General Assembly is keen to include the private sector in its endeavor to create a regulatory framework for the safe development and use of AI. It explicitly references the Guiding Principles on Business and Human Rights, also known as the Ruggie Principles, and encourages Member States to develop national regulatory frameworks addressing access to AI and competition. Mentioning the private sector in such a direct manner is a rather novel approach in UN resolutions and underscores the central role of the private sector in the implementation of human rights. In Europe, the recent approval of the Corporate Sustainability Due Diligence Directive reinforces this trend. We have already reported on these developments, which are significantly interlinked with the ongoing regulatory efforts concerning AI.

Current implications and outlook

Businesses developing AI, implementing AI solutions or using AI technology must understand the regulatory hurdles ahead. The General Assembly resolution highlights that international law, and particularly human rights law, is applicable to AI. The resolution details why this is significant not only for states but also for the private sector. In particular, companies in the defense sector and companies using large language models need to ensure that their products take account of the relevant human rights law and adapt them accordingly. Moreover, they must navigate the complex governance framework spanning regulations at the international, EU and national levels. Although the AI-specific requirements under international law and their relationship to national laws have – to date – not been clearly defined, further specification is set to come.

Baker McKenzie's International Trade Practice has unique insight into both the regulatory developments concerning AI and the implications for businesses under the Business and Human Rights Framework as well as the business and human rights and ESG legislation of the EU and its Member States. We are poised to provide tailored guidance to ensure compliance with emerging AI regulations.

Author

Anahita Thoms heads Baker McKenzie's International Trade Practice in Germany and is a member of our EMEA Steering Committee for Compliance & Investigations. Anahita is Global Lead Sustainability Partner for our Industrials, Manufacturing and Transportation Industry Group. She serves as an Advisory Board Member in for-profit and non-profit organizations, such as Atlantik-Brücke, and is an elected National Committee Member at UNICEF Germany. She has served for three consecutive terms as the ABA Co-chair of the Export Controls and Economic Sanctions Committee and as the ABA Vice-Chair of the International Human Rights Committee. Anahita has also been an Advisory Board Member (Beirätin) of the Sustainable Finance Advisory Council of the German Government.

Anahita has won various accolades for her work, including 100 Most Influential Women in German Business (manager magazin), Top Lawyer (Wirtschaftswoche), Winner of the Strive Awards in the category Sustainability, Pioneer in the area of sustainability (Juve), International Trade Lawyer of the Year (Germany) at the 2020 ILO Client Choice Awards, Young Global Leader of the World Economic Forum, Capital 40 under 40, and International Trade Lawyer of the Year (New York) at the 2016 ILO Client Choice Awards. In 2023, Handelsblatt recognized her as one of Germany's Dealmakers and one of the "most sought-after advisors of the country" in the field of sustainability.

Author

Dr. Alexander Ehrle is a member of the Firm's International Trade Practice in Baker McKenzie's Berlin office. Alexander studied law at the Universities of Heidelberg, Montpellier (France), Mainz, Munich and New York (NYU), specializing in Public International and European Law. He worked as an advisor and member of a developing country's delegation at the United Nations before qualifying for the German bar. He spent his clerkship with the Higher Regional Court in Berlin, the German Ministry of Foreign Affairs in Berlin and Tokyo, as well as an international law firm in Frankfurt and Milan. He wrote his doctoral dissertation on the structural changes of public international law and their conceptualization in academic discourse, basing his research on the governance of areas beyond national jurisdiction. Alexander is admitted to practice in Germany and New York.

Alexander co-chairs the Business & Human Rights Committee of the American Bar Association’s International Law Section and has been recognized as one of 40 under 40 lawyers worldwide for foreign investment control by the Global Competition Review.

Author

Kimberley Fischer is a member of the International Trade Practice in Baker McKenzie's Berlin office. She joined the Firm in 2022. Kimberley studied law at the Ruprecht Karls University of Heidelberg and the Universidad de Deusto (Spain), with a focus on public international law and human rights. Prior to joining the Firm, Kimberley completed her legal traineeship at the Higher Regional Court of Frankfurt am Main, the German Federal Foreign Office in Berlin and at an international law firm in Brussels and Frankfurt am Main. She also gained significant experience in public (international) law as a research assistant at the University of Heidelberg and at a reputable law firm.

Author

Caroline Walka is a member of the International Trade Practice in Baker McKenzie's Berlin office. She joined the Firm in 2024. Caroline studied law at Freie Universität Berlin, the Universidad de Granada (Spain) and the University of Edinburgh, with a focus on public international law and human rights.

Before joining Baker McKenzie as an associate, Caroline completed her legal clerkship at the Higher Regional Court of Berlin, with the Berlin Senate Administration, at the Baker McKenzie office in Berlin and at an NGO in Windhoek, Namibia. She gained important experience in (international) public law during her LLM at the University of Edinburgh, where one of her focuses was business and human rights.
