Late in the evening of Friday 8 December, Brussels witnessed a historic moment for AI regulation in Europe. After three days of intensive final negotiations, the EU Parliament, Council and Commission finally announced provisional agreement on the EU AI Act, the bloc’s landmark legislation regulating the development and use of AI in Europe. It is one of the world’s first comprehensive attempts to regulate the use of AI.

The EU AI Act awaits formal adoption by both the Parliament and the Council before it becomes EU law.

The legislation has been some time in the making, starting with the EU Commission’s Proposal for a Regulation on AI in 2021. Following the explosion of interest in large language models in 2023, the regulation has had to evolve rapidly to keep pace with technological advancements. Recent delays in passing the legislation have centred on debates over whether and how the Act should regulate AI foundation models (the advanced generative AI models trained on large data sets and able to learn and perform a wide variety of tasks), as well as over the use of AI in law enforcement.

The Act takes a prescriptive, risk-based approach to the regulation of AI products. AI is defined in line with the OECD’s approach, to distinguish it from simpler software systems. Obligations are imposed on technology producers and deployers based on the risk category into which their technology falls. Technologies that pose “unacceptable” levels of danger are prohibited, while “high-risk” technologies face heavy restrictions. The list of prohibited technologies includes biometric identification systems, subject to narrowly defined law enforcement exceptions, as well as systems that use purposely manipulative techniques or social scoring, such as predictive policing systems and emotion recognition systems. Untargeted scraping of facial images from the internet and CCTV is banned, and AI used to create manipulated images, such as ‘deep fakes’, will need to make clear that the images are AI-generated.

Foundation models have been brought within the scope of the Act, which takes a similarly tiered and risk-based approach to the obligations imposed on these models. Whilst details of the legislation are still to emerge, the EU has agreed a two-tiered approach for these models, with “transparency requirements for all general-purpose AI models (such as ChatGPT)” and “stronger requirements for powerful models with systemic impacts”. An AI Office within the European Commission will be set up to oversee the regulation of the most advanced AI models.

In terms of obligations under the Act, those looking to provide and deploy AI face specific transparency and safety requirements. To limit threats to areas such as health, safety, human rights and democracy, providers of high-risk AI must build in safeguards at stages such as design and testing. This entails assessing and mitigating risks, as well as registering models in an EU database. Certain users of high-risk AI systems that are public entities must also register in the EU database.

Penalties for engaging in prohibited practices are up to EUR 35 million or 7% of a company’s annual global revenue, whilst breaches of the Act’s other obligations attract penalties of up to EUR 15 million or 3% of turnover, and the supply of incorrect information attracts penalties of up to EUR 7.5 million or 1.5% of turnover. There is provision for more proportionate caps on administrative fines for SMEs and start-ups that breach the provisions of the AI Act. Exactly how the Act will be enforced is still to be made clear.

The provisional agreement makes clear that the EU AI Act does not apply outside the scope of EU law and does not affect member states’ competencies in national security, although it still catches providers of AI systems placed on the EU market irrespective of whether they are established in the EU. Nor does it apply to AI systems used solely for research and innovation, or to people using AI for non-professional reasons. The Act will apply two years after it comes into force, with some exceptions for specific provisions.

Some technology groups and European companies have raised concerns about the legislation, fearing that it will stifle innovation in Europe, particularly with respect to foundation models. Technology groups argued that the uses of AI, rather than the technology itself, should be regulated (which more closely reflects the approach currently being taken in many other parts of the world). However, EU representatives believe that their final negotiations have struck a better balance between enabling innovation and promoting responsible technology.

If you haven’t already conducted a risk assessment to identify the impact of the EU AI Act on your business, now is the time to get started: assess your AI systems to determine whether they will be subject to the EU AI Act once it enters into force and becomes applicable, and into which risk category they will fall.
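By way of a purely illustrative sketch (and emphatically not legal advice), a first-pass inventory-and-triage exercise might look something like the following, where the AISystem fields, tier labels and trigger keywords are hypothetical placeholders chosen for demonstration rather than the Act's own definitions or categories:

```python
# Illustrative only: a simple inventory-and-triage helper for flagging AI systems
# for closer legal review. Tier labels and trigger keywords are assumptions, not
# the EU AI Act's legal categories; actual classification requires legal analysis.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class RiskTier(Enum):
    PROHIBITED = "potentially prohibited"
    HIGH_RISK = "potentially high-risk"
    TRANSPARENCY = "transparency obligations likely"
    MINIMAL = "minimal risk (monitor)"


@dataclass
class AISystem:
    name: str
    purpose: str                      # short description of intended use
    placed_on_eu_market: bool         # the Act catches systems placed on the EU market
    use_cases: list[str] = field(default_factory=list)


# Hypothetical keyword triggers used only to prioritise legal review.
PROHIBITED_TRIGGERS = {"social scoring", "untargeted facial scraping", "manipulative techniques"}
HIGH_RISK_TRIGGERS = {"biometric identification", "recruitment screening", "credit scoring"}


def triage(system: AISystem) -> Optional[RiskTier]:
    """Return a preliminary tier to prioritise review; not a compliance determination."""
    if not system.placed_on_eu_market:
        return None  # may fall outside territorial scope; confirm with legal counsel
    uses = {u.lower() for u in system.use_cases}
    if uses & PROHIBITED_TRIGGERS:
        return RiskTier.PROHIBITED
    if uses & HIGH_RISK_TRIGGERS:
        return RiskTier.HIGH_RISK
    if "generative" in system.purpose.lower():
        return RiskTier.TRANSPARENCY  # e.g. labelling AI-generated content
    return RiskTier.MINIMAL


if __name__ == "__main__":
    inventory = [
        AISystem("CV screener", "recruitment screening tool", True, ["recruitment screening"]),
        AISystem("Marketing copy bot", "generative text assistant", True, ["content drafting"]),
    ]
    for s in inventory:
        tier = triage(s)
        print(f"{s.name}: {tier.value if tier else 'review territorial scope'}")
```

The point of such a sketch is simply to surface which systems merit priority legal review; actual classification will turn on the final text of the Act and accompanying guidance.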

Of course, compliance with the EU AI Act will be only one part of your Responsible AI governance programme. The EU AI Act may be heralded by the EU as the first comprehensive AI law, but there are many AI-related developments being introduced by lawmakers across the world and, of course, regulators are already scrutinizing organizations’ compliance with existing laws when it comes to AI (including with respect to data privacy, consumer protection, and discrimination).

Accordingly, we recommend that you:

  • audit your development and use of AI within the organization and your supply chain;
  • decide what your AI principles and redlines should be (these are likely to include ethical considerations that go beyond the law, including beyond the parameters set by the EU AI Act);
  • assess and augment existing risks and controls for AI where required (including to meet applicable EU AI Act requirements), both at an enterprise and product lifecycle level;
  • identify relevant AI risk owners and internal governance team(s);
  • revisit your existing vendor due diligence processes related to both (i) AI procurement and (ii) the procurement of third party services, products and deliverables which may be created using AI (in particular, generative AI systems);
  • assess your existing contract templates and identify any updates required to mitigate AI risk; and
  • continue to monitor AI and AI-adjacent laws, guidance and standards around the world to ensure that the company’s AI governance framework is updated in response to further global developments as they arise.
Author

Ben is a partner in Baker McKenzie's IP, Data and Technology team based in London. He is a much sought-after industry specialist, with a particular emphasis on digital media and intermediary platforms. Ranked in the major directories, clients say of Ben that he "has a tremendous amount of experience advising tech and media companies, he is a star of the industry" (Chambers UK, Media & Entertainment, 2022, Band 1); "is a star who knows copyright inside out" (Chambers UK, Intellectual Property, 2022, Band 2); "is incredibly attuned to our business goals and IP risk, and has built a strong team which provides clear, pragmatic advice incorporating legal analysis as well as industry insights" (Chambers UK, Intellectual Property, 2022, Band 2); "is a real expert and has his finger on the pulse on legal developments. He understands the business and how we approach risk. He gave very practical advice and was unflappable in a particularly adversarial matter" (Chambers Global, Intellectual Property, 2022, Band 2); and that he "focuses on complex IP advisory and litigation work, particularly in digital music distribution and artificial intelligence mandates" (Legal 500, TMT, 2022, Tier 2). A Rhodes Scholar, Ben has twice been named one of The Lawyer's "Hot 100" lawyers (in 2019 and 2012), along with being named E-Commerce Lawyer of the Year (UK) in the ILO Client Choice Awards 2011 and Assistant Solicitor of the Year in the British Legal Awards in 2009. He was named a "Change-Maker" in the Financial Times European Legal Innovation Awards 2021 and is ranked by Managing Intellectual Property 2022 as a Copyright Star and a Transactions Star. Ben is also Baker McKenzie's Chief Innovation Officer, in charge of the Firm's Reinvent innovation arm.

Author

Vin is well regarded and considered to be a ‘long-standing recognised leader in data privacy and regulatory matters stemming pre-GDPR to present day, arguably making him the go-to person’. Vin leads our London Data Privacy practice and is also a member of our Global Privacy & Security Leadership team, bringing more than 22 years of experience in this specialist area and advising clients from various data-rich sectors including retail, financial services/fin-tech, life sciences, healthcare, proptech and technology platforms.

Author

Elisabeth Dehareng joined Baker McKenzie's Brussels office in 2003 and has been a partner since 2014 in the Information Technology & Communications and Intellectual Property Practice Groups. She was admitted to the Brussels bar in 2003. She is a member of the EMEA IP Tech Steering Committee. Elisabeth is regularly mentioned in publications such as Legal 500 and Chambers.

Author

Dr. Lukas Feiler, SSCP, CIPP/E, has more than eight years of experience in IP/IT and is a partner and head of the IP and IT team at Baker McKenzie Rechtsanwälte LLP & Co KG in Vienna. He is a lecturer in data protection law at the University of Vienna Law School and in IT compliance at the University of Applied Sciences Wiener Neustadt. Prior to joining the Firm, Lukas was an associate at the Austrian headquarters of an international law firm, vice director at the European Center for E-Commerce and Internet Law, and an intern at the European Commission, DG Information Society & Media. Having worked at IT companies in Vienna, Leeds, and New York, he has experience as a system and network administrator. In April 2014, Lukas was named Cyber Security Lawyer of the Year for Austria in the 2014 Finance Monthly Law Awards. In 2011, he received the Jus-Top-League Award from Die Presse and the Academy for Law, Taxes & Business as one of the five most promising up-and-coming lawyers.

Author

Francesca Gaudino is the Head of Baker McKenzie’s Information Technology & Communications Group in Milan. She focuses on data protection and security, advising particularly on legal issues that arise in the use of cutting-edge technology. She has been recognized in Chambers Europe’s individual lawyer rankings from 2011 to 2014. Ms. Gaudino is a regular contributor to international publications such as World Data Protection Review, DataGuidance, and others. She routinely lectures on data privacy and security in post-graduate courses at SDA Bocconi – the School of Management of Bocconi University in Milan – and at Almaweb – University of Bologna. She regularly speaks at national and international conferences and workshops on the same topics.

Author

Sue is a partner in Baker McKenzie's IP, Data and Technology team based in London. Sue specialises in major technology deals including cloud, outsourcing, digital transformation and development and licensing. She also advises on a range of legal and regulatory issues relating to the development and roll-out of new technologies including AI, blockchain/DLT, metaverse and crypto-assets. Her IP and commercial experience includes drafting, advising on and negotiating a wide range of intellectual property and commercial agreements, including IP licences and assignment agreements, and long-term supply and distribution agreements. She also assists clients in preparing terms of business and related documentation for new business processes and offerings and coordinating global roll-outs. Sue is also a key member of our transactional practice, providing strategic support on the commercial, technology and intellectual property aspects of M&A transactions and joint ventures, including advising on transitional services agreements and other key ancillary IP and commercial agreements. Sue is ranked as a leading lawyer in Chambers for Information Technology & Outsourcing and Fintech Legal, and in Legal500 for Commercial Contracts, IT & Telecoms, TMT and Fintech. Clients say of Sue: "Sue is outstanding", "She is a really good and very committed lawyer", "Excellent…. Very capable, wouldn’t hesitate to use on IT/TMT/Outsourcing matters." Sue was named in the Standout 35 of the Women in FinTech Powerlist 2020.

Author

José María Méndez is head of the Intellectual Property, Tech and Media department at Baker McKenzie Madrid and head of the EMEA IPTech practice. Mr. Méndez is recognized as a leader in his field by the most prestigious legal directories. According to Chambers Europe, José María Méndez “was born for copyright law” and “his style is oriented to being pragmatic and offers clear and easy to implement solutions.” José María is hailed as an “expert in media and production” and considered “the king in audiovisual matters.” Clients describe José María as “very specialized and has unsurpassed knowledge of the audio-visual industry.”

Author

Eva-Maria Strobel is a partner in Baker McKenzie's Zurich office. She is a member of the Firm's global IPTech Practice Group, chairs the EMEA IPTech Practice Group and heads the Swiss IPTech team. Eva-Maria is admitted to the bars in Switzerland and Germany, and worked in the Firm's Frankfurt office prior to relocating to Zurich. Legal 500, Chambers, WIPR, Managing IP and WTR 1000 praise Eva-Maria as one of the leading trademark lawyers in Switzerland.

Author

Florian Tannen is a partner in the Munich office of Baker McKenzie with more than 10 years of experience. He advises on all areas of contentious and non-contentious information technology law, including internet, computer/software and, in particular, data privacy law. Before joining the Firm, Florian worked for two major law firms and a large US-based technology company.