In brief

Recent regulatory developments underscore the growing scrutiny of professional uses of generative AI. On 13 January 2026, the Spanish Data Protection Authority (“Spanish DPA”) issued a formal notice warning of the legal and privacy risks involved in uploading, transforming or generating images of individuals through AI tools. At the same time, the European Commission has published the first draft of its voluntary Code of Practice on Transparency of AI-Generated Content (“Code”). While adherence to the Code is optional, it is intended to support providers in meeting the mandatory transparency obligations set out in Article 50 of the AI Act, which will apply from August 2026 to providers and deployers of AI systems. These developments reinforce the need for robust safeguards, internal controls, and transparent labelling when deploying generative AI.

Key takeaways

What companies need to consider now:

  • Treat any upload or use of someone’s image in an AI tool as handling personal data and put basic safeguards in place.
  • Before creating or sharing AI‑generated content — even internally — check whether it could trigger risks beyond data protection, such as reputational issues, copyright misuse or misuse of someone’s likeness.
  • Prepare for the AI Act’s transparency rules arriving in August 2026, including clear labelling of any content changed or created by AI.

In more detail

Spanish DPA guidance on AI and images

The notice issued by the Spanish DPA on 13 January 2026 provides its clearest position to date on the risks associated with using third-party images in generative AI tools. It confirms that uploading, transforming, or generating visual content based on a person’s image constitutes personal data processing, even where the output is not intended to be shared or appears innocuous. This represents an explicit acknowledgement that simply feeding an image into an AI system already triggers General Data Protection Regulation (GDPR) obligations.

The Spanish DPA identifies two main categories of risks:

  1. Visible risks which arise when the generated image or video is shared. These include:
    • Using images outside of their original context without a valid legal basis;
    • The ease of forwarding or distributing content;
    • The practical impossibility of removing replicated copies;
    • The creation of intimate or compromising deepfakes with potentially severe consequences; and
    • The risk of falsely attributing behaviours or actions to individuals.
  2. Less visible risks which arise even when the content is not shared. These include:
    • Loss of control when external providers process the images;
    • The potential existence of unremovable copies;
    • Additional or undisclosed processing by providers;
    • The generation of metadata enabling re-identification; and
    • The practical difficulty for data subjects to exercise their rights.

Overall, the notice establishes a clear and more stringent framework: the use of images in AI systems must be treated as processing of personal data and must be accompanied by appropriate safeguards.

EU draft Code of Practice on transparency of AI-generated content

In parallel, the European Commission has issued the first draft of its voluntary Code of Practice on Transparency of AI-Generated Content, intended to help organizations anticipate compliance with the transparency obligations under Article 50 of the AI Act. The final version is expected in June 2026, with mandatory transparency requirements applying to providers and deployers of AI systems from August 2026.

The Code introduces a two-tier classification system: (i) fully AI-generated content and (ii) AI-assisted content, where AI substantially influences the final output. Each category must be accompanied by clear labelling using a common icon. Until the official EU icon is adopted, an interim icon composed of a two-letter acronym referring to artificial intelligence (such as “AI”, “IA” or “KI”, reflecting the languages of the Member States) may be used to support consistent disclosure.

The Code also sets out sector- and format-specific rules, especially for deepfakes. For instance, real-time deepfake videos must display a continuous on-screen indicator and an initial notice, while non-real-time videos may use individual or combined options such as fixed icons, opening notices or credits-based disclosures, as detailed in the Code.

Deployers choosing to adhere to the Code must also implement robust internal mechanisms, including documentation of labelling practices, staff training on when and how to apply disclosures, continuous monitoring procedures, and a channel for reporting mislabelling. Any reported inaccuracies must be corrected promptly.

This structure is intended to support a consistent and transparent approach to AI-generated content before the AI Act’s obligations become enforceable.

The Spanish DPA’s notice and the Code highlight that the implications of generative AI extend far beyond data protection. The manipulation or use of third-party images, voices or other content may also impact rights such as honor, privacy, and one’s own image. In addition, generative AI can give rise to significant questions around copyright, design rights, trademarks and other intellectual property rights linked to the source materials or the generated outputs.

A holistic, cross-cutting legal assessment is therefore essential before implementing or using any generative AI tool. Organizations should ensure adequate employee training, adopt clear internal safeguards, and mitigate risks arising both from the use of third-party content and from engagement with external AI providers. This broader legal lens is critical to ensuring responsible deployment of generative AI technologies.

For tailored guidance on these regulatory developments and to assess your organization’s exposure and compliance needs, please contact our IPTech team.

Marta Expósito, Associate, has contributed to this legal update.

Author

José María Méndez is head of the Intellectual Property, Tech and Media department at Baker McKenzie Madrid and head of the EMEA IPTech practice.
Mr. Méndez is recognized as a leader in his field by the most prestigious legal directories. According to Chambers Europe, José María Méndez “was born for copyright law” and “his style is oriented to being pragmatic and offers clear and easy to implement solutions.” He is hailed as an “expert in media and production” and considered “the king in audiovisual matters.” Clients describe José María as “very specialized,” with “unsurpassed knowledge of the audio-visual industry.”

Author

Silvia, an experienced IP lawyer, currently heads the Litigation and Contractual IP team at Baker McKenzie Barcelona.
Leading her team, she supports high-profile clients across various industries, ensuring the protection and enforcement of their IP rights. Silvia represents clients in pivotal IP cases before the Spanish courts (in civil and criminal cases) as well as before the General Court and the Court of Justice of the European Union. This has provided her with an in-depth understanding of European IP law and its application in cross-border disputes.
Her work includes developing enforcement strategies and negotiating with diverse stakeholders on her clients' behalf, particularly in lookalike matters. Beyond enforcement, Silvia offers strategic advice on brand IP management, partnerships, licensing, and complex IP issues related to emerging technologies like e-commerce, social media and AI.
Actively involved in the IP sector, Silvia attends industry events and contributes to thought leadership. She finds great satisfaction in mentoring and nurturing the talent within her team.
Silvia also manages multijurisdictional projects for international clients, addressing IP and advertising risks across various territories. A member of the AIPPI association, she participates annually in its study questions. Furthermore, she serves as a professor at Instituto Superior de Derecho y Economía (ISDE), teaching industrial property law, and frequently gives talks and lectures at the ICAB and on courses focusing on unfair competition and trade secrets. She regularly publishes technical articles in specialized journals.

Author

Patricia Perez is a Team Leader in Baker McKenzie's Madrid office.

Author

Pablo is a technology senior associate in Baker McKenzie's IPTech team, based in Madrid. He joined the Firm in 2018 and has over nine years of experience providing legal advice to national and international clients on a multidisciplinary basis. Pablo's practice covers a broad range of regulatory, legal policy, product counselling, litigation and technology contracting matters.
Pablo is ranked as an "Associate to watch" by Chambers Europe for TMT: Information Technology - Spain (2023).