Monday, May 11, 2026

Consultation on the draft guidelines on transparency obligations under the EU AI Act

 


 

On 8 May 2026, the European Commission published its draft Guidelines on the implementation of the transparency obligations for certain AI systems under Article 50 of the AI Act.

The obligations under Article 50 of the AI Act (transparency obligations for providers and deployers of certain AI systems) address risks of deception and manipulation, fostering the integrity of the information ecosystem. These transparency obligations pertain to the marking and detection of AI-generated content and the labelling of deep fakes and certain AI-generated publications. (here)

The Draft Guidelines may be accessed HERE. The accompanying release explained: "The Commission prepared these guidelines in parallel to the Code of Practice on marking and labelling of AI-generated content. The guidelines clarify the scope of the legal obligations and address aspects not covered by the code." 


The guidelines on transparency will clarify the scope and help deployers and providers of interactive and generative AI systems comply with their respective transparency obligations. The rules become applicable on 2 August 2026. Providers of AI systems will have to inform users when they are interacting with an AI system and implement machine-readable marks in generative AI systems to enable the detection of content as AI-generated or manipulated. (here)
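The concrete marking techniques will be specified in the forthcoming Code of Practice (approaches such as provenance metadata and watermarking are under discussion), but the basic idea of a "machine-readable mark" can be sketched with a toy example: embedding and reading back a provenance tag in a PNG file's tEXt metadata chunk. The chunk keyword `ai_generated` and the helper names below are purely illustrative assumptions, not anything the draft Guidelines or the Act prescribe; note also that a metadata-only mark like this is trivially stripped, which is precisely why Article 50(2) requires solutions to be effective, reliable, robust and interoperable.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def _chunk(ctype: bytes, data: bytes) -> bytes:
    """Assemble a PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def minimal_png() -> bytes:
    """Build a valid 1x1 grayscale PNG to stand in for generated content."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)  # 1x1, 8-bit gray
    idat = zlib.compress(b"\x00\x00")  # filter byte + one pixel
    return PNG_SIG + _chunk(b"IHDR", ihdr) + _chunk(b"IDAT", idat) + _chunk(b"IEND", b"")

def add_mark(png: bytes, keyword: str, text: str) -> bytes:
    """Insert a tEXt chunk carrying a machine-readable mark after IHDR."""
    assert png.startswith(PNG_SIG)
    data = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    ihdr_end = 8 + 4 + 4 + 13 + 4  # signature + IHDR length/type/data/CRC
    return png[:ihdr_end] + _chunk(b"tEXt", data) + png[ihdr_end:]

def read_marks(png: bytes) -> dict:
    """Scan all chunks and return any tEXt key/value pairs (detection side)."""
    marks, pos = {}, 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        if png[pos + 4:pos + 8] == b"tEXt":
            key, _, val = png[pos + 8:pos + 8 + length].partition(b"\x00")
            marks[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # length + type + data + CRC
    return marks

content = add_mark(minimal_png(), "ai_generated", "true; generator=example-model")
print(read_marks(content))  # detector recovers the provenance tag
```

The sketch shows both halves of the obligation as the draft describes it: the provider embeds a mark at generation time, and any party can detect it without cooperation from the provider, because the format is standardised and machine-readable.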

The Commission seeks feedback on draft guidelines on transparency obligations for AI systems. 

Stakeholders can take part in this targeted consultation until 3 June 2026. * * * To ensure a fair and transparent process, only responses received through the online questionnaire will be considered and reflected in the final summary report. This survey targets companies, ranging from startups and SMEs to large companies, and other organisations that develop and deploy AI systems that interact with individuals or generate synthetic content, including deep fakes. Stakeholders, including providers and developers of AI systems, businesses and public authorities as well as academia, research institutions and citizens are invited to share their views.

The well-managed consultation is structured around a "Consultation Form," which follows below.

 

Stakeholder consultation on the draft Guidelines on the transparency requirements for certain AI systems under Article 50 AI Act


Fields marked with * are mandatory.


 

Disclaimer: This document of the AI Office is prepared for the purpose of consultation and does not prejudge the final decision that the Commission may take on the final text of the guidelines on transparency requirements under Article 50 AI Act. The responses to this consultation will provide input for the guidelines on the transparency requirements for certain AI systems under Article 50 AI Act.

 
This consultation is targeted to stakeholders of different categories, including, but not limited to, providers and deployers of interactive and generative AI models and systems, providers and deployers of biometric categorisation and emotion recognition systems, private and public sector organisations using such interactive and generative AI systems, as well as academia and research institutions, civil society organisations, governments, supervisory authorities and the general public. 
 
The Artificial Intelligence Act (‘the AI Act’), which entered into force on 1 August 2024, creates a single market and harmonised rules for trustworthy and human-centric Artificial Intelligence (AI) in the EU. It aims to promote innovation and the uptake of AI, while ensuring a high level of protection of health, safety and fundamental rights, including democracy and the rule of law.  
 
Among various obligations, trustworthiness of AI systems is ensured through a set of transparency obligations in Article 50 AI Act. These transparency obligations are applicable as of 2 August 2026.
 
They aim to enable natural persons to recognise interaction with and content generated or manipulated by AI systems, thus reducing the risks of impersonation, deception or anthropomorphisation and fostering trust and integrity in the information ecosystem.
 
Pursuant to Article 96(1)(d) AI Act, the Commission shall issue guidelines on the practical implementation of transparency obligations laid down in Article 50 AI Act.
 

The purpose of the present targeted stakeholder consultation is to collect input from a wide range of stakeholders on the draft Commission guidelines on the application of the transparency obligations in Article 50 AI Act.

The drafting of these Guidelines was informed by input from a variety of stakeholders collected during a broad consultation organised by the Commission and input from the Member States in the AI Board. The draft guidelines are now published for additional stakeholder feedback before they are formally adopted by the Commission.

 
The targeted consultation is available in English only and will be open for 4 weeks starting on 8 May until 3 June 2026.
 
All contributions to this consultation may be made publicly available. Therefore, please do not share any personal or confidential information in your contribution. It is your responsibility to avoid personal data and any reference in your contribution itself that would reveal your identity.

The guidelines will be complemented with a Code of Practice that is under development to support the practical and effective implementation of the requirements in Article 50(2) and (4) on marking and labelling of AI-generated content.

 

 
This consultation accompanies the draft Guidelines on the transparency obligations under Article 50, which aim to support a consistent interpretation and effective implementation of the transparency requirements applicable to certain AI systems and AI-generated or manipulated content across the Union.

This consultation seeks to collect targeted feedback on the draft Guidelines prepared by the Commission based on broad stakeholder input. In particular, the Commission seeks feedback on whether the explanations, concepts and examples provided sufficiently support stakeholders in understanding and complying with the transparency obligations laid down in Article 50 AI Act, including when and how such obligations apply in practice. The draft Guidelines address the transparency obligations under Article 50 AI Act and are structured as follows:

  • Section I presents an introduction outlining the background, objectives and legal context of the present Guidelines. It recalls the rationale of transparency within the AI Act, including its links to fundamental rights, user awareness and trust, and situates Article 50 within the broader risk-based framework of the AI Act.

  • Section II provides an overview of the transparency obligations and related horizontal topics. It includes an explanation of the different obligations under Article 50, the actors responsible for their compliance, exclusions from scope (such as purely personal non-professional activities, research and development), and the interplay with other provisions of the AI Act, including prohibited practices and high-risk AI systems and general-purpose AI models and systems.

  • Section III addresses transparency obligations for AI systems intended to interact directly with natural persons under Article 50(1) AI Act. It sets out the main concepts and scope of application, details the information obligation, and explains the relevant exceptions (including cases of obvious interaction and certain law enforcement uses), as well as the interplay with other Union legal acts.

  • Section IV provides guidance on the marking and detection of AI-generated or manipulated content under Article 50(2) AI Act. It clarifies the scope of application, including different modalities of synthetic content, and explains the transparency obligation (marking and detection) and the technical requirements (effectiveness, reliability, robustness and interoperability). It also addresses relevant exceptions and the interaction with other Union legal frameworks.

  • Section V covers transparency obligations for emotion recognition systems and biometric categorisation systems under Article 50(3) AI Act. It outlines the main concepts, scope and applicable obligations, as well as situations falling outside the scope and the interaction with other Union legal acts.

  • Section VI addresses the labelling of deep fakes and AI-generated or manipulated text published to inform the public on matters of public interest under Article 50(4) AI Act. It provides clarification on the notions of e.g. deep fakes and matters of public interest, the applicable disclosure obligations (including for evidently creative, artistic, satirical, fictional or analogous works or programmes), and the relevant exceptions. It also explains the relationship with other Union legal acts.

  • Section VII sets out horizontal requirements applicable to the information provided under Article 50(5) AI Act, including general principles ensuring that transparency information is clear, meaningful and accessible to users.

All participants are invited to provide feedback on the particular sections of the draft Guidelines they are interested in. As not all sections may be relevant for all stakeholders, respondents may reply only to the section(s) of their choice. Respondents are encouraged to provide explanations and practical cases as part of their responses to support the practical usefulness of the Guidelines. We kindly ask respondents to specify the exact section and paragraph of the draft Guidelines to which their comments refer.
 

The feedback collected through this consultation will support the Commission in refining and finalising the Guidelines, with the objective of ensuring that they are clear, comprehensive and practically useful for all stakeholders involved in the development, deployment and supervision of AI systems within the scope of Article 50 of the AI Act.

 

 

For information on how the Commission processes your personal data please read our Privacy Statement
