On 8 May 2026, the European Commission published its draft guidelines on the implementation of the transparency obligations for certain AI systems under Article 50 of the AI Act.
The obligations under Article 50 of the AI Act (transparency obligations for providers and deployers of generative AI systems) address risks of deception and manipulation and foster the integrity of the information ecosystem. These transparency obligations pertain to the marking and detection of AI-generated content and the labelling of deep fakes and certain AI-generated publications. (here)
The draft guidelines may be accessed HERE. The accompanying release explained: "The Commission prepared these guidelines in parallel to the Code of Practice on marking and labelling of AI-generated content. The guidelines clarify the scope of the legal obligations and addressing aspects not covered by the code."
The guidelines on transparency will clarify the scope of the obligations and help deployers and providers of interactive and generative AI systems comply with their respective transparency obligations. The rules will become applicable on 2 August 2026. Providers of AI systems will have to inform users when they are interacting with an AI system and implement machine-readable marks in generative AI systems to enable the detection of synthetic content as AI-generated or manipulated. (here)
The Commission seeks feedback on draft guidelines on transparency obligations for AI systems.
Stakeholders can take part in this targeted consultation until 3 June 2026. * * * To ensure a fair and transparent process, only responses received through the online questionnaire will be considered and reflected in the final summary report. This survey targets companies, ranging from startups and SMEs to large companies, and other organisations that develop and deploy AI systems that interact with individuals or generate synthetic content, including deep fakes. Stakeholders, including providers and developers of AI systems, businesses and public authorities as well as academia, research institutions and citizens are invited to share their views.
The consultation is structured around a "Consultation Form," which follows below.