EU takes next step towards transparent AI: new Code of Practice on AI-generated content

After taking firm steps against online disinformation, the European Commission is now targeting a new frontier in the fight for digital trust: AI-generated content. The Commission has officially launched work on a Code of Practice on the marking and labelling of AI-generated content, a key step toward ensuring transparency and trust in the use of artificial intelligence across Europe. This initiative builds on Europe’s broader effort to tackle digital misinformation (an important topic we explored a few months back) and helps ensure citizens can trust what they see, read, and hear online.

Developed under the AI Act, the initiative addresses the growing challenge of distinguishing authentic, human-created material from AI-generated or manipulated content such as deepfakes, synthetic text, and other forms of generative media.

Building trust through transparency

Under Article 50 of the AI Act, providers and deployers of generative AI systems will be required to clearly mark AI-generated or manipulated content and to label deepfakes. The goal is to reduce risks related to misinformation, impersonation, and fraud, and to strengthen public trust in the information ecosystem. The Code of Practice will be developed through an inclusive, seven-month process led by independent experts and coordinated by the European AI Office. It will involve working groups focusing on:

  • Providers ensuring AI outputs (audio, image, video, text) are machine-readable and detectable as artificially generated.
  • Deployers ensuring that AI-generated or manipulated content, particularly when informing the public, is clearly disclosed.

The process includes open stakeholder participation through public consultation and dedicated workshops, running until May 2026.
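The Code has not yet defined what a machine-readable marking will look like, but existing provenance conventions give a rough idea. As a purely illustrative sketch (not taken from the Code or the AI Act), an AI output could be accompanied by a small manifest that declares its origin using the IPTC Digital Source Type value for AI-generated media, plus a hash tying the manifest to the content:

```python
import hashlib
import json

def make_marking_manifest(content_bytes: bytes, generator: str = "example-model-v1") -> dict:
    """Build a hypothetical sidecar manifest marking content as AI-generated.

    Loosely inspired by provenance schemes such as C2PA; the field names
    and structure here are illustrative assumptions, not a standard.
    """
    return {
        # Real IPTC Digital Source Type term for fully AI-generated media
        "digitalSourceType": "trainedAlgorithmicMedia",
        # Hypothetical identifier for the generating system
        "generator": generator,
        # Hash binds the declaration to one specific piece of content
        "sha256": hashlib.sha256(content_bytes).hexdigest(),
    }

manifest = make_marking_manifest(b"synthetic image bytes")
print(json.dumps(manifest, indent=2))
```

A detection tool could then verify the hash against the content it accompanies; robust schemes embed such declarations directly in the file or in a cryptographically signed manifest rather than a loose sidecar.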

Voluntary tool for compliance

Once completed, the Code of Practice will serve as a voluntary instrument to help AI providers and deployers meet their transparency obligations under the AI Act. It will support the development of technical standards for marking and detection tools, ensuring interoperability and reliability across AI systems in the EU. These transparency obligations will become legally applicable from August 2026, complementing other AI Act provisions such as those for high-risk and general-purpose AI systems.

A milestone in Europe’s AI governance

The AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence, built around a risk-based approach to ensure safety, fundamental rights, and human-centric AI. The Act is complemented by voluntary initiatives, including the AI Pact and the General-Purpose AI Code of Practice, which help stakeholders prepare for compliance and promote responsible AI innovation.


Read more here 

📩 Stay informed with the latest updates!

Subscribe to our newsletter here.


🔗 Follow us on social media!

Connect with us on LinkedIn and X.

