New EU Code of Practice To Guide Transparency in AI-Generated Media
The European Commission has launched a seven-month effort to create a code of practice setting the standard for how artificial intelligence (AI)-generated content is identified and disclosed. The project underscores the growing difficulty of distinguishing AI-generated content from human-produced material online.
The kick-off plenary meeting on Wednesday marked the start of the drafting process, which centers on two working groups: one for providers and one for deployers.
Providers are companies or organizations that supply AI systems. This working group will ensure that AI system outputs (audio, image, video and text) are marked in a machine-readable format as “artificially generated or manipulated.”
Deployers are entities that use or publish the output of those AI systems. This working group will focus on deepfakes (images, audio or video that resemble existing people, places or objects) and on AI-generated or manipulated publications that inform the public on matters of public interest.
The drafting of the code involves eligible stakeholders who replied to a public call launched by the AI Office. This includes providers of generative AI systems, developers of marking and detection techniques, associations of deployers of generative AI systems, civil society organizations, academic experts and other specialized organizations.
The publication of the first draft is expected in December, with the final code of practice expected in May or June 2026.