Meta Unveils New Policies to Tackle AI-Generated and Altered Media

In a recent announcement, Meta, the parent company of Facebook, revealed significant updates to its policies on digitally created and altered media.
Starting in May, Meta will apply “Made with AI” labels to videos, images, and audio generated with artificial intelligence across its platforms.
Additionally, Meta plans to introduce distinct and more visible labels for digitally altered media that pose a high risk of misleading the public, irrespective of AI involvement.
This marks a shift from removing manipulated content to keeping it accessible while providing context on how it was created.
While Meta previously disclosed plans to detect images created with other companies’ AI tools, it did not specify a start date.
These labeling adjustments will primarily affect content on Facebook, Instagram, and Threads, with different regulations for services like WhatsApp and Quest VR.
Meta intends to apply the enhanced “high-risk” labels immediately. The changes arrive amid concerns over the influence of generative AI technologies on political campaigns, particularly ahead of the U.S. presidential election.
Criticism from Meta’s Oversight Board of the company’s existing rules on manipulated media prompted these revisions; the board recommended extending the policy to non-AI content, audio-only materials, and videos depicting false actions.