Meta to Label All AI-Generated Images Across Its Platforms

Uchechukwu Nkenta | Categories: AI
Meta illustration; image credit: Getty Images

Meta, the parent company of Facebook, Instagram, and Threads, has announced plans to implement technology capable of detecting and labeling images produced by artificial intelligence (AI) tools from other companies.

This move aims to address concerns surrounding the spread of AI-generated content across its platforms.

While Meta already identifies AI-generated images created by its systems, it seeks to extend this capability to images from external sources to combat the proliferation of AI-generated fake content.

Sir Nick Clegg, Meta's president of global affairs, said the company intends to expand its labeling of AI-generated content soon.

However, some experts, like Prof. Soheil Feizi from the University of Maryland’s Reliable AI Lab, caution that such detection systems may be susceptible to evasion.

Feizi suggests that while detectors can be trained to flag images produced by specific AI models, they can be evaded through simple, lightweight processing of the images, and they also risk a high rate of false positives, which limits how broadly they can be applied.

Meta has acknowledged the limitations of its detection tool, particularly for AI-generated audio and video content. For such media, the company plans instead to ask users to label their own AI-generated posts, and it may impose penalties on those who fail to do so.

Despite efforts to address AI-generated content, Meta faces criticism for its current policies regarding manipulated media.

The Meta Oversight Board, an independent body funded by Meta, has described the company’s stance as “incoherent” and lacking persuasive justification.

The criticism stemmed from a ruling on a manipulated video of US President Joe Biden, with the board arguing that the policy needs updating to address evolving challenges in synthetic and hybrid content creation.

Sir Nick Clegg acknowledged the shortcomings of Meta's current policies, particularly in handling synthetic content, and agreed with the Oversight Board's call for updates. Since January, Meta has required political advertisements to disclose when they include digitally altered images or videos.
