Meta’s methods for identifying deepfakes are “not robust or comprehensive enough” to handle how fast misinformation spreads during armed conflicts such as the Israel-Iran war. That’s according to the Meta Oversight Board — a semi-independent body that guides the company’s content moderation practices — which is now asking Meta to overhaul how it displays and labels AI-generated content on Facebook, Instagram, and Threads.
The call for action stems from an investigation into a fake AI video of alleged damage to buildings in Israel that was shared on Meta’s social platforms last year, but the board says its recommendations are especially relevant now given the “massive military escalation” across the Middle East this week. In its announcement, the board says access to accurate, reliable information is critical to protecting people amid the growing risk of AI tools spreading misinformation.
“The Board’s findings highlight that Meta’s current system for appropriately labeling AI content relies heavily on self-disclosure and increased review of AI use and does not meet the realities of today’s online environment,” the Meta Oversight Board said. “The case also highlights the challenges of cross-platform dissemination of such content, given that the content appears to have originated on TikTok before appearing on Facebook, Instagram and X.”
The board’s recommendations include urging Meta to strengthen its existing misinformation rules to address misleading deepfakes and to establish a new, separate community standard for AI-generated content. Meta is also being asked to develop better AI detection tools, be transparent about penalties for AI policy violations, and expand its AI content labeling efforts. The latter include ensuring that “high-risk AI” labels are applied to synthetic images and videos more consistently, and improving adoption of the C2PA standard (known as Content Credentials) so that information on AI-generated content is “clearly visible and accessible to users.”