The Oversight Board is once again urging Meta to change its rules around AI-generated content. This time, the board says Meta should create a separate rule for AI content that is independent from its misinformation policy, invest in more reliable detection tools and make better use of digital watermarks, among other changes.
The group’s recommendations stem from an AI-generated video shared last year that claimed to show damaged buildings in the Israeli city of Haifa during the Israel-Iran conflict in 2025. The clip, which was viewed more than 700,000 times, was posted by an account that claimed to be a news outlet but was actually run by someone in the Philippines.
After the video was reported, Meta declined to remove it or add a “high risk” AI label, which would have clearly indicated that the content was created or manipulated with AI. The board overturned Meta’s decision not to add the “high risk” label, saying the case highlights several areas where the company’s existing AI rules are falling short.
“Meta should do more to address the proliferation of misleading AI-generated content on its platforms, including inauthentic or abusive networks of accounts and pages, particularly on matters of public interest, so that users can distinguish between genuine and fake content,” the board wrote in its decision. Meta eventually disabled three accounts associated with the page after the board flagged “clear signs of fraud”.
One of the board’s top recommendations is that Meta create a dedicated rule for AI-generated content that is separate from its misinformation policy. According to the board, the rule should include specific information about when and how users must label AI content, as well as information on how Meta punishes rule breakers.
The board was also highly critical of how Meta uses its current “AI Info” labels, noting that the way they are applied is “neither robust nor comprehensive enough to cope with the scale and velocity of AI-generated content,” particularly in times of conflict or crisis. “A system overly reliant on self-disclosure of AI use and advanced review (which occurs infrequently) to appropriately label this output cannot meet the challenges posed in the current environment,” the board wrote.
Meta also needs to invest in more sophisticated detection technology that can reliably label AI-generated media, including audio and video, the board said. The group said it is “concerned” about reports that the company is “inconsistently applying” digital watermarks to content created with its own AI tools.
Meta did not immediately respond to a request for comment on the Oversight Board’s decision. The company has 60 days to formally respond to the board’s recommendations.
The decision is not the first time that the board has criticized Meta for its handling of AI content. The group has on two other occasions described the company’s manipulated media rules as “inconsistent”, and criticized it for relying on third parties, including fact-checking organizations, to flag problematic content. Meta’s reliance on fact checkers and other “trusted partners” was raised again in this case, with the board saying it had heard from these groups that Meta “has been less responsive to outreach and concerns, partly due to significant deficiencies in the capabilities of Meta’s internal teams.” Meta, the board writes, “must be able to make such assessments of damage, rather than relying solely on partners during armed conflict.”
While the Oversight Board’s decision relates to a post from last year, the issue of AI-generated content during armed conflicts has taken on new urgency amid the latest conflict in the Middle East. Since the US and Israeli strikes on Iran began earlier this month, viral AI-generated misinformation has surged across social media. The board, which has previously indicated it would like to work with generative AI companies, included a suggestion that appears to apply not only to Meta.
It reads, “The industry needs consistency to help users distinguish between misleading AI-generated content, and platforms must address abusive accounts and pages sharing such outputs.”