AI Image Falsely Suggests Bondi Beach Terror Attack Was Staged

Social media sites have been flooded with false information about a terrorist attack that killed 15 people and injured dozens of others during Hanukkah celebrations at Australia’s Bondi Beach on Sunday. One AI-generated image in particular has become extremely popular among those spreading disinformation on X.

It is a photo-realistic image made to look as if one of the shooting victims was applying fake blood before the attack. But nothing about it is real. To make matters worse, the tools people commonly use to verify the authenticity of images are telling them the photo is legitimate.

Israeli lawyer Arsen Ostrovsky, who had moved to Australia just weeks earlier, gave an interview to Australia’s Nine News at the scene of the attack on Sunday. Ostrovsky’s head was wrapped in bandages and his face was covered in blood, a shocking image that resembled an earlier selfie he had taken.

But those real images were hijacked and apparently run through AI to create a fake that went viral over the next two days. The AI image shows a woman applying fake blood to a smiling man resembling Ostrovsky. The image is deliberately staged to look like a behind-the-scenes photo from a film or TV shoot.

Proof This Image Is AI

How do we know it’s fake? For starters, there are probably a dozen red flags that anyone can identify on their own without the help of any additional technology.

The most obvious AI clues are the figures in the background of the photo, which show deformed cars melting into one another and workers with misshapen hands. Many versions of the image circulating online crop out the background, presumably to better hide the AI artefacts.

The text on Ostrovsky’s T-shirt is also distorted in the way AI-generated text often is. The blood stains on the fake shirt do not match those seen in the Nine News interview. And the makeup artist appears to have an extra finger in the AI image, one that balloons unnaturally when you zoom in closely.

The fake AI image (left) and a zoomed-in version of the image annotated by Gizmodo with red circles to indicate big red flags that it is AI-generated.

AI image checkers are notoriously unreliable, but there is a more reliable method that can help.

AI Watermark

Google’s AI image creation tools embed an invisible watermark. The watermarking initiative, called SynthID, launched a few years ago, but Google initially released no tool that let the public check for the watermark themselves. That changed last month, when Gemini gained the ability to recognize it.

Now, anyone can upload an image to Gemini and ask if it carries the SynthID mark. Ostrovsky’s fake image does, according to a test Gizmodo ran on Tuesday. To be clear, the absence of SynthID does not mean an image is genuine; it only means the image was not necessarily created with a Google product.

Other AI image detectors are not reliable either, and that is a big problem in a situation like this. People who have spent the past two days asking Grok and ChatGPT whether the image is real have been assured that it is not AI-generated. In fact, the chatbots insist on the point quite strongly.

Grok Fails

Grok, which is notoriously unreliable, continues to insist the AI image is real, while leaving itself an out at the end by noting that “some online posts suggest” the photo could be fake and the attack a false flag.

“No, the image does not show signs of being AI-generated – details such as shadows, textures and objects look consistent with a real photo,” Grok wrote in response to an inquiry on Monday. “It depicts a make-up artist applying fake blood on a film set. Mainstream reports describe the Bondi Beach incident as real, although some online posts suggest otherwise.”

Grok leans heavily on posts from X for information, so it makes sense that it would take that nonsense as a sign the attack could be a false flag.

ChatGPT Fails

Gizmodo also asked ChatGPT whether the image was real, and it failed just like the others.

As ChatGPT wrote in response to a question from Gizmodo: “There’s no clear indication that this image is AI-generated. Based on what’s visible, it looks like a real behind-the-scenes photo from a movie or TV set.”

The chatbot also gave a bulleted list explaining why it believed the image was not AI-generated, including “plausible context,” “messy realism,” and “consistently nuanced details.” The bot also said the image contained “natural human anatomy,” something clearly untrue to any human examining the fake photo closely.

Claude Fails

Gizmodo also uploaded the image to Anthropic’s Claude, which responded: “This is a great behind-the-scenes photo of a film or TV production! The image shows a makeup artist applying special effects makeup to create a realistic wound effect on an actor.”

When asked if the image was AI-generated, Claude responded: “No, it’s not AI-generated. It’s a real photo from a real movie or TV production set.” The chatbot gave a bulleted list, similar to ChatGPT’s, of reasons it judged the image genuine, including “professional make-up work” and “real physical description”.

Copilot Fails

We also tested Microsoft’s Copilot, and you’ll never guess what happened. Yes, Copilot also called the image genuine, giving much the same response as ChatGPT, Claude and Grok.

Other Free AI Detectors Fail

Gizmodo tested some of the top AI image detectors that surface when the average internet user searches Google, to see what they would say about this obviously fake image. They were just as bad as the major chatbots.

Sightengine said the image is real, putting the chance that it was created with AI at just 9%. Was It AI responded similarly, writing, “We are confident that no AI was used when creating this image.” MyDetector likewise put the probability that the image is genuine and not AI-created at 99.4%.

In case you’re wondering, AI detectors focused on text are just as unreliable. They would, for example, flag something like the Declaration of Independence as AI-written.

X Fails

A blue-checkmark account on X posted screenshots from AI-Checker, claiming Ostrovsky’s fake image was human-made, not AI. The person behind the account also argued it couldn’t be AI because the surrounding environment looks like Bondi Beach, an absurdly stupid claim.

AI can create images that look like any environment. But the reaction points to one of the core problems with social media platforms like X, where people spreading conspiracy theories are pushed to the forefront.

Elon Musk got rid of the so-called legacy checkmarks, badges used to verify a person is who they claim to be, after buying the site in late 2022. Musk let anyone with $8 to spend get “verified,” even though the company doesn’t verify anyone’s actual identity.

And what’s worse, the algorithm pushes replies from blue-checkmark accounts to the top of any post, meaning the people getting the most visibility are the kind of people who want to give Musk money, i.e., the dumbest people on the planet.

Fallout in Australia

Ostrovsky, who told Nine News he also survived the October 7, 2023 terrorist attacks in Israel, posted on X on Tuesday acknowledging that he had seen claims that the Bondi Beach attack was fake and that his injuries were staged.

“Yes, I am aware of this distorted fake AI campaign on @X suggesting that my injuries at the Bondi massacre were faked,” he wrote. “This is what I will say: I saw these pictures as I was being prepared for surgery today, and I will not honor this sick campaign of lies and hatred by responding.”

Other victims of the attack included a 10-year-old girl and an 87-year-old Holocaust survivor, who were among the 15 dead. According to the Guardian, the first funerals for the victims will be held on Wednesday, with Rabbi Eli Schlanger and Rabbi Yaakov Levitan officiating.

The parents of 10-year-old shooting victim Matilda Poltavchenko attend a memorial at Bondi Pavilion on Bondi Beach on December 15, 2025 in Sydney, Australia. © Photo by Audrey Richardson/Getty Images

The two attackers have been identified as Sajid Akram, 50, who was killed by police at the scene, and his son Naveed, 24, who was shot and injured by police and remains in hospital. Both men were reportedly inspired by the Islamic State terrorist group and had recently traveled to the Philippines, although it was not clear what they were doing there.

Australia has strict gun laws, which were passed after a deadly mass shooting left 35 people dead in 1996, but for decades there has been a common misconception that it is impossible to obtain a gun in the country. According to police, all six guns used in Sunday’s attack were legally obtained.

Australian Prime Minister Anthony Albanese has come out in favor of stricter gun laws and has advocated more frequent checks on gun license holders. The elder attacker obtained his gun license a decade ago, and police apparently had not reviewed it since.
