In Netanyahu’s case, it didn’t help that his team used a fancy camera and filmed with a narrow depth of field, meaning a nice sharp foreground and soft blurry background — which is exactly what AI videos look like, Carrasco says. But by the time Netanyahu posted his coffee shop clip, our world was already filled with fake content, creating suspicions that are difficult to overcome under any circumstances. Netanyahu’s fingers, hallucinatory or otherwise, are the latest wave in an AI flood that has been rising for years.
This chaos has a name: Researchers call it the “liar’s dividend.” Proving something is real is expensive, but doubting it is free. “People, including people in positions of power, can argue that the real content – the real evidence of them doing something – is fake,” says Samuel Woolley, chair of disinformation studies at the University of Pittsburgh in the US.
Politicians and others may use the specter of AI as a shield, crying foul over deepfakes to dismiss real evidence. But it is a double-edged sword, and the same atmosphere of mistrust is coming back to bite them. "The politicians who encouraged this lack of restraint are now suffering the consequences in many ways," Woolley says.
While he was talking on the phone with his Aunt Eleanor, reality was creeping in. It turned out that she had heard the codeword advice and actually had one for her kids and her husband, but he didn't know it. "I've read a lot of stories like this, where they talk about voices being cloned from YouTube videos," he said. "It concerns me. It's scary."