Meta must rein in scammers — or face consequences

Meta, the world’s largest social media company, makes billions from knowingly serving scam ads, a recent report on the company says. According to internal documents revealed by Reuters, Facebook, Instagram and WhatsApp users are shown 15 billion ads promoting scams every day, from fake Trump stimulus checks to deepfakes of Elon Musk selling cryptocurrencies. The company reportedly knows this: Reuters said Meta’s own trust and safety team has estimated that one-third of scams in the US involved a Meta platform. So why hasn’t Meta done more? Perhaps because these ads are extremely profitable, worth an estimated $7 billion US or more per year.

Scams are not a small problem. Americans alone reported $16 billion in losses to the FBI last year, and that number likely understates the true amount because scams are notoriously underreported. (Victims are often too embarrassed to admit they fell for one.) Globally, the numbers are much higher: the Global Anti-Scam Alliance estimates that scammers stole more than $1 trillion from people worldwide in 2024.

And people who lose money to scams can hardly afford it. Often, victims of scams are elderly people on fixed incomes, young people looking for a job, immigrants, and others going through transitions or difficult times. These are the people for whom the promise of an extra thousand dollars in government benefits or a steady job is highly motivating. Losing even a few hundred dollars to a scammer can be devastating.

But not for Meta. Reuters reports that internal company documents show Meta makes $16 billion each year from scam ads and ads for banned goods – roughly 10 percent of its total annual revenue. Of this, $7 billion comes from ads showing obvious signs of being scams, such as falsely claiming to represent public figures or brands. Even heavy fines would pale in comparison to such profits.

As researchers who write about AI and scams, we are often asked what can be done about them. Instead of financial literacy and anti-scam campaigns that put the burden on individuals – and that often leave people feeling ashamed when they fall victim anyway – we should hold Meta accountable for its role in this shameful cycle of loss.

The solution to scams is not to put the onus on individuals to avoid them

Meta appears able to identify many scam ads easily. But according to Reuters, its systems require 95 percent certainty that an ad is a scam before removing it. And when an ad is identified as a scam, The Wall Street Journal reports that Meta gives the account that posted it between eight and thirty-two “strikes” before it is banned. As a result, scam advertisers whose flagged ads have been removed can keep running other ads for months – or even other versions of the same ad – making thousands of dollars off unsuspecting victims. The online payment platform Zelle told The Wall Street Journal that half of the scams reported by its users involved Meta.

Unfortunately, the online advertising ecosystem makes the problem even worse. Because of the algorithmic recommendation systems built into virtually all social platforms, anyone who clicks on a scam ad will be shown more scam ads. This means the people most vulnerable to fraudulent ads, those interested enough to click on them, are served even more of them.

Meta spokesperson Andy Stone disputed the Reuters report. “The leaked documents present a selective view that distorts Meta’s approach to fraud and scams by focusing on our efforts to assess the scale of the challenge, rather than on the full range of actions we have taken to address the problem,” Stone said in an email to The Verge. “This number was a rough and over-inclusive estimate rather than a definitive or final figure; in fact, subsequent review revealed that many of these ads were not infringing at all.” Stone emphasized the increasing scale and sophistication of scam attempts and said user reports of scam ads have declined by more than 50 percent over the past 15 months.

Scams are a global industry, fueled by the growth of Southeast Asian scam compounds run by international criminal organizations with deep ties to online gambling. These compounds house victims of human trafficking, lured by the promise of white-collar jobs and then held in slavery-like conditions. Under threat of violence, these young people are forced to spend long hours running romance and investment scams. The criminal enterprises behind them are increasingly adopting automation and artificial intelligence to supercharge their scams, expand their scope and scale, and continually diversify their techniques. The kinds of ads that run on Meta’s platforms are often AI-enhanced, using deepfakes of famous entrepreneurs to promote fraudulent investment opportunities and synthetic videos of US politicians to promote nonexistent stimulus checks. The problem will only get worse as the technology improves and criminal syndicates continue to flourish.

If a small nonprofit can identify fraud better than a billion-dollar company, the company is at fault.

It is Meta’s responsibility to take action. First, it should lower the barrier for removing scam ads. And once one of an advertiser’s ads is removed as a scam, all of that advertiser’s other ads should be removed too. Second, Meta should expand the sophistication of the strategies it uses to identify fraudulent ads. The investigative watchdog Tech Transparency Project was able to spot scam ads easily using a simple set of criteria: the ad promoted fake government benefits, it used FTC-identified scam tactics, victims had complained about the scam to the Better Business Bureau, or, simply, Meta had removed the advertiser’s prior ads for being scams. (If a small nonprofit can do a better job of identifying fraudulent ads than a billion-dollar company, the company is clearly at fault.) Third, Meta should institute verified-advertiser requirements so that only advertisers using “real names” can buy ads. This would also reduce the use of deepfake ads and provide a paper trail for regulators and law enforcement.

Clearly, more regulation of online advertising is also needed. Governments should treat major tech platforms like Meta as complicit actors in the scam ecosystem, given that these companies make billions of dollars in profit annually from fraudulent advertising while knowingly applying weak detection standards that let scammers operate with near impunity.

Governments should make scam prevention a national priority and put platforms like Meta at the center of those efforts. The FTC has the authority to regulate “unfair or deceptive acts or practices,” including false advertising. It could require platforms to verify the identities of advertisers, review ads before they run, and allow independent, third-party audits of online advertising systems. Regulators should also raise the fines platforms face for running scam ads to levels that would actually deter them. These fines could finance a compensation fund for scam victims.

If the federal government is unwilling to act, state-level legislatures can pass laws mandating similar requirements. State attorneys general may initiate consumer protection lawsuits under existing state consumer fraud and deceptive trade practices laws. This is not a partisan issue. Scams affect everyone.

Meta has repeatedly shown that the health and well-being of its users matter less to it than its bottom line. In 2018, the company, then known as Facebook, admitted it had not done enough to prevent its platform from being used to promote genocide in Myanmar – now a failed state and home to the very scammers whose ads make Meta so much money. If its past actions aren’t enough to motivate lawmakers and policymakers to act on the company’s negligence, perhaps the billions of dollars being drained from the pockets of vulnerable users will be.
