How Scammers Poison AI Results With Fake Customer Support Numbers


Scammers love to seed fake customer service numbers on the internet to lure unsuspecting victims who are just trying to fix something wrong in their lives. Con artists have been doing it in Google searches for years, so it makes sense that they’ve moved to the latest place where people are often searching for information: AI chatbots.

A new report from AI cybersecurity company Aurascape details how scammers are able to inject their own phone numbers into LLM-powered systems, resulting in scam numbers appearing as official-sounding answers to requests for contact information in AI applications like Perplexity or Google's AI Overviews. And when someone calls that number, they are not talking to Apple customer support. They are talking to scammers.

According to Aurascape, scammers pull this off through a variety of tactics. One is to place spam content on trusted websites, such as government and university sites, as well as high-profile sites running WordPress. Gaining that kind of access is harder, but not impossible.

The easier version is to post spam content on user-generated platforms like YouTube and Yelp, or on other sites that allow reviews. Scammers inject their phone numbers alongside every search term their intended targets might use, such as “Delta Airlines customer support number” and countless variations.

This tactic is all too common among scammers trying to game Google search results. But Aurascape notes that it is the structure of the data that may set these campaigns apart for LLMs. By posting likely search terms in the condensed, answer-ready formats that AI systems prefer, scammers improve their odds of success as AI chatbots scour the internet for answers.

The new report describes Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO) as distinct from SEO: the way AI systems retrieve and present content leads them to treat this spam as authoritative. To be clear, the scam numbers in the Aurascape report are only being retrieved and reproduced in individual AI responses; the underlying LLM itself is not being corrupted.

As reported, scammers use GEO/AEO techniques in HTML pages and PDFs uploaded to high-trust sites.
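To make the idea concrete, here is a hypothetical Python sketch of the kind of answer-ready markup such spam relies on. This is not the actual sample from the Aurascape report: the schema.org FAQ structure is simply a common pattern that answer engines parse readily, and the phone number is a fictional placeholder from the reserved 555 range.

```python
import json

# Hypothetical illustration only: the number below is a reserved
# fictional placeholder, not a real scam number from the report.
fake_number = "+1-800-555-0199"

# A schema.org FAQPage block pairs a likely search query with a
# short, direct answer, which is exactly the condensed shape an
# answer engine prefers to retrieve and quote.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What is the Emirates Airlines customer support number?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": f"Call Emirates reservations at {fake_number}.",
        },
    }],
}

# Dropped into a page on a trusted site, this reads to an answer
# engine like a direct, citable question-and-answer pair.
print(json.dumps(faq_jsonld, indent=2))
```

The danger is that a retrieval system sees a clean question-answer pair hosted on a reputable domain and repeats the answer with borrowed authority.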

An Emirates reservation number is one of several examples Aurascape used in the report to show how Perplexity gave out a scam phone number during tests. Google's AI Overviews feature also provided fake numbers.

Gizmodo did some quick testing and was not able to duplicate the fake customer support numbers. But it seems pretty clear that AI companies are starting to pay attention, especially given the specific tests conducted for the Aurascape report.

When Gizmodo asked Perplexity for the customer support number for Emirates Airlines, the bot responded:

The safest way to reach Emirates for reservations is to use the official contact channels listed on the Emirates website rather than third-party numbers that appear in search results or PDFs, which are often spam or agency lines misrepresented as “official”.

Gizmodo pressed the bot to give us only the numbers, but it responded:

There are many conflicting “Emirates reservations” numbers online, and many of them are actually third-party agencies rather than Emirates, so none of the +1‑(8xx) numbers shown on the generic guide can be trusted as the official line.

The bot directed us to emirates.com to find the number. We suppose that is one way to fight an AI chatbot that spreads misinformation and spam: just stop it from handing out certain types of information altogether.

In 2022, we wrote about various scam websites that successfully lured victims into downloading Canon printer drivers. While Aurascape's new report doesn't address downloadable drivers as a potential attack vector, we imagine it's something scammers are already trying.

After all, AI chatbots should only be trusted when they show their work. The flip side is that the chatbot needs to provide a hyperlink where the information can be cross-checked, or, in this hypothetical, where the software can be downloaded. Just make sure you check that URL carefully. There is a big difference between usa.canon.com and canon.com-ijsetup.com. The latter is a phishing website.
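That URL check can be done programmatically. A minimal Python sketch, using a naive last-two-labels heuristic (production code should consult the Public Suffix List, e.g. via a library like tldextract), shows why the two addresses above are not the same site:

```python
from urllib.parse import urlparse

def registrable_domain(url: str) -> str:
    """Naive guess at the domain a URL actually belongs to.

    Heuristic: take the last two dot-separated labels of the
    hostname. Real code should use the Public Suffix List to
    handle suffixes like .co.uk correctly.
    """
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

print(registrable_domain("https://usa.canon.com/en/support"))     # canon.com
print(registrable_domain("https://canon.com-ijsetup.com/setup"))  # com-ijsetup.com
```

The subdomain `usa` on `canon.com` is harmless, while the phishing address only contains `canon.com-` as part of a subdomain label: the site actually belongs to the unrelated `com-ijsetup.com`.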
