Scammers love to seed the internet with fake customer service numbers to lure in unsuspecting victims who are just trying to fix something wrong in their lives. Con artists have done it to Google Search for years, so it makes sense that they've moved on to the latest space where people frequently search for information: AI chatbots.
AI cybersecurity company Aurascape has a new report on how scammers are able to inject their own phone numbers into LLM-powered systems, resulting in scam numbers appearing as authoritative-sounding answers to requests for contact information in AI applications like Perplexity or Google AI Overviews. And when someone calls that number, they're not talking with customer support from, say, Apple. They're talking with the scammers.
According to Aurascape, the scammers pull this off through a variety of tactics. One is planting spam content on trusted websites, like government, university, and high-profile sites that use WordPress. This method requires first gaining access to those sites, which is harder but not impossible.
The easier version is planting the spam content on user-generated platforms like YouTube and Yelp, or other sites that allow reviews. The scammers post their phone numbers alongside all of the likely search terms that would surface the number for their intended targets, such as "Delta Airlines customer support number" and countless variations.
All of that is normal for scammers trying to juice Google Search results. But Aurascape notes it's the structure of the data that sets this apart for LLMs. By packaging the likely search terms in the summarization-friendly formats that AI loves to deliver, the spam has a higher chance of success as these AI chatbots scour the internet for an answer.
The new report describes Generative Engine Optimization (GEO) and Answer Engine Optimization (AEO) as distinct from SEO: the goal is to coax the AI into retrieving the content and treating it as authoritative because of the way it is presented. To be clear, the scam numbers in the Aurascape report are simply retrieved and reproduced within an individual AI response; the LLM itself is not corrupted.
"For traditional SEO, the goal is to appear high in a list of search results," the company explains. "For GEO/AEO, the goal is more direct: be the single piece of content that the AI assistant chooses, summarizes, and presents as 'the answer.'"
As detailed in the report, the scammers utilize GEO/AEO techniques in HTML and PDFs uploaded to high-trust sites by the following (see the sketch after this list):
- Matching the exact wording of likely user questions
- Using simple Q&A or list formats that are easy for models to parse
  - Example: "Emirates Reservations Phone Number: +1 (833) 621-7070"
- Repeating the same brand name and phone number several times in the document
- Embedding the content on high-authority or trusted domains (e.g., compromised .gov, .edu, or popular WordPress sites)
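Those signals are concrete enough that a retrieval pipeline, or a curious reader, could check for them mechanically. Below is a minimal Python sketch of such a heuristic filter; it is our illustration rather than anything from the Aurascape report, and the regexes, thresholds, and sample document are all assumptions made for the example.

```python
import re
from collections import Counter

# Illustrative patterns only; the thresholds below are assumptions for this sketch.
PHONE_RE = re.compile(r"\+?1?[\s\-.]?\(?\d{3}\)?[\s\-.]?\d{3}[\s\-.]?\d{4}")
QA_RE = re.compile(r"^(q:|a:|question:|answer:)", re.IGNORECASE | re.MULTILINE)

def geo_spam_signals(text: str, brand: str) -> dict:
    """Flag the GEO/AEO spam patterns listed above in a retrieved document."""
    phone_counts = Counter(PHONE_RE.findall(text))
    brand_hits = len(re.findall(re.escape(brand), text, re.IGNORECASE))
    return {
        # The same phone number repeated several times in one document
        "repeated_number": any(count >= 3 for count in phone_counts.values()),
        # Simple Q&A scaffolding that models parse easily
        "qa_format": bool(QA_RE.search(text)),
        # The brand name hammered throughout the text
        "brand_stuffing": brand_hits >= 5,
    }

if __name__ == "__main__":
    # A toy document mimicking the structure described in the report,
    # with an obviously fake placeholder number.
    doc = (
        "Q: What is the Emirates reservations phone number?\n"
        "A: Emirates Reservations Phone Number: +1 (800) 000-0000\n"
    ) * 3
    print(geo_spam_signals(doc, "Emirates"))
    # -> {'repeated_number': True, 'qa_format': True, 'brand_stuffing': True}
```

A production filter would also need locale-aware phone parsing and an allowlist of verified numbers, but even crude checks like these would trip on the documents the report describes, where the same brand-and-number pair repeats in easy-to-parse Q&A blocks.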
That Emirates reservation number is one of the many examples Aurascape uses in the report to show how Perplexity delivered a scam phone number during tests. Google's AI Overviews feature also delivered fake numbers.
Gizmodo did some quick testing and wasn't able to reproduce a fake customer support number. But it seems pretty clear that AI companies are starting to take notice, especially given the specific tests conducted for this Aurascape report.
When Gizmodo asked Perplexity for the customer support number for Emirates Airlines, the bot responded:
The safest way to reach Emirates for reservations is to use the official contact channels listed on the Emirates website rather than third-party numbers that appear in search results or PDFs, which are often spam or agency lines misrepresented as "official."
Gizmodo pressed for the bot to just give us the number, but it responded:
There are multiple conflicting "Emirates reservations" numbers online, and many of them are actually third-party agencies rather than Emirates itself, so none of the +1-(8xx) numbers shown on generic guides can be trusted as an official line.
The bot told us to visit emirates.com to find the number. And we guess that is one way to fight back against your AI chatbot spreading misinformation and spam. Just stop it from spreading specific types of information altogether.
Back in 2022, we wrote about the different scam websites that were successfully getting victims to download what they thought were Canon printer drivers. While the new report from Aurascape didn't address downloadable drivers as a potential attack vector, we can imagine that would be something scammers are already trying.
After all, AI chatbots should only be trusted when they show their work. But the flip side of that is that the chatbot needs to provide hyperlinks where information can be double-checked. Or, in this hypothetical, where software could be downloaded. Just make sure you scrutinize that URL carefully. There's a big difference between usa.canon.com and canon.com-ijsetup.com. The latter is a phishing website.
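If you want to automate that scrutiny, the useful concept is the registrable domain: everything to its left is a subdomain that the domain's owner can invent at will. Here is a short Python sketch that makes the Canon comparison explicit, using the third-party tldextract library (our choice for the example; the article doesn't prescribe a tool):

```python
# pip install tldextract  (third-party; may fetch the public suffix list on first use)
import tldextract

for host in ("usa.canon.com", "canon.com-ijsetup.com"):
    ext = tldextract.extract(host)
    # registered_domain is the part the site owner actually controls;
    # any labels to its left are subdomains that owner can create freely.
    print(f"{host} -> {ext.registered_domain}")

# Prints:
# usa.canon.com -> canon.com
# canon.com-ijsetup.com -> com-ijsetup.com
```

Only the first hostname belongs to a domain Canon controls. The second's registrable domain is com-ijsetup.com, and the "canon" up front is just a subdomain, which is exactly why it fools a quick glance.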
"Our investigation shows that threat actors are already exploiting this frontier at scale—seeding poisoned content across compromised government and university sites, abusing user-generated platforms like YouTube and Yelp, and crafting GEO/AEO-optimized spam designed specifically to influence how large language models retrieve, rank, and summarize information," Aurascape wrote.
"The result is a new class of fraud in which AI systems themselves become unintentional amplifiers of scam phone numbers. Even when models provide correct answers, their citations and retrieval layers often reveal exposure to polluted sources. This tells us the problem is not isolated to a single model or single vendor—it is becoming systemic."
Original Source: https://gizmodo.com/ai-scam-phone-numbers-2000697589
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
