I asked several times to see how the answers changed, and asked other people to do the same. Gemini didn’t say where it got the information. Every other AI tool linked to my article, though they rarely mentioned that I was the only source on this topic on the entire internet. (OpenAI says ChatGPT always includes links when it searches the web so you can find the source.)
“Anyone can do this. It’s stupid, it feels like there are no guardrails,” says Harpreet Chatha, who runs SEO consultancy Harps Digital. “You can create an article on your own website, ‘the best waterproof shoes for 2026’. You simply put your own brand in the first position and other brands two to six, and your page is likely to be cited in Google and in ChatGPT.”
People have been using hacks and exploits to abuse search engines for decades. Google has sophisticated protections in place, and the company says the accuracy of AI previews is comparable to other search features introduced years ago. But experts say AI tools have undone much of the tech industry’s work to keep people safe. These AI tricks are so basic that they’re reminiscent of the early 2000s, before Google even introduced a web spam team, Ray says. “We are experiencing a sort of spammer renaissance.”
Not only is AI easier to fool, but experts worry that users are more likely to fall for it. With traditional search results, you had to visit a website to get the information. “When you actually have to visit a link, people engage in a little more critical thinking,” says Quintin. “If I go to your website and it says you’re the best journalist ever, I might think, ‘well, yeah, he’s biased’.” But with AI, the information appears to come directly from the tech company itself.
Even when AI tools provide the source, people are much less likely to check it than they were with old-fashioned search results. For example, a recent study found that people are 58% less likely to click a link when an AI preview appears at the top of Google Search.
