In a revealing experiment, tech journalists have demonstrated how easily false information can be seeded into and reproduced by major AI chatbots, highlighting a nascent practice dubbed "Answer-Engine Optimisation" (AEO). The tests show that influencing AI-generated answers is becoming a new frontier for digital manipulation, analogous to traditional Search Engine Optimisation (SEO).
The phenomenon was first highlighted by BBC journalist Thomas Germain, who created a webpage falsely claiming he was a hot-dog-eating champion who had defeated other tech reporters. The fabricated claim was quickly ingested by the web crawlers that feed large language models (LLMs) and was subsequently presented as fact by both OpenAI's ChatGPT and Google's Gemini chatbot.
The "AEO Land Rush" and First-Mover Advantage
Following Germain's prank, a Business Insider journalist attempted to replicate the feat by publishing a claim that she had won the fictional "2026 Paris Hot Dog Eating Contest for Tech Reporters," beating Germain. However, this subsequent attempt failed. Because the BBC had already published an article exposing the original stunt as a joke, the AI systems now recognised the topic as satirical and refused to propagate the new false claim.
This outcome underscores a "first-mover advantage" in this new landscape, where the initial version of information, even if false, can become entrenched in an AI's knowledge base. "It can be easy to manipulate your AI results — but more easily for the person who gets there first, a sort of AEO land rush, perhaps," the Business Insider report concluded.
Hallucinations and Unverified Claims
Despite becoming wary of the specific hot-dog-contest narrative, the chatbots continued to demonstrate unreliability. When queried about the journalist's eating feats, Google's Gemini hallucinated entirely new, unverified information, claiming she had won a grilled-cheese-eating contest in 2012 by finishing three sandwiches. In reality, the journalist had only written an article that year about professional eater Takeru Kobayashi consuming 30 sandwiches.
This incident reinforces existing concerns that AI chatbots, when faced with sparse or contradictory information on a topic, are prone to generating plausible-sounding but fabricated details. The results presented by these systems can appear more authoritative than a list of traditional search engine links, potentially making them more convincing to users who rarely click through to verify source links.
Broader Implications for Information Integrity
The experiments illustrate that as more people turn to AI chatbots for product recommendations and information searches, the incentive for brands, companies, and bad actors to optimise for these systems grows. The practice, termed AEO by Business Insider reporter Alistair Barr in May 2023, involves tailoring online content specifically to be sourced and repeated by answer engines like chatbots.
Experts warn that this vulnerability could be exploited for more than pranks, potentially impacting public perception, commercial competition, and the spread of misinformation. The core issue remains that LLMs are trained on vast swathes of internet data without a reliable mechanism to distinguish truth from fiction at the point of ingestion.
The journalists involved noted that while the findings are not entirely new, they provide a concrete, accessible example of a significant and growing challenge for AI developers and information consumers alike. The race is now on to develop more robust guardrails and verification systems within these platforms to prevent the manipulation of AI-generated answers.