https://www.detroitnews.com/story/news/world/2025/04/17/russia-seeds-chatbots-with-lies-any-bad-actor-could-game-ai-the-same-way/83137756007/
Russia is automating the spread of false information to fool artificial intelligence chatbots on key topics, offering a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform.
Experts warn the problem is worsening as more people rely on chatbots rushed to market, social media companies cut back on moderation and the Trump administration disbands government teams fighting disinformation.
Earlier this year, when researchers asked 10 leading chatbots about topics targeted by false Russian messaging, such as the claim that the United States was making bioweapons in Ukraine, a third of the responses repeated those lies.
Moscow’s propaganda inroads highlight a fundamental weakness of the AI industry: Chatbot answers depend on the data fed into them. A guiding principle is that the more the chatbots read, the more informed their answers will be, which is why the industry is ravenous for content.

But mass quantities of well-aimed chaff can skew the answers on specific topics. For Russia, that is the war in Ukraine. But for a politician, it could be an opponent; for a commercial firm, it could be a competitor.
“Most chatbots struggle with disinformation,” said Giada Pistilli, principal ethicist at open-source AI platform Hugging Face. “They have basic safeguards against harmful content but can’t reliably spot sophisticated propaganda, [and] the problem gets worse with search-augmented systems that prioritize recent information.”
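Pistilli's point about search-augmented systems can be made concrete with a minimal, purely illustrative sketch. It is not drawn from any named chatbot; it assumes a simple retrieval step that blends topical relevance with a freshness bonus, and all names in it (Document, score, retrieve, recency_weight) are hypothetical. The sketch shows how a flood of freshly published, coordinated pages can crowd a smaller body of older, authoritative sources out of the material handed to the model.

    # Hypothetical sketch of why recency-weighted retrieval is easy to game.
    # Names and numbers are illustrative, not any real chatbot's internals.
    from dataclasses import dataclass

    @dataclass
    class Document:
        text: str
        age_days: float      # how recently the page was published
        relevance: float     # similarity to the user's query, 0..1

    def score(doc: Document, recency_weight: float = 0.5) -> float:
        # Blend topical relevance with a freshness bonus. A high recency_weight
        # means newly published, on-topic pages can outrank older sources.
        freshness = 1.0 / (1.0 + doc.age_days)   # newer -> closer to 1
        return (1 - recency_weight) * doc.relevance + recency_weight * freshness

    def retrieve(corpus: list[Document], k: int = 3) -> list[Document]:
        # Return the k highest-scoring documents that would be fed to the model.
        return sorted(corpus, key=score, reverse=True)[:k]

    # Toy corpus: a few older, reliable pages vs. many freshly seeded false claims.
    reliable = [Document("fact-checked report", age_days=400, relevance=0.9) for _ in range(3)]
    seeded = [Document("coordinated false claim", age_days=1, relevance=0.7) for _ in range(30)]

    top = retrieve(reliable + seeded)
    print([d.text for d in top])   # the seeded pages dominate what the model reads

In this toy setup the fact-checked reports score about 0.45 while the day-old seeded pages score 0.6, so the seeded material fills the retrieved context even though nothing about the model itself was attacked.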