In recent years, a new form of Russian disinformation has begun to take shape. Researchers have found that, instead of targeting users directly, Russian news networks have started flooding the internet with content intended not necessarily to be read by humans, but to be scraped by the data pipelines behind AI-powered Large Language Models (LLMs), an activity known as LLM grooming.

The point is to trick LLMs like ChatGPT or Gemini into sourcing their responses from seemingly legitimate sites that parrot or skew certain narratives in favour of Russia.

A recent study by the Institute for Strategic Dialogue (ISD) found that the Russian Pravda Network had begun flooding the online space with articles, publishing up to 23,000 articles a day in May, up from approximately 6,000 daily articles in 2024.

The massive surge could be linked to efforts to groom LLMs into using Kremlin-backed positions when answering users’ questions, for instance, about the war in Ukraine.

The increase is also an attempt by the network to expand its reach in Asia, Africa, and Europe, often through proxies or subsidiaries. Because of how search engines like Google rank pages, the sheer mass of articles, all linking to and referencing one another, can also inflate the sites' apparent authority, making them more prominent in search results and harder to spot as direct disinformation.
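To make the link-based part of that mechanism concrete, here is a minimal sketch of the classic PageRank idea that underpins this kind of ranking; the site names and link structure are invented for illustration, and real search engines weigh far more signals than links alone:

```python
# Minimal PageRank sketch: a densely cross-linked cluster of pages
# accumulates more rank than a page nothing links back to.
def pagerank(links, damping=0.85, iterations=50):
    """Compute PageRank for a {page: [outbound links]} graph."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline share ...
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        # ... and receives the rest from the pages that link to it.
        for page, outbound in links.items():
            if not outbound:
                continue  # simplification: dangling pages leak their rank
            share = damping * rank[page] / len(outbound)
            for target in outbound:
                new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical graph: three "network" sites cross-link one another;
# an independent site links out but receives no links back.
graph = {
    "network-a.example": ["network-b.example", "network-c.example"],
    "network-b.example": ["network-a.example", "network-c.example"],
    "network-c.example": ["network-a.example", "network-b.example"],
    "independent.example": ["network-a.example"],
}

for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```

In the toy output, the cross-linked cluster outranks the isolated page, which is the effect that mass publishing and mutual referencing exploit.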

This means that users could stumble across articles produced by the network or similar operations without being aware that they are connected to Russia. The same could also be the case for LLMs.

“More than any other Russia-aligned operation, the Pravda network is playing a numbers game,” said Joseph Bodnar, a senior researcher at the ISD. “They’ve saturated the internet ecosystem enough to get in front of real people who are doing research on Russia-related issues.”

Although it is not entirely clear whether Russian disinformation has successfully targeted the larger LLMs, researchers at NewsGuard found that some chatbots had been poisoned. Four of the ten bots they tested, having ingested descriptions of a staged propaganda video, falsely reported that a Ukrainian battalion had burned a Trump effigy. Chatbots “repeated false narratives laundered by the Pravda network 33 per cent of the time”, the report concluded.

The Washington Post also tested the hypothesis, finding that Microsoft Copilot agreed with the fabricated story while ChatGPT performed better. However, Al Jazeera published a response to the report, co-authored by four academics specialising in disinformation, who conducted their own test and said that the figures presented by NewsGuard were overblown.

“In contrast to NewsGuard’s 33 per cent, our prompts generated false claims only 5 per cent of the time,” they wrote.

The article continues: “Just 8 per cent of outputs referenced Pravda websites – and most of those did so to debunk the content. Crucially, Pravda references were concentrated in queries poorly covered by mainstream outlets. This supports the data void hypothesis: When chatbots lack credible material, they sometimes pull from dubious sites – not because they have been groomed, but because there is little else available.”
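The data-void mechanism itself is easy to sketch. The toy retrieval step below (all site names, documents, and the keyword-overlap scoring are invented; real retrieval systems are far more sophisticated) shows a dubious source surfacing only for the query that no credible document covers:

```python
# Toy illustration of the "data void" hypothesis (all names invented):
# a naive keyword-overlap ranker returns dubious sites for obscure
# queries only because no credible document matches them at all.

CREDIBLE_SITES = {"wire-service.example", "broadsheet.example"}

def retrieve(query, index, top_k=3):
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        index,
        key=lambda doc: len(terms & set(doc["text"].lower().split())),
        reverse=True,
    )
    # Keep only documents sharing at least one term with the query.
    return [doc for doc in scored[:top_k]
            if terms & set(doc["text"].lower().split())]

index = [
    {"site": "wire-service.example", "text": "ceasefire talks resume in kyiv"},
    {"site": "broadsheet.example", "text": "ceasefire talks analysis and context"},
    {"site": "pravda-mirror.example", "text": "battalion effigy video claim"},
]

for query in ("ceasefire talks", "battalion effigy video"):
    sources = [doc["site"] for doc in retrieve(query, index)]
    dubious = [s for s in sources if s not in CREDIBLE_SITES]
    print(f"{query!r} -> {sources} (dubious: {dubious})")
```

On the well-covered query, the credible sources crowd out everything else; on the obscure one, the dubious site wins by default, with no grooming required.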

These researchers argue that reproducing NewsGuard’s results requires questions and prompts about obscure topics, phrased in specific terms, and that “those topics must be ignored by credible outlets; and the chatbot must lack guardrails to deprioritise dubious sources.”

They add that “indiscriminate warnings about disinformation can backfire, prompting support for repressive policies, eroding trust in democracy, and encouraging people to assume credible content is false.” Such coverage can also hand Russian propagandists domestic credibility when Western media legitimises the supposed genius of their activities.

This observation is partially supported by Nina Jankowicz, a disinformation expert who spoke to the UK parliament about Pravda earlier this year. She noted that, whatever the intention behind producing so much content, the network has started filling a Ukraine news gap left as other credible outlets have slowed their coverage.

“There’s a bit less news about Ukraine. And if they can get in there and fill that gap really soon, that means that the Russian viewpoint is the one that’s going to get out there quickly and be cited in large language models.”

The conclusion may be that while Russian disinformation is designed to target LLMs and chatbots directly, much of its pickup is a knock-on effect of a gap in Western outlets’ coverage of certain topics, a gap the Pravda network is filling.
