Although it is not entirely clear whether Russian disinformation has successfully targeted larger LLMs, researchers at NewsGuard found that some chatbots had been poisoned. Four of the ten bots they tested, having swallowed descriptions of a staged propaganda video, falsely reported that a Ukrainian battalion burned a Trump effigy. Chatbots “repeated false narratives laundered by the Pravda network 33 per cent of the time”, the report concluded.
The Washington Post also tested the hypothesis, finding that Microsoft Copilot agreed with the fabricated story while ChatGPT performed better. However, Al Jazeera published its own response to the report, co-authored by four academics specialising in disinformation. Having conducted their own test, they said that the figures presented by NewsGuard were overblown.
“In contrast to NewsGuard’s 33 per cent, our prompts generated false claims only 5 per cent of the time,” they wrote.
The article continues: “Just 8 per cent of outputs referenced Pravda websites – and most of those did so to debunk the content. Crucially, Pravda references were concentrated in queries poorly covered by mainstream outlets. This supports the data void hypothesis: When chatbots lack credible material, they sometimes pull from dubious sites – not because they have been groomed, but because there is little else available.”
These researchers argue that reproducing NewsGuard’s results requires questions and prompts on obscure topics phrased in very specific terms, and that “those topics must be ignored by credible outlets; and the chatbot must lack guardrails to deprioritise dubious sources.”
They add that “indiscriminate warnings about disinformation can backfire, prompting support for repressive policies, eroding trust in democracy, and encouraging people to assume credible content is false.” Such warnings can also hand Russian propagandists domestic credibility when Western media legitimise the supposed genius of their activities.