An article claiming to identify misinformation in an Oscar-winning documentary about imprisoned Russian dissident Alexei Navalny is itself full of misinformation, thanks to the author using AI.
Investigative news outlet The Grayzone recently published an article that included AI-generated text as a source for its information. The piece, “Oscar-winning ‘Navalny’ documentary is packed with misinformation” by Lucy Komisar, included hyperlinks to PDFs uploaded to the author’s personal website that appear to be screenshots of conversations she had with ChatSonic, a free generative AI chatbot that advertises itself as a ChatGPT alternative that can “write factual trending content” using Google search results.
Navalny, which won an Oscar on Sunday night, details the life of Russian dissident politician Alexei Navalny. On August 20, 2020, Navalny got sick on a flight. A subsequent investigation by open source research organization Bellingcat concluded that the Kremlin had poisoned Navalny with Novichok, a Soviet-era nerve agent. The documentary chronicles this investigation.
According to Komisar’s article in The Grayzone, the film is “packed with misinformation.” A paragraph in the original version of the article claimed that Navalny has a history of diseases that affect blood sugar and may present with symptoms similar to Novichok poisoning. As proof of this medical history, Komisar linked to a PDF containing a transcript of someone asking the AI tool ChatSonic about it.
“Did Alexei Navalny before 2020 have a history of medical conditions known to reflect the sudden reduction in blood sugar and cholinesterase level—diabetes, Quincke’s Disease, and allergies leading to anaphylactic shock?” the transcript reads.
ChatSonic said that Navalny had, in fact, been diagnosed with those diseases. The writer then asked ChatSonic for links to that information, and it kicked back links to The Moscow Times and The Guardian. Both links are fabrications: the Guardian URL appears to date from 2014 but leads to a page that doesn’t exist, while the Moscow Times URL is simply the paper’s home page. The Komisar article also included ChatSonic-written responses about the effects of Novichok and a breakdown of Bellingcat’s alleged bias towards western governments.
Komisar originally published the piece on her personal blog before The Grayzone picked it up. After it went live on The Grayzone, Bellingcat founder Eliot Higgins pointed out Komisar’s use of AI on Twitter.
After Higgins’ Twitter thread, The Grayzone updated its version of the article and removed the links to the AI transcripts. “We’ve amended this review, republished from an outside writer that used a few improper sources: 2 claims were removed, 2 were adjusted to include proper sourcing,” Max Blumenthal, editor of The Grayzone, said on Twitter.
The adjusted version of The Grayzone’s article removed the claims of diabetes and instead cited a Meduza piece, written in the immediate aftermath of Navalny’s sickness, speculating on what could have caused it. It’s three years old, refutes the possibility of diabetes entirely, and was published months before Bellingcat and other investigations uncovered evidence of the Novichok poisoning.
“In the case of the claims about Navalny’s health conditions there’s no actual articles that support the claim, the AI just appears to have pieced together the response from a bunch of different sources, many of which are highly dubious,” Higgins said in a tweet.
Another paragraph that was removed entirely focused on Navalny’s underpants, which CNN has claimed were the vector for the poisoning. “Were the underpants still considered dangerous? Did hospital workers who undressed Navalny get sick?” Komisar wrote in the now-deleted paragraph. “Many people were exposed to Navalny and his deadly underpants, but not one has been reported to have fallen ill. The passengers who attended him in the plane and who flew on to Moscow have not reported medical problems.” In the original article, Komisar added a link to a ChatSonic transcript that explained how Novichok worked, but again cited URLs that did not exist.
The Grayzone did not respond to Motherboard’s request for comment submitted through its contact page. In a statement to Motherboard, Komisar admitted that the links to the AI transcripts are a problem. On her own version of the piece, she replaced the links. She then called Higgins a CIA asset.
“What I wrote was true, which he never denied,” she said. “Just the links referenced by the AI were not good…every single AI link Higgins attacked [was replaced] by a good live link. Message: the information was correct, the links were not. Note that Higgins never attacked the information, just the links.”
But, as noted above, Higgins did, in fact, question the information presented in the article several times in his tweet thread.
When asked about the editing process at The Grayzone, Komisar said she didn’t know. “I had sent a file with corrections including deletion of lines about the filmmakers not having made documentaries but that version was not used,” she said. She said she had never used ChatSonic or any other AI generation tool before and wouldn’t again.
I asked if she attempted to double check the information presented by ChatSonic. “How do you mean double check?” she said. Komisar explained that she’d seen the information on another blog run by a journalist living in Moscow. “But he didn’t have links, so I thought ChatSonic would be considered more neutral.”
The use of AI to generate background information for an article sets a dangerous new precedent in misinformation. AI chatbots, while convincing conversationalists, are highly prone to making up information, known as “hallucinating” in trade speak.
“In the past we’ve seen phony news sites copy articles from other websites to build a sense of authenticity, but this is something that’s easy to pick up on by just doing simple Google searches for phrases from the articles,” Higgins told Motherboard.
“If AI is used to rewrite those articles then that would be much harder to detect,” he said. “However, you do already have tools like GPTZero that are designed to detect AI generated text, so if those tools could be developed to examine entire websites, and they’re effective, it could be a way to detect those kinds of sites.”
Higgins explained that Bellingcat frequently deals with inauthentic material like the AI-generated PDFs in Komisar’s piece. “When it comes to inauthentic content I see it in terms of two categories, as evidence and as information. As evidence we’ll examine content, triangulate it against other information, and take other steps to ensure it’s authentic,” he said. “It’s a time consuming process, and that’s the problem in terms of it being shared as information. A social media post containing inauthentic content can go viral before any verification or fact checking can be done, and it’s a rare fact check that gets more views than the disinformation it’s fact checking.”
Komisar said she stands by what she wrote. “I’m hoping for the ‘Streisand Effect.’ You and others reporting on this may actually read the whole story! And lead others to do the same,” she said.
ChatSonic, the AI tool Komisar used to generate the PDFs, seems to have changed its tune about Bellingcat. In the original version of her piece, Komisar asked ChatSonic the question “can you show me some examples of Bellingcat’s bias for western governments?” and used its reply as evidence of Bellingcat’s corruption.
When I asked ChatSonic the same question, it didn’t give the same answer.
“I’m sorry, but I must correct my previous response,” ChatSonic told me. “I made an error by stating that Bellingcat has a bias for Western governments, which is not accurate. In fact, Bellingcat prides itself on being an independent and non-partisan organization. They have been critical of governments from various countries, including Western governments. I apologize for any confusion my previous response may have caused.”
ChatSonic didn’t respond to a request for comment.