
ChatGPT falsely told voters their mayor was jailed for bribery. He may sue.

April 6, 2023 at 10:22 a.m. EDT
“Even a disclaimer to say we might get a few things wrong — there’s a massive difference between that and concocting this sort of really harmful material that has no basis whatsoever,” says Brian Hood, an Australian mayor victimized by lies from an AI chatbot. (Dado Ruvic/Reuters)

Brian Hood is a whistleblower who was praised for “showing tremendous courage” when he helped expose a worldwide bribery scandal linked to a subsidiary of Australia’s central bank, the Reserve Bank of Australia.

But if you ask ChatGPT about his role in the scandal, you get the opposite version of events.

Rather than heralding Hood’s whistleblowing role, ChatGPT falsely states that Hood himself was convicted of paying bribes to foreign officials, pleaded guilty to bribery and corruption, and was sentenced to prison.

When Hood found out, he was shocked. Hood, who is now mayor of Hepburn Shire near Melbourne in Australia, said he plans to sue the company behind ChatGPT for telling lies about him, in what could be the first defamation suit of its kind against the artificial intelligence chatbot.

“To be accused of being a criminal — a white-collar criminal — and to have spent time in jail when that’s 180 degrees wrong is extremely damaging to your reputation. Especially bearing in mind that I’m an elected official in local government,” he said in an interview Thursday. “It just reopened old wounds.”

“There’s never, ever been a suggestion anywhere that I was ever complicit in anything, so this machine has completely created this thing from scratch,” Hood said. “There needs to be proper control and regulation over so-called artificial intelligence, because people are relying on them.”


The case is the latest example in a growing list of AI chatbots publishing lies about real people. The chatbot recently invented a fake sexual harassment story involving a real law professor, Jonathan Turley, citing a Washington Post article that did not exist as its evidence.

If it proceeds, Hood’s lawsuit would be the first defamation suit filed over ChatGPT’s content, according to Reuters. The case would test uncharted legal waters, forcing judges to consider whether the operators of an artificial intelligence bot can be held accountable for its allegedly defamatory statements.

ChatGPT’s website prominently warns users that the chatbot “may occasionally generate incorrect information.” Hood believes that caveat is insufficient.

“Even a disclaimer to say we might get a few things wrong — there’s a massive difference between that and concocting this sort of really harmful material that has no basis whatsoever,” he said.

In a statement, Hood’s lawyer listed multiple examples of specific falsehoods ChatGPT produced about his client — including that Hood authorized payments to an arms dealer to secure a contract with the Malaysian government.

“You won’t find it anywhere else, anything remotely suggesting what they have suggested. They have somehow created it out of thin air,” Hood said.

Under Australian law, a claimant in a defamation case can begin formal legal action only after giving the other party 28 days to respond to an initial complaint. On Thursday, Hood said his lawyers were still waiting to hear back from OpenAI, the owner of ChatGPT, after sending a letter demanding a retraction.


OpenAI did not immediately respond Thursday to a request for comment sent overnight. In an earlier statement, responding to the chatbot’s false claims about the law professor, OpenAI spokesperson Niko Felix said: “When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress.”

Experts in artificial intelligence said the bot’s capacity to tell such a plausible lie about Hood was not surprising. Convincing lies are in fact a feature of the technology, said Michael Wooldridge, a computer science professor at Oxford University, in an interview Thursday.

“When you ask it a question, it is not going to a database of facts,” he explained. “They work by prompt completion.” Based on all the information available on the internet, ChatGPT tries to complete the sentence convincingly — not truthfully. “It’s trying to make the best guess about what should come next,” Wooldridge said. “Very often it’s incorrect, but very plausibly incorrect.

“This is clearly the single biggest weakness of the technology at the moment,” he said, referring to AI’s ability to lie so convincingly. “It’s going to be one of the defining challenges for this technology for the next few years.”
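To make Wooldridge’s description concrete, a minimal, hypothetical sketch in Python shows what “making the best guess about what should come next” can look like; the words, candidates and probabilities below are invented for illustration and come from no real model:

```python
# A toy "prompt completion" model: it returns whichever continuation is
# most probable, with no notion of whether that continuation is true.
# All words and probabilities here are invented purely for illustration.
toy_next_word_probs = {
    ("the", "mayor", "was"): {
        "convicted": 0.45,  # plausible-sounding, but false in Hood's case
        "praised": 0.35,    # the true account
        "elected": 0.20,
    },
}

def complete(prompt: tuple) -> str:
    """Pick the most probable next word, optimizing plausibility, not truth."""
    candidates = toy_next_word_probs[prompt]
    return max(candidates, key=candidates.get)

print(complete(("the", "mayor", "was")))  # prints: convicted
```

In this invented example, the most statistically likely continuation is also the false one, which is exactly the failure mode Wooldridge describes: the output is incorrect, but plausibly incorrect.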

In a letter to OpenAI, Hood’s lawyers demanded a rectification of the falsehood. “The claim brought will aim to remedy the harm caused to Mr. Hood and ensure the accuracy of this software in his case,” his lawyer, James Naughton, said.

But according to Wooldridge, correcting a specific falsehood that ChatGPT has published is no simple matter.

“All of that acquired knowledge that it has is hidden in vast neural networks,” he said, “that amount to nothing more than huge lists of numbers.”

“The problem is that you cannot look at those numbers and know what they mean. They don’t mean anything to us at all. We cannot look at them in the system as they relate to this individual and just chop them out.”
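Wooldridge’s point is easy to see even at toy scale: a neural network’s learned parameters are plain arrays of numbers with no labels attached. A minimal sketch, assuming only the numpy library and an arbitrary layer size:

```python
import numpy as np

# One layer of a toy neural network: 512 x 512 learned parameters.
# In a real model these values would be set by training; random numbers
# stand in for them here, since either way they are equally unreadable.
rng = np.random.default_rng(seed=0)
weights = rng.standard_normal((512, 512))

# Inspecting the raw values reveals nothing about which of them, if any,
# encode a given fact, so a single falsehood cannot simply be located
# and deleted.
print(weights[:2, :4])
```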

“In AI research we usually call this a ‘hallucination,’” Michael Schlichtkrull, a computer scientist at Cambridge University, wrote in an email Thursday. “Language models are trained to produce text that is plausible, not text that is factual.”

“Large language models should not be relied on for tasks where it matters how truthful the output is,” he added.