OpenAI is facing what appears to be its first defamation lawsuit over false information generated by its ChatGPT model.
The lawsuit was filed by Mark Walters, a radio host in Georgia, who claims that ChatGPT produced inaccurate information stating that he had been involved in defrauding and embezzling funds from a non-profit organization.
The information was generated in response to a request from journalist Fred Riehl. Walters filed the case on June 5th in Georgia’s Superior Court of Gwinnett County, seeking unspecified monetary damages from OpenAI.
“ChatGPT’s allegations concerning Walters were false and malicious, expressed in print, writing, pictures, or signs, tending to injure Walters’ reputation and exposing him to public hatred, contempt, or ridicule,” the lawsuit said.
Walters alleges that the chatbot provided false information to Fred Riehl, the editor-in-chief of the gun publication AmmoLand. Riehl had requested a summary of the case Second Amendment Foundation v. Ferguson, which involved accusations against Washington State’s Attorney General Bob Ferguson for allegedly suppressing the activities of a gun rights foundation, per Bloomberg.
However, according to the lawsuit, the chatbot provided Riehl with a summary that falsely stated Walters was being sued for “defrauding and embezzling funds” from the Second Amendment Foundation as its chief financial officer and treasurer. The lawsuit emphasizes that every statement regarding Walters in the summary is untrue.
Walters clarifies that he is not involved in the Ferguson case and has never been employed by the Second Amendment Foundation. Furthermore, the case itself has no connection to financial accounting allegations against anyone.
The incident adds to the growing concerns surrounding the truthfulness and reliability of AI chatbot outputs. Recent controversies have highlighted instances of chatbots providing confidently inaccurate responses.
In April, an Australian mayor announced his intention to sue OpenAI after ChatGPT falsely claimed he had been imprisoned for bribery. Additionally, a New York lawyer who used ChatGPT to draft legal briefs faces potential sanctions after citing non-existent case law.
In Riehl’s request to ChatGPT, he had asked for the complete text of the Second Amendment Foundation’s complaint. However, the chatbot allegedly generated a completely fabricated summary that bore no resemblance to the actual complaint, including an erroneous case number.
The legal implications of holding a company accountable for false or defamatory information generated by AI systems are unclear. In the US, Section 230 traditionally shields internet firms from liability for third-party content hosted on their platforms. It remains uncertain whether these protections extend to AI systems, which not only link to data sources but also generate new information, including false data, according to The Verge.
Although OpenAI includes a small disclaimer on ChatGPT’s homepage acknowledging that the system may occasionally generate incorrect information, Walters’ defamation lawsuit in Georgia could serve as a test case for this legal framework.
The Verge quoted Eugene Volokh, a law professor specializing in AI system liability, as saying that while he believes libel claims against AI companies are legally viable in principle, this particular lawsuit may be challenging to sustain.
Volokh pointed out that Walters did not notify OpenAI about the false statements, denying the company an opportunity to correct them, and that there were no actual damages resulting from ChatGPT’s output. Nevertheless, the outcome of the case remains of interest.