OpenAI Gets Sued for Defamation – But Is ChatGPT to Blame or Is It User Error?

Mark Walters, host of Armed American Radio in Canton, Georgia, is the first person to sue OpenAI over false claims made by ChatGPT.

The suit alleges that ChatGPT provided Fred Riehl, the editor-in-chief of the gun publication AmmoLand, with false material claiming that Walters had embezzled funds from a gun rights non-profit. The AI even went so far as to fabricate passages and case numbers for the article.

This comes on the heels of a Manhattan lawyer appearing before a judge to face possible sanctions for using ChatGPT to draft a motion in a Federal District Court; the motion cited other, similar legal cases in support of his argument, cases that didn't actually exist.

And what about the professor at Texas A&M University who failed his entire class after falsely accusing them of using ChatGPT to write their end-of-term essays, because ChatGPT had claimed it generated the content his students had written?

What do all of these stories have in common? They are all examples of people who don't really understand how to use the tool and seem to be completely unaware of its limitations.

Hallucination has long been (well, for as long as they've been around) a recognized issue with LLM chatbots. This is not a new phenomenon.

Programs like ChatGPT and other large language models analyze which fragments of text are likely to follow other fragments of text in order to produce a realistic-sounding response, based on a statistical model built from training data sets that have scraped billions of bits of information from all over the Internet.

In essence, it is this gathering of information from billions of sources and connecting of the dots that makes the output sound realistic, but those dots don't necessarily connect in terms of factual information.
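To make that concrete, here is a minimal, purely illustrative sketch of what next-word prediction boils down to. The phrases, probabilities, and case number below are invented for the example; real models use billions of learned parameters rather than a lookup table. The key point is the same: the program samples whichever continuation is statistically likely, and nothing in the process checks whether the result is true.

```python
import random

# A toy "language model": for each prompt fragment, the next words that tended
# to follow it in training text, with made-up probabilities. Nothing in this
# table or in the sampling step verifies that a continuation is factually true.
# (All phrases and numbers here are invented purely for illustration.)
next_word_probs = {
    "the case number is": [("23-cv-01234", 0.4), ("unknown", 0.35), ("sealed", 0.25)],
    "the defendant was": [("acquitted", 0.5), ("convicted", 0.3), ("fined", 0.2)],
}

def generate_next_words(prompt: str) -> str:
    """Return a statistically likely continuation for the prompt."""
    candidates = next_word_probs.get(prompt, [("[no data]", 1.0)])
    words, weights = zip(*candidates)
    # random.choices samples according to the weights: the output is a fluent
    # guess about what usually comes next, not a verified fact.
    return random.choices(words, weights=weights, k=1)[0]

print("the case number is", generate_next_words("the case number is"))
```

Fluency, in other words, is not the same thing as accuracy.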

None of the highly publicized cases of people getting into trouble because they relied on misinformation from ChatGPT is the program's fault.

No one has ever claimed that ChatGPT's output is 100% factual. There are cautions on the first page of most AI chatbot programs. ChatGPT warns users no fewer than three times that the information is not always factual: in a pop-up immediately upon signing in, and again in two separate locations on the main page.

The radio host who was wrongly implicated in illegal activities should indeed be angry that an article was published accusing him of things that didn't happen. But why sue OpenAI and not the editor-in-chief who published an article whose only 'source' was ChatGPT? I suspect it has more to do with OpenAI having deeper pockets than AmmoLand.

The lawyer who used ChatGPT to write his motion thought it was more like a super-charged search engine and told the court that he was “unaware that its content could be false.”

The professor who failed all his students thought he could upload each of his students' essay answers to ChatGPT and use the program to detect whether the text was AI-generated or original writing. ChatGPT claimed to have written text that it had not.

If you are in a position where your actions can forever detrimentally affect the lives of others, as in the case of court rulings, academic grading, or publishing information about someone, you would think you wouldn't risk using a program you don't understand.

ChatGPT is not responsible for people misusing it and not doing their jobs properly. You wouldn't blame the car manufacturer if you got rear-ended; you'd blame the driver of the car that hit you.
