The dangers of Artificial Intelligence: a lawyer used ChatGPT and it made up his entire defense

Now that ChatGPT is so popular, you should be aware of its risks. It’s a tool that can be useful, but it can also play tricks on you, as in the case of this lawyer, whose entire defense was based on AI fabrications.

This lawyer used the OpenAI tool to carry out legal research, believing he was dealing with just another search engine. What he did not imagine is that it would leave him red-faced by inventing all of the case law it presented. One more example that Artificial Intelligence carries its risks.

ChatGPT invented a lawyer’s defense

The case in question began like many others, especially in the United States. A man named Roberto Mata sued the airline Avianca, saying he was injured when a metal service cart struck his knee during a flight to Kennedy International Airport in New York.

When Avianca asked a federal judge in Manhattan to dismiss the case, Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen supposedly relevant court decisions, among them Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and Varghese v. China Southern Airlines.

daniel feldmann (@d_feldman), May 27, 2023:

A lawyer used ChatGPT to do “legal research” and cited a number of nonexistent cases in a filing, and is now in a lot of trouble with the judge 🤣 https://t.co/AJSE7Ts7W7
There was only one problem: no one, not the airline’s lawyers, not even the judge himself, could find the decisions or quotes cited and summarized in the brief. The reason was simple: ChatGPT had invented all of it.

The attorney who prepared the brief, Steven A. Schwartz of Levidow, Levidow & Oberman, told the court on Thursday in an affidavit that he had used Artificial Intelligence to do his legal research: “a source that has revealed itself to be unreliable.” Schwartz, who has practiced law for at least 30 years, said he is “very sorry” to have trusted ChatGPT “and will never do so in the future without absolute verification of its authenticity.”

It is common for lawyers to cite previous cases at trial. Where precedent exists, it can be used to steer the opinion of the court so that the ruling resembles the one cited. The problem is that by relying on the OpenAI chatbot, the lawyer stumbled into what looked like an attempt at deception, although he defends himself by saying he was unaware that the tool was not akin to a search engine, and that he had even asked the program to confirm that the cases were real.

Why does ChatGPT make stuff up?

The lawyer’s serious error stems from treating ChatGPT as just another search engine. The tool developed by OpenAI is built on Large Language Models trained through machine learning. These models can read, summarize and translate text, and predict the next words in a sentence, which allows them to generate output that resembles how humans speak and write. Essentially, whatever the model does not know, it makes up. It is not aware of its own ignorance; based on the patterns collected during training, it may simply “believe” that it knows.
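To picture what “predicting the next word” means in practice, here is a minimal sketch in Python, assuming the Hugging Face transformers library and the open GPT-2 model (a stand-in chosen for illustration, since ChatGPT itself is not publicly downloadable). It asks the model for the most likely continuations of a sentence; note that no fact-checking of any kind is involved.

# Minimal sketch of next-word prediction with GPT-2 (an illustrative
# substitute for ChatGPT, which is not publicly downloadable).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The court ruled that the airline was"
inputs = tokenizer(prompt, return_tensors="pt")

# The model assigns a probability to every possible next token; it ranks
# plausible continuations, with no notion of whether any of them is true.
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for token_id, p in zip(top.indices, top.values):
    print(f"{tokenizer.decode(int(token_id))!r} -> {p:.3f}")

The output is simply the five tokens the model considers most probable after the prompt, which is all a language model ever computes: plausibility, not truth.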

GPT-3 was built on patterns drawn from vast amounts of text on the web, including human conversations. GPT-4 aims to be more accurate, but OpenAI has never hidden that the technology is not perfect and sometimes suffers from these “hallucinations”.

ChatGPT does not search the web for information on recent events, cannot address current affairs, and its knowledge stops at a fixed cutoff, the year 2021 (at least in the free version).
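As a quick illustration of that limitation, here is a hedged sketch of querying the model through the OpenAI API, assuming the openai Python library as it existed in mid-2023, when this case unfolded (the ChatCompletion interface; the placeholder key and the question are hypothetical choices, not from the article):

import openai

openai.api_key = "sk-..."  # placeholder; substitute your own API key

# Ask about something after the model's training cutoff.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What happened in the news this week?"}],
)

# With no web access, the model can only admit ignorance about recent
# events, or, worse, produce a confident-sounding guess.
print(response.choices[0].message.content)

Whether the reply is an honest “I don’t know” or an invented answer depends entirely on the model, which is exactly the trap this lawyer fell into.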
