June 13, 2023
AI’s First Court Appearance Is an Epic Fail
Well, that didn’t take long.
A pair of lawyers and their firm have very publicly and quite thoroughly embarrassed themselves by asking ChatGPT for case citations that turn out to have been made up by the trendy AI chatbot.
There are so many points of stupidity and laziness here: the global frenzy to adopt ChatGPT, the inability or failure of attorneys to understand new technology, one lawyer’s unthinking reliance on the work of a colleague, a law firm practicing in an area it is not equipped to handle … Let’s break it all down.
New York City law firm Levidow, Levidow & Oberman was working on what is, in most ways, an entirely unremarkable lawsuit: Roberto Mata v. Avianca. Its client, Roberto Mata, sued the airline Avianca, claiming that, on a 2019 flight from San Salvador to New York’s JFK airport, an airline employee failed to take sufficient care in operating a metal serving cart, which struck Mata in the knee and seriously injured him.
In January 2023, Avianca moved to dismiss the case in the U.S. District Court for the Southern District of New York, asserting that the statute of limitations had expired. In March, Plaintiff’s counsel, Peter LoDuca, replied with an affidavit claiming otherwise. In his affidavit, LoDuca cited decisions from several cases, including Varghese v. China Southern Airlines and Zicherman v. Korean Air Lines, both of which were supposedly decided by the 11th Circuit Court of Appeals.
Avianca’s counsel quickly pointed out there was no evidence that those or other cases cited by Plaintiff’s counsel existed or, if they did exist, stood for the propositions that Plaintiff said they did.
The judge, P. Kevin Castel, was perplexed and ordered LoDuca to file an affidavit attaching copies of the cases he had cited. LoDuca complied — well, sort of. He submitted an affidavit attaching what he claimed were the official court decisions.
Defendant’s counsel again notified the Court that the cases did not exist or did not actually say what Plaintiff’s counsel had represented.
The judge, now rather angry, ordered LoDuca to show up in Court and explain exactly how he came to submit an affidavit — a sworn document — citing and attaching non-existent cases. In response, LoDuca submitted another affidavit saying that he had relied on Steven Schwartz, another attorney in his firm, to research and draft his affidavit. (By way of background, LoDuca and Schwartz have been practicing law for more than 30 years.)
And this is where the story goes from weird to bad. Really bad.
The reason LoDuca was appearing in Court instead of Schwartz is that Schwartz isn’t admitted to practice in federal court; he’s only admitted in state court, where the case started out. To make matters worse, it turns out that even though Levidow, Levidow & Oberman was representing Mr. Mata in federal court, its lawyers didn’t have a subscription that allowed them to search federal cases.
Without access to federal case law, Schwartz turned to what he thought was a new “super-search engine” (his words) that he had heard about: ChatGPT. He typed questions, and the AI responded with what seemed to Schwartz to be genuine case citations, often peppered with friendly bot chat like “hope that helps!” What could possibly go wrong? A good deal, as it turns out: the cases ChatGPT provided Schwartz didn’t actually exist.
On June 8, 2023, the judge held a hearing to determine whether LoDuca, Schwartz, and their firm should be sanctioned.
At this hearing, LoDuca admitted he had neither read the cases cited nor made any legitimate effort to determine if they were real. He argued he had no reason not to rely on the citations Schwartz provided. Schwartz, embarrassed, said he had no reason to believe that ChatGPT wasn’t providing accurate information. Both admitted that, in hindsight, they should have been more skeptical. Counsel for Schwartz argued that lawyers are notoriously bad with technology (personally, I object to this characterization). Throughout the hearing, the packed courtroom gasped.
Cringe-inducing, to be sure. But looking deeper, there’s more to fault here than a tech-challenged attorney blindly relying on some “super search engine” to research case citations. The bigger problem is that, even after Avianca’s lawyers pointed out they couldn’t find any evidence that the cases existed or stood for the propositions for which they were cited, Plaintiff’s attorneys, LoDuca and Schwartz, persisted in trying to establish that the “cases” they relied on were real despite possessing absolutely no evidence for it. Even after Schwartz couldn’t find the cases through a Google search, neither he nor LoDuca checked the publicly available court records to see if the cases were real. Moreover, they seem to have disregarded some pretty clear signs that the “cases” were, at best, problematic. For example, one case begins as a wrongful death case against an airline and, a paragraph or two later, magically transforms into someone suing because he was inconvenienced when a flight was canceled.
Should the duo and their firm be sanctioned? In general, the standard for sanctions is whether those involved acted in bad faith. Everyone here insisted that their conduct did not meet this standard. Rather, they claimed they had simply made an honest mistake, not knowing how ChatGPT worked or that it couldn’t be trusted.
The judge certainly didn’t seem to see things that way. He was appalled that Schwartz and LoDuca didn’t try to verify (or, apparently, even read) the “cases” they cited. In court, the judge read aloud a few lines from one of the fake opinions, pointing out that the text was “legal gibberish.” In addition, while LoDuca, Schwartz, and their firm might not have been trying to lie to the court, it’s hard to believe they fulfilled their obligation to make “an inquiry reasonable under the circumstances,” which is what Rule 11 of the Federal Rules of Civil Procedure requires.
The judge reserved a decision on sanctions, so stay tuned.