Opinion
When generative AI gets it wrong, the wheels can come off
‘Artificial information’ lands attorneys in a soup in a lawsuit where they decided to set store by AI
June 12, 2023 | 12:48 AM
At a time when substantial sections of the global urban population are in awe of the prowess of artificial intelligence (AI)-powered chatbots in various walks of life, what happened in a Manhattan federal court last Thursday is a classic example of the flip side of relying on 'artificial information'. Two apologetic lawyers, responding to an angry judge, blamed ChatGPT for tricking them into including fictitious legal research in a court filing. Attorneys Steven A Schwartz and Peter LoDuca are facing possible punishment over a filing in a lawsuit against an airline that included references to past court cases Schwartz thought were real but were actually invented by the chatbot.

Schwartz's justification was that he used the groundbreaking program while searching for legal precedents to support a client's case against the Colombian airline Avianca for an injury incurred on a 2019 flight. The chatbot, which has fascinated the world with its essay-like answers to user prompts, suggested several cases involving aviation mishaps that Schwartz had not been able to find through the usual methods at his law firm. But there was a catch: several of those cases were not real, or involved airlines that did not exist. Schwartz told US District Judge P Kevin Castel he was "operating under a misconception ... that this website was obtaining these cases from some source I did not have access to." He said he "failed miserably" at doing follow-up research to ensure the citations were correct. "I did not comprehend that ChatGPT could fabricate cases," Schwartz said.

A growing number of lawyers and law firms have been exploring the use of generative AI. Schwartz's messy collision with the technology made headlines as an early illustration of its potential pitfalls.
Prompted in part by the New York case, a federal judge in Texas last week issued a requirement for lawyers in cases before him to certify that they did not use AI to draft their filings without a human checking their accuracy.

Judge Castel seemed both baffled and disturbed by the unusual occurrence, and disappointed that the lawyers did not act quickly to correct the bogus legal citations when they were first alerted to the problem by Avianca's lawyers and the court. Avianca had pointed out the bogus case law in a March filing. The judge confronted Schwartz with one legal case invented by the computer program: it was initially described as a wrongful death case brought by a woman against an airline, only to morph into a legal claim about a man who missed a flight to New York and was forced to incur additional expenses.

Schwartz also told the court that he had suffered personally and professionally as a result of the blunder and felt "embarrassed, humiliated and extremely remorseful." He said that he and the firm where he worked, Levidow, Levidow & Oberman, had put safeguards in place to ensure nothing similar happens again. LoDuca, another lawyer who worked on the case, said he trusted Schwartz and did not adequately review what he had compiled. Ronald Minkoff, an attorney for the law firm, told the judge that the submission "resulted from carelessness, not bad faith" and should not result in sanctions. He said lawyers have historically had a hard time with technology, particularly new technology, "and it's not getting easier."

Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he introduced the Avianca case during a conference last week that drew dozens of participants, in person and online, from state and federal courts in the US, including the Manhattan federal court.
He said the subject drew shock and befuddlement at the conference, and explained that this was the first documented instance of potential professional misconduct by an attorney using generative AI. Shin said the case highlights the dangers of using promising AI technologies without understanding the risks.