A federal judge in Alabama this week publicly reprimanded attorneys who had been hired by the state of Maryland to defend conditions at a jail in Baltimore for citing cases that turned out to be made up by the artificial intelligence program ChatGPT.
In a 51-page order on Wednesday, U.S. District Judge Anna M. Manasco referred Matthew Reeves and William Lunsford to the Alabama State Bar and other licensing authorities. They’re both partners at Butler Snow LLP, which gained notoriety for representing prison systems in the Deep South.
Reeves and Lunsford are required to provide a copy of her order to their clients, opposing lawyers and judges in every state and federal court case in which they’re counsel of record.
“Fabricating legal authority is serious misconduct that demands a serious sanction,” Manasco wrote. “In the court’s view, it demands substantially greater accountability than the reprimands and modest fines that have become common as courts confront this form of AI misuse.”
Maryland hired the law firm in 2023 to take over defending the state in a decades-old class action lawsuit over medical and mental health care at the Baltimore Central Booking & Intake Center.
A spokesperson for the Maryland Office of the Attorney General, Kelsey Hartman, declined to comment.
Reeves withdrew from the case. Lunsford is still listed as one of the active attorneys, according to court records.
Neither Reeves nor Lunsford could immediately be reached for comment on Thursday.
In court documents, Reeves acknowledged that he used ChatGPT in an unrelated lawsuit in Alabama to generate legal citations but did not verify their accuracy.
He stated in court that he initially started using AI for personal reasons, including to conduct research about “dietary-related matters.”
Meanwhile, Lunsford declared under penalty of perjury that he never used AI to generate legal citations.
Lunsford reported that the law firm has been “proactive in investigating, warning against and attempting to establish firm guidance on the use of the ever-evolving availability of products generated utilizing artificial intelligence.”
“I can state with certainty that our Firm has made the limitations upon the use of artificial intelligence abundantly clear to all of our attorneys,” Lunsford wrote. “The conduct reported in this instance flies in the face of known Firm policy, which the Firm will handle internally.”