The post Federal Judge In Virginia Declines To Sanction Lawyer Who Filed AI-Generated Erroneous Citations appeared first on Above the Law.
In yet another case involving the use — and misuse — of generative AI in legal research, a federal district court judge has declined to impose sanctions on an experienced attorney who submitted a brief containing miscited and misquoted cases generated by artificial intelligence.
Although the attorney, Thad M. Guyer, acknowledged using gen AI tools in preparing his appellate brief, which contained several incorrectly cited cases, U.S. District Judge Thomas T. Cullen, presiding over the whistleblower lawsuit in the Western District of Virginia, characterized the citation errors as an “honest mistake” rather than intentional misconduct.
While the judge in the case, Iovino v. Michael Stapleton Associates, did not issue a written opinion on the sanctions issue, he made his ruling orally during an Oct. 9, 2024, show cause hearing held to determine whether Guyer should be sanctioned. You can read the transcript of the hearing here. As far as I could find, the order has not yet been reported in any other news media.
“Mr. Guyer, to his credit, owned the mistake,” Judge Cullen said during the hearing. “He took sole responsibility, didn’t try to blame [other] counsel involved, and immediately took steps to correct the issues that led to the erroneous case cites. … This was a quirk. It’s one of the downsides of generative AI.”
Erroneous Cites and Quotes
The Oregon-based Guyer, a nationally known whistleblower lawyer with his own Wikipedia page, was representing the plaintiff in the Virginia lawsuit pro hac vice through his association with the Government Accountability Project (GAP), a whistleblower rights organization for which he served earlier in his career as litigation director and general counsel.
Last July, Judge Cullen issued an opinion and order directing Guyer to show cause why he should not be disciplined, after it was discovered that a brief Guyer had filed, objecting to a protective order in a discovery dispute, contained cites that could not be found and cases that appeared not to exist. The brief also contained case quotations that did not exist in the cited cases. Opposing counsel flagged these discrepancies to the court, calling them “ChatGPT run amok.”
At the time the judge issued the show cause order, Guyer had responded to the erroneous citations by filing supplemental authorities. However, Judge Cullen wrote in the order that Guyer “puzzlingly” had not explained where the citations had come from or who was to blame. “This silence is deafening,” the judge wrote.
After the show cause order was issued, Guyer filed a response in which he denied using any fictitious cases but acknowledged using several erroneous citations and quotations. He asserted that the two cases cited by the court as non-existent did, in fact, exist, but were given the wrong citations. As to the quotations, he said that while they did not appear verbatim in the cases, they accurately reflected principles discussed in those cases.
In a sworn declaration filed with the court, Guyer described himself as an early and regular user of gen AI.
“I study, practice and use Generative AI and GPTs (“GPTs”) extensively in my law practice, and on this case,” he wrote. “At age 74, and being an early adopter and proponent of technologies to aid lawyers, I am fully committed to the responsible use of the GPTs in this profound professional transformation of information technologies for legal research, document preparation, and discovery.”
‘The New Normal’
At the show cause hearing in October, the judge indicated that he was, in general, open to attorneys’ use of AI, calling it “the new normal.”
“This court has neither the authority nor the inclination to curb that practice, even if it wanted to,” he said. “If I did that, I would justifiably be perceived by some as overstepping and unwisely decreeing that litigants can’t use this groundbreaking technology in a court of law.”
At the same time, he said, litigants who use gen AI to prepare pleadings and briefs “must still adhere to basic tenets of conduct, including taking reasonable measures to ensure that what they do file in court, including cases cited to bolster legal arguments, is true and accurate to the best of their ability.”
The judge credited Guyer for accepting sole responsibility for his error and for acknowledging that he did not employ the safeguards necessary to ensure that AI was used properly. He also credited Guyer for pointing out additional citation errors that both the court and opposing counsel had missed.
The judge also noted that Guyer and GAP had pledged to implement safeguards going forward, including using databases such as Westlaw to check AI-generated case cites and discussions and assigning a junior attorney to manually check briefs written using gen AI.
“I don’t believe that Mr. Guyer did anything intentionally to mislead this court,” Judge Cullen said. “He has an unblemished track record as a lawyer, practicing at a very high level over the course of his career. He made a mistake, and for the most part, he’s acknowledged that.”
That said, the judge was not ready to let Guyer entirely off the hook. He expressed consternation over comments Guyer made in the media asserting that the cases were real and that, even if the citations were wrong, the court should have been able to figure that out, or his opposing counsel should have alerted him in time to quickly correct the errors.
Also, noting that the Virginia state bar had opened an investigation into Guyer’s conduct, and that Guyer had self-reported the matter to the Oregon bar, the judge directed that the transcript of the hearing be sent to the bar authorities in those two states.
“I want those bar agencies to know the ultimate outcome here and how I view this issue,” the judge said. “Hopefully, that helps them figure out what they need to do, if anything.”