Michael Cohen, the former personal attorney of ex-United States President Donald Trump, inadvertently cited fake legal cases in a court filing, a mistake attributed to his use of Google’s AI chatbot, Bard.
The fictitious citations appeared in a motion seeking early termination of the court-ordered supervision that followed Cohen’s release from prison in 2021, as reported by Axios. A federal judge, however, questioned the legitimacy of the three case citations in Cohen’s motion, noting that they did not appear to exist.
In response to the judge’s query, Cohen’s lawyer, Danya Perry, disclosed that Cohen had used Google Bard to conduct open-source research for the motion. Another attorney, David Schwartz, then included the case citations in the filing without verifying them.
Schwartz admitted in a court letter that he had not thoroughly checked the citations because he assumed Perry had provided them, and said he would have reviewed them more rigorously had he known they originated from Cohen. ABC News highlighted the oversight, which reflected a gap in cross-verification within Cohen’s legal team.
The Broader Implications of AI in Legal Research
Cohen, who is not a practising lawyer, admitted he was unfamiliar with evolving legal technology and the risks it carries, and expressed surprise that AI-generated cases had been included in official legal documents without scrutiny. This is not an isolated incident illustrating the pitfalls of using AI for legal research.
Earlier, two New York lawyers were sanctioned for submitting a brief that cited six fake cases generated by ChatGPT in a lawsuit against the airline Avianca. The episode illustrates growing concern over the reliability and appropriate use of AI tools in legal contexts; one of the lawyers involved expressed regret over his reliance on AI for research, underscoring the need for cautious and responsible use of the technology in legal practice.