AI-Generated Legal Faux Pas: Cohen’s Case Takes a Technological Twist


In a surprising turn of events in the legal world, Michael Cohen, the former attorney for Donald Trump, has become entangled in a unique predicament involving artificial intelligence-generated legal case citations. This development, unfolding in Manhattan’s federal court, underscores the growing impact of AI in various professional fields, including law.

Cohen, who previously pleaded guilty to charges including tax evasion, campaign finance violations, and lying to Congress, has been under court supervision following his incarceration. In an effort to bring this supervision to an early close, his attorney, David M. Schwartz, submitted a motion to the court. However, the motion unexpectedly contained bogus legal case citations, unbeknownst to Cohen at the time of submission.

These fake citations originated from Google Bard, an AI service akin to ChatGPT, which Cohen used for his legal research. Mistaking it for an advanced search engine, Cohen did not realize that Google Bard, much like the chatbot integrated into Microsoft’s Bing, is capable of generating fictional information. This phenomenon, known in AI circles as “hallucination,” led to the inclusion of non-existent legal cases in the motion.

The revelation came to light after Judge Jesse Furman, presiding over the case, queried the origins of these spurious citations. Cohen’s response indicated that he, a disbarred lawyer, had turned to online resources for legal research, having lost access to formal legal databases. His reliance on Google Bard, a product launched earlier this year in response to the burgeoning AI industry, inadvertently led to this unusual legal mishap.

Schwartz, Cohen’s lawyer and longtime friend, has been identified as the one who failed to verify the validity of these citations before submitting them. Cohen, however, has appealed for leniency on Schwartz’s behalf, attributing the oversight to an honest error rather than an intent to deceive.

In an intriguing subplot, Schwartz believed that the drafts of the papers submitted were reviewed by another of Cohen’s attorneys, E. Danya Perry, a notion Perry has firmly denied. Perry, upon discovering the false citations, promptly reported them to the judge and federal prosecutors.

This incident is not isolated. Earlier in the year, another case in the Manhattan federal court grappled with a similar issue, where lawyers were fined for citing fake cases generated by ChatGPT.

The story takes on a broader dimension with Trump’s ongoing legal battles. In a separate New York state court case, Trump pleaded not guilty to 34 felony charges related to falsifying business records at his private company, allegations stemming from hush money payments. The former president has also pleaded not guilty in three other criminal cases, denouncing these charges as a strategy to undermine his potential 2024 presidential campaign.

Cohen’s unintentional misstep with AI-generated legal citations not only highlights the complexities and pitfalls of emerging technologies in professional practices but also adds another layer to the ongoing legal dramas surrounding Trump and his associates. As AI continues to permeate various sectors, this case serves as a cautionary tale of the need for vigilance and verification in the digital age.