Sunday 31 December 2023

Disgraced Former Lawyer Michael Cohen and His Attorney Busted Using Google Bard AI to Generate Fake Case Citations

Disgraced former attorney for President Trump, Michael Cohen, provided his counsel with case citations meant to establish precedent for his motion to terminate his supervised release early.

Cohen pleaded guilty in 2018 to campaign finance violations and to lying to Congress. His attorney submitted the citations to the court unchecked.

And they were fake. The cases do not exist.

Cohen, who pleaded guilty to lying to Congress, reversed his testimony regarding President Trump asking him to inflate his assets. When asked by Trump attorney Cliff Robert, “So Mr. Trump never asked you to inflate the numbers on his financial statement?” Cohen responded, “Correct.”

Robert immediately asked Judge Engoron to dismiss the case after Cohen, a key witness, told the court that Trump never instructed him to inflate his assets. That motion was, of course, denied by the judge, who had already determined Trump was in the wrong before the trial even began.

President Trump immediately got up, walked out of the courtroom, and addressed the press.

Now This…

On December 12th, U.S. District Judge Jesse Furman of the Southern District of New York questioned the validity of three citations claiming that precedent set by the U.S. Court of Appeals for the Second Circuit would allow Cohen to terminate his 2018 court-ordered supervised release early. Judge Furman gave Cohen’s counsel, David M. Schwartz, until December 19th to provide the decisions cited in the filing, according to a December 14th report from Newsweek.

Judge Furman said that if they could not provide the citations, they would be required to give a “thorough explanation” of how the motion came to cite “cases that do not exist and what role, if any, Mr. Cohen played in drafting or reviewing the motion before it was filed.”

In a December 28th letter to Judge Furman, E. Danya Perry, who represents Michael Cohen “with respect to his reply letter,” wrote:

To summarize: Mr. Cohen provided Mr. Schwartz with citations (and case summaries) he had found online and believed to be real. Mr. Schwartz added them to the motion but failed to check those citations or summaries. As a result, Mr. Schwartz mistakenly filed a motion with three citations that—unbeknownst to either Mr. Schwartz or Mr. Cohen at the time—referred to nonexistent cases. Upon later appearing in the case and reviewing the previously-filed motion, I discovered the problem and, in Mr. Cohen’s reply letter supporting that motion, I alerted the Court to likely issues with Mr. Schwartz’s citations and provided (real) replacement citations supporting the very same proposition. ECF No. 95 at 3. To be clear, Mr. Cohen did not know that the cases he identified were not real and, unlike his attorney, had no obligation to confirm as much. While there has been no implication to the contrary, it must be emphasized that Mr. Cohen did not engage in any misconduct.

The letter claims that Cohen was attempting to “assist” his attorney and “conducted open-source research for cases that reflected what he anecdotally knew to be true.” The non-existent cases Cohen cited were generated by Google Bard. Cohen claimed that he believed Google Bard was more of a “super-charged search engine, not a generative AI service like Chat-GPT.” Cohen had previously used Google Bard for research purposes and maintains that he “did not appreciate its unreliability as a tool for legal research.”

Perry’s letter shifts the blame from Cohen to his attorney, David M. Schwartz: Cohen had no “ethical obligation to verify the accuracy” of the citations, but Schwartz did, which Schwartz has admitted.

While the blame ultimately lies with Cohen’s counsel to validate that the filing is accurate, it is a bit concerning that Cohen admitted to previously using Google Bard for research, especially considering the number of dedicated legal AI research tools available. When asked about the quality of its information, specifically, “do you have a disclaimer that Google Bard may produce made up references?”, Google Bard responded that it is a “conversational AI or chatbot trained to be informative and comprehensive.”

According to Google:

AI hallucinations are incorrect or misleading results that AI models generate. These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model. AI hallucinations can be a problem for AI systems that are used to make important decisions, such as medical diagnoses or financial trading.
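The practical lesson is that a chatbot’s output is a starting point, never a source: every citation it produces has to be confirmed against an authoritative database before it goes anywhere near a court filing. Below is a minimal Python sketch of what that check might look like. The lookup URL and the “matches” response field are hypothetical placeholders (not a real service), and the sample citation is a made-up stand-in for the kind of case name at issue here.

```python
import requests

# Hypothetical lookup endpoint -- stands in for a real legal-research database.
LOOKUP_URL = "https://legal-db.example/api/lookup"

def citation_exists(citation: str) -> bool:
    """Return True only if the citation resolves to a real, published case."""
    resp = requests.get(LOOKUP_URL, params={"cite": citation}, timeout=10)
    resp.raise_for_status()
    # "matches" is an assumed field in this sketch's response format.
    return bool(resp.json().get("matches"))

# Made-up citation, for illustration only.
for cite in ["United States v. Placeholder, 123 F.4th 456 (2d Cir. 2023)"]:
    verdict = "verified" if citation_exists(cite) else "NOT FOUND, do not file"
    print(f"{cite}: {verdict}")
```

Had anything like this simple verification step been run against the three citations in Schwartz’s motion, all three would have come back as not found.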

