AI in law: how a chatbot became an accomplice in the crime of a former Trump lawyer

Why it is still too early to trust neural networks with drafting court documents

Court documents released last week revealed an unusual use of artificial intelligence by Michael Cohen, the former lawyer of ex-US President Donald Trump. Cohen, who was convicted in 2018 of tax evasion and campaign finance violations, admitted to using Google Bard, a generative AI chatbot, to produce citations to court decisions.

His lawyer, David Schwartz, included those citations in a motion to cut short Cohen's term of supervised release. It later emerged, however, that the cited cases did not exist. Judge Jesse Furman noted that none of the referenced decisions could be found and demanded an explanation.

Cohen, who is not a practicing lawyer, said he was unaware of the risks modern technologies pose in legal work and had treated Google Bard as an advanced search engine rather than a content generation service. He also expressed surprise that his legal team had used the citations without verifying them.

The case reflects the growing use of AI in legal practice, not only in the United States but elsewhere. In Germany, for example, the Frankfurt Regional Court suspects law firms of using artificial intelligence to inflate the number of lawsuits filed in mass-claim cases.

Earlier, two New York lawyers were fined for using ChatGPT to prepare filings that included fictitious court decisions in a case against the Colombian airline Avianca.

These incidents underscore the risks of using AI in the legal field and the need for a deeper understanding of, and stricter controls on, such technologies.