
Law firm warns against AI hallucinations


The law firm Morgan & Morgan recently warned its attorneys about the dangers of citing AI-generated case law in court filings. The firm’s Chief Transformation Officer, Yath Ithayakumar, stated that blindly relying on AI could result in disciplinary action, up to and including termination. The issue came to light when Morgan & Morgan’s lead attorney on a case against Walmart, Rudwin Ayala, cited eight cases in a court filing that Walmart’s lawyers could not find anywhere except in ChatGPT’s output.

Walmart urged the court to consider sanctions, noting that the cited cases appeared to exist only in the world of AI. Ayala was immediately removed from the case and replaced by his supervisor, T. Michael Morgan, Esq.

Morgan expressed great embarrassment over the fake citations and agreed to cover all fees and expenses associated with replying to the erroneous filing. Morgan emphasized that AI can be dangerous when used carelessly and that attorneys must independently verify all citations.


“The risk that a court could rely upon and incorporate invented cases into our body of common law is a nauseatingly frightening thought,” he said. Lawyers improperly citing AI-generated cases have disrupted litigation in at least seven cases in the past two years. Some have faced sanctions, including fines and mandatory courses on the responsible use of AI in legal applications.

Morgan & Morgan is taking steps to prevent similar issues, including reiterating that AI cannot be solely relied upon for research or for drafting briefs. The firm’s technology team and risk management members are discussing and implementing further policies and safeguards. Harry Surden, a law professor who studies AI legal issues, suggested that the increasing reliance on AI tools requires lawyers to raise their AI literacy so they understand the tools’ strengths and weaknesses.


As of July 2024, 63 percent of lawyers reported having used AI, with 12 percent using it regularly. Ithayakumar told the firm’s lawyers that blind reliance on AI is equivalent to citing an unverified case and that it is their responsibility and ethical obligation to verify AI outputs. Failure to comply may result in court sanctions, professional discipline, and reputational harm.

Image Credits: Photo by Saradasish Pradhan on Unsplash

Cameron is a highly regarded contributor in the rapidly evolving fields of artificial intelligence (AI) and machine learning. His articles delve into the theoretical underpinnings of AI, the practical applications of machine learning across industries, ethical considerations of autonomous systems, and the societal impacts of these disruptive technologies.

About Our Editorial Process

At DevX, we’re dedicated to tech entrepreneurship. Our team closely follows industry shifts, new products, AI breakthroughs, technology trends, and funding announcements. Articles undergo thorough editing to ensure accuracy and clarity, reflecting DevX’s style and supporting entrepreneurs in the tech sphere.

See our full editorial policy.