
Attorneys Face Sanctions As Generative AI Hallucinations Rise

Generative AI Errors From ChatGPT And Claude Put Lawyers At Legal Risk, Courts Take Action

Attorneys face sanctions for using generative AI tools like ChatGPT in court, as hallucinations and privacy risks threaten legal work and client confidentiality.

Attorneys in the U.S. are increasingly under scrutiny for misusing general-purpose AI tools such as OpenAI’s ChatGPT and Anthropic’s Claude in legal work. Courts have sanctioned lawyers for citing “imaginary” cases, relying on fabricated court decisions, and providing improper citations.

Damien Charlotin, lawyer and research fellow at HEC Paris, maintains a database of AI hallucination cases, tallying 376 incidents to date, 244 of which occurred in the U.S.

“There is no denying that we were on an exponential curve,” Charlotin told Fortune.

Sean Fitzpatrick, CEO of LexisNexis North America, UK & Ireland, warned about the rising stakes: “We have a situation where these (generative AI models) are making up the law. The stakes are getting higher, and that’s just on the attorney’s side.” He added, “I think it’s only a matter of time before we do see attorneys losing their license over this.”

General-purpose AI models cannot reliably draft courtroom-ready motions, particularly in sensitive areas such as Medicaid coverage, Social Security benefits, or criminal prosecutions. Entering confidential client information into them also risks breaching attorney-client privilege. Frank Emmert, executive director at Indiana University’s Center for International and Comparative Law, noted:
“Potentially you could find client names… or at least… information that makes the client identifiable.”

Controlled AI systems like Lexis+ AI offer a safer alternative. These tools encrypt prompts, do not train on customer data, and operate within a “walled garden” of proprietary content, reducing hallucination risks. Still, Fitzpatrick emphasizes that ultimate responsibility for maintaining privilege remains with attorneys.

Experts urge the legal sector to embrace AI education. Emmert said, “Starting in academia but continuing in the profession, we need to train every lawyer, every judge, to become masters of artificial intelligence—not in the technical sense, but using it.”
