Byte of Prevention Blog

Author: Will Graebe

AI and the Unauthorized Practice of Law


Artificial intelligence has already forced the legal profession to confront a number of uncomfortable questions. Can lawyers rely on AI-generated research? Who is responsible when AI produces incorrect information? And how much supervision is required when these tools are used in legal practice?

A new lawsuit raises the question of whether an AI company can be held liable for practicing law without a license. The suit, filed by Nippon Life Insurance Co. of America against OpenAI in federal court in Illinois, alleges that ChatGPT effectively engaged in the unauthorized practice of law. The dispute arises from a claimant who had previously settled a disability benefits case with the company.

The claimant, Graciela Dela Torre, reached a settlement with Nippon Life in January 2024. As part of that settlement, she signed an agreement releasing the company from further legal claims related to the matter. In other words, the dispute was supposed to be finished.

After the settlement, though, Dela Torre began to suspect that something was wrong with the agreement. She contacted her attorney and asked about reopening the case. Her lawyer explained that she had signed a release of claims and that the case had already been dismissed.

At that point, things took an unusual turn. According to the complaint, Dela Torre uploaded her lawyer’s response into ChatGPT and asked the system whether she was being “gaslighted.” ChatGPT reportedly affirmed her suspicion.

From there, the situation escalated quickly. The complaint alleges that Dela Torre fired her attorney and began using ChatGPT to generate legal arguments and draft court filings. Over time, she filed 21 motions, a subpoena, and multiple notices and statements in the case.

Not surprisingly, the court denied her efforts to reopen the settlement. Undeterred, she allegedly used ChatGPT again to draft a new lawsuit against Nippon Life.

Nippon now argues that OpenAI should bear responsibility for what happened next. The complaint alleges that ChatGPT provided legal drafting assistance and encouraged litigation strategies that were inconsistent with the settlement agreement and that “served no legitimate legal or procedural purpose.” According to the company, this conduct amounts to the unauthorized practice of law.

OpenAI, for its part, has responded that the lawsuit has no merit. The company also points to its usage policies, which state that users should not rely on ChatGPT for legal or medical advice without the involvement of a licensed professional.

Regardless of how the lawsuit ultimately plays out, the case highlights a growing tension between rapidly advancing AI tools and longstanding professional licensing rules. Generative AI systems can produce legal-sounding arguments and documents with remarkable ease. But ease of production does not mean the output is accurate or appropriate.

For lawyers, the lesson is a familiar one. Technology can be a powerful tool, but it is not a substitute for professional judgment. And for courts and regulators, this case may be an early glimpse of a much larger issue that the legal system will likely be grappling with for years to come.

For lawyers in North Carolina, the situation described in the lawsuit highlights a concept that already appears in the State Bar’s ethics guidance on generative AI. In 2024 FEO 1, the North Carolina State Bar explained that generative AI tools should be treated much like a nonlawyer assistant within a law firm. In other words, AI can assist with tasks such as research, drafting, or brainstorming, but a licensed lawyer must supervise the work and remain responsible for the final product. Just as a lawyer cannot delegate legal judgment to a paralegal or assistant without oversight, the same principle applies to AI tools that generate legal content.

That idea helps illustrate the tension raised by the Nippon lawsuit. In a traditional law firm setting, if a nonlawyer assistant began generating litigation strategies that encouraged a client to violate a settlement agreement, a supervising lawyer would be expected to step in quickly. But when a litigant interacts directly with an AI system outside the supervision of a lawyer, that layer of professional oversight disappears.
