How Generative AI Is Reshaping Professional Liability Risk for Law Firms

In September 2025, a California court ordered an attorney to pay a $10,000 fine for filing a state court appeal that contained 21 fake quotations generated by ChatGPT, according to NBC News. While the fine was the largest to date issued over artificial intelligence (AI) fabrications, it’s certainly not the first instance of AI hallucinations slipping through the cracks in the legal world—and it’s not going to be the last, either.
Many law firms are increasingly relying on AI applications, says Sean Burke, senior broker at Jencap, and for good reason. “It’s really a game-changer,” he says. “You have reduced costs, improved turnaround times and access to analytics. When it’s utilized correctly, it’s incredibly powerful.”
But with every new opportunity comes new risks, which for law firms include “dependence on AI for accuracy and reliability, whether your professional judgment is being eroded, and whether you’re meeting the standard of care and ethical codes that the American Bar Association (ABA) and equivalent regulators require of law firms,” Burke says.

From an insurance perspective, the vast majority of law firm professional liability policies do not have express AI exclusions, Burke says. “Underwriters are indicating that when firms use AI appropriately, coverage will extend to claims that involve an AI-related component.”
In an extreme scenario, “if a firm files a brief containing citations to six fictitious cases and the court determines that adequate verification was not performed, the question becomes whether such conduct constitutes a legal service rendered to the client. Does it satisfy the standard of care, or does it place the attorney outside the bounds of recognized legal practice?” Burke says.
When it comes to coverage in those instances, “I think a carrier would say, ‘This is not the intent of providing insurance for negligence in the performing of a legal service, because you’re outside the construct of a legal service at that point,’” Burke says. “I don’t think we need policies to evolve to address AI coverage on an express basis, because the policies will perform in the utilization of AI as a tool where human oversight, review and protocols are in place.”
Of course, AI’s continued development may mean the discussion looks entirely different in a year. “The pace of generative AI will drive certain regulatory oversight or standards that may bring certain exposures that don’t exist today,” he says.
Independent agents can tell their law firm clients what to expect from underwriters by asking “How are you utilizing AI?” on renewals, Burke says. “For now, it remains largely a check-the-box inquiry as underwriters continue to monitor how AI adoption evolves.”
Burke recommends that law firms implement AI governance frameworks that define permissible use and verification standards while ensuring audit trails of AI-generated content are maintained. He also stresses the importance of using closed-system AI, as open-system AI’s use of inputted information introduces credibility and plagiarism concerns, as well as client confidentiality issues.
Agents should also encourage law firms to disclose AI usage to clients. “Client sign-off is a great protection against a future claim,” he says.
When law firms have risk strategies in place, “I suspect they won’t have to worry,” Burke says. “Firms that adopt appropriate AI controls should remain well-positioned. We anticipate that over the next 12 to 18 months, firms without documented protocols will fall out of preferred-risk tiers and will not secure the most competitive terms.”
AnneMarie McPherson Spears is IA news editor.