Agent Charter: Creating an AI Governance Framework to Ensure Operational Reliance

By Steve Forte

Over the past two years, independent insurance agencies have rapidly moved from asking “What is artificial intelligence (AI)?” to “Where do we deploy it next?” What began as isolated experiments, such as drafting emails, summarizing submissions or extracting data from PDFs, has evolved into something far more consequential: operational reliance. AI is no longer a novelty—it is infrastructure.

The scale of this shift is underscored by recent data indicating that generative AI could unlock $50–$70 billion in annual value in the insurance industry, primarily through efficiencies in underwriting, claims and distribution, according to a 2026 McKinsey & Company report.

For the independent agent, this translates to a massive leap in operational capacity, with full AI adoption across the insurance value chain jumping from 8% in 2024 to 34% by mid-2025, according to Manu Mazumdar, director of insurance research at Conning.

Agentic AI defines this next phase: goal-driven systems capable of autonomously planning and executing multi-step workflows. These systems don’t just summarize a loss run. They ingest it, structure it, identify gaps and prepare a submission package. They go beyond flagging a renewal to analyze changes in exposure and recommend strategic adjustments.

However, as agencies begin to rely on AI to perform real work, a new leadership challenge emerges. AI is becoming a coworker. And like any coworker, it requires structure, accountability and oversight.

Yet, many agencies are approaching AI implementation as if it were a plug-and-play IT project. The result is a growing governance gap: a disconnect between what AI is capable of doing and how it is being managed operationally. Without a framework to guide decision-making, escalation and accountability, even the most promising AI initiatives risk stalling—or worse, creating hidden exposure.

Why Governance Matters in Managing AI

The greatest risk of AI in the agency environment is not dramatic failure. It’s a quiet failure, the kind that slips through unnoticed until it compounds into a larger issue.

One of the most common forms of this is “authority creep.” As AI systems consistently produce acceptable outputs, staff begin to trust them implicitly. Oversight diminishes, review becomes cursory and, eventually, validation disappears altogether.

This is not hypothetical; research summarized by Stanford Human-Centered Artificial Intelligence (HAI) in 2023 found that users were significantly more likely to accept incorrect outputs when they perceived the system as authoritative—a phenomenon known as automation bias or overreliance.

Consider a scenario where AI misinterprets a loss run and underrepresents claim severity. That error feeds into a submission. The submission influences carrier pricing. The pricing affects placement decisions. What began as a small misread becomes a chain reaction across underwriting, client advisory and coverage adequacy.

These failures are interconnected and uniquely difficult to detect because each step appears reasonable in isolation. This is where governance becomes essential.

A governance framework ensures that discrepancies are caught early, often by flagging patterns or variances that human reviewers might miss during manual checks. Governance ensures that speed does not outpace accountability.

Further, errors and omissions (E&O) exposure is fundamentally about discrepancies: what was intended versus what was executed. AI introduces speed and scale, but it also introduces new vectors for misalignment.

When implemented with the right guardrails, these AI systems are powerful. Evidence from the 2026 McKinsey report “AI in insurance: Understanding the implications for investors” suggests that AI-enabled client engagement has reduced customer churn by up to 50% in specific segments.

Defining Guardrails

A 2025 survey from the National Association of Insurance Commissioners (NAIC) revealed that 88% of auto insurers and 70% of home insurers had adopted AI. Yet one-third of surveyed health insurers still did not regularly test their models for bias or discrimination. This gap has led to increased scrutiny from state commissioners.

Further, in March 2025, Wisconsin became the 24th state, in addition to Washington, D.C., to formally adopt the NAIC “Model Bulletin on the Use of Artificial Intelligence Systems.” This bulletin is prescriptive: insurers and their representatives must maintain a documented AI program that demonstrates clear oversight and auditing processes. For an independent agent, having a documented governance framework is an increasingly essential defense against regulatory and E&O exposure. This is where the concept of an agency charter comes into play.

An agency charter is a documented set of rules that defines:

  • What AI is allowed to do.
  • Which systems it can access.
  • What decisions it can make independently.
  • When it must escalate to a human.
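
One way to picture such a charter is as data rather than prose. The sketch below, in Python, shows the four rule categories as a simple lookup; all task and system names here are hypothetical placeholders an agency would replace with its own.

```python
# A minimal sketch of an agency charter expressed as data.
# Every rule name and value is an illustrative assumption, not a standard.
AGENCY_CHARTER = {
    "allowed_tasks": ["document_classification", "data_normalization"],
    "accessible_systems": ["agency_management_system", "document_store"],
    "independent_decisions": ["routing", "field_extraction"],
    "escalate_when": ["binding_coverage", "coverage_interpretation"],
}

def must_escalate(task: str) -> bool:
    """A task goes to a human if the charter requires escalation,
    or if the task is simply not on the approved lists."""
    if task in AGENCY_CHARTER["escalate_when"]:
        return True
    approved = (AGENCY_CHARTER["allowed_tasks"]
                + AGENCY_CHARTER["independent_decisions"])
    return task not in approved

print(must_escalate("binding_coverage"))  # True: charter requires a human
print(must_escalate("field_extraction"))  # False: AI may proceed
```

The key design choice is the default: anything not explicitly approved escalates, so new AI capabilities never gain authority by omission.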

At its core, the charter answers a simple but critical question: Where does automation end and human judgment begin? Here are three areas agency leadership must define in their charter:

1) Established decision points. Every workflow contains moments of decision. In a manual process, these decisions are often implicit, made based on experience or habit. In an AI-enabled workflow, they must be made explicit.

Agency leadership must define:

  • Where AI can make deterministic decisions. For example, data normalization and document classification.
  • Where AI can make probabilistic recommendations, including risk scoring and market matching.
  • Where human intervention is required, such as coverage interpretation and binding authority.

Without these defined decision points, AI operates in a gray area, neither fully autonomous nor properly supervised.
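
Making those decision points explicit can be as simple as a lookup table. The sketch below assumes hypothetical workflow-step names drawn from the examples above; the important property is the default behavior for anything unlisted.

```python
# Explicit decision points as a lookup table.
# Step names and mode assignments are illustrative assumptions.
DECISION_POINTS = {
    "data_normalization":      "deterministic",   # AI decides outright
    "document_classification": "deterministic",
    "risk_scoring":            "probabilistic",   # AI recommends only
    "market_matching":         "probabilistic",
    "coverage_interpretation": "human",           # human judgment required
    "binding_authority":       "human",
}

def decision_mode(step: str) -> str:
    # Unknown steps default to human review rather than silent autonomy,
    # closing the "gray area" the charter is meant to eliminate.
    return DECISION_POINTS.get(step, "human")

print(decision_mode("risk_scoring"))   # probabilistic
print(decision_mode("quote_binding"))  # human (not listed, so escalated)
```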

2) Approval thresholds. One of the most practical elements of governance is the use of approval thresholds. For example, if an AI-generated policy comparison shows less than a 5% variance from the binder, it may pass automatically. If the variance exceeds that threshold, it triggers manual review. If any critical fields, such as limits, endorsements or exclusions, differ, escalation is mandatory.

These thresholds change governance from a philosophical concept into an operational mechanism.
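
The threshold logic described above can be sketched in a few lines of Python. The 5% variance cutoff and the specific set of critical fields are the illustrative values from the example, not industry standards; each agency would set its own.

```python
# A minimal sketch of approval-threshold routing for an AI-generated
# policy comparison. Threshold and field names are assumptions from
# the example above.
CRITICAL_FIELDS = {"limits", "endorsements", "exclusions"}
VARIANCE_THRESHOLD = 0.05  # 5% variance from the binder

def route_comparison(variance: float, mismatched_fields: set) -> str:
    """Decide how a comparison result is handled."""
    if mismatched_fields & CRITICAL_FIELDS:
        return "mandatory escalation"  # a critical field differs
    if variance > VARIANCE_THRESHOLD:
        return "manual review"         # variance exceeds the threshold
    return "auto-pass"                 # within tolerance

print(route_comparison(0.02, set()))             # auto-pass
print(route_comparison(0.08, set()))             # manual review
print(route_comparison(0.01, {"endorsements"}))  # mandatory escalation
```

Note that the critical-field check runs first: even a tiny overall variance cannot auto-pass if a limit, endorsement or exclusion differs.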

3) The human-in-the-loop mandate. Despite advances in automation, one principle remains non-negotiable: Licensed professionals are responsible for final decisions. AI can analyze, recommend and even execute tasks, but it cannot assume legal accountability. Binding coverage, interpreting policy language and advising clients remain human responsibilities.

The role of humans, however, evolves. Instead of performing every step, they validate outcomes, interpret insights and intervene where judgment is required.

Use AI Without Losing the Human Touch

Not all AI use cases require the same level of governance, however. The most critical starting point is workflows that are both high-friction and high-impact. These include:

Policy checking. Policy checking is one of the most time-consuming tasks in the agency workflow. It is also one of the most error-prone.

AI can perform line-by-line comparisons between binders and issued policies, identifying discrepancies in seconds. Early adopters report 60%–80% reductions in review time, while also increasing consistency, according to Patra internal data. More importantly, AI excels at detecting subtle mismatches: omitted endorsements, shifting limits and added exclusions.

Submission intake. Submission intake is another area where AI delivers immediate value. Agents routinely receive unstructured data: PDFs, spreadsheets, emails and handwritten notes. AI can ingest this messy information, extract relevant data and assemble carrier-ready submissions.

Up to 40% of insurance operations work can be automated using AI and related technologies, according to a 2022 Accenture study. Clean, structured submissions improve underwriting outcomes and carrier relationships.

Renewal intelligence. Renewals represent a missed opportunity in many agencies. Too often, they are treated as administrative tasks rather than strategic inflection points. AI changes that dynamic.

By analyzing historical data, exposure changes and market conditions, AI can surface insights such as coverage gaps, pricing anomalies and competitive alternatives. This shifts renewals from a reactive process to a proactive advisory engagement.

Reframing Agency Roles

A common misconception about AI is that its primary purpose is to reduce costs. In reality, its greatest impact is capacity creation. Governance plays a critical role in ensuring that this capacity is used effectively.

When administrative tasks are automated, employees are repositioned. The goal is not to do the same work faster. It is to enable staff to do the work they were originally hired to do. This shift manifests across the agency as:

  • Account managers become risk advisors. With AI handling data gathering and analysis, account managers can focus on interpreting insights and advising clients.
  • Processors become workflow supervisors. Instead of manually processing every transaction, agency staff oversee AI-driven workflows, managing exceptions and ensuring quality.
  • Producers become growth engines. Freed from administrative burdens, producers can spend more time on prospecting, relationship building and market strategy.

Effectively Measuring Governance

If agencies implement AI but continue to measure success using outdated metrics, they will limit its impact. Traditional metrics, such as policies processed or quotes generated, were designed for a manual world. They reward activity, not outcomes. In an AI-enabled environment, these metrics can incentivize the wrong behavior: working like a machine instead of thinking like an advisor.

A more meaningful metric is revenue per employee. The 2025 Best Practices Study by the Big “I” and Reagan Consulting found that revenue per employee for top-performing agencies rose to a record $228,321, driven by significant productivity gains.

AI, when governed effectively, is a direct lever for increasing this figure by expanding capacity without a proportional increase in headcount.

Beyond revenue, agencies should measure how time is being reallocated by identifying:

  • Producer selling time. Are producers spending more time on revenue-generating activities?
  • Client advisory engagement. Are account managers having more strategic conversations with clients?

These metrics capture the conversion of operational efficiency into business growth.

AI is a powerful tool. But without governance, it is an unmanaged one. Building an agent charter requires courageous leadership—a willingness to rethink workflows, redefine roles and challenge long-standing assumptions about how work gets done.

Steve Forte is director, product marketing, at Patra, a Big “I” Agents Council for Technology (ACT) supporting partner.