Two-Thirds of Independent Agents Plan to Increase AI Use This Year

Two-thirds of independent insurance agencies plan to increase their use of artificial intelligence (AI) in the next 12 months, according to the “2026 Big ‘I’ Agents Council for Technology Tech Trends Report.” Despite growing interest, many agencies remain in the early stages of adoption, with nearly one-third reporting they are not using AI. 

Key motivations for adopting AI are operational efficiency (60%) and staff productivity (52%). However, data privacy or compliance risks (24%) and inaccurate outputs (22%) topped the list of concerns.

“The findings of the report highlight a pivotal moment for independent agencies: while growing AI interest signals momentum, long-term success hinges on clearer governance, stronger training and more integrated technology strategies,” says Kasey Connors, executive director of the Big “I” Agents Council for Technology (ACT).

Of the agencies that are using AI, 32% are “just experimenting,” 22% are “using AI in limited areas,” and 8% say “AI is embedded in daily workflows,” the report said. Unsurprisingly, 45% of respondents report using ChatGPT or other large language models (LLMs), followed by about 21% using policy comparison tools, 18% using AI-enabled marketing tools, 14% using AI chatbots or virtual assistants, and 14% using document or data extraction tools.

Despite increasing experimentation, governance and training lag behind adoption. More than 55% of agencies said they do not have a written AI use policy, and another 23% said a policy is still in development.

Further, peer-to-peer learning is the most common way teams learn new technology, cited by 42% of respondents, outpacing formal internal training and vendor-led education. Nearly 23% said they lack a consistent training process altogether.

Amid increasing adoption and experimentation, and a lack of internal governance, trust and reliability are the main concerns for agents regarding broader AI adoption. Data privacy and compliance risks were cited as the top concern by 24% of respondents, followed closely by inaccurate outputs (22%) and fears about losing the human touch (18%). Cost ranked near the bottom of concerns.

The report brings together insights from independent agents, carriers and technology providers to examine how AI, data, cybersecurity and workflow complexity are reshaping the industry—and what must change to move forward responsibly.

Overall, the report identified three priorities shaping the road ahead: productive and responsible use of AI; harnessing data while strengthening security; and the intersection of technology and human impact.

When asked about their top technology-related challenges, agents pointed to several pressing issues: 22% said keeping up with the pace of technological change, 16% cited a lack of automation and streamlined processes, and approximately 16% highlighted the frustration of managing too many disconnected systems.

“Across our research and conversations with technology leaders, the same priorities continue to surface: responsible AI use, stronger data management and security, and protecting the human element that defines the independent agency channel,” Connors said. “The agencies that strike the right balance between innovation and trust will have a clear advantage as AI adoption accelerates.”

Will Jones is IA editor in chief.