From customer service to automated underwriting and even fraud detection, artificial intelligence (AI) promises to be a ground-breaking tool and a disruptive force in insurance. AI's double-edged potential could both innovate and shake up the industry.
AI in the insurance industry is taking—and will take—many forms. Large language models, much like ChatGPT, promise to play many roles, as will natural language understanding and machine learning. Generative AI, with its ability to create and analyze images, videos and audio, also holds huge potential for insurance industry use cases.
While AI has been garnering major headlines across many industries recently, the insurance industry has been using AI tools in some form for years.
“From the carrier side, we have been seeing predictive modeling since it started to really grow 10-15 years ago," said Bill Holden, senior vice president of executive perils for The Liberty Company Insurance Brokers. “This is where they use vast amounts of data to make predictions."
But even more familiar tools, such as telematics devices used in usage-based insurance policies, rely on AI underpinnings, as do the basic chatbots found on most insurance company websites.
AI tools have been familiar assets in the insurance landscape for years, but some of the emerging functions are creating new risks that might need to be insured themselves.
Take large language models, for one. AI companies train these models to answer prompts autonomously using predictive text based on the data they have been fed. Problems emerge because the AI isn't designed to answer with 100% accuracy but is instead designed to do its best to predict which word is likely to come next. And nobody is monitoring what it says in real time.
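That next-word mechanic can be sketched with a toy model. Everything below is an illustrative assumption, a simple bigram counter over a few words rather than a real large language model, but the core behavior is the same: the program picks a statistically plausible next word with no notion of whether the result is true.

```python
import random

# Toy "language model": a bigram table built from a tiny corpus.
# Real LLMs do this at vastly larger scale, but the key point holds:
# the model predicts a *plausible* next word, not a *true* one.
corpus = "the mayor reported the fraud and the mayor was cleared".split()

# Record which words follow which in the training text.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def predict_next(word):
    """Return a likely next word based only on observed patterns."""
    options = follows.get(word)
    return random.choice(options) if options else None

print(predict_next("mayor"))  # "reported" or "was": picked by frequency, not fact
```

Nothing in that loop checks facts; scaled up to billions of words, the same gap is where hallucinations come from.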
That has led to a phenomenon known as “hallucinating," where the program simply makes up some element of the response out of thin air because it sounded like it would be correct.
If that hallucination were simply a nonsense response, there would be no harm. But, as The Guardian reported, one lawsuit has already been filed in Australia after the AI accused a mayor, who was actually a whistleblower in a case, of being the one who committed the offense.
That kind of defamation falls under libel law, and judgments based on those damages can easily reach millions of dollars. Who must pay in those cases comes down to who was at fault for the publication, which is where the interesting insurance question comes in.
Because the machine made the statement, was it the fault of the person who asked the question and thus prompted the publication in the first place? Was it the fault of the company that hosted the AI chatbot's code on its server? Was it the fault of the programmer? Which insurance company is going to have to pay to defend the lawsuit and, in the case of a guilty verdict, pay to cover the judgment?
A similar question of fault comes with AI that drives autonomous vehicles. If an AI-driven autonomous drone, delivery vehicle or taxi crashes into another vehicle or, worse, kills a pedestrian, who is at fault? Would it be the owner? The programmer? The manufacturer?
These questions have not yet been answered by courts or legislatures.
Generative AI has also sparked lawsuits based on copyright violations. With generative AI, a company trains the program to create art or music by feeding it examples of existing works. The creators of those works have been taking issue with that practice, claiming that their work and styles are being stolen. And they are suing, with lawsuits filed against OpenAI, Stability AI, DeviantArt, Midjourney and other text- and image-generating AI programs, according to CNBC. Deep fakes present a similar liability question.
As these questions play out, policy language must be updated in personal lines, umbrella policies, business general liability policies, errors & omissions policies, directors & officers policies, media liability policies, and beyond.
“Liability is like a pebble in a pond," Holden said. “It ripples out, and things you don't think about come into play."
Beyond risks that need to be covered, AI has changed and will continue to change how insurance companies operate—from the point of contact with the customer through the way policies are underwritten and claims are processed.
The most immediate impact will be on paperwork. AI holds the promise to minimize the risk of human error by streamlining services and automating tasks.
Everyday insurance functions, such as filling out forms, filing insurance certificates, checking policies and all manner of clerical tasks, will be shifted to AI tools as soon as possible.
“I know they are already writing briefs," Holden said. “If they are not already, it will not be too far in the future that they will start to write coverage opinions."
Historically, when customers applied for policies, insurance companies relied on customer-supplied data, some commercial databases and limited human investigation to aid underwriting.
With artificial intelligence, underwriters can use natural language understanding tools to read unstructured data, such as reviews on sites like Yelp and thousands of public document filings and public records, and scrape social media feeds to build profiles on applicants that can help assess risk.
The next step in AI would be to remove the human underwriter entirely, taking that automatically collected data and creating an automated coverage decision and rate nearly instantaneously. But that must be done cautiously to ensure unintended consequences don't follow.
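As a sketch of what such an automated decision might look like, the toy function below combines a few collected signals into a risk score. Every signal name, weight and threshold here is an invented assumption, not any insurer's actual model. Note the referral branch: routing high-scoring cases back to a human underwriter is one guard against the unintended consequences mentioned above.

```python
# Hypothetical automated underwriting sketch. The signals, weights and
# thresholds are illustrative assumptions, not real actuarial figures.
RISK_WEIGHTS = {"claims_last_5y": 15, "negative_reviews": 2, "public_liens": 10}
BASE_RATE = 1000.0

def underwriting_decision(profile):
    """Turn automatically collected signals into a decision and a rate."""
    score = sum(w * profile.get(signal, 0) for signal, w in RISK_WEIGHTS.items())
    if score >= 60:
        # Too risky to decide automatically: refer to a human underwriter.
        return ("refer_to_human", None)
    # Load the base rate in proportion to the risk score.
    return ("approve", round(BASE_RATE * (1 + score / 100), 2))

print(underwriting_decision({"claims_last_5y": 1, "negative_reviews": 3}))
# ('approve', 1210.0)
```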
Through machine learning and modeling, insurance companies could automate many of the tasks that had previously been done through labor-intensive, hands-on processes. After claims are filed, artificial intelligence can step in and use generative AI to analyze images and video of damage and interface with sensors. AI can compare that damage and information with policy documents, returning coverage decisions and settlement offers in a fraction of the time a human would take.
Machine learning also holds the potential for detecting fraud by analyzing patterns that might slip past a human and flag suspicious claims or behaviors that might signal something isn't entirely above board.
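One minimal version of that pattern analysis is statistical outlier detection. The sketch below flags claim amounts that sit far outside the historical norm; the two-standard-deviation cutoff is an assumption for illustration, and production fraud models use far richer features and more robust methods.

```python
import statistics

def flag_suspicious(amounts, cutoff=2.0):
    """Flag claim amounts more than `cutoff` standard deviations from the mean."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) > cutoff * stdev]

# Five routine repair claims and one wildly atypical one.
claims = [1200, 1350, 980, 1100, 1275, 45000]
print(flag_suspicious(claims))  # [45000]
```

A flagged claim wouldn't be denied outright; it would be routed to a human investigator for review.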
Many of these tools are here now, and many are being rolled out in stages. Many more are sure to come.
Job Advancement and Downfalls
With all that automation, many in the industry will be looking over their shoulders to see if their job will be the one at risk.
At the beginning of the AI revolution, the most tedious and repetitive jobs are the most likely to be lost, as are front-line customer-facing roles that had previously been outsourced to call centers.
With large language models able to return answers and elegant responses, chatbots will continue to play an increasingly important and expanding role. And with generative AI able to understand and create human-sounding voice responses, phone-based customer service jobs that haven't already been automated will also likely be further outsourced.
But that doesn't mean every insurance job is immediately at risk. Anyone who has dealt with an automated call center knows the frustration of asking for a human attendant because the AI just isn't cutting it.
And though a drone may be able to capture post-disaster damage, and a phone's camera can relay video and photos to the insurance company's AI to assess the damage after a car crash, Holden says that there is still something missing when there aren't people involved in the process—at least for now.
“Until it can emulate emotion and empathy, AI can't do the claims adjusting on its own," Holden said. “It still needs to learn its bedside manner."
Bias and Discrimination
There is a ghost in the machine when it comes to automating insurance roles previously done by humans: bias and discrimination. Strict laws govern discrimination in insurance, but with many AI tools, how decisions are made is shrouded inside a black box. Researchers and advocacy groups, including the Algorithmic Justice League, have pointed to systemically racist results produced by AI in many different contexts.
Bob Gaydos, CEO of Pendella Technologies, said that while AI can process information much faster than humans, that speed often ultimately becomes its liability.
“You have to protect it from its speed. Speed is a great thing, but a dangerous thing, and AI makes assumptions at a crazy speed," Gaydos said.
He said that if the AI has an assumption that leads to a biased coverage decision, the nature of AI means that it is going to reproduce that assumption again and again.
A human might rely on wisdom or experience to realize that a biased decision was wrong, ill-informed or even illegal, but steps must be taken to ensure the AI doesn't discriminate.
Underwriters have to be conscious of the implications that automated underwriting can have when it comes to bias and discrimination. Otherwise, they will invite an avalanche of political oversight and regulation.
Already, Colorado is proposing a regulation to prevent AI-driven discrimination in insurance. “The political door is open with Colorado. State regulators are going to say, 'If you are using AI, you are going to have to show us how you are going to use it,'" Gaydos said. “But that will open Pandora's box."
The Future of AI and Insurance
AI is already good at automating repetitive and predictable tasks in the insurance industry. The human touch is still needed. But as AI improves, more complex tasks will continue to be handed off, perhaps opening more opportunities for oversight roles and efficiencies.
From a customer standpoint, the AI future is a dream of an automated, frictionless experience. Picture getting in a car that tells you the different real-time insurance rates for different routes to work that morning based on traffic and road conditions. If an accident occurs, the claim is processed automatically with the click of an app, and the car drives itself to the shop while a replacement car finds its way to the driveway—without you having to take a moment off work.
That world of usage-based and real-time pricing is almost a certainty as data and feedback models become more available and help drive decisions in real-time.
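A per-route, real-time price like the one pictured above could be computed from a handful of live inputs. In the sketch below, the per-mile base rate and risk multipliers are invented purely for illustration, not real actuarial figures.

```python
# Hypothetical real-time, usage-based pricing sketch. The base rate and
# multipliers are illustrative assumptions, not real actuarial figures.
def route_premium(miles, traffic_density, night=False):
    per_mile = 0.05                     # assumed base rate per mile
    risk = 1.0 + 0.5 * traffic_density  # denser traffic raises the risk load
    if night:
        risk *= 1.2                     # assumed surcharge for night driving
    return round(miles * per_mile * risk, 2)

# Quote two routes to work before the driver picks one.
print(route_premium(12, 0.8))  # short but congested: 0.84
print(route_premium(18, 0.2))  # longer but quiet: 0.99
```

With telematics feeding the inputs, the same calculation could rerun continuously as traffic and road conditions change.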
Risk management and mitigation will play as much of a role in the customer service realm as underwriting does now, with aerial images analyzed by generative AI giving agents the information they need to help customers head off problems like roof leaks before they happen.
But for now, things are changing fast, and AI's potential seems to be everywhere.
Michael Giusti, MBA, is senior writer and analyst for InsuranceQuotes.com.