
How AI is Changing the Need for Media Liability Coverage

For independent agents working within the media liability niche market, the proliferation of generative AI has created a fluid and complex environment.

It is safe to say that artificial intelligence (AI) is the defining technology of our generation. Since its genesis in the 1950s, AI has developed from a predictive numerical tool into something widely used by governments, across industry and even as a source of entertainment. Through mimicking human voices, generating images and creating content, generative AI tools are working their way into many aspects of our lives.


“The key trigger in a media liability policy as far as AI is concerned is breach of copyright, and there is normally quite a broad policy trigger for breach of intellectual property (IP) rights," says Angela Weaver, focus group leader—media & entertainment, Beazley. “Just because something's created by AI doesn't mean it won't be the subject of a claim."

In one case, the family of Michael Schumacher, the seven-time Formula 1 world champion, took legal action against the German magazine Die Aktuelle after it published an AI-generated “interview" containing quotes attributed to Schumacher. The magazine claimed it was his first interview since a serious skiing accident in 2013.

“This is an example of somebody who obviously thought they had the right to create and to publish something that was substantially false just because it was created by AI," Weaver says.

When it comes to advising clients on the need for a media liability policy, “agents need to bear in mind that even when an insured feels they're on strong ground that they're not infringing copyright, that doesn't stop somebody bringing a lawsuit—and it doesn't stop the defense costs being quite high," Weaver explains.

“The potential risks of using AI are high," says Bill Rooney, underwriting product manager, Philadelphia Insurance Companies. “There have been high-profile incidents of 'deepfakes' being used to create advertisements showing a high-profile celebrity endorsing a fraudulent scheme or product. The celebrities have the right to sue for misappropriating their likeness."

As AI infiltrates the media and technology landscape, the technology itself is being used to identify copyright and IP infringements, increasing the number of copyright claims. “It's a function of technology where the plaintiff's attorneys or the intellectual property holders themselves can find the infringement by others more easily," says Chris Cooper, senior vice president, head of media liability and middle market professional liability, QBE North America. “There are scraping tools available that they can use to find, for example, when their photograph is being used without a license. It's not the same as it was 40 years ago, where if they license a photo for use in a newspaper in New York and their Kansas City outfit used it too, that licensor photographer may not have known."

While insurers are not currently excluding or limiting coverage for AI-related exposures, agents need to make sure that their clients have safeguards in place that identify how and when they are using AI-generated content.

“One safeguard is making sure that any contracts with a content creator are very clear around the use of AI," says Tyler Peterson, senior vice president, underwriting management, Hiscox. “The most critical thing here is making sure that the company has the same clearance and risk management procedures in place for content created by AI as it would for content created by a human being."

Essentially, “it's entirely possible that the new piece of content that's created is going to be based off other people's works and still could be deemed similar enough to ultimately lend itself to an IP infringement claim," Peterson explains. “That's why it's so important that even if AI is used for the efficiency benefit, clients do not compromise the risk management processes that they had in place before they adopted AI."

Protecting clients in an area that is currently unregulated is a key safety net that agents can assist with. “I would work with insureds by talking to their risk manager or general counsel about how much their company is relying on generative AI tools. Establish whether insureds have implemented internal policies and procedures for using those tools. That policy should tell employees what they can use generative AI for; when they can't use it; when they have to disclose it; and what the penalties are for not disclosing it to the editors," Cooper says. “There is not yet a clear path forward. Until then, the best approach is to have a consistent policy that all of your employees or content creators know to follow."

“Clients who intend to utilize AI within their business should understand what aspects of their business would benefit from the use of AI or where employees could potentially utilize AI," Rooney says. “There should be procedures in place to oversee AI use by employees and track to what extent content might have been generated by AI."

While AI may have huge potential for the future, agents have a role to play in ensuring their clients are transparent in their use of such tools and protected against potential third-party claims arising from copyright and trademark infringement, invasion of privacy and defamation.

Olivia Overman is IA content editor. 

Wednesday, July 17, 2024