Insurance and generative AI: The future, love it or hate it–part 2

This is the second in a two-part series. Read part one here.

Will generative AI take my job? 

PricewaterhouseCoopers admits that “No one yet knows the long-term impact of AI on overall employment.” Just a few years ago—prior to the advent of generative AI—the same company acknowledged that established forms of artificial intelligence weren’t boosting ROI to the extent many experts thought they might. Does generative AI offer hope where prior machine learning models failed, and should it be implemented even as its risk to flesh-and-blood employees remains unclear?

The United States government has even shown concern over the prospect of new AI models causing large-scale unemployment: President Biden’s Executive Order on AI stresses a commitment to data safety, the equitable availability of the technology, and its adoption in a way that keeps the interests of workers and their unions in mind. Of course, it offers only the kind of abstract solutions unlikely to assuage the fears of working people.

But perhaps this needn’t be a major concern. Per a Korn Ferry report, we can expect a worldwide “talent shortage” of more than 85 million workers by 2030. Some companies are already leveraging AI to combat worker shortages, not to replace workers. That is, they’d prefer to hire human workers, but there aren’t enough available. And in time there will be even fewer.

In any case, AI is big business, and it’s growing. That means there’s an incentive to use it, whether or not it suits a company’s needs. What happens, then, if AI is used for its own sake, rather than as a temporary stand-in for workers who can’t be found? What kind of unemployment outcomes might we see? And when AI makes mistakes, companies that depend on it almost wholly may find that they don’t have enough human experts on staff to fix things. In the insurance sector, that means a company can find itself on the back foot when it faces fraud or a data breach.

The theoretical widespread adoption of generative AI would not necessarily result in an employment extinction event, in the insurance sector or more generally. But some still raise concerns. In a recent issue of Insurance Business, Rory Yates, global strategic lead at EIS, warns that “the insurance industry is approaching AI with the wrong mindset.” Comparing the recent generative AI boom to the Industrial Revolution, Yates seems to be anything but sanguine about AI when he says the industry is “adopting [generative AI] from the position of a negative business case where you replace people in the name of efficiency…where people just bear the brunt and it won’t produce a better result for the end customers or humanity.”

Yates cautiously believes that generative AI has a place in the insurance sector, although he offers no concrete solutions as to how to implement it in a worker-first manner. So: Should workers be concerned? Likely not so much in the short term. But beyond that? No one can say just yet.

 

How could AI change the insurance sector? 

McKinsey notes that AI’s role in insurance will grow with the increasing penetration of data-gathering devices in our everyday lives: smartphones, watches, cars and the other web-connected devices that make up the Internet of Things (IoT). What does this mean for an insurance company that, say, doesn’t offer car insurance? A holistic data-gathering LLM system that stores information on how customers drive, and what state of health they’re in, breaks each customer down into data sets relevant to a variety of insurance products—workers’ comp, personal umbrella, and so on.
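To make that concrete, here is a minimal sketch in Python of how IoT-derived signals might be sliced into product-relevant views of a single customer. The field names, signal types and product groupings are hypothetical illustrations, not any insurer’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    """Hypothetical, simplified view of one customer built from IoT signals."""
    customer_id: str
    driving_events: list[dict] = field(default_factory=list)   # e.g., hard-braking events from a telematics app
    health_metrics: list[dict] = field(default_factory=list)   # e.g., daily step counts from a smartwatch

    def product_views(self) -> dict[str, dict]:
        """Break the same customer down into data sets relevant to different products."""
        hard_brakes = sum(1 for e in self.driving_events if e.get("type") == "hard_brake")
        avg_steps = (
            sum(m.get("steps", 0) for m in self.health_metrics) / len(self.health_metrics)
            if self.health_metrics else 0
        )
        return {
            "auto": {"hard_brake_count": hard_brakes},
            "workers_comp": {"avg_daily_steps": avg_steps},
            "personal_umbrella": {"hard_brake_count": hard_brakes, "avg_daily_steps": avg_steps},
        }

profile = CustomerProfile(
    "cust-001",
    driving_events=[{"type": "hard_brake"}, {"type": "smooth_stop"}],
    health_metrics=[{"steps": 6200}, {"steps": 8400}],
)
print(profile.product_views())
```

The point is that the same raw telemetry can feed several product lines at once, whether or not the insurer sells the product the data was originally collected for.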

In theory, a customer’s relative health and “clumsiness” could speak to how likely they are to be injured on the job, increasing an employer’s commercial workers’ comp premiums. How would the employer decrease them? By requiring applicants to submit to background checks that open up that data, probably via cloud computing. Is that allowed, or might HIPAA bar them from doing that? It could, eventually; regulation is always catching up to innovation.

McKinsey imagines that by 2030, much of the guesswork will be taken out of buying and selling insurance. Machine learning programs will know so much about a customer that they’ll be able to tailor wholly customized, niche insurance packages to that customer’s (very) specific needs—built on nuanced, moment-to-moment data gathered via the IoT, aggregated in the cloud, analyzed by artificial intelligence and sold to customers confident that the process has considered them as holistically as it can.

McKinsey also imagines usage-based insurance (UBI) becoming ever more micro-sized and balkanized. This type of insurance could be arranged to kick in only when a customer needs it. That suggests a great many less expensive niche products working together as a single organism, which may, in certain instances, be preferable to traditional all-risk or workers’ comp packages.
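As an illustration only, a micro-sized UBI product might be modeled as coverage that activates when a matching activity is detected and bills for just that window. The policy names, rates and trigger rules below are invented for the sketch.

```python
from datetime import datetime, timedelta

# Hypothetical micro-policies that activate only while a matching activity is detected.
POLICIES = {
    "driving": {"product": "per-trip auto", "rate_per_hour": 1.50},
    "gig_delivery": {"product": "on-the-job injury", "rate_per_hour": 0.80},
}

def active_coverage(activity: str, start: datetime, end: datetime) -> dict | None:
    """Return the niche coverage (and pro-rated premium) triggered by a detected activity."""
    policy = POLICIES.get(activity)
    if policy is None:
        return None  # no niche product matches this activity
    hours = (end - start).total_seconds() / 3600
    return {"product": policy["product"], "premium": round(hours * policy["rate_per_hour"], 2)}

trip_start = datetime(2024, 5, 1, 9, 0)
print(active_coverage("driving", trip_start, trip_start + timedelta(hours=2)))
# -> {'product': 'per-trip auto', 'premium': 3.0}
```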

Add to that the increase in public surveillance, which existing systems could theoretically feed into insurance data models, and incident reporting becomes all but instantaneous. Claims could be resolved in minutes without a company needing to talk to customers, who can be forgetful or lie about what happened.

The future of fraud? 

A joint study by Verisk and the Coalition Against Insurance Fraud found that 25% of Americans between 25 and 44 do not consider insurance fraud a crime. An LLM trained on that information could develop a bias against customers in that age bracket. In fact, it’s in a company’s interest that it do so: The Coalition Against Insurance Fraud also notes that insurance fraud costs Americans $308.6 billion a year, and a full 10% of property-casualty insurance losses include a fraudulent element.

Those numbers may rise in the future: The same publication found that 12% of respondents between 45 and 54 don’t consider insurance fraud a crime. Such statistics could complicate insurance eligibility for upstart captains of industry: The Harvard Business Review found that the average age of a successful start-up founder is 45.

Does AI in fact make it easier to commit insurance fraud? Maybe: by automating and scaling the creation of fake customers and policies, or by using voice cloning to adjust a real customer’s policies over the phone. AI can even be used to create realistic-looking X-rays of broken bones that fraudsters could submit to their insurance providers in order to “prove” injury claims.

Note, too, that AI, deceitfully utilized, can clean up the misspellings and other language errors that help insurance workers easily identify phishing attempts, making attempted data breaches more likely to succeed. This is a danger for any insurer, no matter what kind of insurance they provide.

AI insurance? 

The widespread adoption of any new technology will be accompanied by certain hiccups. In the case of generative AI, that’s an artful understatement.

Generative AI has been responsible for disturbing deep fakes, which can permanently harm a company’s reputation. Some black-hat hackers have used generative AI as a superpowered spambot. Particularly odd is the phenomenon of “obituary piracy,” in which living persons are declared dead after an AI floods search engines with erroneous obituaries. This could affect customers and their insurance providers in obvious ways: Dead people can’t buy insurance. They tend to have trouble keeping up with the payments.

Considering all this, it isn’t hard to imagine the need to develop some manner of AI insurance, and both companies and private individuals may need coverage. This could offer a major ROI boost to any company that decides to sell AI insurance, whatever exactly such a product looks like. It could also be disastrous should the generative AI model an insurance company uses to combat bad-actor generative AI become too complex and costly to maintain.

Because generative AI is easy to abuse, it’s not impossible that some insurance companies may offer AI insurance in bad faith. However, a University of Michigan Law School paper argues that the adoption of AI liability insurance could be a direct cause of the responsible adoption of AI. This sounds promising: It may be that certain issues inherent in the large-scale implementation of AI, and the creation of AI insurance, could work themselves out over time.

Related: Innovation, Insurtech and Arrowhead: our recipe for growth

What can an insurance company do to create a good generative AI model?

Generative AI is “the future” of insurance—movers and shakers in a variety of industries will see to that one way or another. While the manner in which LLMs will be used in the insurance sector isn’t yet clear, the implementation of generative AI technology is a foregone conclusion. That being the case, an insurance company looking to develop an AI model will want to do it in the least disruptive way possible.

In terms of data privacy, an LLM can be used to set up firewalls where they’re most needed. IT professionals could record a company’s AI data and transactions on a digital ledger, making it much, much harder to get at or quietly alter sensitive information. This is the concept behind blockchain technology, which unfortunately has been given a bad name by its use in the cryptocurrency sector. A company that can see past that and leverage blockchain in an intelligent way may come out ahead of the competition in a digital arena increasingly populated by fraudsters and hackers.
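Here is a minimal sketch of the ledger idea, assuming a single in-memory chain rather than a real distributed blockchain: each entry is hashed together with the previous entry’s hash, so quietly altering an earlier record breaks every later link.

```python
import hashlib
import json

def _hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous entry's hash, chaining the ledger."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger: list[dict], record: dict) -> None:
    prev = ledger[-1]["hash"] if ledger else "genesis"
    ledger.append({"record": record, "prev_hash": prev, "hash": _hash(record, prev)})

def verify(ledger: list[dict]) -> bool:
    """Recompute every hash; any tampering with an earlier entry shows up downstream."""
    prev = "genesis"
    for entry in ledger:
        if entry["prev_hash"] != prev or entry["hash"] != _hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

ledger: list[dict] = []
append(ledger, {"event": "claim_filed", "claim_id": "C-123"})
append(ledger, {"event": "claim_paid", "claim_id": "C-123"})
print(verify(ledger))                          # True
ledger[0]["record"]["claim_id"] = "C-999"      # tamper with an old record...
print(verify(ledger))                          # False
```

A production system would distribute the chain across parties and add consensus, but the tamper evidence shown here is the property that makes the approach attractive for sensitive insurance records.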

Deloitte recommends developing an “ethical” AI model in order to minimize risks, chief among them that “Malicious hallucinations and deep fakes, phishing and prompt injections, and ambivalent actors can expose the attack surface and erode customer trust.” What makes such a model “ethical” is its parent company’s commitment to transparency and governance compliance. This lets departments work together more easily, meaning different kinds of experts can be brought in to troubleshoot AI issues.

Some AI programs can detect when something was made by AI, which could help weed out AI-derived false profiles and impersonations. For instance, if a company’s AI notes that it took an applicant only a second or two to fill out a multi-page application, it can flag that applicant as a likely bot. A company developing its own LLM will want to keep this capability in mind.
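As a simple illustration of that completion-time check, here is a sketch in Python; the threshold is arbitrary, not a recommended value.

```python
# Flag applications completed implausibly fast for a human (threshold is illustrative only).
MIN_SECONDS_PER_PAGE = 5.0

def looks_like_bot(pages: int, completion_seconds: float) -> bool:
    """Return True when a multi-page application was finished faster than a person plausibly could."""
    return completion_seconds < pages * MIN_SECONDS_PER_PAGE

print(looks_like_bot(pages=6, completion_seconds=2.0))    # True  -> flag for review
print(looks_like_bot(pages=6, completion_seconds=240.0))  # False -> plausibly human
```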

There’s no simple way to wrap up a discussion of generative AI as it pertains to the insurance sector. It may offer ROI boosts, potentially large ones; using it is inherently risky; it can drive customers away; it could better secure a company’s data and make it easier to parse. Ultimately, the value of any generative AI model varies not only by industry and sector but by individual company. In any case, it’s likely in a company’s interest to leverage this sort of technology, if for no other reason than to be prepared to take advantage of beneficial use cases that haven’t yet been discovered.