Stop me if you’ve heard this one before: an old-economy business gets sold an AI or computer algorithm that promises to cut costs and improve the business. Innovation! Big Data! Progress! What could go wrong? Unfortunately, we’re increasingly learning that AI incorporates the same biases, including racial biases, that have always existed, and makes those biases harder to challenge by wrapping them in a shiny package of technological legitimacy.
Allegations that AI and computer algorithms are used to perpetuate racial biases are nothing new. But a recent lawsuit indicates these practices are infiltrating the insurance business.
To explain how this works, it helps to understand two important things about how insurance companies think. The first is fraud. Big, consumer-facing insurers (the ones you see advertising on prime time TV with silly animal mascots or jingles) receive claims from thousands or millions of people every year. A small fraction of these (estimated at ten percent by an industry group, which probably has an incentive to overestimate) involve fraud, i.e., people lying to get money they aren’t entitled to. Sometimes this is straightforward: people claiming to have been injured in car crashes that never happened, for example. Other times it’s more innocent: maybe someone mistakenly claims the couch that was destroyed in a house fire was brand new when it was actually a few years old.
Regardless, fraud is illegal, immoral, and (most relevant here) costs the insurance company money. So most insurance companies have a special department to screen out fraudulent claims. It’s usually called something like the “Special Investigations Unit” (SIU) and given a veneer of law enforcement. Sometimes the SIU finds claims that are genuinely fraudulent. Sometimes the SIU gets accused of bullying legitimate claimants into dropping their claims through the implied threat that they’ll be found guilty of fraud. That tactic preys on policyholders perceived as more vulnerable, which typically translates to targeting poor, immigrant, or minority policyholders.
The second thing to understand, when we think about using AI to hunt for fraud, is the intense pressure to cut costs. Insurance is a business like any other. It’s driven by the profit motive, and, as insurers increasingly become publicly traded (a relatively new development, spurred in part by the weakening of federal banking regulations in the late 1990s), it’s driven by the need to please shareholders, and please them RIGHT NOW. The way to do that is to show more profit in this quarter’s financial report.
Insurers have (basically) two ways to do this. The first is to charge higher premiums. That is often a non-starter, because insurance consumers are very price-sensitive these days. In the past, your average American family might buy insurance from an agent they’d known for years, one with a brick-and-mortar office on the local Main Street. They bought based on their personal relationship with that agent.
But nowadays, that family probably buys coverage online. They have no human relationship with the company, so they do what we all do under those circumstances: hit up a few websites and buy from the cheapest company. Raising premiums by even a few dollars drives these customers away, and fewer customers can mean angry shareholders.
So if the company can’t boost profits by charging more, it has to save money instead. That can mean paying employees less, spending less time and money investigating claims, and paying fewer claims (and fewer dollars on the claims it does pay).
In this context, it can also mean turning the SIU’s fraud-investigation function over to an AI. Why pay an experienced investigator a handsome salary to spend thousands of hours combing through millions of claim files every year looking for evidence of fraud when you can pay an AI a few bucks to crunch all the data and flag the fraudulent claims? Everybody wins!
If that sounds too good to be true, it probably is, at least according to one lawsuit alleging that a major insurer used an AI to flag files for fraud based on what amounts to racial dog-whistles. If the allegations are to be believed, the AI works by (basically) assuming that claims from policyholders in poorer, predominantly Black neighborhoods are more likely to be fraudulent.
According to the lawsuit, which cites a study of 800 policyholders in the Midwest, the AI flags claims as potentially fraudulent based on a host of characteristics that, taken together, boil down to a proxy for race.
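To make the proxy idea concrete, here is a minimal sketch in Python. It is not the model described in the lawsuit (which has not been made public); it is an invented toy example showing how a fraud-scoring rule that never sees race can still flag one neighborhood far more often than another, simply by relying on features, like neighborhood income, that correlate with race. Every name, number, and threshold below is hypothetical.

```python
# Illustration only: how a fraud model that never sees race can still
# act as a racial proxy. All data, features, and thresholds are invented.
import random

random.seed(0)

def make_claim(neighborhood):
    """Generate a synthetic claim. 'neighborhood' stands in for ZIP-level
    features (like median income) that correlate with race in many U.S.
    cities -- the alleged proxy mechanism."""
    if neighborhood == "A":   # wealthier neighborhood (hypothetical)
        income_index = random.gauss(1.0, 0.1)
    else:                     # poorer neighborhood (hypothetical)
        income_index = random.gauss(0.6, 0.1)
    # Actual fraud is equally likely in both neighborhoods: 5%.
    is_fraud = random.random() < 0.05
    return {"neighborhood": neighborhood,
            "income_index": income_index,
            "is_fraud": is_fraud}

def fraud_score(claim):
    """A naive scoring rule: lower neighborhood income => higher score.
    Race is never an input, but income_index is correlated with it."""
    return 1.0 - claim["income_index"]

claims = ([make_claim("A") for _ in range(400)]
          + [make_claim("B") for _ in range(400)])

THRESHOLD = 0.3  # claims scoring above this get referred to the SIU
for hood in ("A", "B"):
    group = [c for c in claims if c["neighborhood"] == hood]
    flagged = [c for c in group if fraud_score(c) > THRESHOLD]
    fraud_rate = sum(c["is_fraud"] for c in group) / len(group)
    print(f"Neighborhood {hood}: {len(flagged)/len(group):.0%} flagged, "
          f"actual fraud rate {fraud_rate:.0%}")
```

In this toy run, both neighborhoods have the same underlying fraud rate (about five percent), but nearly all of the flags land on neighborhood B. That is the pattern the lawsuit alleges: the disparity comes from the model's inputs, not from any stated intent to discriminate.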

The lawsuit was filed December 14, 2022. No judge or jury has decided if these allegations are true.
But it’s a fascinating look at how practices we think of as tech-industry problems (using AI in questionable ways) can infect insurance, an industry that is about as “old economy” as it gets.