AI And Racial Bias In Insurance Claims

Stop me if you’ve heard this one before: an old-economy business gets sold an AI or computer algorithm promising to cut costs and improve business. Innovation! Big Data! Progress! What could go wrong? Unfortunately, we’re increasingly learning that AI incorporates the same biases, including racial biases, that have always existed, except that it makes those biases harder to challenge by wrapping them in a shiny package of technological legitimacy.

Allegations that AI and computer algorithms are used to perpetuate racial biases are nothing new. But a recent lawsuit indicates these practices are infiltrating the insurance business.

To explain how this works, it’s helpful to understand two important things about how insurance companies think. The first is fraud. Big, consumer-facing insurers (the ones you see advertising on prime time TV with silly animal mascots or jingles) receive claims from thousands or millions of people every year. A small fraction of these (estimated at ten percent by an industry group, which probably has an incentive to overestimate) involve fraud, i.e., people lying to get money they aren’t entitled to. Sometimes this is straightforward: people claiming to have been injured in car crashes that never happened, for example. Other times it’s more innocent: maybe someone mistakenly claims the couch that got destroyed in a house fire was brand new when it was actually a few years old.

Regardless, fraud is illegal, immoral, and (most relevant here) costs the insurance company money. So most insurance companies have a special department to screen out fraudulent claims. It’s usually called something like the “Special Investigations Unit” (SIU) and given a veneer of law enforcement. Sometimes the SIU finds claims that are genuinely fraudulent. Sometimes, the SIU gets accused of bullying legitimate claimants into dropping the claim through the implied threat that they’ll be found guilty of fraud. This can prey on policyholders perceived as more vulnerable, which typically translates to targeting poor, immigrant, or minority policyholders.

The second part of the insurance business that comes into play when we think about using AI to search for fraud is the intense pressure to cut costs. Insurance is a business like any other. It’s driven by the profit motive, and, as insurers increasingly become publicly traded (a relatively new development spurred in part by the weakening of federal banking regulations in the late 1990s), it’s driven by the need to please shareholders, and please them RIGHT NOW. The way to do that is to show more profits in this quarter’s financial report.

Insurers have (basically) two ways to do this. The first is to charge higher premiums. This is often a non-starter. Insurance consumers are very price-sensitive these days. In the past, your average American family might buy insurance from an agent they’d known for years, with a brick-and-mortar office on the local Main Street. They bought based on their personal relationship with that agent.

But, nowadays, those people probably buy coverage online. They have no human relationship with the company. So they do what we all do under these circumstances: hit up a few websites and buy from the cheapest company. Raising premiums by a few dollars drives these customers away. Fewer customers can mean angry shareholders.

So if the company can’t boost profits by charging more, it has to save money. This can mean paying employees less, spending less time and money investigating claims, and paying fewer claims (and fewer dollars on the claims it does pay).

In this context, it can also mean turning the SIU’s fraud investigating functions over to an AI. Why pay an experienced investigator a handsome salary to spend thousands of hours combing through millions of claim files every year looking for evidence of fraud when you can pay an AI a few bucks to crunch all the data and flag the fraudulent claims? Everybody wins!

If that sounds too good to be true, it probably is, at least according to one lawsuit that alleges one major insurer used an AI to flag files for fraud based on what amounts to racial dog-whistles. If the allegations are to be believed, the AI works by (basically) assuming claims from policyholders living in poorer, Black-er neighborhoods are more likely to be fraudulent.
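The mechanics matter here. A model never has to be given “race” as an input to produce racially skewed results; it only needs features that correlate with race, such as ZIP code or neighborhood income. Here is a minimal, self-contained Python sketch, using entirely made-up numbers (not data from the lawsuit or any real insurer), of how a “race-blind” flagging rule keyed to neighborhood income can still flag Black policyholders at a much higher rate:

```python
# Hypothetical illustration of proxy discrimination: the flagging rule
# never sees race, yet produces racially skewed flag rates because it
# keys on a neighborhood trait (income) that correlates with race.
import random

random.seed(0)

# Made-up ZIP-level data: (median_income, share_of_Black_residents).
ZIPS = {
    "98101": (95_000, 0.08),
    "98004": (120_000, 0.03),
    "98118": (52_000, 0.27),
    "98188": (48_000, 0.19),
}

def make_claims(n=10_000):
    """Simulate claims: each claim gets a ZIP code, and the policyholder's
    race is drawn from that ZIP's demographics. Note that actual fraud is
    equally likely everywhere in this simulation."""
    claims = []
    for _ in range(n):
        zip_code = random.choice(list(ZIPS))
        _income, share_black = ZIPS[zip_code]
        race = "Black" if random.random() < share_black else "other"
        claims.append({"zip": zip_code, "race": race})
    return claims

def flag_for_siu(claim):
    """A 'race-blind' rule mimicking the alleged behavior: treat claims
    from lower-income ZIP codes as higher fraud risk."""
    income, _share = ZIPS[claim["zip"]]
    return income < 60_000

claims = make_claims()
rates = {}
for race in ("Black", "other"):
    group = [c for c in claims if c["race"] == race]
    rates[race] = sum(flag_for_siu(c) for c in group) / len(group)

# Black policyholders end up flagged far more often, even though the
# rule never looked at race and fraud was uniform by construction.
print(rates)
```

In this toy simulation, fraud is equally likely everywhere, yet the income-based rule flags one group far more often simply because of where people live. That is the core of the proxy-discrimination allegation: the discriminatory pattern lives in the correlated features, not in any explicit use of race.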

According to the lawsuit, which cites a study of 800 policyholders in the Midwest, the AI flags claims as potentially fraudulent based on a host of characteristics that boil down to a proxy for race.

The lawsuit was filed December 14, 2022. No judge or jury has decided if these allegations are true.

But it’s a fascinating look at how practices we think of as tech-industry-focused (using AI in questionable ways) can infect the insurance industry, an industry that is about as “old economy” as it gets.

Ninth Circuit Considers When Death By Drunk Driving Is “Accidental”

Is death “accidental” when a person gets in a fatal car crash while over the legal alcohol limit? Courts have had a hard time answering this question. A recent Ninth Circuit ruling provides some clarity.

In Wolf v. Life Insurance Company of North America, the Ninth Circuit held that death resulting from drunk driving was “accidental” for purposes of insurance policy coverage.

In that case, the insured died after driving 65 miles per hour going the wrong way on a one-way road with a 10 mile per hour speed limit. An autopsy found he had a blood-alcohol level of 0.20%.

His family made an insurance claim with Life Insurance Company of North America (LINA). LINA had sold the deceased an insurance policy covering “accidental” death.

LINA denied the claim. It determined that death under these circumstances was not “accidental” because it was foreseeable that driving with a 0.20% blood-alcohol level would result in death or serious injury.

The family filed a lawsuit contesting the denial under ERISA. The Seattle federal court sided with the family. The court ruled that the decedent’s behavior was “extremely reckless” but did not make death so certain as to render it not “accidental”. LINA appealed to the Ninth Circuit Court of Appeals.

The Ninth Circuit agreed with the lower court. It acknowledged that courts have applied different tests to determine whether death under these circumstances is “accidental.” It decided that the most appropriate question is whether death was “substantially certain” to occur: if death is substantially certain, it can’t be accidental.

Applying that test, the Ninth Circuit agreed that the death was accidental. Although the insured’s behavior was reckless, it did not make death substantially certain. The court acknowledged that “there is no doubt that drunk driving is ill-advised, dangerous, and easily avoidable,” but held that the death was still accidental.

The court concluded with the truism that insurers who don’t want to cover death resulting from drunk driving should just add an explicit exclusion to their policies:

The solution for insurance companies like [LINA] is simple: add an express exclusion in policies covering accidental injuries for driving while under the influence of alcohol, or for any other risky activity that the company wishes to exclude…. This would allow policyholders to form reasonable expectations about what type of coverage they are purchasing without having to make sense of conflicting bodies of caselaw that deal with obscure issues of contractual interpretation.

Summer News Roundup: Bans on Credit Scoring, Bertha the Tunnel Machine, Bargains for Arbitration in ERISA Plans, and Benefit Managers

Courts had a busy summer on insurance and ERISA issues.

A Washington State judge struck down the Washington Insurance Commissioner’s ban on using credit scores to price insurance. The judge acknowledged that using credit scores (which are a proxy for poverty) has a discriminatory impact. Insureds with low credit scores pay more for insurance even if they present a low risk to the insurer. But the judge found that the legislature, not the Insurance Commissioner, has the authority to ban the practice.

The Washington Supreme Court held that there was no insurance coverage for damage to the machine used to bore the tunnel for the replacement of the Alaskan Way Viaduct in Seattle (affectionately nicknamed “Bertha” after Seattle’s former mayor). The machine broke down during the project in 2013. It was determined the machine suffered from a design defect. The Supreme Court held that the design defect fell within the scope of an exclusion in the applicable insurance policy for “machinery breakdown.”

Employers asked the U.S. Supreme Court to rule that ERISA disputes should go to arbitration. Several courts have decided that certain types of lawsuits alleging violations of ERISA’s fiduciary duties cannot be forced into arbitration. The reason is that the plaintiff in these cases sues on behalf of the governing employee benefit plan, which ERISA treats as a separate legal entity. Therefore, an individual employee’s signature on an employment contract with an arbitration clause in the fine print does not bar that employee from suing on behalf of the ERISA plan, at least according to these courts. If the Supreme Court steps in, that could change.

The Supreme Court declined to revisit a case holding that ERISA allows health plans to pay high prescription drug prices. The plaintiffs argued that their health plan’s administrator (called a Pharmacy Benefit Manager) acted as a fiduciary under ERISA when it set the prices the health plan and its participating employees paid for prescription drugs. As an ERISA fiduciary, the administrator would have an obligation to act in the best interest of the participating employees when setting drug prices. The Supreme Court’s decision not to take up the case leaves in place the lower court’s ruling that these administrators were not subject to ERISA’s fiduciary duties.

Insurers’ Ability to Deny Claims “Because We Said So” Limited in Proposed Amendment to ERISA

One question that’s important in deciding an ERISA-governed insurance claim is: who decides? ERISA plans typically provide that an employee or beneficiary receives a benefit if certain criteria are met. For instance, a health plan might pay medical bills if the treatment was “medically necessary,” a disability benefit plan might pay a portion of an employee’s wages if she can’t perform the “material and substantial” duties of her job, and so on.

Where a dispute arises over whether the plan should have paid these benefits, who decides whether these criteria are satisfied?

Since ERISA provides employees with a right to file a lawsuit in federal court to recover their benefits, you might assume the judge decides. But that’s often not the case.

A 1989 U.S. Supreme Court decision interpreted ERISA as allowing employee benefit plans to provide discretion to the plan’s decision-makers. This means that an employee benefit plan can empower itself or its agents with “discretion” to determine facts and interpret the terms of the plan. Where the plan’s decision-maker has discretion, federal courts often are not allowed to overrule them.

In other words, these types of discretionary provisions in ERISA plans invite the decision-maker to deny claims for benefits based on little more than “because we said so.” And when the employee sues to get their benefits, the federal court is often not allowed to say that the decision-maker got the facts wrong or misread the benefit criteria. The judge can often do little more than send the case back to the same decision-maker for another look.

Because many employee benefits are funded through an insurance policy, the person with this discretion is often an insurance company. Insurance companies, as the late Justice Scalia aptly observed, have a powerful incentive to abuse this discretion because they profit with every claim they reject.

New legislation was recently proposed to change this. The “Employee and Retiree Access to Justice Act of 2022” would amend ERISA to forbid this kind of “discretionary” language in ERISA plans. It would require insurers and other decision-makers who deny claims for benefits under ERISA plans to defend their decisions in court on the merits. They would no longer be able to point to their “discretion” and argue that the judge is forbidden from disagreeing with their decision.

WA Long Term Care Tax Not Preempted By ERISA, Says Federal Court

We previously blogged about the question of whether the Washington State Long Term Care Act is preempted by ERISA. On April 25, 2022, the U.S. District Court for the Western District of Washington dismissed a lawsuit challenging the law on the basis of, among other things, ERISA preemption. The federal court held the law is not preempted by ERISA.

Most Washingtonians are by now familiar with the Washington State Long Term Care Act. The law deducts a percentage of employees’ wages to pay for future state long-term care benefits. The question at issue in the lawsuit was whether this arrangement violates ERISA, which generally prevents states from regulating employee benefit plans.

The court ruled that it does not. The ruling emphasizes that ERISA applies only to benefits “established or maintained” by employers. According to the court, the law “is a creation of the Washington Legislature, which, in this context, is neither an employer nor an employee organization as defined by ERISA.”

After finding ERISA did not apply, the court remanded the case to the Washington State courts to decide the plaintiffs’ other challenges to the law because, with ERISA off the table, there was no basis for the lawsuit to be in federal court.