AI And Racial Bias In Insurance Claims

Stop me if you’ve heard this one before: an old-economy business gets sold an AI or computer algorithm promising to cut costs and improve business. Innovation! Big Data! Progress! What could go wrong? Unfortunately, we’re increasingly learning that AI incorporates the same biases, including racial biases, that have always existed, except that it makes those biases harder to challenge by wrapping them in a shiny package of technological legitimacy.

Allegations that AI and computer algorithms are used to perpetuate racial biases are nothing new. But a recent lawsuit indicates these practices are infiltrating the insurance business.

To explain how this works, it helps to understand two important things about how insurance companies think. The first is fraud. Big, consumer-facing insurers (the ones you see advertising on prime-time TV with silly animal mascots or jingles) receive claims from thousands or millions of people every year. A small fraction of these (an industry group estimates ten percent, though it probably has an incentive to overestimate) involve fraud, i.e., people lying to get money they aren’t entitled to. Sometimes this is straightforward: people claiming to have been injured in car crashes that never happened, for example. Other times it’s more innocent: maybe someone mistakenly claims the couch that got destroyed in a house fire was brand new when it was actually a few years old.

Regardless, fraud is illegal, immoral, and (most relevant here) costs the insurance company money. So most insurance companies have a special department to screen out fraudulent claims. It’s usually called something like the “Special Investigations Unit” (SIU) and given a veneer of law enforcement. Sometimes the SIU finds claims that are genuinely fraudulent. Sometimes, the SIU gets accused of bullying legitimate claimants into dropping the claim through the implied threat that they’ll be found guilty of fraud. This can prey on policyholders perceived as more vulnerable, which typically translates to targeting poor, immigrant, or minority policyholders.

The second part of the insurance business that comes into play when we think about using AI to search for fraud is the intense pressure to cut costs. Insurance is a business like any other. It’s driven by the profit motive, and, as insurers increasingly become publicly traded (a relatively new innovation spurred in part by the weakening of federal banking regulations in the late 1990s), it’s driven by the need to please shareholders, and please them RIGHT NOW. The way to do that is show more profits in this quarter’s financial report.

Insurers have (basically) two ways to do this. The first is to charge higher premiums. This is often a non-starter. Insurance consumers are very price-sensitive these days. In the past, your average American family might buy insurance from an agent with a brick-and-mortar office on the local Main Street whom they’d known for years. They bought based on their personal relationship with that agent.

But, nowadays, those people probably buy coverage online. They have no human relationship with the company. So they do what we all do under these circumstances: hit up a few websites and buy from the cheapest company. Raising premiums by a few dollars drives these customers away. Fewer customers can mean angry shareholders.

So if the company can’t boost profits by charging more, it has to try to save money. This can mean paying employees less, spending less time and money investigating claims, and paying fewer claims (and fewer dollars on the claims it does pay).

In this context, it can also mean turning the SIU’s fraud investigating functions over to an AI. Why pay an experienced investigator a handsome salary to spend thousands of hours combing through millions of claim files every year looking for evidence of fraud when you can pay an AI a few bucks to crunch all the data and flag the fraudulent claims? Everybody wins!

If that sounds too good to be true, it probably is, at least according to one lawsuit that alleges one major insurer used an AI to flag files for fraud based on what amounts to racial dog-whistles. If the allegations are to be believed, the AI works by (basically) assuming claims from policyholders living in poorer, Black-er neighborhoods are more likely to be fraudulent.
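To make the proxy-discrimination mechanism concrete, here is a minimal, entirely hypothetical sketch in Python. Every feature, weight, and threshold below is invented for illustration; the lawsuit does not disclose how the actual model works. The point is that a scoring rule can avoid any explicit mention of race and still flag claims along racial lines, because neighborhood-level inputs like median income or vacancy rates correlate with race.

```python
# Hypothetical sketch of proxy discrimination in fraud scoring.
# All features, weights, and thresholds are invented for illustration only.

def fraud_score(claim):
    """Toy fraud-scoring rule using only facially race-neutral inputs."""
    score = 0.0
    # Neighborhood statistics: facially neutral, but correlated with race.
    score += 2.0 if claim["zip_median_income"] < 40_000 else 0.0
    score += 1.5 if claim["zip_vacancy_rate"] > 0.10 else 0.0
    # A claim-level feature with no obvious racial correlation.
    score += 1.0 if claim["policy_age_months"] < 12 else 0.0
    return score

FLAG_THRESHOLD = 2.5

# Two identical claims, differing only in neighborhood statistics.
claim_a = {"zip_median_income": 85_000, "zip_vacancy_rate": 0.03,
           "policy_age_months": 6}
claim_b = {"zip_median_income": 32_000, "zip_vacancy_rate": 0.15,
           "policy_age_months": 6}

print(fraud_score(claim_a) >= FLAG_THRESHOLD)  # False: not sent to the SIU
print(fraud_score(claim_b) >= FLAG_THRESHOLD)  # True: flagged for SIU review
```

Nothing in the rule names race, yet the two otherwise identical claims get different treatment based solely on where the policyholder lives. That is the essence of the allegation: the inputs launder race into the output.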

According to the lawsuit, which cites a study of 800 policyholders in the Midwest, the AI flags claims as potentially fraudulent based on a host of characteristics that boil down to a proxy for race.

The lawsuit was filed December 14, 2022. No judge or jury has decided if these allegations are true.

But it’s a fascinating look at how practices we think of as tech-industry-focused (using AI in questionable ways) can infect the insurance industry, an industry that is about as “old economy” as it gets.

Ninth Circuit Considers When Death By Drunk Driving Is “Accidental”

Is death “accidental” when a person gets in a fatal car crash while over the legal alcohol limit? Courts have had a hard time answering this question. A recent Ninth Circuit ruling provides some clarity.

In Wolf v. Life Insurance Company of North America, the Ninth Circuit held that death resulting from drunk driving was “accidental” for purposes of insurance policy coverage.

In that case, the insured died after driving 65 miles per hour going the wrong way on a one-way road with a 10 mile per hour speed limit. An autopsy found he had a blood-alcohol level of 0.20%.

His family made an insurance claim with Life Insurance Company of North America (LINA). LINA had sold the deceased an insurance policy covering “accidental” death.

LINA denied the claim. It determined that death under these circumstances was not “accidental” because it was foreseeable that driving with a 0.20% blood-alcohol level would result in death or serious injury.

The family filed a lawsuit contesting the denial under ERISA. The Seattle federal court sided with the family. The court ruled that the decedent’s behavior was “extremely reckless” but did not make death so certain as to render it not “accidental”. LINA appealed to the Ninth Circuit Court of Appeals.

The Ninth Circuit agreed with the lower court. It acknowledged that courts have applied different tests to determine whether death under these circumstances is “accidental.” It decided that the most appropriate question is whether death was “substantially certain” to occur: if death is substantially certain, it can’t be accidental.

Applying that test, the Ninth Circuit agreed that death was accidental. Although the insured’s behavior was reckless, it did not make death substantially certain. The court emphasized: “there is no doubt that drunk driving is ill-advised, dangerous, and easily avoidable.” But death was still accidental.

The court concluded with the truism that insurers who don’t want to cover death resulting from drunk driving should just add an explicit exclusion to their policies:

The solution for insurance companies like [LINA] is simple: add an express exclusion in policies covering accidental injuries for driving while under the influence of alcohol, or for any other risky activity that the company wishes to exclude…. This would allow policyholders to form reasonable expectations about what type of coverage they are purchasing without having to make sense of conflicting bodies of caselaw that deal with obscure issues of contractual interpretation.

Summer News Roundup: Bans on Credit Scoring, Bertha the Tunnel Machine, Bargains for Arbitration in ERISA Plans, and Benefit Managers

Courts had a busy summer on insurance and ERISA issues.

A Washington State judge struck down the Washington Insurance Commissioner’s ban on using credit scores to price insurance. The judge acknowledged that using credit scores (which are a proxy for poverty) has a discriminatory impact. Insureds with low credit scores pay more for insurance even if they present a low risk to the insurer. But the judge found that the legislature, not the Insurance Commissioner, has the authority to ban the practice.

The Washington Supreme Court held that there was no insurance coverage for damage to the machine used to bore the tunnel for the replacement of the Alaskan Way Viaduct in Seattle (affectionately nicknamed “Bertha” after Seattle’s former mayor). The machine broke down during the project in 2013. It was determined the machine suffered from a design defect. The Supreme Court held that the design defect fell within the scope of an exclusion in the applicable insurance policy for “machinery breakdown.”

Employers asked the U.S. Supreme Court to rule that ERISA disputes should go to arbitration. Several courts have decided that certain types of lawsuits alleging violations of ERISA’s fiduciary duties cannot be forced into arbitration. The reason is that the plaintiff in these cases sues on behalf of the governing employee benefit plan. ERISA treats such a plan as a separate legal entity. Therefore, an individual employee’s signature on an employment contract with an arbitration clause in the fine print does not bar that employee from suing on behalf of the ERISA plan–at least according to these courts. If the Supreme Court steps in, that could change.

The Supreme Court declined to revisit a case holding that ERISA allows health plans to pay high prescription drug prices. The plaintiffs argued that their health plan’s administrator (called a Pharmacy Benefit Manager) acted as a fiduciary under ERISA when it set the prices the health plan and its participating employees paid for prescription drugs. As an ERISA fiduciary, the administrator would have an obligation to act in the best interest of the participating employees when setting drug prices. The Supreme Court’s decision not to take up the case leaves in place the lower court’s ruling that these administrators were not subject to ERISA’s fiduciary duties.

Don’t Assume All Employer-Adjacent Insurance is ERISA-governed, Says Ninth Circuit

There’s often an erroneous assumption that any insurance a person buys in connection with their employment is automatically subject to ERISA. But ERISA does not regulate all employer-adjacent insurance. ERISA only applies to employee benefit “plans.” Whether an ERISA “plan” exists can be complex, but without one, an insurance policy will not be subject to ERISA even if an employer was involved in its purchase.

A recent Ninth Circuit decision is a good reminder of this. In Steiglemann v. Symetra Life Ins. Co., the appellate court determined that an insurance policy purchased in connection with the plaintiff’s employment was not subject to ERISA because the requirements for an employee benefit “plan” were not met. The decision is unpublished, meaning it may be persuasive to lower courts but is not binding.

Jill Steiglemann bought a disability insurance policy from Symetra Life Insurance Company. She had access to the policy through her membership in a trade association for insurance agents. Her company paid for the insurance. The lower court held that the policy was part of an employee benefit plan and subject to ERISA.

The Ninth Circuit Court of Appeals reversed the lower court and held that the policy was not governed by ERISA. Even though Steiglemann’s employer arranged for the option for her to buy coverage and paid premiums, this was not enough to show the employer established an ERISA plan.

The employer never contracted to provide for coverage. It never promised to act as an administrator for the insurance. And it never took the steps necessary to maintain an ERISA plan, like recordkeeping and filing returns with the Department of Labor.

Steiglemann’s trade association also did not do the things necessary to create an ERISA plan. The association did not exist for the purpose of representing employees in dealings with their employer.

There was therefore no evidence that Steiglemann’s insurance policy was part of an employee benefit plan. And without a plan, the policy was not subject to ERISA.

This decision is a helpful reminder not to assume that ERISA applies to all employer-adjacent insurance.

Insurers Still Breaking Mental Health Coverage Rules, Says Department of Labor

The 2022 report to Congress from the Department of Labor (DoL) on compliance by group health plans with the federal mental health parity laws identifies numerous instances of continued discrimination in coverage for treatment of mental health diagnoses.

Federal law generally prohibits insurers from discriminating against people who need coverage for treatment of mental health conditions. Basically, health insurers cannot impose limitations on treatment for a mental health condition that are more restrictive than the limitations on treatment for other conditions. These rules have only become more important since the COVID-19 pandemic contributed to mental health issues for many Americans; for instance, the CDC noted a 30% increase in overdose deaths since the pandemic.

In large part for this reason, DoL has made enforcement of the mental health parity rules a priority in recent years. One new enforcement tool is a requirement, enacted by Congress in 2021, that health plans provide DoL with a comparative analysis of their treatment limitations for mental health conditions so DoL can verify those limitations follow the law.

DoL’s report identified many problems with health plans’ reporting about mental health parity. For instance:

  • Failure to document comparative analyses of mental health treatment limitations before implementing those limitations;
  • Lack of evidence or explanation for their assertions; and
  • Failure to identify the specific benefits affected by mental health limitations.

DoL also noted that enforcing these reporting rules had led to the removal of several widespread insurer practices that violated the mental health parity rules.

For example, one major insurer was found to routinely deny certain behavioral health treatment for children with Autism Spectrum Disorder. This denied autistic children early intervention whose benefits can last a lifetime. DoL found over 18,000 insureds affected by this exclusion.

Another example involved the systemic denial of treatment used in combatting the opioid epidemic. New research has found that combining therapy with medication can be more effective for treating opioid addiction than medication alone. DoL found a large health plan excluded coverage for this therapy in violation of the mental health parity rules.

DoL’s report also identified widespread unlawful denials of other treatments, including counseling for eating disorders and drug testing used in addiction treatment, as well as burdensome pre-certification requirements for mental health benefits.

DoL’s report is a reminder that discrimination on the basis of mental health related disabilities remains a part of the insurance business despite years of federal legislation to the contrary.