AI and Racial Bias in Insurance Claims

Stop me if you’ve heard this one before: an old-economy business gets sold an AI or computer algorithm promising to cut costs and improve business. Innovation! Big Data! Progress! What could go wrong? Unfortunately, we’re increasingly learning that AI incorporates the same biases, including racial biases, that have always existed, except that it makes those biases harder to challenge by wrapping them in a shiny package of technological legitimacy.

Allegations that AI and computer algorithms are used to perpetuate racial biases are nothing new. But a recent lawsuit indicates these practices are infiltrating the insurance business.

To explain how this works, it helps to understand two important things about how insurance companies think. The first is fraud. Big, consumer-facing insurers (the ones you see advertising on prime-time TV with silly animal mascots or jingles) receive claims from thousands or millions of people every year. A small fraction of these (estimated at ten percent by an industry group, which probably has an incentive to overestimate) involve fraud, i.e., people lying to get money they aren’t entitled to. Sometimes this is straightforward: people claiming to have been injured in car crashes that never happened, for example. Other times it’s more innocent: maybe someone mistakenly claims the couch that got destroyed in a house fire was brand new when it was actually a few years old.

Regardless, fraud is illegal, immoral, and (most relevant here) costs the insurance company money. So most insurance companies have a special department to screen out fraudulent claims. It’s usually called something like the “Special Investigations Unit” (SIU) and given a veneer of law enforcement. Sometimes the SIU finds claims that are genuinely fraudulent. Sometimes the SIU gets accused of bullying legitimate claimants into dropping their claims through the implied threat that they’ll be found guilty of fraud. This can prey on policyholders perceived as more vulnerable, which in practice often means poor, immigrant, or minority policyholders.

The second part of the insurance business that comes into play when we think about using AI to search for fraud is the intense pressure to cut costs. Insurance is a business like any other. It’s driven by the profit motive, and, as insurers increasingly become publicly traded (a relatively new development spurred in part by the weakening of federal banking regulations in the late 1990s), it’s driven by the need to please shareholders, and please them RIGHT NOW. The way to do that is to show more profit in this quarter’s financial report.

Insurers have (basically) two ways to do this. The first is to charge higher premiums. This is often a non-starter. Insurance consumers are very price-sensitive these days. In the past, your average American family might buy insurance from an agent they’d known for years, with a brick-and-mortar office on the local Main Street. They bought based on their personal relationship with that agent.

But, nowadays, those people probably buy coverage online. They have no human relationship with the company. So they do what we all do under these circumstances: hit up a few websites and buy from the cheapest company. Raising premiums by a few dollars drives these customers away. Fewer customers can mean angry shareholders.

So if the company can’t boost profits by charging more, it has to try to save money. This can mean paying employees less, spending less money and time investigating claims, and paying fewer claims (and fewer dollars on the claims it does pay).

In this context, it can also mean turning the SIU’s fraud investigating functions over to an AI. Why pay an experienced investigator a handsome salary to spend thousands of hours combing through millions of claim files every year looking for evidence of fraud when you can pay an AI a few bucks to crunch all the data and flag the fraudulent claims? Everybody wins!

If that sounds too good to be true, it probably is, at least according to a recent lawsuit alleging that one major insurer used an AI to flag files for fraud based on what amount to racial dog-whistles. If the allegations are to be believed, the AI works by (basically) assuming claims from policyholders living in poorer, predominantly Black neighborhoods are more likely to be fraudulent.

According to the lawsuit, which cites a study of 800 policyholders in the Midwest, the AI flags claims as potentially fraudulent based on a host of characteristics that, taken together, boil down to a proxy for race.
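To make the mechanics concrete, here is a minimal, hypothetical sketch of how that kind of proxy discrimination can work in code. To be clear, this is not the insurer’s actual model (the complaint doesn’t disclose one); the features, weights, and thresholds below are invented purely to show how a scoring rule can sort claims by race without ever mentioning race.

```python
# Hypothetical sketch of proxy discrimination in a fraud-flagging score.
# All features, weights, and thresholds are invented for illustration;
# this is not any real insurer's system.
from dataclasses import dataclass

@dataclass
class Claim:
    amount: float
    zip_median_income: float  # facially neutral ZIP-level feature
    zip_pct_renters: float    # facially neutral, but demographically correlated

def fraud_score(claim: Claim) -> float:
    """No race variable appears anywhere, yet the ZIP-level features
    doing the work correlate with race, so the score acts as a proxy."""
    score = 0.0
    if claim.zip_median_income < 45_000:  # invented threshold
        score += 0.4
    if claim.zip_pct_renters > 0.60:      # invented threshold
        score += 0.3
    if claim.amount > 10_000:
        score += 0.2
    return score

# Two identical losses, differing only in neighborhood:
suburban = Claim(amount=12_000, zip_median_income=90_000, zip_pct_renters=0.20)
urban    = Claim(amount=12_000, zip_median_income=38_000, zip_pct_renters=0.70)

print(round(fraud_score(suburban), 1))  # 0.2 -> paid in the ordinary course
print(round(fraud_score(urban), 1))     # 0.9 -> routed to the SIU
```

The point of the sketch is that deleting an explicit race variable accomplishes nothing if the remaining inputs encode the same information.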

The lawsuit was filed December 14, 2022. No judge or jury has decided if these allegations are true.

But it’s a fascinating look at how practices we associate with the tech industry (using AI in questionable ways) can infect insurance, a business that is about as “old economy” as it gets.

Summer News Roundup: Bans on Credit Scoring, Bertha the Tunnel Machine, Bargains for Arbitration in ERISA Plans, and Benefit Managers

Courts had a busy summer on insurance and ERISA issues.

A Washington State judge struck down the Washington Insurance Commissioner’s ban on using credit scores to price insurance. The judge acknowledged that using credit scores (which are a proxy for poverty) has a discriminatory impact. Insureds with low credit scores pay more for insurance even if they present a low risk to the insurer. But the judge found that the legislature, not the Insurance Commissioner, has the authority to ban the practice.

The Washington Supreme Court held that there was no insurance coverage for damage to the machine used to bore the tunnel for the replacement of the Alaskan Way Viaduct in Seattle (affectionately nicknamed “Bertha” after Seattle’s former mayor). The machine broke down during the project in 2013, and the breakdown was later traced to a design defect. The Supreme Court held that the design defect fell within the scope of an exclusion in the applicable insurance policy for “machinery breakdown.”

Employers asked the U.S. Supreme Court to rule that ERISA disputes should go to arbitration. Several courts have decided that certain types of lawsuits alleging violations of ERISA’s fiduciary duties cannot be forced into arbitration. The reason is that the plaintiff in these cases sues on behalf of the governing employee benefit plan. ERISA treats such a plan as a separate legal entity. Therefore, an individual employee’s signature on an employment contract with an arbitration clause in the fine print does not bar that employee from suing on behalf of the ERISA plan, at least according to these courts. If the Supreme Court steps in, that could change.

The Supreme Court declined to revisit a case holding that ERISA allows health plans to pay high prescription drug prices. The plaintiffs argued that their health plan’s administrator (called a Pharmacy Benefit Manager) acted as a fiduciary under ERISA when it set the prices the health plan and its participating employees paid for prescription drugs. As an ERISA fiduciary, the administrator would have an obligation to act in the best interest of the participating employees when setting drug prices. The Supreme Court’s decision not to take up the case leaves in place the lower court’s ruling that these administrators were not subject to ERISA’s fiduciary duties.

Insurers Still Breaking Mental Health Coverage Rules, Says Department of Labor

The 2022 report to Congress from the Department of Labor (DoL) on compliance by group health plans with the federal mental health parity laws identifies numerous instances of continued discrimination in coverage for treatment of mental health conditions.

Federal law generally prohibits insurers from discriminating against people who need coverage for treatment of mental health conditions. Basically, health insurers cannot impose limitations on treatment for a mental health condition that are more restrictive than the limitations on treatment for other conditions. These rules have only become more important since the COVID-19 pandemic contributed to mental health issues for many Americans; for instance, the CDC noted a 30% increase in overdose deaths since the start of the pandemic.

In large part for this reason, DoL has made enforcement of the mental health parity rules a priority in recent years. One new enforcement tool is a requirement Congress enacted in 2021 that health plans provide DoL with a comparative analysis of their treatment limitations for mental health conditions, so DoL can verify those limitations comply with the law.

DoL’s report identified many problems with health plans’ reporting about mental health parity. For instance:

  • Failure to document comparative analyses of mental health treatment limitations before implementing those limitations;
  • Lack of evidence or explanation for the plans’ assertions; and
  • Failure to identify the specific benefits affected by mental health limitations.

DoL also noted that enforcing these reporting rules had led to the removal of several widespread insurer practices that violated the mental health parity rules.

For example, one major insurer was found to routinely deny certain behavioral health treatment for children with Autism Spectrum Disorder. This meant denying early intervention that could have lifelong benefits for autistic children. DoL found over 18,000 insureds affected by this exclusion.

Another example involved the systemic denial of treatment used in combating the opioid epidemic. New research has found that combining therapy with medication can be more effective for treating opioid addiction than medication alone. DoL found that a large health plan excluded coverage for this therapy in violation of the mental health parity rules.

DoL’s report identified other widespread violations as well, including denials of counseling to treat eating disorders, denials of drug testing used to treat addiction, and burdensome pre-certification requirements for mental health benefits.

DoL’s report is a reminder that discrimination on the basis of mental-health-related disabilities remains a part of the insurance business despite years of federal legislation to the contrary.

Washington State Insurance News Roundup: Credit Scores, Surprise Medical Billing, and Vaccines

Washington State’s Office of the Insurance Commissioner (“OIC”) has had a busy March. The OIC, Washington State’s regulator responsible for overseeing insurance sold in Washington, issued several orders regarding discriminatory insurance pricing and the COVID pandemic.

First, the OIC banned insurers from using credit scores to price insurance. The insurance commissioner found the ban necessary to prevent discriminatory pricing in auto, renters, and homeowners insurance. Using credit scores to price insurance has been criticized as discriminatory because the practice results in low-income policyholders and people of color paying more for insurance. Auto insurance companies, for example, charge good drivers with low credit scores nearly 80% more for state-mandated auto coverage. This practice is anticipated to become even more egregious as COVID emergency protections expire this year, causing people who experienced financial hardship due to the pandemic to pay more for insurance merely because their credit scores have dropped. The insurance commissioner acted after legislation banning credit scores in insurance pricing failed to advance through the Washington State legislature.

Second, OIC extended certain emergency orders regarding COVID. These orders require health insurance companies to waive cost-sharing and protect consumers from surprise bills for COVID testing. The orders also require insurers to allow out-of-network providers to treat or test for COVID if the insurer lacks sufficient in-network providers. These orders were originally entered last year and are now extended to April 18, 2021. OIC also extended the requirement that insurers cover telehealth services.

Third, OIC responded to COVID vaccine misinformation. False reports have circulated that getting the COVID vaccine can void life insurance coverage or affect premiums or benefits. The OIC clarified that COVID vaccination will not harm your insurance eligibility.

Lastly, OIC gave an update on the effect of the American Rescue Plan Act on health insurance premiums for policies purchased on the Exchange (a/k/a “Obamacare” policies). OIC explained that the new law reduces the percentage of income that people must pay for health coverage on an Exchange policy. The new law also increases subsidies for people receiving unemployment benefits and covers COBRA premiums for people who lost their jobs but want to keep their employer-sponsored coverage.

Industry Group Reviewing Insurance Rate Practices for Racial Bias

An industry group known as the Insurance Information Institute is analyzing the role racial bias plays in calculating insurance premiums. Explicit racial bias, i.e., setting premiums directly based on race (known as “redlining”), has been illegal since the mid-20th century. But rates continue to be set based on criteria that indirectly reflect racial bias. One study found persistent rate increases for homeowners’ insurance in minority neighborhoods that exceeded legitimate risk differentials.

Rate criteria reflecting implicit racial bias include credit scores and occupations. The insurance industry has long defended these criteria as reliable predictors of risk. But the new working group pushes back on those assumptions:

Research shows that average credit scores for white and Asian customers are better than those for Black and Hispanic customers…Insurance credit scores reflect and perpetuate historic racism and unfairly discriminate against Black and Hispanic communities.

Other facially neutral rate-setting criteria can have a discriminatory impact. Motor vehicle records (e.g., traffic tickets) can reflect systemic racism because, for example, affluent white drivers are better able to afford lawyers who can get citations dismissed or downgraded.

The industry group is also investigating whether the use of computer algorithms to analyze so-called “big data” about drivers can reflect implicit racial bias. This mirrors concerns in other fields (e.g., facial recognition software) that computer programs inadvertently perpetuate existing biases.
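For a sense of what testing an algorithm for this kind of bias can look like, here is a hypothetical sketch of a simple disparate-impact audit. It borrows the “four-fifths” ratio long used as a rough screen in U.S. employment-discrimination analysis; the toy model, the data, and the 0.8 cutoff are all assumptions for illustration, not anything drawn from the industry group’s report.

```python
# Hypothetical disparate-impact audit of a black-box driver-scoring model.
# The toy model, data, and four-fifths cutoff are illustrative assumptions.
from typing import Callable

def flag_rate(drivers: list[dict], model: Callable[[dict], bool]) -> float:
    """Share of drivers the model flags as high risk."""
    return sum(1 for d in drivers if model(d)) / len(drivers)

def disparate_impact_ratio(group_a: list[dict], group_b: list[dict],
                           model: Callable[[dict], bool]) -> float:
    """Ratio of favorable-outcome rates (here, NOT being flagged).
    Values below roughly 0.8 trip the classic four-fifths screen."""
    favorable_a = 1 - flag_rate(group_a, model)
    favorable_b = 1 - flag_rate(group_b, model)
    return favorable_a / favorable_b

# A facially neutral rule: two or more tickets -> flagged as high risk.
toy_model = lambda d: d["tickets"] >= 2

# If enforcement itself is biased, one group arrives with more tickets.
group_a = [{"tickets": t} for t in (0, 1, 2, 3)]
group_b = [{"tickets": t} for t in (0, 1, 0, 1)]

print(disparate_impact_ratio(group_a, group_b, toy_model))  # 0.5 -> worth a closer look
```

If ticket counts themselves reflect biased enforcement, as the preceding paragraphs suggest, even a “clean” model trained on them will fail exactly this kind of check.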

This new report shows the insurance industry as a whole is following up on efforts from state regulators to limit discriminatory premium rates. New York’s Department of Financial Services recently prohibited using education and occupation to price car insurance. The rule only applies in New York. Hopefully this pushback will become more widespread as other groups take note.