Washington State recently implemented model legislation from the National Association of Insurance Commissioners that regulates pet insurance.
Pet insurance is a relatively new product that has grown in popularity with the increasing prevalence of pet ownership in the United States. Industry statistics show that the number of insured pets nationwide has grown by more than 2 million since 2017.
Unfortunately, this emerging product line has been riddled with complaints of unfair business practices. Policyholders complain of insurers misrepresenting coverages, hiding exclusions, and failing to pay claims. These issues have led to significant regulatory enforcement from Washington’s Office of the Insurance Commissioner.
The new pet insurance statutes are an attempt to fix these problems. The new law establishes clear definitions for pet insurance terms like "chronic condition," "preexisting condition," and "veterinarian." It requires policies using these terms to follow the statutory definitions. This helps ensure consumers know what they are getting when they buy pet insurance.
The law also requires disclosure of important exclusions. Policies must state exclusions in specific language. They must explicitly identify limitations based on things like preexisting conditions or hereditary disorders. Agents selling pet insurance must also be appropriately licensed and trained.
And the law gives consumers a 15-day "free look" period to change their minds, return the policy, and get their money back.
Stop me if you’ve heard this one before: an old-economy business gets sold an AI or computer algorithm promising to cut costs and improve business. Innovation! Big Data! Progress! What could go wrong? Unfortunately, we’re increasingly learning that AI incorporates the same biases, including racial biases, that have always existed, except that it makes those biases harder to challenge by wrapping them in a shiny package of technological legitimacy.
Allegations that AI and computer algorithms are used to perpetuate racial biases are nothing new. But a recent lawsuit indicates these practices are infiltrating the insurance business.
To explain how this works, it's helpful to understand two important things about how insurance companies think. The first is fraud. Big, consumer-facing insurers (the ones you see advertising on prime time TV with silly animal mascots or jingles) receive claims from thousands or millions of people every year. A small fraction of these (estimated at ten percent by an industry group, which probably has an incentive to overestimate) involve fraud, i.e., people lying to get money they aren't entitled to. Sometimes this is straightforward: people claiming to have been injured in car crashes that never happened, for example. Other times it's more innocent: maybe someone mistakenly claims the couch that got destroyed in a house fire was brand new when it was actually a few years old.
Regardless, fraud is illegal, immoral, and (most relevant here) costs the insurance company money. So most insurance companies have a special department to screen out fraudulent claims. It’s usually called something like the “Special Investigations Unit” (SIU) and given a veneer of law enforcement. Sometimes the SIU finds claims that are genuinely fraudulent. Sometimes, the SIU gets accused of bullying legitimate claimants into dropping the claim through the implied threat that they’ll be found guilty of fraud. This can prey on policyholders perceived as more vulnerable, which typically translates to targeting poor, immigrant, or minority policyholders.
The second part of the insurance business that comes into play when we think about using AI to search for fraud is the intense pressure to cut costs. Insurance is a business like any other. It's driven by the profit motive, and, as insurers increasingly become publicly traded (a relatively new development spurred in part by the weakening of federal banking regulations in the late 1990s), it's driven by the need to please shareholders, and please them RIGHT NOW. The way to do that is to show more profits in this quarter's financial report.
Insurers have (basically) two ways to do this. The first is to charge higher premiums. This is often a non-starter. Insurance consumers are very price-sensitive these days. In the past, your average American family might buy insurance from an agent they'd known for years, working out of a brick-and-mortar office on the local Main Street. They bought based on their personal relationship with that agent.
But, nowadays, those people probably buy coverage online. They have no human relationship with the company. So they do what we all do under these circumstances: hit up a few websites and buy from the cheapest company. Raising premiums by a few dollars drives these customers away. Fewer customers can mean angry shareholders.
So if the company can't boost profits by charging more, it has to try to save money. This can mean paying employees less, spending less money and time investigating claims, and paying fewer claims (and fewer dollars on the claims it does pay).
In this context, it can also mean turning the SIU’s fraud investigating functions over to an AI. Why pay an experienced investigator a handsome salary to spend thousands of hours combing through millions of claim files every year looking for evidence of fraud when you can pay an AI a few bucks to crunch all the data and flag the fraudulent claims? Everybody wins!
If that sounds too good to be true, it probably is, at least according to one lawsuit that alleges one major insurer used an AI to flag files for fraud based on what amounts to racial dog-whistles. If the allegations are to be believed, the AI works by (basically) assuming claims from policyholders living in poorer, Black-er neighborhoods are more likely to be fraudulent.
According to the lawsuit, which cites a study of 800 policyholders in the Midwest, the AI flags claims as potentially fraudulent based on a host of characteristics that boil down to a proxy for race.
The lawsuit was filed December 14, 2022. No judge or jury has decided if these allegations are true.
But it’s a fascinating look at how practices we think of as tech-industry-focused (using AI in questionable ways) can infect the insurance industry, an industry that is about as “old economy” as it gets.
Is death “accidental” when a person gets in a fatal car crash while over the legal alcohol limit? Courts have had a hard time answering this question. A recent Ninth Circuit ruling provides some clarity.
In that case, the insured died after driving 65 miles per hour the wrong way on a one-way road with a 10-mile-per-hour speed limit. An autopsy found he had a blood-alcohol level of 0.20%.
His family made an insurance claim with Life Insurance Company of North America (LINA). LINA had sold the deceased an insurance policy covering “accidental” death.
LINA denied the claim. It determined that death under these circumstances was not “accidental” because it was foreseeable that driving with a 0.20% blood-alcohol level would result in death or serious injury.
The family filed a lawsuit contesting the denial under ERISA. The Seattle federal court sided with the family. The court ruled that the decedent's behavior was "extremely reckless" but did not make death so certain as to render it not "accidental." LINA appealed to the Ninth Circuit Court of Appeals.
The Ninth Circuit agreed with the lower court. It acknowledged that courts have applied different tests to determine whether death under these circumstances is “accidental.” It decided that the most appropriate question is whether death was “substantially certain” to occur: if death is substantially certain, it can’t be accidental.
Applying that test, the Ninth Circuit agreed that death was accidental. Although the insured’s behavior was reckless, it did not make death substantially certain. The court emphasized: “there is no doubt that drunk driving is ill-advised, dangerous, and easily avoidable.” But death was still accidental.
The court concluded with the truism that insurers who don’t want to cover death resulting from drunk driving should just add an explicit exclusion to their policies:
The solution for insurance companies like [LINA] is simple: add an express exclusion in policies covering accidental injuries for driving while under the influence of alcohol, or for any other risky activity that the company wishes to exclude…. This would allow policyholders to form reasonable expectations about what type of coverage they are purchasing without having to make sense of conflicting bodies of caselaw that deal with obscure issues of contractual interpretation.
Courts had a busy summer on insurance and ERISA issues.
A Washington State judge struck down the Washington Insurance Commissioner’s ban on using credit scores to price insurance. The judge acknowledged that using credit scores (which are a proxy for poverty) has a discriminatory impact. Insureds with low credit scores pay more for insurance even if they present a low risk to the insurer. But the judge found that the legislature, not the Insurance Commissioner, has the authority to ban the practice.
The Washington Supreme Court held that there was no insurance coverage for damage to the machine used to bore the tunnel for the replacement of the Alaskan Way Viaduct in Seattle (affectionately nicknamed “Bertha” after Seattle’s former mayor). The machine broke down during the project in 2013. It was determined the machine suffered from a design defect. The Supreme Court held that the design defect fell within the scope of an exclusion in the applicable insurance policy for “machinery breakdown.”
Employers asked the U.S. Supreme Court to rule that ERISA disputes should go to arbitration. Several courts have decided that certain types of lawsuits alleging violations of ERISA's fiduciary duties cannot be forced into arbitration. The reason is that the plaintiff in these cases sues on behalf of the governing employee benefit plan. ERISA treats such a plan as a separate legal entity. Therefore, an individual employee's signature on an employment contract with an arbitration clause in the fine print does not bar that employee from suing on behalf of the ERISA plan–at least according to these courts. If the Supreme Court steps in, that could change.
The Supreme Court declined to revisit a case holding that ERISA allows health plans to pay high prescription drug prices. The plaintiffs argued that their health plan’s administrator (called a Pharmacy Benefit Manager) acted as a fiduciary under ERISA when it set the prices the health plan and its participating employees paid for prescription drugs. As an ERISA fiduciary, the administrator would have an obligation to act in the best interest of the participating employees when setting drug prices. The Supreme Court’s decision not to take up the case leaves in place the lower court’s ruling that these administrators were not subject to ERISA’s fiduciary duties.
There’s often an erroneous assumption that any insurance a person buys in connection with their employment is automatically subject to ERISA. But ERISA does not regulate all employer-adjacent insurance. ERISA only applies to employee benefit “plans.” Whether an ERISA “plan” exists can be complex, but without one, an insurance policy will not be subject to ERISA even if an employer was involved in its purchase.
A recent Ninth Circuit decision is a good reminder of this. In Steiglemann v. Symetra Life Ins. Co., the appellate court determined that an insurance policy purchased in connection with the plaintiff’s employment was not subject to ERISA because the requirements for an employee benefit “plan” were not met. The decision is unpublished, meaning it may be persuasive to lower courts but is not binding.
Jill Steiglemann bought a disability insurance policy from Symetra Life Insurance Company. She had access to the policy through her membership in a trade association for insurance agents. Her company paid for the insurance. The lower court held that the policy was part of an employee benefit plan and subject to ERISA.
The Ninth Circuit Court of Appeals reversed the lower court and held that the policy was not governed by ERISA. Even though Steiglemann’s employer arranged for the option for her to buy coverage and paid premiums, this was not enough to show the employer established an ERISA plan.
The employer never contracted to provide for coverage. It never promised to act as an administrator for the insurance. And it never took the steps necessary to maintain an ERISA plan, like recordkeeping and filing returns with the Department of Labor.
Steiglemann’s trade association also did not do the things necessary to create an ERISA plan. The association did not function for the main purpose of representing employees against their employer.
There was therefore no evidence that Steiglemann’s insurance policy was part of an employee benefit plan. And without a plan, the policy was not subject to ERISA.
This decision is a helpful reminder not to assume that ERISA applies to all employer-adjacent insurance.