AI and Claims Handling: Navigating the Next Wave of Bad Faith Suits
By Chris Johnson, Gent Silberkleit, Ph.D., and Michelle Rey LaRocca | This article was originally published in the November/December 2024 issue of the Defense Research Institute’s (DRI’s) flagship publication, For the Defense
Artificial intelligence (“AI”) is revolutionizing industries, and the claim handling world is no exception. However, with innovation comes a wave of legal challenges. Headlines about AI range from sensational predictions of job takeovers to dire warnings of machines gone rogue. And as AI changes the way some insurance companies handle claims, it has become a significant target for bad faith litigation. A growing number of lawsuits accuse insurers of using AI systems to systematically and improperly deny claims or to make “lowball” settlement offers, with a current wave of high-profile cases targeting health insurance providers. However, the themes plaintiffs’ attorneys are espousing in AI-related bad faith litigation are not entirely new; they are building on themes and strategies from earlier lawsuits involving software-driven claim handling practices. As the insurance industry continues to innovate, both insurers and plaintiffs are preparing for the next wave of litigation, where the transparency and fairness of AI decision-making will take center stage. When AI-related bad faith cases go to trial, overcoming juror skepticism about both AI and insurance companies will be critical to winning.
What is AI?
AI has no uniform definition; however, it is generally described as software that enables computers and digital devices to learn, read, write, create, and analyze. One legal definition of AI in a non-insurance context is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions.” 15 U.S.C. § 9401. Essentially, AI is a system capable of performing complex tasks that historically only a human could do, such as reasoning, making decisions, or solving problems. Whatever definition one uses, in the bad faith context AI generally refers to a system or computer software programmed to execute algorithms that perform specific tasks, often taking over the more rote and mundane duties traditionally handled by claims specialists.
AI in Claims Handling
A myriad of “AI” claims handling systems and products have permeated the market, often purporting to automate routine, administrative claims handling tasks and reduce related costs. Some seek to replace data entry functions and handle initial claim intake, often using chatbots that gather initial claim information. These systems attempt to automatically categorize and prioritize claims based on urgency and complexity. Some systems go a step further by searching for “red flags” and, where none are found, quickly and automatically resolving and paying simple claims. For more complex claims, they recommend outcomes to claims specialists.
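To make the kind of triage logic described above concrete, the following is a minimal, purely illustrative sketch in Python. The field names, “red flag” rules, and dollar threshold are hypothetical assumptions invented for illustration only and do not reflect any particular carrier’s system or product.

```python
# Purely illustrative sketch of rule-based claims triage of the kind described above.
# All field names, red-flag rules, and thresholds are hypothetical; real systems are
# far more complex and vary by carrier and line of business.

from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    amount: float
    claim_type: str          # e.g., "auto_glass", "water_damage" (hypothetical categories)
    prior_claims_12mo: int   # number of claims filed in the last 12 months

RED_FLAG_AMOUNT = 5_000                    # hypothetical auto-pay ceiling
ROUTINE_TYPES = {"auto_glass", "towing"}   # hypothetical "simple" claim categories

def triage(claim: Claim) -> str:
    """Return a routing decision: auto-pay, or escalate to a human claims specialist."""
    red_flags = []
    if claim.amount > RED_FLAG_AMOUNT:
        red_flags.append("high_amount")
    if claim.prior_claims_12mo >= 3:
        red_flags.append("frequent_claimant")
    if claim.claim_type not in ROUTINE_TYPES:
        red_flags.append("non_routine_type")

    # Only simple, low-value claims with no red flags are resolved automatically;
    # everything else is routed to a human claims specialist.
    if not red_flags:
        return "auto_approve_and_pay"
    return f"escalate_to_specialist ({', '.join(red_flags)})"

if __name__ == "__main__":
    print(triage(Claim("C-001", 350.00, "auto_glass", 0)))       # auto_approve_and_pay
    print(triage(Claim("C-002", 12_000.00, "water_damage", 1)))  # escalate_to_specialist
```

The point of the sketch is simply that, in systems of this kind, automated payment is typically limited to low-value, routine claims, while anything flagged is routed to a person.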
Proponents of partially automated claims systems assert that they make the claims process easier and faster for both claims handlers and insureds by providing real-time data and instant access to analytics. They tout that such systems create transparency and, if implemented correctly, help eliminate human bias and error.
One Recent Wave of Suits Related to AI and Automated Claims Handling
Like many groups, the plaintiffs’ bar is vigorously discussing the role AI plays in the insurance industry and, more specifically, in claims handling. Many bad faith plaintiffs’ attorneys view the use of AI in claims handling as a large target for the next generation of extracontractual claims they plan to file against carriers. Using focus-group-tested themes such as ‘bots gone bad,’ ‘garbage in, garbage out,’ and ‘figures don’t lie but liars can figure,’ charismatic policyholders’ attorneys argue that the data used to train AI models, and the complex algorithms those models execute, were selected and programmed to reduce costs at the expense of coverage.
The most recent wave of AI-related lawsuits targets health insurance providers, alleging that they use various AI tools to improperly deny the claims of elderly and chronically ill patients, who are less likely than other groups to appeal claim denials.
For example, putative class action suits have been filed against Humana in the U.S. District Court for the Western District of Kentucky and against United Healthcare in the U.S. District Court for the Southern District of Minnesota. Both allege that the carriers used AI software to improperly deny extended care claims for elderly patients, and that the AI claim handling systems at issue have error rates exceeding 90%. Barrows et al. v. Humana et al., case no. 3:23-cv-00654-RGJ, First Amended Complaint, filed Apr. 22, 2024; Estate of Lokken et al. v. UnitedHealth Group, Inc. et al., case no. 0:23-cv-03514, Complaint, filed Nov. 14, 2023. Of course, a very strong argument can be made that the 90% figure – which closely tracks claims made in various media reports – is based on a flawed methodology and a skewed, cherry-picked sample. For example, some such figures appear to be derived by focusing only on the ultimate results of a self-selected subset of disputed claims that are actually appealed, while ignoring the vast majority of claims that are never disputed. Moreover, claimants and policyholder attorneys may be appealing only the more extreme outliers among denied claims, and many of those appealed claims may be settled and ultimately paid not because of any claim handling error, but to avoid the fees and costs associated with defending legal claims. The plaintiffs’ claims are susceptible to dozens of other lines of attack as well, both substantive and procedural, which are beyond the scope of this article.
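The denominator problem described above is easy to see with a simple back-of-the-envelope calculation. The figures below are entirely hypothetical and are used only to illustrate how an “error rate” computed solely from the self-selected subset of appealed denials can diverge dramatically from the same events measured against all denials or all claims decided.

```python
# Hypothetical numbers, purely to illustrate the denominator problem described above:
# an "error rate" computed only from the small subset of denials that are appealed
# can look dramatic even if the overall claim population tells a different story.

total_claims         = 100_000   # all claims decided (hypothetical)
denied_claims        = 10_000    # claims denied (hypothetical)
appealed_denials     = 200       # denials actually appealed (a self-selected subset)
overturned_on_appeal = 180       # appealed denials later paid or settled

# The headline-style figure: overturns as a share of appealed denials only.
appeal_overturn_rate = overturned_on_appeal / appealed_denials   # 0.90 -> "90%"

# The same overturns measured against all denials, or against all claims decided.
share_of_all_denials = overturned_on_appeal / denied_claims      # 0.018 -> 1.8%
share_of_all_claims  = overturned_on_appeal / total_claims       # 0.0018 -> 0.18%

print(f"Overturn rate among appealed denials: {appeal_overturn_rate:.0%}")
print(f"Overturned denials as a share of all denials: {share_of_all_denials:.1%}")
print(f"Overturned denials as a share of all claims decided: {share_of_all_claims:.2%}")
```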
Another class action suit – filed against Cigna in the U.S. District Court for the Eastern District of California – focuses on the AI algorithm known as PxDx. Plaintiffs claim that Cigna routinely and improperly denies claims using what they believe is a flawed AI model, adding that Cigna “knows that only a tiny minority of policyholders (roughly 0.2%) will appeal denied claims, and the vast majority will either pay out-of-pocket costs or forgo the at-issue procedure.” Kisting-Leung et al. v. Cigna Corp. et al., case no. 2:23-cv-01477-DAD-CSK, Third Amended Complaint, filed June 14, 2024, at 4-6. The suit is largely based upon, and even cites, a ProPublica article claiming that three doctors rejected roughly 264,000 claims (121,000, 80,000 and 63,000, respectively) over a period of two months. See How Cigna Saves Millions by Having Its Doctors Reject Claims Without Reading Them, ProPublica, by Patrick Rucker, updated April 14, 2023, https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims. The article further claims that internal Cigna documents show Cigna doctors spent an average of only 1.2 seconds reviewing each claim. Id.
Plaintiffs in the above-mentioned suits paint the alleged AI claims handling software as an opaque system that arbitrarily cuts the follow-up care patients can receive (e.g., the length of stay in assisted living facilities or hospitals) based on algorithms calculating what the stay should be. Their attorneys argue that these elderly plaintiffs cannot spend years appealing because they do not have years. They allege their clients do not understand how the claim handling systems were used and were never given sufficient or specific explanations of how the claim decisions were made. In short, they try to paint AI claim handling software as arbitrary.
Cigna is vigorously defending against these claims, and is certain to raise numerous compelling defenses. “Based on our initial research, we cannot confirm that these individuals were impacted by PxDx at all,” the carrier told CBS News. Cigna accused of using an algorithm to reject patients’ health insurance claims, by Aimee Picchi, July 26, 2023, https://www.cbsnews.com/news/cigna-algorithm-patient-claims-lawsuit/. “To be clear, Cigna uses technology to verify that the codes on some of the most common, low-cost procedures are submitted correctly based on our publicly available coverage policies, and this is done to help expedite physician reimbursement.” Id.
New, But Not New
Although future AI-related cases are expected to have a new flavor to them, in many instances plaintiffs’ counsel are regurgitating arguments and themes from other high-profile cases focused on automated claims handling. For example, policyholders have accused homeowners’ carriers of using software they claim was “improperly programed” with algorithms permitting carriers to intentionally “lowball” offers by underestimating material and labor costs. In some instances, carriers have obtained summary judgment by arguing, in part, that programs like Xactimate are commonly used in the insurance industry and that carriers do not lack a reasonable basis for using such programs when determining depreciation. Sands v. State Farm, No. 5:17-cv-4160, 2018 WL 1693387 (E.D. Pa. 2018). And, in Sheahan v. State Farm General Ins. Co., the Court dismissed various extracontractual claims based on allegations that State Farm improperly relied upon valuations from programs like Xactimate and 360 which purportedly “undervalued the replacement costs of Plaintiffs’ homes.” 394 F. Supp. 3d 997, 1014 (N.D. Cal. 2019); 442 F. Supp. 3d 1178, 1182 (N.D. Cal. 2020). In other cases, however, Courts have denied summary judgment, finding that issues of fact exist as to whether a carrier acted in bad faith or was negligent in allegedly “rely[ing] solely on its computer system to determine policy limits, limits that current estimates of the cost of rebuilding suggest to be inadequate.” Lewis v. Allstate Ins. Co., No. 3:15-cv-8074-HRH, 2016 WL 5408332 (D. Ariz. Sept. 28, 2016).
Moreover, arguments in forthcoming extracontractual lawsuits expected in the auto insurance space will likely echo well-known allegations made in years past. In Strawn v. Farmers Ins. Co. of Oregon, 350 Or. 336 (2011), automobile insureds brought a class action against Farmers alleging breach of contract, breach of the covenant of good faith and fair dealing, and fraud. The plaintiffs argued that Farmers used a “cost containment software program” to improperly reduce PIP/no-fault payments by automatically rejecting bills above the 80th percentile “as the cutoff point for reasonable expenses[.]” Strawn v. Farmers Ins. Co. of Oregon, 258 P.3d 1199, 1203 (Or. 2011). Plaintiffs argued that, instead, claims adjusters should have “review[ed] each medical bill to determine whether the bill was reasonable” as had been done prior to the implementation of the software. Id. Oregon’s Supreme Court upheld a $900,000 compensatory damages award and reinstated an $8 million punitive damages award. Plaintiffs are currently turning the page in this playbook and planning similar suits, re-packaged in the new language of the generative AI software and programs that some carriers are using.
Both Sides of the Coin
Plaintiffs’ attorneys claim that AI claims handling processes violate several provisions of most states’ Unfair Claims Settlement Practices Acts, including alleged refusal to pay claims without conducting a reasonable investigation; failure to attempt to effectuate prompt, fair, and equitable payment of claims that are owed; failure to adopt and implement reasonable standards related to claim investigations; and compelling insureds to institute litigation to recover amounts owing under policies. They argue that AI systems can be opaque and may not adequately consider individual circumstances, and they raise questions about fairness, transparency, and accountability when AI is used in the claims handling process. The alleged lack of transparency can be problematic because, as a practical matter, insurers will likely need to convince judges and jurors that the proprietary and complex AI systems used in the claims handling process are not unreasonable.
Carriers, on the other hand, defend their use of AI by highlighting its ability to process claims efficiently and consistently. They contend that AI systems are designed to follow the guidelines and coverage criteria set forth in the policy and that any decisions made are consistent with these terms. Insurers point out that AI is merely a tool that aids in decision-making and is not the sole arbiter of claims. They posit that AI helps to eliminate human error and bias, leading to more consistent and objective outcomes. Carriers point to administrative cost savings of using AI that can be passed on to customers in the form of lower premiums. According to a McKinsey study, “AI-enabled [prior authorization processes] can automate 50 to 75 percent of manual tasks” when adjusting routine health insurance claims, freeing up carriers/payors to focus on more complex cases. Healthcare Payers Recognize that Prior Authorization (PA) is Ripe for Improvement. AI-enabled PA Design may Deliver Substantial Financial, User-Experience, and Care, McKinsey & Company, April 19, 2022, https://www.mckinsey.com/industries/healthcare/our-insights/ai-ushers-in-next-gen-prior-authorization-in-healthcare. For example, some AI systems can automate obtaining and cross-validating medical records, resulting in faster turnaround times that may benefit policyholders. Id. If used correctly, analytical AI models can root out fraudulent claims, the cost of which would otherwise be borne by other policyholders. Id. Carriers also point to gains in the speed and efficiency of approving and paying routine claims, with policyholders receiving the benefits of faster claim payments and lower premiums.
Daily news headlines that warn of the effects AI will have on our society have certainly exacerbated many people’s concerns about this new frontier.
While it is impossible to predict the exact direction AI-related bad faith litigation will take, early waves of AI-related litigation focus heavily on the amount of control and actual oversight humans have over claims denials. Carriers are taking steps to make such claims more defensible. For example, several property and casualty carriers have developed, or are developing, systems in which AI products handle routine administrative tasks and can even “approve” routine claims, but a human claims handler conducts a proper investigation and makes the final decision whether to deny a claim.
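Below is a minimal, hypothetical sketch of the “human in the loop” gating described above, in which automation may approve routine claims but every recommended denial is routed to a human claims handler for investigation and the final decision. The function and type names are invented for illustration and do not represent any carrier’s actual system.

```python
# Illustrative only: the automated component may approve routine claims, but it is
# never permitted to finalize a denial on its own; any recommended denial is routed
# to a human claims handler for investigation and the final decision.

from enum import Enum
from typing import Callable

class Recommendation(Enum):
    APPROVE = "approve"
    DENY = "deny"

def final_decision(ai_recommendation: Recommendation,
                   is_routine: bool,
                   human_review: Callable[[], str]) -> str:
    """Only routine approvals may be automated; everything else goes to a person."""
    if ai_recommendation is Recommendation.APPROVE and is_routine:
        return "approved (automated)"
    # Every recommended denial, and any non-routine claim, is escalated.
    return human_review()

# Example: a recommended denial is never finalized by the software itself.
print(final_decision(Recommendation.DENY, is_routine=False,
                     human_review=lambda: "pending human claims handler review and decision"))
```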
In sum, the insurance industry is faced with the challenge of balancing the technological advancements that AI systems offer with carriers’ commitment to fair claims handling, while also keeping an eye on the risk of potential bad faith exposure. The level of human involvement and reasonableness of AI decision making will likely be the deciding factors in how Courts view AI claims handling software.
Juror Perceptions of AI, Insurance, and the Use of Technology in Claims Handling
For decades, litigation consultants have researched jurors’ views on insurance companies, including their impressions of automated technologies that carriers have used in the claims handling process. This empirical research has consistently found that many jurors distrust insurance companies in general and, more specifically, are wary of how carriers use algorithms or unfamiliar technology in handling claims. This body of research suggests that jurors want a human to be involved in evaluating and investigating claims. Jurors also want to understand how and why the technology is being used. As one might expect, jurors are particularly critical of technology that appears to prioritize insurer savings over the interests of the insured, viewing such practices as potentially indicative of bad faith.
These common juror predispositions align with plaintiff arguments that automated tools and algorithms fail to assess claims on their individual merits and are primarily employed to reduce company costs; but, importantly, the context and details of an individual case certainly influence jurors’ decision making, and defense counsel can take steps at trial to effectively mitigate jurors’ negative preconceptions. It is also critical, of course, to identify and strike the most biased prospective jurors during jury selection. The following sections summarize recent research on the public’s attitudes towards AI and towards insurance companies, followed by the authors’ analysis and brief practical tips for defense attorneys trying AI-related bad faith cases.
Jurors Are Worried About AI
Recent opinion polls have confirmed that most Americans are worried about how AI will affect them personally and how it will affect our society at large. For example, the global communications firm Edelman recently published its 2024 Edelman Trust Barometer, which polled 1,150 U.S. respondents. 2024 Edelman Trust Barometer, Edelman Trust Institute, presented on January 14, 2024, https://www.edelman.com/trust/2024/trust-barometer. A top-line result from this large-scale opinion survey is that most Americans distrust AI.
The Edelman survey revealed that over the last five years, Americans’ already low trust in AI companies has declined, especially in the last two years: in 2023, 43% of those questioned reported that they trusted AI companies to do what is right, whereas only 35% reported the same in 2024. The Edelman survey also found that 63% of U.S. respondents felt that “government regulators lack adequate understanding of emerging technologies to regulate them effectively.” Id. Additionally, over half of U.S. respondents reported the opinion that “innovation is poorly managed” (56%), rather than “well managed” (22%), while 39% selected “neither.” Id.
One of the most interesting results of the Edelman survey is the lack of substantial differences in views of AI depending on self-reported political ideology (although not necessarily by party affiliation, an important modern divergence that is outside the scope of this article). Americans on the right and left are both skeptical of AI: 59% of right-leaning respondents and 51% of left-leaning respondents reported that they reject the use of AI. In contrast, the survey identified huge differences between the portion of right-leaning versus left-leaning respondents who rejected other innovations like green energy, gene-based medicine, and GMO foods. Id.
One can speculate that these latter innovations have been politicized for longer than AI has, but they illustrate how ideological and political polarization could significantly influence impressions of AI and drive a wedge between progressives and conservatives on this topic. It will be critical for attorneys, claims professionals, and litigation consultants working on AI-related cases to keep their finger on the pulse of Americans’—and therefore potential jurors’—evolving attitudes towards AI.
While the public’s current concerns about AI’s influence on their lives may not be wholly unfounded, their worries may also be overblown, fueled by substantial media coverage about the dangers of AI combined with limited actual knowledge of the subject. Jurors are likely to be biased against AI without being fully informed about the technology and its applications. Litigators who can properly explain the role of AI in the claim handling process may be able to disabuse jurors of certain misconceptions about the technology. Nonetheless, it is clear that most Americans are presently leery of AI technologies, a suspicion that is likely amplified when powerful entities that they distrust—like insurance companies—are the ones using it.
Jurors’ Views of Insurance Companies
Most Americans, and therefore most jurors, believe that insurance companies operate only in the company’s own best interest. During mock trials and focus groups, the authors frequently hear mock jurors voice negative impressions of the insurance industry, and mock jurors often recount personal experiences in which they or a loved one were denied coverage or denied what they thought was fair compensation. Consistent with these impressions, a May 2020 survey of jury-eligible Americans by Decision Analysis, Inc. found that 70% of respondents believed that “insurance companies would do anything to avoid paying even legitimate claims.” Studying Juror Attitudes Toward COVID-19 Insurance Claims, Decision Analysis, https://www.law360.com/articles/1306933/studying-juror-attitudes-toward-covid-19-insurance-claims.
Given these negative attitudes towards carriers, most jurors will be hard-pressed to give insurance companies the benefit of the doubt when it comes to the fair and ethical use of AI. Furthermore, there is likely an additive effect here—jurors’ distrust of AI and their distrust of insurance companies will converge to make many jurors even more suspicious of insurance companies’ use of AI than they are of either AI or insurance companies separately. However, there are many lessons from the legal psychology and communication fields that can inform how to help jurors feel more comfortable with AI—and, in particular, more understanding of and open to how the insurance industry might use AI tools.
Practical Advice for Jury Trials Involving AI and Insurance
At the outset of an AI-related bad faith trial, a juror is likely to assume that an insurer is using technology in an attempt to deny claims or reduce its payout. Regardless of the burden of proof, as a practical matter, the onus is on the insurer to clearly explain the role of any technology used and assure jurors that the claims handler is using AI as an appropriate tool.
We have often observed insurance companies struggle to explain the decision-making process of claims handling in a way that is persuasive to jurors. Jurors want context—they need to understand the norms of the industry or the “world” at issue, as well as the thought processes of those involved. Therefore, it is essential that witnesses are thoroughly prepared to explain the key aspects of the insurance industry and claims handling, the AI tools in use, and the claims handlers’ decision-making processes. Of particular note, in the current climate, it is highly valuable if a carrier can explain that the company is not using an AI tool to fully replace a human handler’s final judgment about whether to deny a claim. Jurors want a substantive human touch involved in the final decision; so, as the facts allow, help your witnesses clearly describe AI as a tool that claims handlers use to improve efficiency and accuracy, while emphasizing that a human performs the ultimate analysis and makes the final decision about a claim.
In bad faith cases where complex systems or claims handling is at issue, educating the jury is just as important as persuasion—if not more so. Because jurors today are disenchanted with governments and institutions, highly polarized politically, and deeply skeptical, many are inherently resistant to persuasion. Jurors want to learn more than they want to be persuaded. Rather than forcefully arguing the case to persuade jurors to find for your client, your role as an advocate should be to help jurors understand the entire context of the case and to let them step into your shoes to learn, investigate, and solve the case or handle the claim with you. Keep in mind that “self-persuasion” is more powerful than “presenter persuasion.” Indeed, the trial team that does a better job of teaching the jury to understand its positions is often the one that prevails.
Especially when dealing with complex or technical issues—like explaining how AI works—it is invaluable to test your case with well-designed mock trials and/or focus groups. Every case is nuanced and every venue is different, and Americans’ views of both AI and the insurance industry continue to evolve, seemingly at record speed these days. Testing your case allows you to refine your strategy based on case-specific empirical data, which can substantially boost your odds of success at trial.
If the case goes to trial, selecting a receptive jury is key. Jury selection is really jury de-selection: your goal should be to identify and eliminate negative and risky jurors. Start by identifying the problematic issues in your case and how the opposing side will likely present its case before designing a juror questionnaire and voir dire questions. You will also want to create a jury selection plan well in advance of trial that includes a meaningful juror profile, court procedures, a process for handling questionnaires and internet research, voir dire goals and procedures, cause-challenge development, and strike strategies.
When questioning potential jurors during voir dire, seek to create a conversational tone which makes jurors feel safe to have a meaningful conversation about important and difficult issues. You want them to feel comfortable expressing negative attitudes that cut against your client. In general, you also want to prompt the jurors to talk as much as possible, so ask them open-ended questions and ask them to elaborate on their answers.
In a bad faith case involving AI use in claims handling, create a jury profile and design voir dire questions aimed at identifying panelists who are most likely to reject insurance companies’ use of AI in claims handling, with the ultimate goal of striking those people from the panel. Below are a few example questions:
Some people would say that they are more excited than concerned about artificial intelligence. Some would say they are more concerned than excited. Who here would say that they are more concerned than excited?
Who here would not trust an insurance company to use AI responsibly?
Who here thinks that insurance companies will use AI to deny or reduce claims as opposed to legitimately evaluating a claim?
Who here has read or heard something negative about insurance companies using AI or other technology in their claims handling process?
Have you, or anyone close to you, ever had a claim that was unfairly denied by an insurance company? If so, please tell us about that experience.
In a nutshell, the intersection of AI, insurance, and jury trials presents both challenges and opportunities for insurers. The key to success lies in carefully managing how AI tools are explained and perceived by jurors, who are often skeptical of both the technology and insurance companies. Through thoughtful jury selection, clear and transparent explanations, and a focus on education over persuasion, defense counsel handling bad faith claims can help disarm biases and frame AI – or automated claims systems – as a tool for fairness and efficiency rather than a faceless algorithm driven by cost-cutting motives. Navigating these complexities effectively will be critical as AI becomes more central to claims handling and comes under increased scrutiny in the courtroom.
Conclusion
Nobody knows what the future of AI holds; only that it is here, and it is here to stay. As AI continues to assume a more prominent role in claim handling, it becomes increasingly crucial for insurance companies and legal professionals to update their strategies for navigating the complex landscape of bad faith litigation. The key to success in future bad faith trials may lie in effectively communicating the benefits and safeguards of AI technologies, addressing judges’ and jurors’ fears, and dispelling misconceptions about AI’s role in decision-making processes.