AI and Claims Handling: Navigating the Next Wave of Bad Faith Suits

By Chris Johnson, Gent Silberkleit, Ph.D., and Michelle Rey LaRocca | This article was originally published in the November/December 2024 issue of the Defense Research Institute’s (DRI’s) flagship publication, For the Defense


When AI-related bad faith cases go to trial, overcoming juror skepticism about both AI and insurance companies will be critical to winning.

Artificial intelligence (“AI”) is revolutionizing industries, and the claim handling world is no exception. However, with innovation comes a wave of legal challenges. Headlines about AI range from sensational predictions of job takeovers to dire warnings of machines gone rogue. And as AI changes the way some insurance companies handle claims, it has become a significant target for bad faith litigation. A growing number of lawsuits accuse insurers of using AI systems to systematically and improperly deny claims or to make “lowball” settlement offers, with a current wave of high-profile cases targeting health insurance providers. However, the themes plaintiffs’ attorneys are espousing in AI-related bad faith litigation are not entirely new; they are building on themes and strategies from earlier lawsuits involving software-driven claim handling practices. As the insurance industry continues to innovate, both insurers and plaintiffs are preparing for the next wave of litigation, where the transparency and fairness of AI decision-making will take center stage. When AI-related bad faith cases go to trial, overcoming juror skepticism about both AI and insurance companies will be critical to winning.

What is AI?

AI has no uniform definition; however, it is generally defined as software that enables computers and digital devices to learn, read, write, create, and analyze. One legal definition of AI in a non-insurance context is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions.” 15 U.S. Code § 9401. Essentially, AI is a system capable of performing complex tasks that historically only a human could do, such as reasoning, making decisions, or solving problems. Whatever definition one uses, when discussed in a bad faith context, AI is essentially a system and/or computer software programmed to execute algorithms with instructions to perform specific tasks, often taking over the more rote and mundane duties traditionally handled by claims specialists.

AI in Claims Handling

A myriad of “AI” claims handling systems and products have permeated the market, often purporting to automate routine, administrative claims handling tasks and reduce related costs. Some seek to replace data entry functions and handle initial claim intake, often using chatbots that gather initial claim information. These systems attempt to automatically categorize and prioritize claims based on urgency and complexity. Some systems go a step further by searching for “red flags” and, where none are found, quickly and automatically resolving and paying simple claims. For complex claims, they recommend outcomes to claims specialists.
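
For illustration only, the short Python sketch below shows one way such a triage step might be structured. The claim fields, red-flag checks, and dollar thresholds are hypothetical and are not drawn from any carrier’s actual system.

```python
from dataclasses import dataclass, field

@dataclass
class IntakeClaim:
    claim_id: str
    amount: float      # claimed amount in dollars (hypothetical field)
    category: str      # e.g., "auto-glass", "towing", "water-damage"
    red_flags: list = field(default_factory=list)  # issues spotted at intake

def triage(claim: IntakeClaim) -> str:
    """Return a hypothetical routing decision for a newly intaken claim."""
    # Claims with any red flag are never resolved automatically.
    if claim.red_flags:
        return "route_to_specialist"
    # Simple, low-value, routine categories may be resolved and paid automatically.
    if claim.amount < 2_000 and claim.category in {"auto-glass", "towing"}:
        return "auto_pay"
    # Everything else receives a recommendation but goes to a claims specialist.
    return "recommend_to_specialist"

# Example: a small glass claim with no red flags is paid automatically.
print(triage(IntakeClaim("C-001", 350.0, "auto-glass")))  # -> "auto_pay"
```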

Proponents of partially automated claims systems contend that such systems make the claims process easier and faster for both claims handlers and insureds by providing real-time data and instant access to analytics. They tout that such systems create transparency and, if done correctly, help eliminate human bias and error.

One Recent Wave of Suits Related to AI and Automated Claims Handling

Like many groups, the plaintiffs’ bar is vigorously discussing the role AI plays in the insurance industry, and more specifically, in claims handling. Many bad faith plaintiffs’ attorneys view the use of AI in claims handling as a large target for the next generation of extracontractual claims they plan to file against carriers. Using focus-group-tested themes such as ‘bots gone bad,’ ‘garbage in, garbage out,’ and ‘figures don’t lie but liars can figure,’ charismatic policyholders’ attorneys argue that the data used to train AI models, and the complex algorithms those models execute, were selected and programmed to reduce costs at the expense of coverage.

The most recent wave of AI-related lawsuits targets health insurance providers, alleging they use various AI tools to improperly deny claims submitted by elderly and chronically ill patients, who are less likely than other groups to appeal claim denials.

For example, putative class action suits have been filed against Humana in the U.S. District Court for the Western District of Kentucky and against UnitedHealth Group in the U.S. District Court for the District of Minnesota. Both purport that AI software was used by the carriers to improperly deny extended care claims for elderly patients, alleging that the AI claim handling systems at issue have error rates exceeding 90%. Barrows et al. v. Humana et al., case no. 3:23-cv-00654-RGJ, First Amended Complaint, filed Apr. 22, 2024; Estate of Lokken et al. v. UnitedHealth Group, Inc. et al., case no. 0:23-cv-03514, Complaint, filed Nov. 14, 2023. Of course, a very strong argument can be made that the 90% figure, which closely tracks claims made in various media reports, is based on a flawed methodology and considers only a skewed, cherry-picked sample. For example, some such figures appear to be derived by focusing only on the ultimate results of a self-selected subset of disputed claims that are ultimately appealed, while failing to take into account the vast majority of claims that are never disputed. Moreover, claimants and policyholder attorneys may be appealing only the more extreme outliers among denied claims, and many of those claims may be settled and ultimately approved not because there was a claim handling error, but to avoid the fees and costs associated with defending legal claims. The plaintiffs’ claims are susceptible to dozens of other lines of attack as well, both substantive and procedural, which are beyond the scope of this article.
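
To illustrate the denominator problem that this defense argument describes, the arithmetic below uses purely hypothetical figures (not taken from any of the cited cases or reports). When an “error rate” is computed only over the small, self-selected subset of denials that are appealed, the percentage can approach 90% even though overturned denials are a tiny fraction of all claims handled.

```python
# All figures are hypothetical and purely illustrative.
total_claims     = 100_000   # claims handled during the period
denied_claims    = 10_000    # claims denied
appealed_denials = 500       # denials the policyholders actually appealed
overturned       = 450       # appealed denials reversed on review

rate_among_appeals  = overturned / appealed_denials  # 0.90   -> the headline "90%"
share_of_denials    = overturned / denied_claims     # 0.045  -> 4.5% of all denials
share_of_all_claims = overturned / total_claims      # 0.0045 -> 0.45% of all claims

print(f"{rate_among_appeals:.0%} of appealed denials overturned")
print(f"{share_of_denials:.1%} of all denials overturned")
print(f"{share_of_all_claims:.2%} of all claims ended in an overturned denial")
```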

Another class action suit, filed against Cigna in the U.S. District Court for the Eastern District of California, is focused on the AI algorithm known as PxDx. Plaintiffs claim that Cigna routinely and improperly denies claims using what they believe is a flawed AI model, adding that Cigna “knows that only a tiny minority of policyholders (roughly 0.2%) will appeal denied claims, and the vast majority will either pay out-of-pocket costs or forgo the at-issue procedure.” Kisting-Leung et al. v. Cigna Corp. et al., case no. 2:23-cv-01477-DAD-CSK, Third Amended Complaint, filed June 14, 2024, at 4-6. This suit is largely based upon, and even cites, a ProPublica article claiming that three doctors rejected roughly 264,000 claims (121,000, 80,000, and 63,000, respectively) over a period of two months. See How Cigna Saves Millions by Having Its Doctors Reject Claims Without Reading Them, ProPublica, by Patrick Rucker, updated April 14, 2023; https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims. The article further claims that internal Cigna documents show Cigna doctors spent an average of only 1.2 seconds reviewing each claim. Id.

Plaintiffs in the above-mentioned suits paint the alleged AI claims handling software as an opaque system that arbitrarily cuts the follow-up care patients can receive (e.g., the length of stay in assisted living facilities or hospitals) based on algorithms calculating what the stay should be. Their attorneys argue that these elderly plaintiffs cannot spend years appealing because they do not have years. They allege their clients do not understand how the claim handling systems were used and were not given sufficient or specific explanations for how the claim decisions were made. In short, they try to paint AI claim handling software as arbitrary.

Cigna is vigorously defending against these claims and is certain to raise numerous compelling defenses. “Based on our initial research, we cannot confirm that these individuals were impacted by PxDx at all,” the carrier told CBS News. Cigna accused of using an algorithm to reject patients’ health insurance claims, by Aimee Picchi, July 26, 2023, https://www.cbsnews.com/news/cigna-algorithm-patient-claims-lawsuit/. “To be clear, Cigna uses technology to verify that the codes on some of the most common, low-cost procedures are submitted correctly based on our publicly available coverage policies, and this is done to help expedite physician reimbursement.” Id.

New, But Not New

Although future AI-related cases are expected to have a new flavor to them, in many instances plaintiffs’ counsel are regurgitating arguments and themes from other high-profile cases focused on automated claims handling. For example, policyholders have accused homeowners’ carriers of using software they claim was “improperly programmed” with algorithms permitting carriers to intentionally “lowball” offers by underestimating material and labor costs. In some instances, carriers have obtained summary judgment by arguing, in part, that programs like Xactimate are commonly used in the insurance industry and that carriers do not lack a reasonable basis for using such programs when determining depreciation. Sands v. State Farm, No. 5:17-cv-4160, 2018 WL 1693387 (E.D. Pa. 2018). And in Sheahan v. State Farm General Ins. Co., the court dismissed various extracontractual claims based on allegations that State Farm improperly relied upon valuations from programs like Xactimate and 360 that purportedly “undervalued the replacement costs of Plaintiffs’ homes.” 394 F. Supp. 3d 997, 1014 (N.D. Cal. 2019); 442 F. Supp. 3d 1178, 1182 (N.D. Cal. 2020). In other cases, however, courts have denied summary judgment, finding that issues of fact exist as to whether a carrier acted in bad faith or was negligent in allegedly “rely[ing] solely on its computer system to determine policy limits, limits that current estimates of the cost of rebuilding suggest to be inadequate.” Lewis v. Allstate Ins. Co., No. 3:15-cv-8074-HRH, 2016 WL 5408332 (D. Ariz. Sept. 28, 2016).

Moreover, arguments in extracontractual lawsuits expected to be filed in the auto insurance space will likely echo well-known allegations made in years past. In Strawn v. Farmers Ins. Co. of Oregon, 350 Or. 336 (2011), automobile insureds brought a class action against Farmers alleging breach of contract, breach of the covenant of good faith and fair dealing, and fraud. The plaintiffs argued that Farmers used a “cost containment software program” to improperly reduce PIP/no-fault payments by automatically rejecting bills above the 80th percentile “as the cutoff point for reasonable expenses[.]” Strawn v. Farmers Ins. Co. of Oregon, 258 P.3d 1199, 1203 (Or. 2011). Plaintiffs argued that, instead, claims adjusters should have “review[ed] each medical bill to determine whether the bill was reasonable,” as had been done prior to the implementation of the software. Id. The Oregon Supreme Court upheld a $900,000 compensatory damages award and reinstated an $8 million punitive damages award. Plaintiffs are now turning the page in this playbook and planning similar suits, repackaged in the new language of the generative AI software and programs that some carriers are using.

Both Sides of the Coin

Plaintiffs’ attorneys claim that AI claims handling processes violate several provisions of most states’ Unfair Claims Settlement Practices Acts, including alleged refusal to pay claims without conducting a reasonable investigation; failure to attempt to effectuate prompt, fair, and equitable payment of claims which are owed; failure to adopt and implement reasonable standards related to claim investigations; and compelling insureds to institute litigation to recover amounts owing under policies. They argue that AI systems can be opaque and may not adequately consider individual circumstances, and they raise questions about fairness, transparency, and accountability when AI is used in the claims handling process. The alleged lack of transparency can be problematic because, as a practical matter, insurers will likely need to convince judges and jurors that the proprietary and complex AI systems used in the claims handling process are not unreasonable.

Carriers, on the other hand, defend their use of AI by highlighting its ability to process claims efficiently and consistently. They contend that AI systems are designed to follow the guidelines and coverage criteria set forth in the policy and that any decisions made are consistent with these terms. Insurers point out that AI is merely a tool that aids in decision-making and is not the sole arbiter of claims. They posit that AI helps to eliminate human error and bias, leading to more consistent and objective outcomes. Carriers point to administrative cost savings from using AI that can be passed on to customers in the form of lower premiums. According to a McKinsey study, “AI-enabled [prior authorization processes] can automate 50 to 75 percent of manual tasks” when adjusting routine health insurance claims, which can free up carriers/payors to focus on more complex cases. Healthcare Payers Recognize that Prior Authorization (PA) is Ripe for Improvement. AI-enabled PA Design may Deliver Substantial Financial, User-Experience, and Care, McKinsey & Company, April 19, 2022, https://www.mckinsey.com/industries/healthcare/our-insights/ai-ushers-in-next-gen-prior-authorization-in-healthcare. For example, some AI systems can automate obtaining and cross-validating medical records, resulting in faster turnaround times that may benefit policyholders. Id. If used correctly, analytical AI models can be used to root out fraudulent claims, the cost of which would otherwise be borne by other policyholders. Id. Carriers also point to gains in the speed and efficiency of approving and paying routine claims, with policyholders receiving the benefits of faster claim payments and lower premiums.

Daily news headlines that warn of the effects AI will have on our society have certainly exacerbated many people’s concerns about this new frontier.

While it is impossible to predict the exact direction AI-related bad faith litigation will take, the early waves of AI-related litigation focus heavily on the amount of control and actual oversight humans have over claims denials. Carriers are taking steps to make such claims more defensible. For example, several property and casualty carriers have developed, or are developing, systems in which AI products can be used to handle routine administrative tasks and even “approve” routine claims, but a human claims handler conducts a proper investigation and makes the decision whether to deny a claim.
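
As a minimal sketch of that human-in-the-loop design, assuming hypothetical field names, function signatures, and rules rather than any carrier’s actual implementation, the logic below lets an automated system approve certain routine claims but reserves every denial for a human claims handler.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ClaimFile:
    claim_id: str
    is_routine: bool            # hypothetical routing attribute
    model_recommendation: str   # "approve" or "deny" from the automated system

def decide(claim: ClaimFile,
           human_review: Optional[Callable[[ClaimFile], str]] = None) -> str:
    """Hypothetical decision gate: the system may auto-approve routine claims,
    but a denial is never issued on the model's recommendation alone."""
    if claim.model_recommendation == "approve" and claim.is_routine:
        return "approved_automatically"
    if human_review is None:
        return "pending_human_review"   # queued for a claims handler to investigate
    return human_review(claim)          # the human makes the final call

# Example: the model recommends denial, so the claim waits for a human reviewer.
print(decide(ClaimFile("C-002", is_routine=True, model_recommendation="deny")))
```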

In sum, the insurance industry is faced with the challenge of balancing the technological advancements that AI systems offer with carriers’ commitment to fair claims handling, while also keeping an eye on the risk of potential bad faith exposure. The level of human involvement and the reasonableness of AI decision-making will likely be the deciding factors in how courts view AI claims handling software.

Juror Perceptions of AI, Insurance, and the Use of Technology in Claims Handling

For decades, litigation consultants have researched jurors’ views on insurance companies, including their impressions of automated technologies that carriers have used in the claims handling process. This empirical research has consistently found that many jurors distrust insurance companies in general and, more specifically, are wary of how carriers use algorithms or unfamiliar technology in handling claims. This body of research suggests that jurors want a human to be involved in evaluating and investigating claims. Jurors also want to understand how and why the technology is being used. As one might expect, jurors are particularly critical of technology that appears to prioritize insurer savings over the interests of the insured, viewing such practices as potentially indicative of bad faith.

These common juror predispositions align with plaintiffs’ arguments that automated tools and algorithms fail to assess claims on their individual merits and are primarily employed to reduce company costs. Importantly, however, the context and details of an individual case certainly influence jurors’ decision making, and defense counsel can take steps at trial to effectively mitigate jurors’ negative preconceptions. It is also critical, of course, to identify and strike the most biased prospective jurors during jury selection. The following sections summarize recent research on the public’s attitudes towards AI and towards insurance companies, followed by the authors’ analysis and brief practical tips for defense attorneys trying AI-related bad faith cases.

Jurors Are Worried About AI

Recent opinion polls have confirmed that most Americans are worried about how AI will affect them personally and how it will affect our society at large. For example, the global communications firm Edelman recently published its 2024 Edelman Trust Barometer, for which it polled 1,150 U.S. respondents. 2024 Edelman Trust Barometer, Edelman Trust Institute, presented on January 14, 2024, https://www.edelman.com/trust/2024/trust-barometer. A top-line result from this large-scale opinion survey is that most Americans distrust AI.

The Edelman survey revealed that over the last five years, Americans’ already low trust in AI companies has declined, especially in the last two years: in 2023, 43% of those questioned reported that they trusted AI companies to do what is right, whereas only 35% reported the same in 2024. The Edelman survey also found that 63% of U.S. respondents felt that “government regulators lack adequate understanding of emerging technologies to regulate them effectively.” Id. Additionally, over half of U.S. respondents reported the opinion that “innovation is poorly managed” (56%), rather than “well managed” (22%), with 39% selecting “neither.” Id.

One of the most interesting results of the Edelman survey is the lack of substantial differences in views of AI depending on self-reported political ideology (although not necessarily by party affiliation, an important modern divergence that is outside the scope of this article). Americans on the right and left are both skeptical of AI: 59% of right-leaning respondents and 51% of left-leaning respondents reported that they reject the use of AI. In contrast, the survey identified huge differences between the portion of right-leaning versus left-leaning respondents who rejected other innovations like green energy, gene-based medicine, and GMO foods. Id.

One can speculate that these latter innovations have been politicized for a longer period of time than AI has, but they could serve as examples of how ideological and political polarization could eventually influence impressions of AI and drive a wedge between progressives and conservatives on this topic. It will be critical for attorneys, claims professionals, and litigation consultants working on AI-related cases to keep their finger on the pulse of Americans’—and therefore potential jurors’—evolving attitudes towards AI.

While the public’s current concerns about AI’s influence on their lives may not be wholly unfounded, those worries may also be inflated by substantial media coverage about the dangers of AI combined with limited actual knowledge of the subject. Jurors are likely to be biased against AI without being fully informed about the technology and its applications. Litigators who can properly explain the role of AI in the claim handling process may be able to disabuse jurors of certain misconceptions about the technology. Nonetheless, it is clear that most Americans are presently leery of AI technologies, a suspicion that is likely amplified when powerful entities that they distrust—like insurance companies—are the ones using it.

Jurors’ Views of Insurance Companies

Most Americans, and therefore most jurors, believe that insurance companies operate only in the company’s own best interest. During mock trials and focus groups, the authors frequently hear mock jurors voice negative impressions of the insurance industry, and mock jurors often recount personal experiences where they or a loved one were denied coverage or denied what they thought was fair compensation. Consistent with those observations, a May 2020 survey of jury-eligible Americans by Decision Analysis, Inc. found that 70% of respondents believed that “insurance companies would do anything to avoid paying even legitimate claims.” Studying Juror Attitudes Toward COVID-19 Insurance Claims, Decision Analysis, https://www.law360.com/articles/1306933/studying-juror-attitudes-toward-covid-19-insurance-claims.

Given these negative attitudes towards carriers, most jurors will be hard-pressed to give insurance companies the benefit of the doubt when it comes to the fair and ethical use of AI. Furthermore, there is likely an additive effect here: jurors’ distrust of AI and their distrust of insurance companies will converge to make many jurors even more suspicious of insurance companies’ use of AI than they are of either AI or insurance companies separately. However, there are many lessons from the legal psychology and communication fields that can inform how to help jurors feel more comfortable with AI—in particular, more understanding of, and open to, how the insurance industry might use AI tools.

Practical Advice for Jury Trials Involving AI and Insurance

At the outset of an AI-related bad faith trial, a juror is likely to assume that an insurer is using technology in an attempt to deny claims or reduce its payout. Regardless of the burden of proof, as a practical matter, the onus is on the insurer to clearly explain the role of any technology used and assure jurors that the claims handler is using AI as an appropriate tool.

We have often observed insurance companies struggle to explain the decision-making process of claims handling in a way that is persuasive to jurors. Jurors want context—they need to understand the norms of the industry or the “world” at issue, as well as the thought processes of those involved. Therefore, it is essential that witnesses are thoroughly prepared to explain the key aspects of the insurance industry and claims handling, the AI tools in use, and the claims handlers’ decision-making processes. Of particular note, in the current climate, it is highly valuable if a carrier can explain that the company is not using an AI tool to fully replace a human handler’s final judgment on whether to deny a claim. Jurors want a substantive human touch involved in the final decision; so, as the facts allow, help your witnesses clearly describe AI’s role as a tool that claims handlers use to improve efficiency and accuracy, while emphasizing that a human performs the ultimate analysis and makes the final decision about a claim.

In bad faith cases where complex systems or claims handling practices are at issue, educating the jury is just as important as persuasion, if not more so. Because jurors today are disenchanted with governments and institutions, highly polarized politically, and deeply skeptical, many jurors are inherently resistant to persuasion. Jurors want to learn more than they want to be persuaded. Rather than forcefully arguing the case to persuade jurors to find for your client, your role as an advocate should be to help jurors understand the entire context of the case and let them step into your shoes to learn, investigate, and solve the case or handle the claim with you. Keep in mind that “self-persuasion” is more powerful than “presenter persuasion.” Indeed, the trial team that does a better job of teaching the jury to understand its positions is often the one that prevails.

Especially when dealing with complex or technical issues—like explaining how AI works—it is invaluable to test your case with well-designed mock trials and/or focus groups. Every case is nuanced and every venue is different, and Americans’ views of both AI and the insurance industry continue to evolve, seemingly at record speed these days. Testing your case allows you to refine your strategy based on case-specific empirical data, which can substantially boost your odds of success at trial.

If the case goes to trial, selecting a receptive jury is key. Jury selection is really jury de-selection: your goal should be to identify and eliminate negative and risky jurors. Start by identifying the problematic issues in your case and how the opposing side will likely present their case before designing a juror questionnaire and voir dire questions. You will also want to create a jury selection plan well in advance of trial that includes a meaningful jury profile, court procedures, questionnaire processing and internet research, voir dire goals and procedures, cause challenges, and strike strategies.

When questioning potential jurors during voir dire, seek to create a conversational tone that makes jurors feel safe having a meaningful conversation about important and difficult issues. You want them to feel comfortable expressing negative attitudes that cut against your client. In general, you also want to prompt the jurors to talk as much as possible, so ask them open-ended questions and ask them to elaborate on their answers.

In a bad faith case involving AI use in claims handling, create a jury profile and design voir dire questions aimed at identifying panelists who are most likely to reject insurance companies’ use of AI in claims handling, with the ultimate goal of striking those people from the panel. Below are a few example questions:

Some people would say that they are more excited than concerned about artificial intelligence. Some would say they are more concerned than excited. Who here would say that they are more concerned than excited?

Who here would not trust an insurance company to use AI responsibly?

Who here thinks that insurance companies will use AI to deny or reduce claims as opposed to legitimately evaluating a claim?

Who here has read or heard something negative about insurance companies using AI or other technology in their claims handling process?

Have you, or anyone close to you, ever had a claim that was unfairly denied by an insurance company? If so, please tell us about that experience.

In a nutshell, the intersection of AI, insurance, and jury trials presents both challenges and opportunities for insurers. The key to success lies in carefully managing how AI tools are explained and perceived by jurors, who are often skeptical of both the technology and insurance companies. Through thoughtful jury selection, clear and transparent explanations, and a focus on education over persuasion, defense counsel handling bad faith claims can help disarm biases and frame AI, or automated claims systems, as a tool for fairness and efficiency rather than a faceless algorithm driven by cost-cutting motives. Navigating these complexities effectively will be critical as AI becomes more central to claims handling and comes under increased scrutiny in the courtroom.

Conclusion

Nobody knows what the future of AI holds, only that it is here and it is here to stay. As AI continues to assume a more prominent role in claim handling, it becomes increasingly crucial for insurance companies and legal professionals to update their strategies for navigating the complex landscape of bad faith litigation. The key to success in future bad faith trials may lie in effectively communicating the benefits and safeguards of AI technologies, addressing judges’ and jurors’ fears, and dispelling misconceptions about AI’s role in decision-making processes.
