A prior authorization denial lawsuit is a legal claim brought by patients or their families alleging that health insurance companies wrongfully denied coverage for medically necessary care through flawed or automated processes. These lawsuits represent one of the fastest-growing areas of healthcare litigation, driven largely by evidence that insurance companies are using artificial intelligence algorithms to make coverage decisions with alarming error rates—sometimes rejecting treatments that should have been approved. The most prominent case, Lokken v. UnitedHealth Group Inc., involves patients whose post-acute care coverage was terminated based on UnitedHealthcare’s AI algorithm called “nH Predict,” which reportedly has a 90% error rate and made coverage denials without meaningful human review.
The scope of the problem is staggering. In 2024 alone, Medicare Advantage insurers issued nearly 53 million prior authorization determinations—most processed through automated systems with minimal clinical oversight. When patients and doctors appeal these denials, they are overturned at rates exceeding 80%, indicating that many initial decisions were incorrect. Multiple class action lawsuits have been filed against major insurers including UnitedHealth, Cigna, and Humana, alleging that AI-driven prior authorization denials have caused unnecessary suffering, delayed critical care, and in some cases contributed to patient deaths. These lawsuits challenge a system in which profit-driven insurance companies have increasingly replaced human medical judgment with machines, often with catastrophic consequences for vulnerable patients.
Table of Contents
- How Do Prior Authorization Denials Lead to Lawsuits?
- What Evidence Shows These Denials Are Wrongful?
- What Are the Major Cases Against Insurance Companies?
- What Is the Impact on Patients?
- How Are Regulators and Insurers Responding?
- Who Is Eligible for These Lawsuits?
- What Is the Future of Prior Authorization?
- Conclusion
How Do Prior Authorization Denials Lead to Lawsuits?
Prior authorization is an insurance company requirement that doctors obtain approval before providing certain treatments, medications, or procedures. When insurers deny these requests, patients are left without coverage and must pay out-of-pocket or forgo care entirely. The legal issue arises when insurance companies deny prior authorizations based on flawed algorithms, insufficient medical review, or predetermined quotas rather than individualized clinical assessment of each patient’s specific condition and needs. In the Lokken case, the plaintiffs alleged that UnitedHealthcare used its nH Predict algorithm to automatically deny post-acute care coverage without requiring human physicians to review the decision. Court documents suggest the algorithm was accurate less than 10% of the time, yet the company continued using it for months, denying coverage to thousands of patients.
U.S. District Court Judge Tunheim allowed the case to move forward on claims of breach of contract and violation of state insurance laws, finding the allegations sufficiently credible to proceed to discovery. This decision signaled that courts are willing to hold insurers accountable for algorithmic denial decisions that lack proper human oversight. Another legal theory focuses on bad faith denial—the insurance industry term for rejecting claims without legitimate reason or ignoring evidence that contradicts the denial. When appeal data shows that over 80% of AI-driven prior authorization denials are overturned when challenged, it raises questions about whether the initial denials were made in good faith or whether insurers are using these systems knowing they will reject many legitimate claims.

What Evidence Shows These Denials Are Wrongful?
The evidence of systematic wrongfulness is compelling. Research from AI2Work found that prior authorization decisions made by AI algorithms have an 82% overturn rate when appealed—meaning that when patients appeal a denial, roughly four out of five win. This extraordinarily high reversal rate indicates not random errors, but systemic inaccuracy. No legitimate medical review process should overturn its own decisions in four of five cases. It suggests insurers are either deploying algorithms they know are unreliable, failing to validate these systems before using them on patients, or refusing to invest in adequate human oversight. A U.S. Senate investigation examined prior authorization practices at the nation’s largest insurers between 2019 and 2022.
The findings were damning. UnitedHealthcare and CVS denied prior authorization requests for post-acute care at approximately three times their overall denial rates. Humana’s post-acute care denial rate was more than 16 times higher than its overall denial rate—suggesting these denials were not clinically justified, but rather reflected company-wide pressure to reduce payouts. The Senate report noted that many of these denials were later reversed on appeal, indicating the initial decisions lacked adequate clinical justification. The weakness in the current system is that there is no requirement for insurers to validate AI algorithms before deploying them on patients. Unlike drugs and medical devices, which must clear FDA review before reaching patients, insurance companies can develop algorithms in-house, test them on limited data, and then use them to make life-or-death coverage decisions for hundreds of thousands of patients without external oversight or peer review. If an algorithm fails, the insurance company simply reverses decisions on appeal—creating the illusion of a safety net while the system continues denying legitimate claims.
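The arithmetic behind the appeal statistics is worth making explicit. The sketch below is purely illustrative: the denial volume and appeal rate are hypothetical placeholders (not figures from the Senate report), and only the 82% overturn rate comes from the research cited above. Because only appealed denials are ever re-reviewed, overturned appeals give a floor, not a ceiling, on how many denials were wrong:

```python
def wrongful_denial_lower_bound(denials, appeal_rate, overturn_rate):
    """Estimate a lower bound on demonstrably wrongful denials.

    Only appealed denials get a second look, so the count of
    overturned appeals understates total wrongful denials:
    non-appealed denials may be just as flawed, but are never checked.
    """
    appealed = denials * appeal_rate
    overturned = appealed * overturn_rate   # denials proven wrong on review
    share_of_all = overturned / denials     # lower bound on wrongful share
    return overturned, share_of_all

# Hypothetical example: 100,000 denials, 10% of patients appeal,
# 82% of appeals overturned (the rate cited in the text).
overturned, share = wrongful_denial_lower_bound(100_000, 0.10, 0.82)
print(f"{overturned:.0f} denials overturned "
      f"({share:.1%} of all denials, at minimum)")
```

If the non-appealed denials were wrong at anything like the same 82% rate, the true share of wrongful denials would be far higher—which is precisely the inference these lawsuits ask courts to draw.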
What Are the Major Cases Against Insurance Companies?
The Lokken v. UnitedHealth Group Inc. case is the largest pending prior authorization denial lawsuit. Filed as a class action on behalf of all patients whose post-acute care coverage was denied by nH Predict, the case alleges that UnitedHealthcare deployed an algorithm to replace human review of medically necessary care, directly violating the company’s obligations under insurance contracts and state insurance laws. The allegation that the algorithm had a 90% error rate—meaning it was wrong nine times out of ten—makes this one of the most egregious cases of algorithmic failure in the healthcare system. Judge Tunheim’s decision allowing the case to proceed means substantial discovery lies ahead, during which plaintiffs can examine internal UnitedHealthcare documents, test results, and communications about the nH Predict system.
Two additional class action lawsuits were filed in November 2023 against Cigna and UnitedHealth, alleging that Cigna’s “PxDx” and UnitedHealth’s “nH Predict” AI models were used to wrongfully deny medically necessary care. These cases are advancing through the courts as separate actions, though they share similar legal theories and factual allegations. Together, these lawsuits represent claims affecting potentially hundreds of thousands of patients who received coverage denials from these insurers during the periods when the AI systems were operational. Unlike past healthcare litigation that focused on individual denied claims, these class actions target the systems themselves. The lawsuits argue that any patient denied coverage by these specific AI algorithms during a defined time period is part of an injured class, because the algorithms were fundamentally unreliable and the insurance companies failed to implement adequate human review. This class action structure means that even patients who did not appeal their denials may be eligible for compensation once the cases settle or are decided.

What Is the Impact on Patients?
The human consequences of prior authorization denials are severe and documented. Patients lose access to necessary treatments, including rehabilitation services after hospitalizations, specialized mental health care, cancer medications, and surgical procedures. Some patients deteriorate while waiting for appeals; others die before coverage is approved. Physicians report spending hours navigating insurance company bureaucracies instead of treating patients. The emotional toll—the uncertainty about whether treatment will be covered, the need to plead with insurance companies for approval, the financial devastation when coverage is denied—compounds the physical effects of illness. Consider a typical scenario: An elderly patient recovering from hip surgery is denied post-acute care coverage by their Medicare Advantage insurer’s AI algorithm. The doctor requests an appeal, but the denial remains in place for weeks.
By the time the denial is overturned, the patient has declined significantly and has lost the window during which rehabilitation could have prevented permanent disability. The damage is done, and the overturn comes too late. This scenario, multiplied across thousands of cases, explains why these lawsuits exist—insurance companies have created systems that harm patients first and ask for permission later. The trade-off between insurance company profits and patient care is not theoretical. Every percentage-point increase in prior authorization denials reduces payouts and boosts profits by rejecting legitimate claims, while pushing costs onto patients and healthcare providers. When insurers deploy unreliable AI systems to maximize denials, they are knowingly trading patient health for corporate profit. That calculus is what makes these cases legally actionable under fraud and bad faith theories.
How Are Regulators and Insurers Responding?
The regulatory response has been mixed but accelerating. In June 2025, major U.S. health insurers including CVS Health (Aetna), UnitedHealthcare, Cigna, Humana, Elevance Health, and Blue Cross Blue Shield agreed to streamline the prior authorization process, pledging to reduce denials, improve transparency, and implement faster approval timelines. This industry-wide agreement was a public relations move—an attempt to demonstrate reform without admitting fault or settling pending litigation. However, the agreement does not address the core issue: the use of unvalidated AI algorithms to make coverage decisions. More significantly, the Centers for Medicare & Medicaid Services (CMS) released a proposed rule on April 10, 2026, extending prior authorization reform to drugs billed under the medical benefit.
This marks the first time federal policy requires electronic prior authorization for provider-administered therapies reimbursed under Medicare Part B. The rule would establish standards for response times, appeal procedures, and clinical review requirements. The limitation is that these rules apply only to Medicare, not to commercial insurance plans or Medicaid, leaving hundreds of millions of Americans still vulnerable to the same AI-driven denial systems documented in pending lawsuits. Additionally, the Electronic Frontier Foundation (EFF) sued CMS over transparency and oversight of the WISeR prior authorization AI model, which launched on January 1, 2026, in six states. This lawsuit challenges the federal government’s own use of AI for prior authorization denials, arguing that the agency has failed to disclose how the algorithm works or to provide adequate oversight. If the EFF prevails, it could set a legal precedent requiring insurers to validate and disclose their AI algorithms before using them. The broader concern is that regulatory reform moves slowly while these algorithms are being used on patients right now, making litigation an important check on corporate power.

Who Is Eligible for These Lawsuits?
If you received a prior authorization denial from UnitedHealthcare, Cigna, or Humana during the periods when these AI algorithms were in use (generally 2019 through early 2025), you may be eligible to join a class action lawsuit. The Lokken case specifically applies to patients whose post-acute care coverage—including skilled nursing facilities, rehabilitation services, and home health care—was denied. The November 2023 cases against Cigna and UnitedHealth cover wrongful denials across multiple categories of care. You do not need to have appealed the denial to be part of the class.
Class certification in these cases is expected to include all patients who received denials from the specific AI algorithms, as the lawsuits argue the systems were fundamentally flawed. Most class actions of this type settle, and settlements typically provide compensation to class members based on the amount of denied care, any out-of-pocket costs paid, and documented harm. Your eligibility depends on when the denial occurred, which insurer issued it, and what type of care was denied. If you think you may qualify, contact a class action attorney or check the pending case dockets for more information about claim filing deadlines and settlement distributions.
What Is the Future of Prior Authorization?
The prior authorization system itself is under scrutiny. Some states are considering legislation to require human review of all AI-assisted coverage decisions or to ban certain types of algorithms altogether. The federal government is moving toward transparency and validation requirements, though slowly. The larger question is whether prior authorization—even with human review—is compatible with equitable healthcare.
Some healthcare reformers argue the system should be eliminated entirely, on the grounds that the time and resources spent denying claims and managing appeals could be better spent on actual patient care. The litigation over prior authorization denials will likely continue for several years, with appeals, settlement negotiations, and potentially jury trials. Each court victory for patients creates precedent that makes it harder for insurers to deploy unvalidated algorithms without scrutiny. The combination of private litigation and regulatory pressure may finally force the healthcare insurance industry to choose between profit maximization and patient care—or at least to be more transparent about which they are prioritizing. For now, if you have experienced a wrongful prior authorization denial, these lawsuits offer a legal avenue to hold insurance companies accountable.
Conclusion
Prior authorization denial lawsuits represent a fundamental reckoning with how health insurance companies make coverage decisions. They challenge the use of AI algorithms with 90% error rates to deny medically necessary care, the systematic reversal of denials on appeal, and the profit-driven incentives that prioritize corporate revenue over patient health. The evidence is clear: major insurers deployed unreliable systems, denied legitimate claims at higher rates than clinically justified, and left patients to suffer the consequences while hoping few would appeal.
If you or a family member received a wrongful prior authorization denial from UnitedHealthcare, Cigna, Humana, or another major insurer, these lawsuits may provide compensation and accountability. Class action settlements are now being negotiated, and the window to join these cases remains open. Additionally, regulatory reforms are gradually being implemented, though they lag behind the speed at which insurers deploy new technologies. The path forward requires holding both private insurance companies and government agencies accountable for transparent, validated, and human-centered coverage decisions.