Pluralistic: Cigna's nopeinator (29 Apr 2024)

Today's links

An existential plane extending to an abstract background. Scattered through the scene are mainframes and control panels, being worked by faceless figures. In the center stands a downcast MD in old-fashioned scrubs. In the foreground to the right is an impatient older man in a business suit, staring at his watch and brandishing a sheaf of papers. In the background left is a grim reaper figure raising a glass of blood in a toast, the blood spattering his robes. In the center background in large magnetic 'computer font' lettering is the word 'NO.'

Cigna's nopeinator (permalink)

Cigna – like all private health insurers – has two contradictory imperatives:

I. To keep its customers healthy; and

II. To make as much money for its shareholders as is possible.

Now, there's a hypothetical way to resolve these contradictions, a story much beloved by advocates of America's wasteful, cruel, inefficient private health industry: "If health is a 'market,' then a health insurer that fails to keep its customers healthy will lose those customers and thus make less for its shareholders." In this thought-experiment, Cigna will "find an equilibrium" between spending money to keep its customers healthy, thus retaining their business, and "seeking efficiencies" to create a standard of care that's cost-effective.

But health care isn't a market. Most of us get our health-care through our employers, who offer a small handful of options that nevertheless manage to be so complex in their particulars that they're impossible to directly compare, and somehow all end up not covering the things we need them for. Oh, and you can only change insurers once or twice per year, and doing so incurs savage switching costs, like losing access to your family doctor and specialist providers.

Cigna – like other health insurers – is "too big to care." It doesn't have to worry about losing your business, so it grows progressively less interested in even pretending to keep you healthy.

The most important way for an insurer to protect its profits at the expense of your health is to deny care that your doctor believes you need. Cigna has transformed itself into a care-denying assembly line.

Dr Debby Day is a Cigna whistleblower. Dr Day was a Cigna medical director, charged with reviewing denied cases, a job she held for 20 years. In 2022, she was forced out by Cigna. Writing for Propublica and The Capitol Forum, Patrick Rucker and David Armstrong tell her story, revealing the true "equilibrium" that Cigna has found:

Dr Day took her job seriously. Early in her career, she discovered a pattern of claims from doctors for an expensive therapy called intravenous immunoglobulin in cases where this made no medical sense. Dr Day reviewed the scientific literature on IVIG and developed a Cigna-wide policy for its use that saved the company millions of dollars.

This is how it's supposed to work: insurers (whether private or public) should permit all the medically necessary interventions and deny interventions that aren't supported by evidence, and they should determine the difference through internal reviewers who are treated as independent experts.

But as competition in US healthcare dwindled – and as Cigna bought out more parts of its supply chain and merged with more of its major rivals – the company grew ever more focused on denying claims, irrespective of their medical merit.

In Dr Day's story, the turning point came when Cigna outsourced pre-approvals to registered nurses in the Philippines. Legally, a nurse can approve a claim, but only an MD can deny a claim. So Dr Day and her colleagues would have to sign off when a nurse deemed a procedure, therapy or drug to be medically unnecessary.

This is a complex determination to make, even under ideal circumstances, but Cigna's Filipino outsource partners were far from ideal. Dr Day found that nurses were "sloppy" – they'd confuse a mother with her newborn baby and deny care on those grounds, or confuse an injured hip with an injured neck and deny permission for an ultrasound. Dr Day reviewed a claim for a test that was denied because STI tests weren't "medically necessary" – but the patient's doctor had applied for a test to diagnose a toenail fungus, not an STI.

Even if the nurses' evaluations had been careful, Dr Day wanted to conduct her own, thorough investigation before overriding another doctor's judgment about the care that doctor's patient warranted. When a nurse recommended denying care "for a cancer patient or a sick baby," Dr Day would research medical guidelines, read studies and review the patient's record before signing off on the recommendation.

This was how the claims denial process was supposed to work, but it wasn't how it actually worked. Dr Day was markedly slower than her peers, who would "click and close" claims by pasting the nurses' own rationale for denying the claim into the relevant form, acting as rubber-stamps rather than skilled reviewers.

Dr Day knew she was slower than her peers. Cigna made sure of that, producing a "productivity dashboard" that scored doctors based on "handle time," which Cigna describes as the average time its doctors spend on different kinds of claims. But Dr Day and other Cigna sources say that this was a maximum, not an average – a way of disciplining doctors.

These were not long times. If a doctor asked Cigna not to discharge their patient from hospital care and a nurse denied that claim, the doctor reviewing that claim was supposed to spend not more than 4.5 minutes on their review. Other timelines were even more aggressive: many denials of prescription drugs were meant to be resolved in fewer than two minutes.

Cigna told Propublica and The Capitol Forum that its productivity scores weren't based on a simple calculation of whether its MD reviewers were hitting these brutal processing-time targets, describing the scores as a proprietary mix of factors that reflected a nuanced view of care. But when Propublica and The Capitol Forum built a crude algorithm that generated scores by comparing each doctor's performance to the company's targets, they found the results lined up very neatly with the actual scores that Cigna assigned to its docs:

The newsrooms’ formula accurately reproduced the scores of 87% of the Cigna doctors listed; the scores of all but one of the rest fell within 1 to 2 percentage points of the number generated by this formula. When asked about this formula, Cigna said it may be inaccurate but didn’t elaborate.
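To make the arithmetic concrete, here's a minimal sketch of the kind of "crude algorithm" the newsrooms describe: score each doctor by how often their handle times come in under the per-category targets. The claim categories, the scoring rule (share of reviews within target), and the sample data are illustrative assumptions; only the 4.5-minute and two-minute figures come from the reporting above, and none of this is the newsrooms' actual formula or Cigna's real scoring system.

```python
# Hypothetical reconstruction of a "handle time"-based productivity score.
# All names, targets and data are illustrative assumptions, not Cigna's
# actual system or the newsrooms' published formula.
from dataclasses import dataclass

# Per-claim-type handle-time targets, in minutes (the 4.5- and 2-minute
# figures come from the reporting above).
TARGETS = {
    "hospital_discharge": 4.5,
    "prescription_drug": 2.0,
}

@dataclass
class Review:
    claim_type: str
    minutes_spent: float

def productivity_score(reviews: list[Review]) -> float:
    """Percentage of reviews completed within the target for their claim type."""
    if not reviews:
        return 0.0
    on_target = sum(
        1 for r in reviews
        if r.minutes_spent <= TARGETS.get(r.claim_type, float("inf"))
    )
    return 100.0 * on_target / len(reviews)

if __name__ == "__main__":
    # A careful reviewer who researches each case scores far lower than a
    # "click and close" reviewer, regardless of the quality of either's work.
    careful = [Review("hospital_discharge", 12.0), Review("prescription_drug", 9.0)]
    hasty = [Review("hospital_discharge", 1.0), Review("prescription_drug", 0.5)]
    print(f"careful reviewer: {productivity_score(careful):.0f}%")
    print(f"hasty reviewer:   {productivity_score(hasty):.0f}%")
```

The point of the sketch is that any score built purely on time-to-target measures speed, not judgment: it mechanically rewards rubber-stamping and punishes the kind of review Dr Day was doing.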

As Dr Day slipped lower on the productivity chart, her bosses pressured her to bring her score up (Day recorded her phone calls and saved her emails, and the reporters verified them). Among other things, Dr Day's boss made it clear that her annual bonus and stock options were contingent on her making quota.

Cigna denies all of this. They smeared Dr Day as a "disgruntled former employee" (as though that has any bearing on the truthfulness of her account), and declined to explain the discrepancies between Dr Day's accusations and Cigna's bland denials.

This isn't new for Cigna. Last year, Propublica and Capitol Forum revealed the existence of an algorithmic claims denial system that allowed its doctors to bulk-deny claims in as little as 1.2 seconds:

Cigna insisted that this was a mischaracterization, saying the system existed to speed up the approval of claims, despite the first-hand accounts of Cigna's own doctors and the doctors whose care recommendations were blocked by the system. One Cigna doctor used this system to "review" and deny 60,000 claims in one month.

Beyond serving as an indictment of the US for-profit health industry, and of Cigna's business practices, this is also a cautionary tale about the idea that critical AI applications can be resolved with "humans in the loop."

AI pitchmen claim that even unreliable AI can be fixed by adding a "human in the loop" that reviews the AI's judgments:

In this world, the AI is an assistant to the human. For example, a radiologist might have an AI double-check their assessments of chest X-rays, and revisit those X-rays where the AI's assessment didn't match their own. This robot-assisted-human configuration is called a "centaur."

In reality, "human in the loop" is almost always a reverse-centaur. If the hospital buys an AI, fires half its radiologists and orders the remainder to review the AI's superhuman assessments of chest X-rays, that's not an AI assisted radiologist, that's a radiologist-assisted AI. Accuracy goes down, but so do costs. That's the bet that AI investors are making.

Many AI applications turn out not to even be "AI" – they're just low-waged workers in an overseas call-center pretending to be an algorithm (some Indian techies joke that AI stands for "absent Indians"). That was the case with Amazon's Grab and Go stores where, supposedly, AI-enabled cameras counted up all the things you put in your shopping basket and automatically billed you for them. In reality, the cameras were connected to Indian call-centers where low-waged workers made those assessments:

This Potemkin AI represents an intermediate step between outsourcing and AI. Over the past three decades, the growth of cheap telecommunications and logistics systems let corporations outsource customer service to low-waged offshore workers. The corporations used the excuse that these subcontractors were far from the firm and its customers to deny them any agency, giving them rigid scripts and procedures to follow.

This was a very usefully dysfunctional system. As a customer with a complaint, you would call the customer service line, wait for a long time on hold, then spend an interminable stretch working through a prescribed claims-handling process with a rep who was prohibited from diverging from it. That process nearly always ended with you being told that nothing could be done.

At that point, a large number of customers would have given up on getting a refund, exchange or credit. The money paid out to the few customers who were stubborn or angry enough to karen their way to a supervisor and get something out of the company amounted to pennies, relative to the sums the company reaped by ripping off the rest.

The Amazon Grab and Go workers were humans in robot suits, but these customer service reps were robots in human suits. The software told them what to say, and they said it, and all they were allowed to say was what appeared on their screens. They were reverse centaurs, serving as the human faces of the intransigent robots programmed by monopolists that were too big to care.

AI is the final stage of this progression: robots without the human suits. The AI turns its "human in the loop" into a "moral crumple zone," which Madeleine Clare Elish describes as "a component that bears the brunt of the moral and legal responsibilities when the overall system malfunctions":

The Filipino nurses in the Cigna system are an avoidable expense. As Cigna's own dabbling in algorithmic claim-denial shows, they can be jettisoned in favor of a system that uses productivity dashboards and other bossware to push doctors to robosign hundreds or thousands of denials per day, on the pretense that these denials were "reviewed" by a licensed physician.

Hey look at this (permalink)

A Wayback Machine banner.

This day in history (permalink)

#20yrsago AdBusters' new sneaker to compete toe-to-toe with Nikes

#15yrsago Warner Music claims Lessig is a pirate, has his presentation taken off YouTube

#15yrsago Real-Money Trades: turning gold-farming into a game company profit-center

#15yrsago Britain’s deportee detention system subjects small children to horrific abuse

#15yrsago Transparency isn’t enough

#10yrsago Mathematicians: refuse to work for the NSA!

Upcoming appearances (permalink)

A photo of me onstage, giving a speech, holding a mic.

A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)

A grid of my books with Will Staehle covers.

Latest books (permalink)

A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025

  • Unauthorized Bread: a graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025

Colophon (permalink)

Today's top sources:

Currently writing:

  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS JAN 2025

  • Vigilant, Little Brother short story about remote invigilation. FORTHCOMING ON TOR.COM

  • Spill, a Little Brother short story about pipeline protests. FORTHCOMING ON TOR.COM

Latest podcast: Precaratize Bosses

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.

How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Newsletter (no ads, tracking, or data-collection):

Mastodon (no ads, tracking, or data-collection):

Medium (no ads, paywalled):

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla