Pluralistic: 10 Sep 2022 American healthcare did a fuckery, the White House has a plan for Big Tech big-data

Today's links

Ticker-tape parade for presidential candidate Richard Nixon in New York in 1960.

American healthcare did a fuckery (permalink)

My fellow Americans, I regret to inform you that our beloved health insurance industry has done a major fuckery. I know this is hard to believe, given the probity and honor we associate with our fine insurance companies, but the evidence is incontrovertible.

Back in 2019, the Trump administration ordered insurers and hospitals to start disclosing their prices, despite tens of thousands of comments filed by employers, insurers and hospitals objecting to the proposal.

This is one of those pox-on-all-your-houses/you-can't-get-there-from-here situations. The Trump admin wanted to continue the fiction that America's worst-in-class health care was the result of bad market dynamics. Without transparency on pricing and service, employers can't shop for plans, hospitals can't know if they're getting a bad deal from insurers, and sick people are denied the information they need for effective bargaining.

That's all true, as far as it goes. The stubborn, remarkable opacity of American health-care pricing is an enormous source of mischief. Two patients receiving the same procedure, medicine or service might see bills that are thousands of dollars apart, at the same hospital, delivered by the same personnel.

Without price transparency, patients can't know if they're getting ripped off. But if you know that the hospital charged the person in the next bed $3 for the Tylenol tablet that you got charged $300 for, you can go to the hospital billing department and say, basically, "Oh, come the fuck on – seriously?" and maybe get $297 knocked off your bill.

That's why insurers and hospitals don't want price transparency. Hospitals don't want insurers to know that they're getting gouged, and insurers don't want their competitors to know that they've cut sweetheart deals that are being subsidized by eye-popping profits extracted from their rivals.

Ending those practices will make marginal improvements to US healthcare, but only marginal ones. The problem with health-care isn't that it's an imperfect market – it's that we treat it as a market at all. Markets may help organize and allocate discretionary goods and services, but the core of healthcare is not discretionary.

Fundamentally, an unconscious person in cardiac arrest being loaded into the back of an ambulance cannot send a price-signal by shopping for a hospital emergency room and directing the driver to take them there. Even less extreme examples – cancer treatment, insulin, a sick child, a broken bone – do not lend themselves to market dynamics.

Health care always turns into a planned economy. The only question is: who plans it? Right now, we have a monopolized health-care supply-chain, with a handful of companies controlling insurance, hospitals, pharma, pharmacy benefit managers, hospital beds, powered wheelchairs, etc. Each of these sectors is locked in a death-battle with the others, fighting to shift profits from one balance-sheet to the other – but no matter which one wins, the rest of us lose.

There's a reason that Americans pay more for worse health outcomes, and why American medical professionals get paid less for worse working conditions, than their counterparts abroad. A hospital chain and an insurance company might be well-matched for negotiating power and thus able to arrive at an equilibrium that lets both of them thrive – but workers and patients are disorganized and atomized, and we're easy pickings for both insurers and hospitals.

Sure, hospitals and insurers fight over us, but the prize they're seeking is the right to drain our wallets – not the right to win our business through excellence. They are not our champions, they are our tormentors, and whichever one wins, we lose.

Back to transparency. Despite the tsunami of bad-faith objections to publishing prices, the Trump admin enacted the rule, because the rule was key to the pretense that the market for health-care could be fixed – while the alternative was that it should be abolished and subsumed into a single-payer system, like the ones that every other rich country with successful healthcare uses.

The insurers and hospitals switched tactics: they simply ignored the rule. Months went by. Deadlines passed. The prices remained a secret, or were published in incomplete form, or in obscure formats that couldn't be readily understood or compared. The Biden admin threatened the sector with fines and public draggings, with increasing severity as the stalemate continued.

Then the insurers switched tactics again. Over the summer, the nation's mammoth insurance companies began dumping enormous amounts of price-data – and I do mean enormous. Humana dropped 400 billion prices in 500,000 CSV files totaling 600TB.

That pales in comparison to the dumps from UnitedHealthcare – a private-equity-backed behemoth that scooped up dozens of smaller companies – whose 250TB dump contains 100,000,000,000 prices:

Alec Stein's chart showing the scale of the insurance pricing dump.

All told, the industry has produced more than a trillion prices. Writing on Dolthub, Alec Stein contextualizes this unimaginably large dump: larger than English Wikipedia, the Library of Congress, Libgen, and all of Netflix – combined:

Where did all this data come from? The insurers are breaking out prices by "who's paying, who's getting paid, what they're getting paid for, plus some extra fluff to keep track of versioning," combining all of these to produce generally meaningless distinctions that only serve to chaff the data so it can't be readily parsed.
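To see how those "generally meaningless distinctions" balloon the row count, here's a minimal sketch (all names and figures are hypothetical, not drawn from any real insurer's dump): one procedure with one real price, repeated across every combination of plan, provider, and versioning metadata, and then collapsed back down to the distinct facts that matter.

```python
from itertools import product

# Hypothetical illustration of the combinatorial bloat: the same
# underlying price repeated for every (plan, provider, version) combo.
payer_plans = [f"plan_{i}" for i in range(100)]
providers = [f"npi_{i}" for i in range(50)]
versions = [f"v{i}" for i in range(10)]

# One procedure code, one real price -- but 100 * 50 * 10 = 50,000 rows.
rows = [
    {"plan": p, "provider": pr, "version": v, "code": "99213", "price": 93.27}
    for p, pr, v in product(payer_plans, providers, versions)
]
print(len(rows))  # 50000 redundant rows

# Collapsing to the distinct (code, price) pairs that actually matter:
distinct = {(r["code"], r["price"]) for r in rows}
print(len(distinct))  # 1
```

Scale that redundancy across thousands of plans, a million providers, and tens of thousands of billing codes, and a trillion rows stops being surprising.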

This is pure malicious compliance, a monopolist's version of work-to-rule whereby they follow the letter of the law in a way that is clearly designed to frustrate the spirit of the law. Stein – who works for Dolthub, and wants to highlight the power of its database product – thinks we can still wrangle all that data.

He proposes reducing the amount of data by 99% by eliminating extraneous metadata, as well as data on rare procedures at the margins of health care. Then, he wants to parse the remaining data through "data bounties" that pay data scientists to perform specific tasks – Dolthub already did this for a smaller set of hospital data, and it worked:

Then, Stein says, we can further whittle the data down by zeroing in on the 70 codes required by Medicare, and break those out by hospital using Medicare's "National Provider Identifier" codes.
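That whittling can be done with a streaming filter. Here's a rough sketch, assuming a flat CSV layout with columns named `npi`, `billing_code`, and `negotiated_rate` – those column names and the three-code shortlist are my assumptions for illustration, not the actual schema, which varies by insurer:

```python
import csv

# Hypothetical shortlist standing in for Medicare's ~70 required codes.
SHORTLIST = {"99213", "99214", "80053"}

def filter_prices(in_path, out_path, codes=SHORTLIST):
    """Stream a price CSV row-by-row (never loading it into RAM),
    keeping only rows whose billing code is on the shortlist.
    Returns the number of rows kept."""
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(
            dst, fieldnames=["npi", "billing_code", "negotiated_rate"]
        )
        writer.writeheader()
        kept = 0
        for row in reader:
            if row["billing_code"] in codes:
                writer.writerow(
                    {k: row[k] for k in ("npi", "billing_code", "negotiated_rate")}
                )
                kept += 1
        return kept
```

Because it streams, the same loop works whether the input is a megabyte or a terabyte – which is the point when the raw corpus is measured in hundreds of terabytes.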

All of this will be useful work. Assigning precise dollar-figures to the dysfunction of a commercialized healthcare sector is a necessary-but-insufficient precursor to creating a sensible universal system. Likewise, such data will be useful for the DoJ and FTC when they block future mergers and unwind existing ones.

You can help! Stein wrote custom scrapers for each insurer's dumps and posted them to Github, and you can scrape the data yourself:
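If you want to fetch the dumps yourself, the main practical constraint is size: you need to stream each file to disk rather than buffer it in memory. Here's a generic chunked-download sketch using only the standard library – the URL is a placeholder, not a real index endpoint; the actual per-insurer URLs live in the transparency disclosures and in Stein's scrapers on Github:

```python
import urllib.request

def download(url, dest, chunk_size=1 << 20):
    """Copy `url` to `dest` one 1MB chunk at a time, so even a
    multi-gigabyte price file never has to fit in RAM.
    Returns the number of bytes written."""
    written = 0
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            out.write(chunk)
            written += len(chunk)
    return written

# Placeholder usage -- substitute a real URL from an insurer's
# machine-readable-files index:
# download("https://example.com/price-index.json.gz", "price-index.json.gz")
```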

As useful as all of this will be, don't take your eyes off the prize. America has the worst-performing, most expensive healthcare in the rich world. Fixing it means more than tinkering in the margins with price transparency. Health is not a market – it's a human right.

The logo for the White House, superimposed over a Matrix 'code waterfall' effect.

The White House has a plan for Big Tech (permalink)

This week, the White House released its long-anticipated plan for addressing monopoly in the tech sector. Fixing Big Tech is important, because a free, fair and open internet is a necessary precondition for organizing all our other fights about human rights, equity, labor, the climate, and racial and gender justice.

The White House plan is a mixed bag. They set out six action points, each of them amorphous enough that they could all be summarized as "the devil is in the details" – that is, depending on how these are handled, they could be great, or terrible.

But one point stands out as especially fraught, controversial and dangerous: a vague promise of "fundamental reforms to Section 230," which is incorrectly characterized as "special legal protections for large tech platforms."

I'm going to go through all six of the points below and describe how they could go right, or wrong, and in the end I'll get into more detail on 230 – it's one of the worst-understood areas of internet law, a favored punching bag of the right and the left, and getting this one wrong could deliver permanent dominance to Big Tech platforms.

I. "Promote competition in the technology sector": This covers both meat-and-potatoes trustbusting (breakups, merger scrutiny) and modern, tech-specific tactics, like interoperability mandates and bans on self preferencing. This is generally great stuff, but there are three important pitfalls to avoid:

i. Interop mandates that expose users to risk through hasty action. The EU's Digital Markets Act unwisely kicked off by mandating interop in messaging tools on an unrealistically short timeline. Maintaining the security of encrypted messengers is extremely important; failures in messaging encryption are a source of existential risk to human rights workers, journalists and marginalized people all around the world. Recall that Jamal Khashoggi was lured to his slaughter by the Saudi government after they broke into his peers' encrypted messages using a cyberweapon produced by the NSO Group.

ii. Must-carry rules that force platforms to carry speech. Big online platforms have become our new public square, except that they aren't public – they're private. Their choices about which speech to block and which speech to carry are enormously consequential for our civics and politics. But rules that allow regulators to force providers to carry speech they disagree with set a dangerous precedent. Even if you think that the Biden admin's compelled speech will be fine (say, a rule requiring warnings alongside vaccine disinformation), imagine how this power will be handled by President Marjorie Taylor Greene's administration.

The platforms' moderation choices are a danger because the platforms dominate our discourse. Allowing the platforms to corner the market for online speech has profound First Amendment implications:

But the answer isn't to turn the platforms into an arm of the state – it's to make their moderation choices less consequential for all of us, by devolving control over community norms to the communities themselves:

iii. Self-preferencing bans are very hard to administer. If Apple puts its own weather app at the top of the app-store listings, or if Google shows you an infobox with its weather prediction at the top of a search, that might feel like self-preferencing. But maybe Apple really believes that it has the best weather app. There isn't an objective standard for "best weather app." Unless you've got a front-row seat for the wall of Plato's Cave, distinguishing self-preferencing from good-faith curation is often impossible.

That's not to say that we should tolerate self-preferencing, nor is it to say that we can't ever detect and punish self-preferencing. Sometimes, tech companies actually document the fact that they're self-preferencing, as Google did when its engineers emailed their bosses to complain about being forced to put Google's inferior results ahead of rivals:

But we can't rely on Big Tech tripping over its own dick every time it does a bit of nefarious self-preferencing. The real remedy for self-preferencing is "structural separation": banning platform operators from competing with platform users. Referees shouldn't own one of the teams on the field, period.

II. "Provide robust federal protections for Americans’ privacy." A no-brainer. The US needs a federal privacy law, with a private right of action that allows individuals (and human rights groups) to sue firms that violate it, rather than waiting for a prosecutor to take up their cause. Do it.

I'm entirely unsympathetic to the argument that "targeted ads" are better than "untargeted ads" because they are "more relevant" to users. Users fucking hate targeted ads. Ad-blockers are the largest boycott in human history. When users are given the chance to opt out of targeted ads, they do so in such overwhelming numbers that the holdouts are likely to be people who accidentally clicked the wrong button:

III. "Protect our kids by putting in place even stronger privacy and online protections for them, including prioritizing safety by design standards and practices for online platforms, products, and services."

Sounds good. As a dad, I like the idea. But there are so many ways it can go wrong. California's version of this rule was so vaguely worded that it's effectively impossible to comply with.

It's not just that this could result in kids being banned from using any online service – it's also that all online services might institute invasive verification procedures (like requiring and storing – and, inevitably, leaking – government IDs to prove that none of their users are kids).

But the difficulties here don't mean we have to be nihilists. We can demand that platforms that target kids – that market themselves as services for children – eschew advertising, minimize data collection, and take other steps to protect kids from commercial predation.

V. "Increase transparency about platform’s algorithms and content moderation decisions." Opponents of this one will claim that telling people how you moderate is a gift to trolls and griefers. I'm unsympathetic to the idea that there is "security through obscurity":

There's a lot of room for debate about how the "civil justice" system of big platforms should operate. One thing is clear: automated judgments about user speech can't be balanced by human review. The former happens at scale and near-instantaneously. The latter will either be deliberative and too slow to matter, or rapid and too quick to make sense of nuance.

One intriguing idea is to structure content moderation review as a "systemic" matter, which can address "immoderation" (the content that isn't moderated) as well as moderation. Note that no one has tried this yet, so while it sounds great, it's also a gamble:

VI. "Stop discriminatory algorithmic decision-making." This one is also maddeningly vague. If they're talking about ensuring that machine learning classifiers don't discriminate on the basis of speech, it's going to be very hard to make work. Remember, algorithmic moderation often operates on the context of speech as much as the content – if a bunch of seemingly coordinated users all post something that seems like harassment all at once, that speech might get labelled or suppressed or deleted. The exact same speech, posted by one person, once, might be left alone.

But there's another kind of algorithmic discrimination, such as the algorithms that target predatory financial products to Black users, or exclude women and racial minorities from being shown good jobs on employment sites. This is illegal – and we don't need new laws to prosecute it. But we do need new enforcement powers and resources for existing regulators to tackle it.

All right, that's the five least controversial points in the White House plan. But I left out point IV: "Remove special legal protections for large tech platforms."

Here, the White House is talking about Section 230 of the Communications Decency Act, AKA "The 26 Words That Made the Internet."

CDA230 is a rule that says that if a user's speech violates federal law, legal responsibility for that speech falls on the speaker, not the intermediary that brought you that speech. It's a rule that makes hosting user speech possible, period. It's how we get Facebook and Twitter, sure – but also how we get blog comments, Mastodon instances, and other independent platforms.

It's also how we get the infrastructure that makes it possible for individuals, nonprofits, private groups and co-ops to create their own speech forums. CDA230 means that a hosting company doesn't need to review all its customers' users' speech before hosting them (imagine if every web-page had to be vetted by your host before you could make it live – and then every change also had to go through legal review).

This is important in a competitive market, but it's even more important in our current, monopolized world, where getting kicked off of a platform might doom a speech forum (and again, if you're comfortable with politicians you agree with using this power to nuke forums they dislike, imagine which forums President DeSantis will target).

Any gun on the mantelpiece in Act I is sure to go off by Act III. If we hand any aggrieved party the right to remove speech without a trial, we can be sure that this facility will be abused by the worst people in the worst ways.

We know this because we've got decades of experience with the "notice-and-takedown" system for copyright enforcement, which allows anyone claiming to be a rightsholder to get almost anything taken down from almost anywhere, irrespective of whether a copyright infringement took place.

To see that in action, check out Eliminalia, which uses fraudulent copyright takedowns to launder the reputations of dictators, torturers, murderers and rapists, getting news articles and personal accounts of their victims and survivors removed from the internet:

In Germany, Sony Music is attempting to force Quad9, a public DNS provider, to block the records of websites whose users have allegedly posted links to other websites where infringing copies of Sony's copyrighted works can be found:

Sony is a serial abuser of its ability to moderate speech; the company routinely and wantonly deletes independent musicians' performances of classical compositions by falsely claiming that they violate Sony's copyrights – in other words, Sony is a music pirate on an unimaginable scale:

Our evidence for what a post-CDA230 internet would look like isn't limited to the copyright wars – for a more recent, more direct look at what happens when you make intermediaries responsible for their users' speech, look at the aftermath of SESTA/FOSTA.

SESTA/FOSTA is a (nominal) anti-sex-trafficking rule that creates criminal liability for companies whose services are used in connection with the heinous crime of sex trafficking. The immediate impact of SESTA/FOSTA was the mass, internet-wide removal of sites that sex workers used to keep themselves safe.

SESTA/FOSTA pushed sex workers back onto the streets, deprived them of the forums where they shared information about dangerous clients, and created a renaissance in pimping, as sex workers were forced to turn to third parties for their protection.

Curbing CDA230 is especially dangerous in light of the calls for a "fairness doctrine" for online platforms. One of the activities that CDA230 protects is moderation, allowing online hosts to remove harassing, hateful, threatening or otherwise odious speech without worrying that this requires that they remove every such instance.

This allows moderators to distinguish between a racist who calls another user by a slur, and a user who says, "Can you believe that racist called me :slur:?" Before 230 was enacted, courts took the position that once a service moderated any speech, it took on the duty to moderate all speech, creating the perverse incentive to ignore bad speech.

Some say that CDA230 protects Big Tech platforms only to the extent that it protects all online speech forums, including independent ones. But this is wrong. CDA230 protects small platforms more than it protects large ones – because large ones are better situated to hire the armies of lawyers and moderators to pore over and comma-fuck everything their users post.

That's why Mark Zuckerberg supports eliminating CDA230. As he is fond of pointing out, Facebook's budget for human moderators exceeds Twitter's total revenue. He understands that if you need to be as big as Facebook to compete with Facebook, then:

a) No company will ever compete with Facebook, and

b) No government will ever make Facebook any smaller.

It's not just Zuck that hates 230 – it's also Donald Trump. Trump understands that removing legal protections for intermediaries will make them less able to stand up to rich and powerful people who can hire vicious attack lawyers who pride themselves on suppressing speech:

Trump loves the kinds of lawyers who kept #MeToo at bay for decades, not just by threatening the survivors of abuse, but by scaring anyone who might host their testimony into removing it.

The fact that Zuck and Trump think killing CDA230 is a great idea should at least give 230's progressive opponents a moment's pause.

Hey look at this (permalink)

This day in history (permalink)

#20yrsago Buzz Aldrin punches out lunar conspiracist

#5yrsago Tesla’s demon-haunted cars in Irma’s path get a temporary battery-life boost

Colophon (permalink)

Today's top sources: Ken Snider, Zane Selvans, Slashdot.

Currently writing:

  • The Bezzle, a Martin Hench noir thriller novel about the prison-tech industry. Friday's progress: 502 words (37878 words total)

  • The Internet Con: How to Seize the Means of Computation, a nonfiction book about interoperability for Verso. Friday's progress: 524 words (34296 words total)

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. (92849 words total) – ON PAUSE

  • A Little Brother short story about DIY insulin. PLANNING

  • Vigilant, Little Brother short story about remote invigilation. FIRST DRAFT COMPLETE, WAITING FOR EXPERT REVIEW

  • Moral Hazard, a short story for MIT Tech Review's 12 Tomorrows. FIRST DRAFT COMPLETE, ACCEPTED FOR PUBLICATION

  • Spill, a Little Brother short story about pipeline protests. FINAL DRAFT COMPLETE

  • A post-GND utopian novel, "The Lost Cause." FINISHED

  • A cyberpunk noir thriller novel, "Red Team Blues." FINISHED

Currently reading: Analogia by George Dyson.

Latest podcast: What is Chokepoint Capitalism?

Upcoming appearances:

Recent appearances:

Latest book:

Upcoming books:

  • Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin, nonfiction/business/politics, Beacon Press, September 2022

  • Red Team Blues: "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books, April 2023

This work licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.

How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Newsletter (no ads, tracking, or data-collection):

Mastodon (no ads, tracking, or data-collection):

Medium (no ads, paywalled):

(Latest Medium column: "Parenting and Phones, an Empowering Approach")

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla