Pluralistic: 06 Aug 2021

Today's links

An ad for an Instagram ban service.

Scammers sell griefers social media banning services (permalink)

The Big Tech platforms can be horrible places. Harassers, abusers and griefers have figured out how to use them to meet one another, form vicious assault squads, and drive their targets off the service and make life miserable for those who stay.

What's more, the platforms have so little competition – and are so siloed from one another – that leaving a platform comes with a heavy price, separating those who depart from their families, communities and customers.

With such high stakes and so many terrible actors, it's natural that the platforms all have account suspension and account termination policies so they can kick the worst offenders off their services.

But these policies have an obvious weakness: they can be abused by harassers to trump up cases against their victims and get them terminated, or get their content removed.

The platforms' natural solution to this has been to add safeguards to their policies, making them harder to invoke, creating ever-more-specific criteria and procedures for takedown, suspension and termination.

Likewise, the platforms armored their put-back and account restoration policies, lest harassers figure out how to game them, returning to revictimize their targets. So takedown and put-back and termination and restoration have grown more complex and esoteric over time.

This sounds like common sense, but it's a Red Queen's race, where you have to run faster and faster to stay in the same place. The thing is, harassers are dedicated to understanding these rules – that's their whole thing. Their victims just want to use the service.

So harassers become rules-lawyers. They know exactly which phrases not to use to avoid a ban, and they know which phrases to invoke to get their accounts back. They have sprawling forums dedicated to developing and refining tactics to game platforms' policies.

If you know where the tripwires are, you can avoid them – but you can also use them to trip your opponents. You can tiptoe up to the line and goad your victims into crossing it, then nark them out, with the exact phrases that get them sanctioned.

This kind of abuse is present in every mass-scale removal and termination system. Think of the blackmailers who figured out how to use YouTube's copyright termination policies to extort creators' wages from them with threats of copyright complaints.

And the companies' response to each abuse scandal is to make the policies more complex, adding new procedures to paper over the holes in the old ones. Those new procedures have their own holes, and so more patches are applied.

We all know that if you swallow a spider to catch a fly, you'll have to swallow a bird to catch the spider, and so on. You can't fix a complex system's defects by adding more complexity.

Online services create a monoculture, where a single set of policies control all our outcomes. As we know from agriculture and forestry, monocultures attract rich, parasitic ecosystems of specialized attackers, each with its own niche.

So it is with account termination. Termination systems aren't just abused by griefers – they're also abused by professionals who run ban-as-a-service businesses, selling Instagram bans for as little as $7.

These services have a pretty straightforward methodology: they create an account that matches all the personally identifying information of the victim, then claim that the victim is an identity thief.

Because these are professionals, it's literally their job to know how to present and defend an identity theft case – while the victims are just everyday Instagram users, mired in the complex systems created to fend off scammers.

The ban-as-a-service market is specialized. There are bottom-feeders who'll do the job for 6 euros; others sell to blackmailers who extort social media influencers by taking down accounts with millions of followers, with a sliding scale based on follower count.

Given that sophistication, it's not surprising that Joseph Cox was able to link some of these ban-vendors to services that charge thousands of dollars to expedite ban-reversals, capitalizing on their esoteric mastery of platform policies to get your account restored.

The fact that account restoration costs hundreds of times more than banning teaches us a few things about the political economy of platform warfare, starting with the Kafkaesque nightmare that is account restoration.

It's so bad that Facebook users who lost their accounts started buying $300 Oculus VR headsets in the (seemingly mistaken) belief that this would get them human attention, bypassing the unnavigable automated appeal system.

But the disparity also tells us that users value their accounts far more than attackers prize the ability to banish their targets. No wonder – the platforms didn't stop at monopolizing social media; they're also into home automation, cloud services, online retail, payment processing, etc.

Which means that losing your account could brick your thermostat, or cut you off from your creative wages, shut down your business's website, or erase decades of family photos and correspondence.

There are ways this could be better – the platforms could have a duty to return your data to you if they terminate your access and they could be required to separate social account termination from across-the-board termination on all their services.

But when we're talking about proprietary silos with hundreds of millions or billions of users, there's only so much room for improvement. As Masnick's Impossibility Theorem has it, "Content moderation at scale is impossible to do well."

In other words, when it comes to the fairness that arises from a nuanced, situation-specific judgment – the only way to separate trolls from victims – scale is a bug and not a feature.

It's a fool's errand to try to scale moderation and termination up to serve as full-fledged civil justice systems for a wholly owned corporate "country" that is more populous than any nation in world history, with hundreds of languages and millions of community norms.

Far more plausible is scaling these giants down to the point where it's at least possible to parse through conflicting claims about nuance and meaning – and also where a bad call doesn't cost the loser access to a full digital life.

Maybe we could do that with federation, through interoperability mandates like the ones in the ACCESS Act:

And maybe we could do it through merger reviews and even unwinding the mergers that got us to this situation:

But we have to stop adding complexity to systems in order to cure the problems of complexity! The more specialized knowledge you need to keep your account online, the more we'll all have to fear from griefers, harassers and trolls.

The Facebook '1 Hacker Way' sign, before which sit the Three Wise Monkeys (hear no evil, see no evil, speak no evil), their faces replaced with that of Mark Zuckerberg.

Facebook's official disinformation research portal is a bad joke (permalink)

Facebook just redoubled its attacks on transparency, terminating the accounts of the NYU researchers behind Ad Observer, an independent project that monitors paid disinformation on the platform.

This is inexcusable, but that doesn't stop Facebook from trying to excuse it. That defense has two prongs. The first is a false claim that Ad Observer compromises Facebook user privacy.

This is a lie that can be trivially disproved simply by looking at the source-code for the Ad Observer plugin. Facebook is just privacywashing, using privacy as a pretext to cover up bad corporate behavior.

The other prong of Facebook's defense is to point people to its own FORT Researcher Platform, which, Facebook claims, allows researchers to safely monitor paid speech – ads – on its platform in a way that is equivalent to Ad Observer.

A group of eminent researchers from the Center for Information Technology Policy at Princeton University have published a stinging rebuttal to this claim, drawing on their experience attempting to negotiate access to FORT with Facebook.

In March 2021, Facebook presented the Princeton team with a set of "strictly non-negotiable" terms that it claimed were mandated by the FTC's Cambridge Analytica consent decree. The Princeton team were familiar with that consent decree, so they knew this was bullshit.

They pointed this out to Facebook, which eventually conceded that the take-it-or-leave-it terms were actually just FB's corporate policy, nothing to do with the FTC (but blaming the policy on the FTC made FB look like the good guys).

It's ironic that FB is using the FTC as an excuse to shut down independent scrutiny of its policies and activities. Yesterday, the FTC sent Mark Zuckerberg an open letter, slamming the company for attacking Ad Observer and blaming it on the FTC.

The Princeton team ultimately refused to sign Facebook's FORT agreement. The most important issue was FB's requirement that they get pre-publication "review" of any scholarly work based on the FORT repository.

FB didn't just expect to be able to see what researchers learned before they went public – they also reserved the right to unilaterally label anything the researchers wanted to cite as "confidential" and censor it from their reporting.

The Princeton team asked FB if data about paid political disinformation during the 2020 election would be "confidential" – and FB refused to answer their question.

This is just the most visible sign of FB's bad faith. The company couldn't answer basic questions about "what additional data fields were available to researchers" and "whether there were any restrictions on the types of tools we could use to analyze the data."

They promised to get back "shortly." That was five months ago.

The Princeton team doesn't mince words: "Our experience dealing with Facebook highlights their long running pattern of misdirection and doublespeak to dodge meaningful scrutiny of their actions."

"Facebook has control over the information that the public needs to understand its powerful role in our society. And, if Facebook continues to hide behind illusory offers, we need legislation to force them to provide meaningful access."

(Image: Minette Lontsie, CC BY-SA; Anthony Quintano, CC BY; modified)

This day in history (permalink)


#10yrsago $300 Million Button: making customers create logins to buy cost etailer $300M/year

#5yrsago How and why to short Uber

#1yrago Stiglitz quits Panama’s official money-laundering panel over internal sabotage

#1yrago Web companies can track you — and price-gouge you — based on your battery life

#5yrsago 1 billion computer monitors vulnerable to undetectable firmware attacks

#1yrago Qanon is an ARG (pt II)

#1yrago NY AG wants to dissolve NRA

#1yrago Writers Guild vanquishes a major agency

#1yrago Ventilation vs covid

#1yrago NY State's promising new antitrust law

Colophon (permalink)

Today's top sources:

Currently writing:

  • Spill, a Little Brother short story about pipeline protests. Friday's progress: 266 words (13437 words total)

  • A Little Brother short story about remote invigilation. PLANNING

  • A nonfiction book about excessive buyer-power in the arts, co-written with Rebecca Giblin, "The Shakedown." FINAL EDITS

  • A post-GND utopian novel, "The Lost Cause." FINISHED

  • A cyberpunk noir thriller novel, "Red Team Blues." FINISHED

Currently reading: Analogia by George Dyson.

Latest podcast: Are We Having Fun Yet?

Upcoming appearances:

Recent appearances:

Latest book:

Upcoming books:

  • The Shakedown, with Rebecca Giblin, nonfiction/business/politics, Beacon Press 2022

This work licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.

How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Newsletter (no ads, tracking, or data-collection):

Mastodon (no ads, tracking, or data-collection):

Medium (no ads, paywalled):

(Latest Medium column: "Managing aggregate demand," part four of a series on theme park design, queuing theory, immersive entertainment, and load-balancing.)

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla