Today's links
- A systemic (not individual) approach to content moderation: All the good stuff happens BEFORE the moderator gets involved.
- Hey look at this: Delights to delectate.
- This day in history: 2012
- Colophon: Recent publications, upcoming/recent appearances, current writing projects, current reading
A systemic (not individual) approach to content moderation (permalink)
As Mike Masnick is fond of saying, "Content moderation at scale is impossible." The evidence for that statement is all around us, in the form of innumerable moderation gaffes, howlers and travesties:
- Blocking mask-making volunteers to enforce a mask-ad ban;
- Removing history videos while trying to purge Holocaust denial;
- Deplatforming antiracist skinheads when trying to block racist skinheads;
- Removing an image of Captain America punching a Nazi for depicting a Nazi.
But in a forthcoming Harvard Law Review paper, evelyn douek proposes that content moderation at scale can be improved, provided we drop the individual-focused "digital constitutionalism" model in favor of systemic analysis and regulation.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4005326
The paper opens with a blazing, lucid critique of the status quo: the "first wave" "standard model" in which moderation scholars cast platforms as "New Governors." In this model, the platforms have a set of speech policies about what you can say, enforced by moderators. This produces a "decision factory" where thousands of human moderators make billions of calls, and when they get it wrong, the platform steps in as a kind of courtroom to hear appeals.
This has been a disaster. The calls are often wrong and the appeals process is either so cursory as to be useless or so slow that the thoroughness doesn't matter. It's a mistake to conceive of platforms as "a transmission belt implementing rules in an efficient and reliable way, organized around a limited set of hierarchically organized institutions and rights of appeal."
But more importantly, these individual appeals and the stories of how they go wrong give no picture of what's happening with speech on the platform as a system – what kind of speech is being systemically downranked or encouraged.
And even more importantly, focusing on how moderators (or filters) react to speech is wildly incomplete, because almost everything that's important about platforms' speech regulation happens before the moderator gets involved.
All this is background for douek's proposal: a "second wave" of regulatory thinking that addresses platform speech regulation as a systemic problem, rather than a bunch of atomized "paradigm cases."
Douek says the time is ripe for this second wave, because the internet and automated moderation have "made more speech possible, tractable and regulable than ever before." Second-wave regulation treats the whole business as a system, one that has to be understood and tweaked in the round, in a process that douek calls "moving slowly and fixing things."
So what's wrong with first-wave moderation? For starters, the focus on human moderator calls misses the vast majority of actual moderation choices, which are fully automated. Facebook removed 933 million pieces of content in 2021!
How does Facebook make 933 million takedown decisions? Well, for starters, the vast majority of these are based on behavior, not content. Platforms devote a lot of resources to spotting "coordinated" activity – disinformation campaigns, organized harassment.
These takedowns aren't run by the companies' moderation teams – they fall under the remit of the security teams. That means the standard demand for explanations of moderation choices goes unanswered for all of these removals: the security teams maintain "security through obscurity," arguing that disclosing their methods would help attackers evade them.
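To make the behavior-not-content distinction concrete, here is a minimal sketch of what a coordination signal might look like: flag any link pushed by many distinct accounts inside a short time window, without ever reading the content itself. The Post record, the thresholds and the function name are all invented for illustration – real platform detection is far more elaborate and, as noted above, deliberately undisclosed.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account_id: str
    url: str          # the link being shared
    timestamp: float  # seconds since the epoch

def flag_coordinated_bursts(posts, window_secs=600, min_accounts=25):
    """Flag URLs shared by many distinct accounts within a short window.

    A toy *behavioral* signal: it never inspects what the link says,
    only who shared it and when.
    """
    by_url = defaultdict(list)
    for post in posts:
        by_url[post.url].append(post)

    flagged = []
    for url, group in by_url.items():
        group.sort(key=lambda p: p.timestamp)
        start = 0
        for end in range(len(group)):
            # shrink the window until it spans at most window_secs
            while group[end].timestamp - group[start].timestamp > window_secs:
                start += 1
            accounts = {p.account_id for p in group[start:end + 1]}
            if len(accounts) >= min_accounts:
                flagged.append(url)
                break
    return flagged
```

Note what's absent from even this toy: any judgment about the speech itself – which is exactly why these removals fall outside the takedown/appeal framework.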
Removals aren't just about harassment and disinfo – vast amounts of content are automatically removed by spam classifiers, a process that "gets almost no attention in content moderation debates." As Tarleton Gillespie says, "[r]emoving spam is censoring content; it just happens to be content that nearly all users agree should go."
Much of this automated takedown stuff isn't just systemic within a single platform – the platforms coordinate with each other and with governments to identify and remove objectionable content. Beyond those relationships, third-party fact-checkers are able to remove content without the platforms' intervention (indeed, the whole deal is that these fact-checkers are independent and thus unsupervised by platforms). Even so, the decisions about which fact-checkers to hire, and what their determinations mean for content on the platform, are hugely consequential and entirely at the discretion of the platforms' management.
And of course, "removal" is only the bluntest tool in the content moderator's toolbox. The primary means of content moderation is downranking – "the number of times we display posts that later may be determined to violate our policies." This is what Julie Cohen calls "content immoderation."
Almost no regulatory attention is devoted to content immoderation (notwithstanding the GOP obsession with "shadow banning"), and yet it has all the problems – and advantages – of takedowns. Content immoderation is also a locus of exciting stuff, with live proposals to let users control what's recommended to them, mass-delete comments on their posts, and nominate their own mods.
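A minimal sketch of the mechanic, assuming a classifier that assigns each post a probability of violating policy: the feed demotes risky posts rather than deleting them. The scoring function and demotion curve are invented for illustration; nothing here describes any platform's actual ranking system.

```python
def rank_feed(posts, violation_prob, demotion_strength=4.0):
    """Order a feed by engagement, discounted by predicted policy risk.

    posts: list of (post_id, engagement_score) pairs
    violation_prob: function mapping post_id -> estimated P(violation)

    No post is ever removed here -- a high-risk post just sinks in the
    ranking, which is why downranking leaves no takedown to appeal.
    """
    def adjusted(post):
        post_id, engagement = post
        p = violation_prob(post_id)
        # multiplicative demotion: visibility falls smoothly as risk rises
        return engagement * (1.0 - p) ** demotion_strength

    return sorted(posts, key=adjusted, reverse=True)
```

The design point is the quiet one: because nothing is removed, there's no decision to appeal and nothing for a takedown-focused transparency report to count.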
All this systemic moderation is on the rise, douek says, because the "paradigm case" model doesn't actually resolve moderation problems.
What's more, these systemic interventions involve a lot of choices – for example, algorithms must be configured to prefer either false positives or false negatives. These systems "are insensitive to context and provide little explanation for their decisions."
It's also an area that regulators largely ignore, even as they encourage it, imposing "ever-shorter deadlines for platforms to take content down while also expecting context-specific adjudications and procedural protections for users."
This has real consequences, like censoring Palestinians' posts during violence in Gaza, or Syrians' evidence of war crimes. These errors weren't flukes: they were the foreseeable consequences of where the platforms set the false-positive/false-negative dial, and of regulators' demands for ever-faster takedowns.
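The "dial" is just a decision threshold on a classifier score, and a few lines make the tradeoff visible. This is a toy sketch with made-up numbers, not any platform's pipeline: lower the threshold and you remove more bad posts but sweep up more lawful ones; raise it and the reverse.

```python
def error_counts(scored_posts, threshold):
    """Count errors at a given takedown threshold.

    scored_posts: (classifier_score, actually_violates) pairs, e.g.
    from a labeled audit sample. Posts scoring >= threshold are removed.
    """
    fp = sum(1 for p, bad in scored_posts if p >= threshold and not bad)
    fn = sum(1 for p, bad in scored_posts if p < threshold and bad)
    return fp, fn

# toy audit sample: (score, ground truth)
sample = [(0.97, True), (0.92, False), (0.60, True), (0.40, False), (0.10, False)]
print(error_counts(sample, 0.50))  # aggressive dial: (1, 0) -- one wrongful removal
print(error_counts(sample, 0.95))  # cautious dial:   (0, 1) -- one missed violation
```

Every takedown deadline a regulator imposes is, in effect, an instruction about where to set this threshold – which is why douek treats the errors as systemic design choices rather than a string of individual mistakes.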
All of this has prompted loud calls for regulation, with "users feeling disempowered" and the obvious lack of competition and choice "raising concerns about a handful of platforms' outsized power."
These problems have created "legitimacy deficits" that make it harder for users to accept the platforms' mistakes as honest errors, leading to louder calls for regulation. But those calls – and the response – are trapped in a "legalistic discourse" that wants to make the platforms' speech courts better.
This is grounded in three assumptions:
I. "speech interests are special and especially resistant to systemic governance"
II. decisions about speech "must be individualistic"; and
III. "perfectibility is a necessary and desirable goal of speech regulation."
Douek describes this as an American framework that treats speech rules as "sacred individual rights" – a framing that gets in the way of evaluating platforms' speech regulation as a matter of structure and process, with outcomes judged over time.
Of course, the US has a First Amendment tradition (and restriction) that puts strong limits on what kinds of interventions governments can make. Despite this, US governments from local to federal have long been "concerned with the threat that private economic power poses to expressive freedom," something that has emerged anew as antitrust has surged.
https://locusmag.com/2018/07/cory-doctorow-zucks-empire-of-oily-rags/
This power-focused analysis thinks about speech systemically, and works more like a class action than an individual case. It has to move beyond the obsession with takedown/leave-up decisions and the appeals process, and focus instead on design choices: which fact-checkers to use, how algorithms are designed and tuned. Most of all, we need to break free of the trap of requiring ever-faster takedowns of ever-broader categories of speech, backstopped by stronger rights of appeal for when this goes wrong.
Focusing on individual cases isn't just impractical, it's unhelpful, for four reasons:
i. Individual cases do little to illuminate systemic problems;
ii. Appeals processes do little to fix systemic problems;
iii. Transparency about individual judgments isn't transparency into system design;
iv. Fixing things for individuals can make them worse for the group.
The individual-focused speech system "creates perverse incentives… to report ever larger numbers, which is what regulators come to demand and what platforms boast about achieving."
So if we accept that it's time to move on to a second-wave paradigm of content regulation, what would that look like?
Douek begins with "separation of functions," modeled on administrative agencies that break apart "businesses, rule-writers and rule-enforcers." That would mean putting "a wall between those concerned with the enforcement of content moderation rules and those whose job performance is measured against other metrics, such as product growth and political lobbying." The enforcers would report to a different chain, disconnected from "product growth or advertising revenue."
In second-wave regulation, complaints could be channeled through an external regulator that could conduct investigations and sanction companies for failing to make disclosures.
What kind of disclosures? For starters, disclosing "contacts with third-party decisionmakers" – and those third parties should not include government actors with "special reporting channels to flag posts to be taken down to platforms" unless "the frequency and basis of such informal orders" appear in platforms' transparency reporting.
Platforms should be required to retain data about decision-making processes and disclose "information about the broader functioning of their systems":
- "distribution of errors in their content moderation that might highlight weak areas in platforms' enforcement capacities";
-
"the effectiveness of interventions other than taking posts down like reducing the distribution of or labelling certain posts";
-
"how changing platform design and affordances (for example, making reporting mechanisms easier to use or increasing the number of emoji reactions users can give posts) affects the way users use their services."
Some of this is now making its way into legislation: the Platform Transparency and Accountability Act includes "a conditional safe harbor for platforms from liability for violation of privacy or cybersecurity laws if they exercise reasonable care when providing access in accordance with such mandates."
Douek proposes annual reporting requirements, forcing companies to disclose their moderation plans. These disclosures must "publicly explain the purpose of their rules, how they will enforce them, and how they will guard against risks to such enforcement."
Douek argues that this is a powerful means of providing regulators "with basic information they currently lack about the systems they seek to regulate." She sees four main benefits here:
i. "Requiring planning forces platforms to think proactively and methodically about potential operational risks" – the opposite of "move fast and break things";
ii. "Creat[ing] documentation of decisions and their rationales, facilitating future review and accountability";
iii. "Transparent plans facilitate broader policy learning for regulators and across industry… [showing] industry best (or worst) practices."
iv. "[Facilitating] public involvement and comment."
Douek acknowledges that all of this is dependent on effective quality assurance, and that this is hard. But: "The only thing worse than trying to define 'quality' is not trying."
When it comes to appeals, douek sees a role for "aggregated claims." These would review "all adverse decisions in a certain category of rule violation over a certain period." This would help "identify institutional reform measures that could address system-wide failures or highlight trends and patterns." This isn't just a more efficient way of dealing with claims – it's a way of fixing systemic problems.
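As a thought experiment, aggregated review is easy to picture: pool every adverse decision in a rule category over a period and look at overturn rates, instead of hearing appeals one at a time. The record format and the 10% alarm line below are made up for the sketch.

```python
from collections import defaultdict

def aggregate_review(decisions, overturn_alarm=0.10):
    """Summarize appeal outcomes per rule category.

    decisions: iterable of (rule_category, was_overturned) pairs drawn
    from one review period. Returns the overturn rate for categories
    where errors look systemic rather than one-off.
    """
    tallies = defaultdict(lambda: [0, 0])  # category -> [overturned, total]
    for category, overturned in decisions:
        tallies[category][0] += int(overturned)
        tallies[category][1] += 1
    return {
        category: overturned / total
        for category, (overturned, total) in tallies.items()
        if overturned / total >= overturn_alarm
    }
```

An individual appeal can only fix one post; a category whose overturn rate keeps tripping the alarm is evidence that the rule, the classifier or the moderator guidance needs redesign.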
This may all seem dissatisfying, a slow-moving, largely self-regulating process that has few teeth. But douek argues that this is the first step to better regulation – just as the FTC's practice of forcing companies to publish and stand behind their privacy policies created evidence and political will for privacy regulation. In other words, it's the essence of "move slow and fix things."
I found this paper very exciting, especially in its diagnosis of the problems with first-wave content moderation and its identification of the systemic issues that approach left out. That said, I found the critique a lot crisper than the recommendations – which is par for the course, I think. A good analytical framework for understanding systemic problems is a necessary starting point for crafting a remedy. Douek's done something very important here, but I think there's lots of work to do on what comes next.
(Image: 陳韋邑, CC BY 3.0, modified)
Hey look at this (permalink)
- The Emu Wars https://twitter.com/garius/status/1167470771696033792 (h/t Fipi Lele)
- 18-year-old Steve Jobs' application to work at Atari https://www.flickr.com/photos/jurvetson/51932603398/
- Dear Surfshark, Please Fire Me https://www.youtube.com/watch?v=32JcVrowndw (h/t Garbage Day)
This day in history (permalink)
#10yrsago George Dyson’s history of the computer: Turing’s Cathedral https://memex.craphound.com/2012/03/12/george-dysons-history-of-the-computer-turings-cathedral/
#10yrsago Newspapers moot dropping Doonesbury during transvaginal ultrasound plot https://bleedingcool.com/comics/doonesbury-pulled-over-rick-perrys-transvaginal-exams/
Colophon (permalink)
Today's top sources:
Currently writing:
- Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. Friday's progress: 510 words (72266 words total).
- Vigilant, Little Brother short story about remote invigilation. Friday's progress: 253 words (5406 words total)
- A Little Brother short story about DIY insulin PLANNING
- Moral Hazard, a short story for MIT Tech Review's 12 Tomorrows. FIRST DRAFT COMPLETE, ACCEPTED FOR PUBLICATION
- Spill, a Little Brother short story about pipeline protests. FINAL DRAFT COMPLETE
- A post-GND utopian novel, "The Lost Cause." FINISHED
- A cyberpunk noir thriller novel, "Red Team Blues." FINISHED
Currently reading: Analogia by George Dyson.
Latest podcast: All (Broadband) Politics Are Local https://craphound.com/news/2022/03/06/all-broadband-politics-are-local/
Upcoming appearances:
- Competition & Regulation in Disrupted Times (Charles River Associates/Brussels), Mar 31
https://www.cra-brusselsconference.com/ -
Emerging Technologies For the Enterprise, Apr 19-20
https://2022.phillyemergingtech.com
Recent appearances:
- Safety Orange (This Week in Tech) https://twit.tv/shows/this-week-in-tech/episodes/865
- Seize the Means of Computation (Boston Computation Club) https://www.youtube.com/watch?v=QzzIftKz600 (video) https://castro.fm/episode/cBq9TF (audio)
- The Policy Implications of Web3 (Stanford Cyber Policy Center) https://www.youtube.com/watch?v=Fir5ujS-Dkg
Latest book:
- "Attack Surface": The third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html
-
"How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59 (print edition: https://bookshop.org/books/how-to-destroy-surveillance-capitalism/9781736205907) (signed copies: https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html)
-
"Little Brother/Homeland": A reissue omnibus edition with a new introduction by Edward Snowden: https://us.macmillan.com/books/9781250774583; personalized/signed copies here: https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html
-
"Poesy the Monster Slayer" a picture book about monsters, bedtime, gender, and kicking ass. Order here: https://us.macmillan.com/books/9781626723627. Get a personalized, signed copy here: https://www.darkdel.com/store/p1562/_Poesy_the_Monster_Slayer.html.
Upcoming books:
- Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin, nonfiction/business/politics, Beacon Press, September 2022
This work licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
https://pluralistic.net
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
https://mamot.fr/web/accounts/303320
Medium (no ads, paywalled):
https://doctorow.medium.com/
(Latest Medium column: "What is 'Peak Indifference?'" https://pluralistic.net/2022/03/06/what-is-peak-indifference/)
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
https://twitter.com/doctorow
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla