What the tech giants' content takedown and restoration systems tell us about our free-expression priorities
Last week, the leftist British outlet Novara Media (disclosure: I have been a guest on some Novara programs) was kicked off YouTube for “repeated violations” of the service’s policies. Novara’s workers were alarmed, dismayed and outraged in equal measure. After all, the channel had only ever attracted one violation warning from YouTube, and that had been an error — something YouTube itself had acknowledged after further investigation.
A few hours later, Novara’s channel was restored. The New York Times took notice, saying that the incident “shows [YouTube’s] power over media.” Which, you know, fair enough. YouTube is dominant, thanks to its parent company (Google, masquerading as a holding company called Alphabet that exists as an accounting fiction). Google is able to self-preference by giving pride of place to YouTube in its search results, and it is able to plug YouTube into its rigged ad-auction business, where it illegally colludes with Facebook to maximize its profits from ads while minimizing the amount it passes on to the creators who make the videos those ads run on.
But in emphasizing the significance of a place on YouTube to the fortunes of video creators, the Times’s headline writer missed an equally important story of tech and policy: the brokenness of YouTube’s fake civil justice system in which deleted creators appeal the inscrutable judgments of moderators.
After Novara’s channel was deleted, the group tried to find out more. The email that YouTube sent announcing the removal was terse (“YouTube said Novara was guilty of ‘repeated violations’ of YouTube’s community guidelines, without elaborating”). Novara editor Gary McQuiggin “filled in a YouTube appeal form,” which disappeared into the ether.
Then McQuiggin went to YouTube’s creator support chatbot, which introduced itself as “Rose.” What happened next is the stuff of Kafkaesque farce:
“I know this is important,” [Rose said,] before the conversation crashed.
The Times’s story quite rightly homes in on the problems of doing content moderation at scale without making errors. It notes that YouTube deletes 2,000 channels every hour to fight “spam, misinformation, financial scams, nudity, hate speech.”
Novara got its channel restored by making a giant stink. The channel and its hosts have millions of followers, who could be mobilized to publicly roast YouTube for its high-handed tactics. This is a common enough story, really: When a high-profile account is suspended out of error or malice or incompetence, the victim of the deletion makes a lot of noise, the platform’s press team notices and contacts the moderation team, and the deletion gets reversed. Or the deleted person knows a senior person at the platform and they make a phone call or send an email, and someone has a word with someone who has a word with someone and the account goes back up.
That’s how it worked when I got kicked off Twitter for adding the people who trolled me to a list called “Colossal Assholes.” The “make a giant noise or know someone in senior management” method works pretty reliably (notwithstanding Alex Jones, Milo Yiannopoulos and, of course, Donald Trump), but it doesn’t scale.
It’s conceivable that only high-profile channels run by well-connected people are removed in error, and the rest of the 2,000 deletions YouTube effects every hour, 24 hours per day, are all perfectly justifiable.
But it’s far more likely that the takedowns of high-profile channels are completely representative — that is, that every channel, account, and feed has the same likelihood of facing an erroneous suspension or deletion (indeed, it’s a near-certainty that high-profile channels are less likely to be suspended in error, so normal people are even more likely to face a bad takedown).
The platforms remove content in the blink of an eye, often through fully automated processes (such as copyright filters). Takedown systems are built without sparing any expense (YouTube’s copyright filter, Content ID, cost the company $100,000,000 and counting).
But the put-back processes — by which these automated judgments are examined and reversed — are slow-moving afterthoughts. If you’re a nightclub owner facing a takedown of the promo material sent to you by next week’s band, the 2.5-year delay you face in getting that content put back up is worse than a joke.
You can tell what a system is for by what it does. The big platforms are among the most vertically integrated monopolies in the world. Their millionaire PR flacks insist that they literally couldn’t operate if they were forced to divest themselves of the companies they gobbled up.
And yet…these companies miraculously manage to operate critical functions of their businesses — the vast armies of moderators that make fine-grained judgment calls about what is and isn’t permissible, calls that can cost creators their livelihoods, leave harassed users helpless, spread dangerous disinformation, even abet genocide — as separate, subcontracted businesses run in distant call centers over which the platforms have no meaningful oversight or control.
These outsourced moderators are the platforms’ tonsils, absorbing all the mind-searingly horrific garbage the worst users post and the blame when a takedown goes wrong.
The platforms in-house the parts of the business they care about and outsource the parts they don’t. It’s as simple as that.
The reality is that there is no army of moderators big enough to evaluate 2,000 account deletions per hour. The factual record that has to be developed and examined, and the nuanced thought that has to be applied to an appeal, are irreducibly time-consuming and labor-intensive.
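Just to make the scale concrete, here’s a rough back-of-envelope sketch. The 2,000-deletions-per-hour figure is the one quoted from the Times above; the half-hour review time and eight-hour shift are my own illustrative assumptions, not YouTube’s actual numbers.

```python
# Back-of-envelope: how many full-time reviewers would it take to give
# every deleted channel a careful appeal review?
# The 2,000/hour deletion rate is from the article; the review time and
# shift length below are illustrative assumptions, not YouTube figures.

deletions_per_hour = 2_000
hours_per_day = 24
review_minutes_per_appeal = 30   # assumed: time to build and weigh a factual record
shift_hours_per_reviewer = 8     # assumed: one working day per reviewer

deletions_per_day = deletions_per_hour * hours_per_day                      # 48,000
review_hours_per_day = deletions_per_day * review_minutes_per_appeal / 60   # 24,000
reviewers_needed = review_hours_per_day / shift_hours_per_reviewer          # 3,000

print(f"{deletions_per_day:,} deletions per day")
print(f"{review_hours_per_day:,.0f} review-hours per day")
print(f"~{reviewers_needed:,.0f} full-time reviewers, every day, if every deletion were appealed")
```

Even with those generous assumptions, you land on thousands of skilled, context-aware reviewers working around the clock — and that is before you account for languages, dialects, and the in-jokes discussed below.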
We, as a society, have to make a call: Do we value expeditious removal more than the right to free expression? Because the only way to treat channel restoration as co-equal with channel deletion is to slow down deletions, by narrowing the grounds for deletion and increasing the evidence required to justify one.
Some of this is down to diseconomies of scale: YouTube’s takedown regime has to contend with 500 hours’ worth of new video every minute, in every language spoken. It has to parse out in-jokes and slang, obscure dialects, and even more obscure references. Smaller services, run by the communities they serve, may have fewer resources, but they also have more context, and they don’t need to develop the expertise required to decode a stranger’s references.
Alas, the world’s governments are responding to the very real problems of toxic online material with rules that entrench the platforms’ scale, widen the criteria for deletion, and shrink the time platforms have to respond to complaints, ensuring that the tempo of removals will go up, and with it, the number of dolphins caught in the moderators’ tuna-nets.
In Germany, Australia, Canada, the UK, and Turkey, “online harms” rules have multiplied, creating obligations for platforms to identify both “illegal” and “harmful” content and remove it in impossibly tight timeframes (France’s one-hour removal rule was so broad that it was struck down as unconstitutional).
These rules don’t just multiply platforms’ errors — they also require platform-sized companies. The kind of small, community-run service that might be able to parse out the fine details of norms, references, and context, and could do a better job of removing bad stuff and leaving (or reversing judgments against) good stuff, can’t afford the brutal liability these laws impose when platforms get it wrong. Online harms laws are a way to corral every discussion and every community within the platforms’ walled gardens, utterly dependent on the shoot-now, ignore-questions-later method of content removal.
Earlier this year, the European Union unveiled its Digital Services Act, a sweeping regulation to reduce the power of platforms (say, by banning the self-preferencing that makes YouTube so powerful). In the months since, EU lawmakers have adopted a series of absolutely idiotic amendments to the Act, converting it into yet another online harms bill, this one reaching 500 million Europeans.
The platforms have too much power — more than they can wield with any consistency or justice. We can order them to make better judgment calls, or faster judgment calls, but not both, and whichever choice we make, we’ll just be handing them the greatest gift a monopolist could ask for — an end to new competitors entering the field.
If you think it’s hard to make good policy under the looming shadow of these globe-straddling colossi, just give ’em a decade or two without having to worry about a pretender to the throne coming up behind them, and see what you get. Remember, a rule is only as good as your ability to enforce it.
The original Digital Services Act had some damned good ideas for weakening the platforms to the point where we could finally get a grip on them and hold them down, forcing them to do what’s good for us, even when it’s bad for their shareholders.
The new DSA is a gift-wrapped license for perpetual, global internet domination, at the piddling expense of a few thousand more error-prone moderators and the code to run even-more-error-prone “AI” moderators that operate at a pace far too rapid for anything like justice.
The platforms simply can’t govern well and wisely. They shouldn’t govern at all. No one elected them and they are accountable to no one save their shareholders. That’s no way to run a world.