Big-Tech-as-cop vs. abolishing Big Tech
Since late 2020, the EU’s Digital Markets Act (DMA) has made steady progress, and now, despite US Big Tech companies’ illegal lobbying, it has become law.
The DMA’s goal is a fairer, more pluralistic internet, not one consisting of five giant websites, each filled with screenshots of text from the other four.
Online consolidation is the result of three related phenomena:
- Network effects: some products and services get better the more they’re used — you signed up for Facebook to talk to the friends who were already there, and then other people signed up because they wanted to talk to you; you bought an iPhone because you wanted to use the apps in Apple’s App Store, then app developers made more iPhone apps because they wanted to attract your business.
- Mergers: Tech giants have almost unlimited access to the capital markets and use that to buy any startup that might grow to be a threat, and to merge with their biggest rivals to eliminate competition.
- High switching costs: Normally, it’s very easy to switch from one technology to another. Computers are universal, and the only computer we know how to make is the one that runs all the programs we know how to write. But tech companies have figured out how to use the law to make it illegal to create interoperable products, because if leaving a product or service means abandoning your friends, your apps, or the media you bought, you will stay with that product or service, even if you would prefer a different one.
The DMA’s tactic is aimed at that third factor: the artificially high switching costs that tech firms engineer into their products to trap you inside their walled gardens.
To lower those switching costs, the DMA will require Big Tech platforms (“gatekeeper platforms”) to offer APIs — gateways to allow other services to connect to their products — so that you can quit a service without abandoning the people, apps and media that matter to you.
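To make the idea concrete, here is a minimal, purely hypothetical sketch of what such an interop API enables: a dominant “gatekeeper” exposes an endpoint that a small rival can call, with the user’s consent, to import that user’s social graph. The class and method names (`Gatekeeper`, `export_contacts`, `LittleTechService`) are illustrative inventions, not any real DMA-mandated interface.

```python
# Hypothetical sketch of a DMA-style interop API. A "gatekeeper"
# platform exposes the user's social graph so that leaving the
# platform doesn't mean abandoning it. All names are illustrative.

class Gatekeeper:
    """Stand-in for a dominant platform holding users' social graphs."""
    def __init__(self):
        self._graph = {"alice": ["bob", "carol"], "bob": ["alice"]}

    def export_contacts(self, user, token):
        # In a real system the token would be an OAuth grant the user
        # approved; here any non-empty string stands in for consent.
        if not token:
            raise PermissionError("user consent required")
        return list(self._graph.get(user, []))

class LittleTechService:
    """A small rival that imports a user's graph via the mandated API."""
    def __init__(self):
        self.contacts = {}

    def onboard(self, user, gatekeeper, token):
        self.contacts[user] = gatekeeper.export_contacts(user, token)

big = Gatekeeper()
small = LittleTechService()
small.onboard("alice", big, token="user-approved-grant")
print(small.contacts["alice"])  # ['bob', 'carol']
```

The point of the sketch is the switching cost: once the gatekeeper is obliged to answer that call, quitting no longer means losing the people you came for.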
The DMA isn’t perfect. Rather than starting with social media (for example, by forcing Facebook to open up to independent, community-managed social media sites), the DMA starts with end-to-end encrypted messaging platforms like WhatsApp and Apple’s iMessage, on a timeline so tight that it risks exposing users to hackers, both criminal and governmental.
This is a tactical blunder, and it’s given ammunition to the DMA’s opponents, who’ve argued — incorrectly — that interoperable, secure messaging is impossible.
But even if secure, interoperable messaging were not on the table, the opponents of interoperability would still reject the DMA, because they view the scale of Big Tech platforms as a feature, not a bug.
The Quest for a Perfectable Zuck
For these opponents, the problem with the Big Tech platforms isn’t in their centralization, it’s in their mismanagement. In other words, the problem isn’t that Facebook’s CEO is the unelected, unaccountable, permanent social media czar ruling over three billion people — it’s that Mark Zuckerberg is bad at being that czar. These people want to see Zuck replaced or retrained.
They don’t want to abolish the office of unelected, unaccountable, permanent social media czar.
Why not? Because if all the world is gathered under a single tech platform, then that tech platform will be so profitable and well-resourced that it can protect those users, both from hackers and from each other.
A giant platform, the reasoning goes, can hire the very best security experts to design, monitor and defend its systems. It can also hire the wisest content moderators, the best algorithm designers and the best system designers to fight all the evils of online expression: harassment, hate speech, nonconsensual pornography (AKA “revenge porn”), child sex abuse material (AKA “child pornography”), doxing, copyright infringement, radicalization, conspiratorialism, disinformation, misinformation, and other digital communications that most of us (including me) don’t want to see, or would even like to see extinguished.
Out of Sight, or Out of Mind?
The distinction between “speech I don’t want to encounter” and “speech I don’t want to exist” is a crucial one.
Nearly everyone agrees that some communications shouldn’t exist at all: child sex abuse material, nonconsensual pornography, deceptive advertisements, dumps of stolen personal information… A moment’s reflection will yield up a long list of material you would like to see eliminated altogether.
But the quest for the Perfectable Zuck isn’t just about eliminating that odious, illegal material. It goes beyond that, into arguing for the banishment of “lawful but awful” speech — racism, conspiratorialism, homophobia, political extremism and many other types of speech that I deplore.
As much as I deplore that speech, and as much as I don’t want to encounter it in the conversational and public spaces I inhabit, I still think that people should be able to have private, consensual discussions where they express those views.
That is, as much as I wish that people didn’t hold and discuss antisemitic beliefs, I don’t think it’s my place to say, “You and your friends may not express antisemitism in your home, or clubhouse, or car, or campsite.”
I don’t want to live in an antisemitic society, and I also don’t want to live in a society that dictates what you can or can’t say (or believe) in private.
The Unperfectable Zuck
If the DMA is successful in (eventually) dismantling the dominance of the Big Tech platforms, then social media will become much more like a series of semi-private clubhouses, parties, homes and conference rooms. You will choose (or operate) a “little tech” site that links to many other systems, which will, in turn, link to more.
Just as importantly, your online space will block some of the other ones — perhaps the future equivalents of 8chan, or (depending on your political orientation) the future equivalent of /r/LateStageCapitalism.
In that world, there will be no Zuck The Great and Powerful who can, by edict, prohibit certain words, or acts, or material. The quest for the Perfectable Zuck will be moot, because no matter how perfect Facebook’s leader is, they will not be able to command the social media world.
Nor will any government, not fully. A government may be able to command the services hosted within its borders, or whose personnel are present within those borders.
Some governments may follow the lead of Russia and China and build national firewalls, and then institute blocks on any service that doesn’t maintain a presence within its borders, so that the state can physically lay hands on people and punish them for violating its rules.
These governments will have to contend with firewall circumvention — VPNs and other censorship-busting tools that allow people within their borders to access services hosted outside them, in defiance of the nation-state’s blocks.
In other words, in this federated future, the online world is much like the offline world, where no government can perfectly control what happens inside its borders, and where the ability of a government to dictate what happens outside its borders is severely constrained. It’s a future where governments must choose between sealing themselves off from the world or reconciling themselves to the fact that some of their people will traffic in forbidden information from outside of their borders.
Likewise, in this federated future, there will be no corporate executive who can, by fiat, decide that certain speech will be banned around the world.
When the west exported photocopiers behind the Iron Curtain, it celebrated their role in “samizdat,” the reproduction and distribution of forbidden information. But while the US State Department could celebrate the use of Xerox machines inside the USSR, it couldn’t control their use. Soviet Xerox machines might be used to reproduce pro-democracy pamphlets, but they also might be used to reproduce the Protocols of the Elders of Zion.
Dark Corners Considered Helpful
The internet was once all “dark corners” — that is, we once spent all of our time in little spaces: blogs, message boards and forums.
Today, when you hear about the “dark corners of the internet,” it usually refers to troll farms like 8chan, where conspiratorialism, hatred and violence are incubated.
But while these spaces are and always will be toxic, we can make them less important, not by getting rid of those spaces, but by eliminating the giant, monolithic, shared space where the hateful fever dreams that fester in those dark corners erupt into all our lives.
8chan is best understood as the barracks and training ground for an army whose battlefield is Facebook, Reddit, Twitter and other giant sites. The small groups of sociopaths who inhabit these dark corners spend most of their time figuring out how to take over and manipulate Big Tech and their teeming millions of normies.
There’s an analogy to Fox News here. Fox News has figured out how to game the US cable industry. American cable is dominated by a handful of monopolists who divide up the country so they do not have to compete with each other.
The cable operators pay for each channel they carry. Some of these channels are “basic” and available to every subscriber; others are “premium” and can only be viewed if you choose them and pay for them.
Any channel included in the “basic” tier is guaranteed a large monthly remittance from the cable operator, who must pay a royalty for every single subscriber, whether or not any of those subscribers ever watch that channel, since every one of those subscribers gets the channel in their basic package.
Fox News has a relatively small audience; smaller than most basic channels, and yet Fox is included in every basic cable package, everywhere in the country, and gets a subsidy from every single cable subscriber in America. What’s more, Fox costs much more than any other basic channel — your cable operator sends MSNBC $0.33 from your monthly bill, while Fox gets $2, roughly six times as much. This is why Fox can continue to operate despite the mass exodus of its most valuable advertisers — it doesn’t need advertisers, because every American cable subscriber sends it money every month, even if they hate it and want it to die.
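The carriage-fee arithmetic above is easy to check. The per-subscriber fees ($0.33 and $2.00) come from the text; the subscriber count below is an illustrative assumption, not a sourced figure.

```python
# Back-of-the-envelope arithmetic for basic-tier carriage fees.
# Per-subscriber fees are from the text; the subscriber base is an
# assumed round number for illustration only.

subscribers = 70_000_000   # assumed US basic-cable subscriber base
msnbc_fee = 0.33           # per subscriber per month (from the text)
fox_fee = 2.00             # per subscriber per month (from the text)

print(f"MSNBC monthly carriage: ${subscribers * msnbc_fee:,.0f}")
print(f"Fox monthly carriage:   ${subscribers * fox_fee:,.0f}")
print(f"Fox collects {fox_fee / msnbc_fee:.1f}x MSNBC's per-subscriber fee")
```

Because the fee is charged per subscriber, not per viewer, the payout is independent of whether anyone actually watches the channel — which is the whole point of the analogy.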
Thanks to the cable monopoly, you have no choice but to subsidize Fox. When your cable operator refuses to stand up to Fox, your choice is like it or lump it. If you want cable, you must subsidize Fox.
Real Life is a Filter Bubble
The Perfectable Zuck argument hinges on the fear that little tech won’t block the bad stuff that Big Tech will. But what about the opposite? What about the harassment, hate-speech, conspiratorialism, disinformation and trolling that Facebook, Twitter and Reddit won’t block?
What if you want to talk about your cancer treatments without enduring endless messages from cranks who insist that you should treat it with vitamins, or homeopathy, or magnets?
What if you could pick a cable operator that didn’t send Fox News $2/month from your cable bill?
The story we tell ourselves about the cranks we meet on the internet is that they are victims of a “filter bubble” — that they seek out and dwell in “echo chambers” where they never encounter disconfirming evidence that might lead them to challenge their bad beliefs.
But that’s got it backwards. In our racially, economically and politically segregated society, the most durable filter bubble we inhabit is real life in our real, physical neighborhoods.
The main reason the online world seems so full of people with bizarre beliefs is that online spaces are where we leave our filter bubbles and encounter people from different backgrounds, who have different points of view, in circumstances where it is socially acceptable to air those differences.
Neither extreme is healthy: our lives in the physical world would be much improved if we rubbed shoulders (and discussed our differences) with people who had different perspectives from ours.
But so, too, would our online lives be improved if we could choose when and how much our conversational spaces welcomed those with other points of view — like maybe you don’t want to hear from a Holocaust denier on Yom Kippur (or ever).
Unspeakable
But what about the truly unspeakable: child sex abuse material, nonconsensual pornography, identity-fraud-friendly dumps of hacked private information, deceptive advertising, and the like?
The DMA contemplates that there will be some means of removing this material from the “open,” federated system that it will create. This would involve some kind of automated coordination system where police, governments, Big Tech “trust and safety” teams, and other “trusted reporters” would broadcast messages that say “Here is the fingerprint of something illegal: if you see it on your service, you must delete it.”
Even if this system works as intended, there are lots of ways it can go wrong: these lists will be massive, and large numbers of people will be able to add to them, and by design, no one is supposed to be able to use the “fingerprint” of banned content to reconstruct the banned material. That means that no one will be able to independently verify whether something that’s been banned is actually illegal.
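A minimal sketch of the fingerprint scheme described above, using a plain SHA-256 hash as the “fingerprint.” (Real deployments use perceptual hashes such as PhotoDNA, which survive re-encoding and cropping; the one-way property illustrated here is the same.)

```python
# Sketch of fingerprint-based content blocking: trusted reporters
# broadcast hashes of banned material, never the material itself.

import hashlib

def fingerprint(content: bytes) -> str:
    """One-way fingerprint of a piece of content."""
    return hashlib.sha256(content).hexdigest()

# The broadcast blocklist contains only fingerprints.
blocklist = {fingerprint(b"forbidden-file-contents")}

def must_delete(uploaded: bytes) -> bool:
    """A service checks each upload against the blocklist."""
    return fingerprint(uploaded) in blocklist

print(must_delete(b"forbidden-file-contents"))  # True
print(must_delete(b"an ordinary cat photo"))    # False
```

The hash is one-way by design: a service holding only the blocklist cannot reconstruct the banned material, which is exactly why it also cannot independently verify that a listed item is actually illegal.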
This is a system ripe for abuse, and historically, systems like this have been abused. National blocklists that were only supposed to contain the very worst content — child sex abuse material — turned out to be stuffed full of things that governments simply disliked: gambling sites, assisted suicide instruction, file sharing sites.
But that’s only a problem if the system works, and I don’t think it will. If the DMA really does conjure into being an “open” federation of online services, then these services will be able to reject the “delete this content” message.
Even if the default builds of programs like Mastodon and Diaspora — designed to host and federate small communities — are modified to respond to content deletion messages, those modifications can be removed by their administrators.
These are free/open source software packages, and that means that the people who run those programs are free to alter them. There is no way to have an “open” ecosystem where it is technically impossible to alter the software that it runs on.
This isn’t to say that there won’t be legal tools that European regulators can use on these smaller platforms. It’s conceivable that the ultimate form of the DMA’s interop mandates will require online communities to honor deletion messages.
Enforcing that rule will be challenging (because the point of the exercise is to create a lot of online spaces, more than can be easily monitored or governed), but by the same token, gaps in regulation won’t be as important — a deceptive ad that is only shown to eight members of a micro-community might harm those eight people, but that is far less harm than it might wreak on centralized systems where millions of users are exposed to it.
What We Talk About When We Talk About Perfectable Zuck
Behind the idea of the Perfectable Zuck is a desire for corporations so powerful that only the largest governments can stand up to them. It is a vision of a world where giant online monopolists continue to decide what we can and can’t see — but where the most powerful governments give them marching orders.
This isn’t just dystopian — it’s incoherent. The more powerful a corporation is, the bigger a government has to be to push it around. This means that smaller states will be powerless to exert any technological self-determination on behalf of their people, while large, powerful autocratic states will be able to harness Big Tech as agents of oppression.
Bringing every online user under the thrall of a handful of giant corporations will mean that whatever these corporations ban will be blocked for all of us, with or without our consent — and whatever they permit will be rammed down our throats, whether or not we want it in our online lives.