Nonstandard Measures

‘Stay Down’ rules reinforce monopoly and do nothing to put money in working creators’ pockets

[Image: A trio of sinister black robots on a black background; their eyes and the rollers on their tank-treads are circle-c copyright symbols. The front-and-center robot has a chest display that reads 'STAY DOWN.']

The U.S. Copyright Office has issued a Notice of Inquiry, seeking comment on whether online services should be legally required to filter all their users’ communications to block copyright infringement, as part of a “Stay Down” system.

The idea is that once a copyright holder notifies a service provider that a certain work can't legally be posted, the service must thereafter filter all of its users' communications to ensure that the notice is honored.

I think that creators and creators’ groups should oppose this. Here’s why.

The “standard measures” being discussed are not standard. Indeed, they’re largely found in just two companies: Google (through its Content ID system for YouTube) and Meta/Facebook. There’s a reason only two companies have these filters: They are incredibly expensive. Content ID has cost $100,000,000 and counting (and it only does a tiny fraction of what is contemplated in the proposed rule).

That effectively cements Googbook as the permanent rulers of the internet, since they are the only two social media companies that can afford this stuff.

A nearly identical proposal to this one — Article 13 of the Copyright Directive, since renumbered to Article 17 — went through the EU Parliament in 2019, and both Facebook and YouTube came out in favor of it. They understand that this is a small price to pay for permanently excluding all competitors from the internet.

(It’s worth noting that actually implementing Article 17 with automated filters is likely a violation of both the e-Commerce Directive and the GDPR, both of which ban automated judgements of user communications without explicit opt-in and consent, and there’s every chance that Article 17 will not survive a constitutional challenge in the European Court of Justice.)

Now, some people may be thinking, why should I care if Googbook get to take over the internet, so long as they’re forced to police my copyrights?

I think those people are going to be very disappointed, for three reasons:

  1. Filters don’t work;
  2. Filters enable wage theft and censorship;
  3. The most important factor determining compensation is competition, not copyright enforcement.

Filters don’t work

Filters are blunt instruments. It's been nearly 15 years since someone representing the Isaac Asimov estate fraudulently removed my debut novel from online services I'd authorized to carry it. My book carried a favorable blurb from Gardner Dozois, editor of Isaac Asimov's Science Fiction Magazine, and the automated process the Asimov rep used to identify infringing works falsely flagged my book as one of them. A decade later, filters hadn't improved, and Rupert Murdoch's lawyers fraudulently removed another of my novels.

Contemporary filters are not any better: They are rife with both false positives (wrongly identifying works as matches) and false negatives (when a copyrighted work is missed). That’s because certain phrases, sounds or arrangements of pixels are common to multiple works, and because quotation and incidental inclusion (such as a photograph of a street demonstration that includes a bus-shelter ad with a copyrighted stock image) are unavoidable in any broad-scale filter.

For example:

  • YouTube’s filter removed all the audio from a 7-hour scientific symposium because some copyrighted music was played over the venue’s PA system during the lunch break.
  • Videos of birdsong are removed from YouTube because Rumblefish, a stock audio company, has claimed its own recordings of birdsong, and these generate false matches.
  • A panel of some of America’s foremost copyright experts gathered to discuss the controversial “Blurred Lines” decision. Their discussion was removed from YouTube because it included short snippets of the relevant works. These experts — again, some of the country’s leading copyright authorities — could not navigate YouTube’s content “put-back” system to get their video reinstated (they ended up creating a bad publicity storm that led to reinstatement — this is not a solution available to the average person).
  • New York Times bestselling author Lindsay Ellis depends on her 1,000,000+ YouTube subscribers to sell books. Her videos have been taken down so often that she’s abandoned much of the (noninfringing) material she used to include. In particular, she was scared off by the “copystrike” system that would terminate her account (and tank her writing career) if she continued to complain.

Despite all these false positives, the labels, publishers and other rightsholder groups backing “stay down” filters will all tell you that Content ID routinely allows infringing materials through. That’s because it is such a blunt instrument: The matching heuristics it uses are easy to study and evade…if you are an actual pirate.

So pirates study the system, figure out how to evade it, and post with impunity. By definition, then, filters only catch the people who don’t think they’re doing something infringing — while deliberate infringers slip through.
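The asymmetry described above is easy to demonstrate. Here's a minimal sketch (toy code, not any real filter's algorithm) of a naive fingerprint matcher: it flags an innocent reviewer who quotes a few words verbatim, while a deliberate pirate defeats it with trivial edits. The texts and the 4-word "shingle" scheme are invented for illustration.

```python
# Toy fingerprint filter: a work matches if it shares any 4-word
# "shingle" (overlapping window of words) with a registered work.

def shingles(text, n=4):
    """Break text into overlapping n-word fingerprints."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_flagged(candidate, registered):
    """Flag the candidate if it shares any fingerprint with the registered work."""
    return bool(shingles(candidate) & shingles(registered))

registered = "the quick brown fox jumps over the lazy dog at dawn"

# An innocent reviewer quotes five words verbatim -- flagged,
# because the filter can't tell quotation from piracy.
review = "i loved the line the quick brown fox jumps in this story"
print(is_flagged(review, registered))   # True: a false positive

# A pirate reposts the whole work but inserts a junk word every few
# words, so no 4-word window survives intact -- not flagged.
words = registered.split()
pirated = " ".join(w + (" zz" if i % 3 == 2 else "") for i, w in enumerate(words))
print(is_flagged(pirated, registered))  # False: a false negative
```

Real filters use fancier acoustic and visual fingerprints than word shingles, but the structural problem is the same: quoters don't study the matcher, pirates do.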

Filters enable wage theft and censorship

As the examples of Ellis and the copyright experts above make clear, the technical functioning of a filter is only half the story — the other half is the administration of the filter, that is, who gets to register a work, how disputes are adjudicated, etc.

On this administrative layer, filters have shown themselves to be tilted toward bad actors and away from working artists.

The most notorious examples involve classical musicians, who perform public domain works. Sony Music has registered claims on its own vast library of classical recordings, and the filters can't distinguish Sony's performances from other working artists' performances of the same compositions. This hits performers directly, and it was especially bad during lockdown, when online performances were many performers' only source of revenue.

But it also affects non-performers, including music teachers, who face account termination for posting instructional videos (funded by Patreon and other mechanisms) that teach the public how to play classical compositions.

Content ID’s “standard measures” allow putative rightsholders to claim the ad revenue from “infringing” videos. This has led to mass-scale wage theft from classical performers whose works are claimed by Sony.

Sony doesn’t confine its wage theft to musical performers. It also used false copyright claims to remove stock video footage posted by the footage’s creator (in other words, a guy sold Sony a license to some stock footage, and then they took down his own videos claiming he’d stolen the clip from them).

Sony is not alone in this regard: WarnerMedia used filters to remove videos that include the numbers “36” and “50.”

There’s an even more bizarre case, involving Warner Music, and it is so goddamned weird that it will require some explanation.

First, here’s a video explaining it:

Here’s my article breaking it down.

Here’s the tldr: Warner lost a $2.8m copyright lawsuit in which the Christian gospel act Dark Horse claimed that a Katy Perry song it had released was too similar to Dark Horse’s own work. A Katy Perry fan posted a video defending Perry and Warner, playing the Dark Horse clip to show that it was not similar to Perry. Warner then sent an automated takedown over the Dark Horse clip, claiming it was a Katy Perry clip.

When the creator disputed this, Warner manually affirmed that the clip in question was from Katy Perry, doubling down on its removal demand for this poor guy’s video. Remember, Warner had previously argued in court that this specific clip was easily distinguished from Katy Perry’s own performance.

The big labels, and other big rightsholders, are incredibly sloppy, possibly even maliciously so, in wielding their privileged position in YouTube's Content ID ecosystem: they routinely overclaim, and they refuse to back down when they get it wrong, to the detriment of working creators and of others who simply use the system as part of their daily lives (scientific symposia, for example).

The reason they’re able to do this is that the appeals system for Content ID is literally impossible to navigate (recall that in the previous section, the nation’s foremost copyright experts couldn’t get it to work). Here’s a breakdown of how it works:

The fact that the system is so byzantine that actual creators can’t figure out how to use it has opened the door to other bad actors, including outright criminals who lodge false claims (“copystrikes”) against working artists and threaten to cross the account-deletion threshold of claims if the artists don’t pay protection money.

It’s not just criminals, either — anyone who wants material removed from the internet can use false copyright claims, from the Chinese Communist Party to dirty cops, who play loud Taylor Swift performances from their phones during their interactions with the public in a bid to prevent video recordings from being posted online.

The most important factor determining compensation is competition, not copyright enforcement

It’s worth asking why the system isn’t better administered, then? This has two answers:

  1. The scale. YouTube gets hundreds of hours of video posted every minute. Copyright questions are fact-intensive (for example, distinguishing a skilled classical YouTube performer's Beethoven recording from Sony Music's, or figuring out whether a copyright professor may play a Pharrell Williams clip in a learned discussion of the Blurred Lines case). YouTube doesn't just need tens of thousands of moderators to assess claims; it needs tens of thousands of skilled moderators, people with extensive training in multiple copyright systems. There literally aren't enough such people alive today to fill the role;
  2. The scale (again). Google and Facebook have monopolized online advertising (80% of search and display ads flow through Googbook), largely through anticompetitive mergers (for example, Google buying YouTube when its own Google Video service flopped). As Lily Tomlin used to say, "We don't have to care, we're the phone company." Monopolists don't need to keep their customers or suppliers happy, because those customers and suppliers have nowhere else to go.
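The first point is, at bottom, arithmetic. A back-of-envelope sketch makes it concrete (the 500-hours-per-minute upload rate is an assumed, commonly cited figure, not an audited number):

```python
# Back-of-envelope moderation math, using assumed YouTube-scale figures.

hours_uploaded_per_minute = 500      # assumed upload rate
minutes_per_day = 24 * 60            # 1,440 minutes in a day

hours_uploaded_per_day = hours_uploaded_per_minute * minutes_per_day
# 500 * 1,440 = 720,000 hours of new video every single day

review_hours_per_moderator_day = 8   # one full shift, watching at 1x speed
moderators_needed = hours_uploaded_per_day / review_hours_per_moderator_day

print(f"{hours_uploaded_per_day:,} hours/day -> {moderators_needed:,.0f} moderators")
# 720,000 hours/day -> 90,000 moderators
```

And that head count only covers watching each video once; the fact-intensive copyright judgment calls described above would multiply it many times over.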

Big Tech’s merger-to-monopoly has run in parallel to monopolization across multiple sectors, including publishing, where we are about to be reduced to just four major publishers.

These mergers have a negative effect on writers’ incomes because large firms don’t have to worry about their writers seeking better deals elsewhere — after all, four or five publishers can easily converge on a set of unfavorable-to-writers terms without having to explicitly collude (though they may do that, too).

Writers have seen our ebook rights, then worldwide English rights, then audiobook and graphic novel rights, become nonnegotiable in standard Big Five (Soon-To-Be-Big-Four) contracts. All of the major publishers’ standard contracts now include a nonnegotiable morals clause allowing them to cancel any book deal at any point if an online scandal erupts over the writer’s conduct, alleged or proven.

Filters are effectively useless at preventing copyright infringement, but they do constitute a major barrier to entry for new companies — and they also stand in the way of breaking up Big Tech.

After all, if we say that anyone offering a public speech forum needs $100,000,000 (plus!) to build their own Content ID, we both exclude any new market entrants who don’t have a hundred mil, and we establish that Google can’t be subjected to breakups that would make it unable to afford to build and maintain filters like Content ID.

We know that big companies just don’t have to care as much as smaller, hungrier ones who fear competitors. A scrappier, more scared YouTube wouldn’t make idiotic errors like offering its creators a library of “copyright free” music that is actually…copyrighted.

A scrappier, more scared online sector would be more responsive to appeals, too. Jamie “JWZ” Zawinski owns the DNA Lounge, San Francisco’s leading independent (non-TicketMaster/LiveNation) club. A band he booked supplied him with a promotional video for an upcoming appearance, which Instagram’s (Facebook!) filter blocked. It took him 28 months to reinstate that video — 27.5 months after the band had come and gone.

The fixation on Big Tech stealing “content” misses the real problem, which is Big Tech stealing money. 200 newspapers have sued Googbook for colluding to rig ad markets to misappropriate hundreds of millions in funds owed to the papers.

A coalition of independent authors using ACX — the self-serve audiobook platform run by the monopolist Audible — say that they’ve had tens of millions stolen from them.

These companies are able to steal our wages because they are monopolists. Filters reinforce monopoly by creating durable barriers to entry and also durable barriers to breakup.

Focusing on stay-down is an exercise in looking for our keys under the lamppost — we don’t think we can move regulators and lawmakers to do something about corporate dominance, so we seek out remedies that make us feel better by cracking down on users.

But this will absolutely bite writers in the ass.

Think of Philip José Farmer's quotations from Alice in Wonderland in the Riverworld books. Do you think that a filter registering the Farmer estate's copyright claims on those books will understand that when you or I quote Alice in Wonderland in our own books, we're not pirating Farmer?

Do you think that Google or Facebook or Apple or Amazon or Microsoft’s appeals system will be able to figure out what’s going on? Are you prepared to have your own work withdrawn from circulation for 28 months while that’s happening?

Are artists’ rights groups prepared to staff a separate volunteer committee that does nothing but plead with Big Tech when crooks beat us to registering our works with them and then file copyright claims against us when we try to post our own books?

“Standard Measures” are a terrible idea. Arts groups could do excellent work in fighting monopolies — for example, the DoJ is suing to block the Simon and Schuster/Penguin Random House merger (thanks in part to complaints from the Authors Guild), and there is merger scrutiny on Big Tech’s waves of acquisitions (more than one per week!). We could be in those dockets, demanding an end to wage theft. We could be amicus to the 200 newspapers suing Googbook.

But supporting “Standard Measures” is just a way to intervene on behalf of a handful of giant corporate monopolists who are hoping to shift a few balance points away from another handful of giant corporate monopolists. It only reinforces monopoly — and does nothing to put money in working creators’ pockets.
