Pluralistic: Links, dumped (10 June 2023)

Today's links

A selection of sweets from a licorice allsorts assortment, spread out on a piece of brown cardboard.

Links, dumped (permalink)

It's Saturday, which means: time for a linkdump post! I'm back from my book-tour across the US, Canada and the UK, finishing up in Berlin – and I'm jetlagged in the backyard hammock, waiting for my laundry to come out of the machine and plowing through a long backlog of interesting links. Let's goooooooo!

It's Pride month, and Pat Robertson kicked it off with a bang by…kicking off. What better way to start Pride than with a piece of fanfic by Wil Wheaton about Robertson's arrival in Hell?

Once you've had your fill of schadenfreude, get your Pride/Trekkie (or, if you prefer, Trekker) gear on with Wil's Acting Ensign Pride collection:

Segueing smoothly into science fiction by way of Trek, let's turn to Ted Chiang, a titan of the field, who is also a wicked-sharp critic of AI hype. Ted's been at this since at least 2017, when he identified tech CEOs' fears of AI as a form of transference of their fear of corporations, which are, after all, autonomous artificial life-forms that are rapidly devouring the human race:

When it comes to inventing catchy, devastating metaphors for AI, this is Ted's busy season. "AI is a blurry JPEG of the web":

It's also "the new McKinsey & Co":

And now, in the Financial Times, Ted sits down for lunch with Madhumita Murgia to talk about the linguistic game we play when we describe a statistical tool as "artificial intelligence," given that it is neither "artificial," nor "intelligent":

Start with whether machines "learn": "machine learning" is just adjusting weights in a statistical model. When you teach a child something, you're not adjusting weights! "Machine learning" is a useful metaphor for thinking about a subset of applied stats, but it's also a trap, tricking us into unconsciously anthropomorphizing an intelligence behind plausible sentence generators. We talk about AI models "hallucinating" – another linguistic trap – but the real "AI hallucination" is when we wet human people hallucinate a dry, electronic intelligence behind the plausible sentences.
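To make Ted's point concrete: the "learning" really is just numerical weight adjustment. Here's a toy sketch (my own illustration, not any production ML system) that "learns" the slope of a line by repeatedly nudging a single weight to shrink the squared error:

```python
def fit_weight(xs, ys, lr=0.01, steps=1000):
    """Fit y = w * x by gradient descent on mean squared error.

    All the "learning" is the line `w -= lr * grad` -- adjusting a
    weight in a statistical model, nothing more.
    """
    w = 0.0
    for _ in range(steps):
        # Gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad  # nudge the weight downhill
    return w

# Given points on the line y = 2x, the weight converges toward 2.
print(fit_weight([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))
```

Nobody would say this loop "understands" lines, but scale the same weight-nudging up to billions of weights and the anthropomorphic vocabulary starts to creep in.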

Calling it AI, saying that it learns or hallucinates or knows or understands – these are hallucinatory traps for wet squishy humans. Even the fact that chatbots use the pronoun "I" is a slippery slope into imagining an intelligence on the other side of the keyboard.

What should we call this discipline, if not "AI"? Ted says, "applied statistics."

Applied statistics can automate a lot of work away, but what it's best at is automating the bullshit jobs that David Graeber (rest in power) described in his brilliant 2018 book:

An academic friend tells me that they use LLMs to write recommendation letters for grad students, and that grad students use LLMs to take minutes on department meetings. Presumably, someone else is using LLMs to ingest and summarize these recommendation letters and departmental meetings. LLMs can inflate a few bullet points into several florid paragraphs, and deflate them back into bullet points, with significant semantic losses on the way.

Is there a non-bullshit use for these? Maybe. I have spent a lot of my activist career as an anti-bullshit actor. For example, I was among the first public interest delegates to WIPO, the most industry-captured UN specialized agency, which has the same relationship to terrible copyright proposals that Mordor has to evil – a kind of infinite wellspring of bad ideas.

These ideas emerge out of an extremely bullshit process. The Standing Committee on Copyright and Related Rights meets periodically in Geneva for a highly stylized, multi-day session in which national delegates rise and deliver cryptic, stilted remarks about various proposed clauses to awful treaties like the Broadcast Treaty.

These remarks are recorded by the Secretariat, who then offers each delegation the opportunity to redact or alter the official record of their remarks. Then, six months later, the Secretariat publishes this revisionist version of the session, which would be impossibly dull and cryptic even without all the revisions. This is the Shield of Boringness (h/t Dana Claire) in action.

One day at one of these meetings, Wendy Seltzer had a brilliant idea: let's make our own transcript. There was no public internet at WIPO back then, so we set up an ad-hoc network off one of our laptops (thankfully, they'd just installed electrical outlets at some of the NGO delegation seats), then used Etherpad to create a shared document. We traded off transcribing the remarks with correcting typos and then annotating the text with plain-language descriptions of what was really going on.

Then we published: twice a day, at the lunch and dinner breaks, unplugging the Ethernet cables from one of the shared PCs in the mezzanine and uploading our transcripts. These hit the nerd sites of the day, like Slashdot, and created realtime pressure on national delegations that had caved to industry demands at the expense of their populations. They started to get urgent calls from their capitals demanding explanations – and the delegates who'd taken brave stands for the public interest were praised (for the first time ever!) by their bosses in distant ministries.

It worked so well that other NGOs started sending delegates to these meetings, and I started schlepping a wifi access point and a power strip to the meetings, so that several of us could collaborate on even more detailed realtime transcripts and annotations. We also collaborated on coordinated remarks, because each NGO was typically only allowed to speak for 1-2 minutes at the end of a 1-3 day meeting, so we drafted a unified set of comments that we delivered as a serial when the chair called on us.

These unofficial transcripts became the de facto record of the WIPO meetings. I used to run into national delegates who'd been rotated in after a ministerial change in their home countries, who thanked me for our work and said it was the only way they could follow (and thus participate in) the proceedings.

We were setting the agenda, in other words. It was pretty cool, and it made the WIPO establishment furious. The head of the secretariat – a veteran of the US Trade Rep's office who made her bones forcing brutal, human-rights-abusing trade terms on the textile workers of south Asia – threatened to expel us.

All that to say: we could have done even more if we'd had a reliable automated transcription tool. The valuable part of the work was the annotations, not the transcription, and we were always shorthanded, and automated transcription would have freed up a set of hands – and a mind – to make sense of the delegates' remarks and explain them to others.

On the subject of tech regulation and AI: if you're not reading Sayash Kapoor and Arvind Narayanan's AI Snake Oil newsletter, allow me to gently suggest that you consider it:

The latest edition is "Licensing is neither feasible nor effective for addressing AI risks," and it does exactly what it says on the tin:

The issue here is Sam Altman – CEO of OpenAI, a company that is not "open" and whose products are neither "artificial" nor "intelligent" – demanding that Congress create an international licensing regime for products that compete with his own ChatGPT, which loses a large amount of money on every query and can only be profitable if it has no competition to get in the way of jacking up prices once other businesses are thoroughly dependent on its services.

As Kapoor and Narayanan explain, the licensing regime that Altman demands would:

  • Produce a dangerous monoculture in which every app would have a shared set of vulnerabilities that could be exploited by bad actors;

  • Homogenize the products of "AI" tools – e.g. every resume-sorting bot would have the same blind-spots and irrational exuberances;

  • Give a small number of firms control over the Overton window, letting their products define which ideas and sentiments get sorted to the top of your inbox or social media feeds;

  • Centralize control over opinion-formation, with licensed companies controlling how complex ideas are summarized ("those aren't bald spots, they're solar panels for sex machines!");

  • Lead to regulatory capture: when an industry is dominated by a handful of large firms, it's much easier for them to converge on a set of lobbying priorities, and their cozy oligopoly lets them extract sufficient profits that there's plenty of cash to spend on making those lobbying priorities into policy reality.

Kapoor and Narayanan favor "development and evaluation of state-of-the-art models by a diverse group of academics, companies, and NGOs" and promise future work on risk assessment and guardrail development.

A key element of any AI policy framework is data acquisition, processing and utilization. The Ada Lovelace Institute's giant "Rethinking data and rebalancing digital power" report is a banger on this subject, covering interoperability, privacy, equity, information security and more, with superb contributions from Ian Brown and Jathan Sadowski:

The privacy debate changed forever a decade ago, when Edward Snowden handed a group of journalists a trove of NSA documents detailing a massive, lawless global surveillance campaign. The tenth snowdenversary has prompted a lot of commentary. I really liked Alan Rusbridger's retrospective:

Rusbridger, who was editor-in-chief of The Guardian during the Snowden publications, reminds us that whistleblowers continue to meet with cruel treatment and punishment, rather than the celebration they're due. He also reminds us that editorial independence is key to the brave reporting that whistleblowers rely on: when he was publishing the Snowden revelations, his bosses were incapable of ordering him to stop. They could have fired him, but they were not permitted to override his editorial judgments.

Another good Snowden take comes from Ewen MacAskill, the former Guardian defense and security correspondent, who was one of the original Snowden reporters:

MacAskill tells us that neither he nor Snowden has any regrets about their decision. More to the point, he quotes the ACLU's Ben Wizner, Snowden's lawyer, who reminds us that as dismal as Snowden's exile in Russia is, it's far better than what everyone expected at the time – lifetime confinement to a Gitmo-style American gulag or worse, a firing squad.

(Snowden wasn't trying to get to Russia – he was aiming for Ecuador, but Secretary of State John Kerry canceled his passport after his flight took off from Hong Kong, which gave the Russians the pretext they needed to detain and effectively kidnap him.)

The Snowden leaks ushered in an era of mass encryption, ending the age in which most data was sent or stored "in the clear." From the mass storage on your phone to the web sessions your browser initiates to the instant messages you send and receive, the age of cleartext is over.

Meanwhile, the claims by the spy agencies – who have proved time and again that they will lie to the public and their democratically elected overseers about their illegal surveillance – that Snowden did untold harm to national security remain as empty as they were a decade ago. As Snowden told MacAskill: "Disruption? Sure, that is plausible. But it is hard to claim 'damage' if, despite 10 years of hysterics, the sky never fell in."

It took a brave, independent press to publish the Snowden revelations, but even a decade ago, the press was ailing. Big Tech's chokehold over subscription payments, ads, and delivery of content has allowed it to steal a fortune in cash from the news business. Unfortunately, the media – and its friends in government – have decided that the real problem is that tech is stealing "content," not money, leading to proposals to restrict who can link to, quote and discuss the news. For the past month, I've been working with EFF on a series describing how tech steals money (not content) from the news, and what to do about it:

Though I disagree with people who say tech is stealing news content, I firmly agree that the collapse of the news industry is bad news for society. Indeed, one of the reasons we desperately need an independent press is to ensure critical investigations of the tech industry – something we're more likely to get if the news isn't "partnered" with tech for its survival.

Every press outlet has its blind spots and biases, and press competition can really sharpen a media outlet. For example, editors from across the chummy UK press spiked stories detailing the sexual predation of a veteran reporter, Nick Cohen. It took the NY Times to break the story:

Perhaps the New York Post will keep the Times honest, but what about other American cities where news coverage has dwindled to one or fewer outlets? In Baltimore, a new outlet called the Baltimore Banner is looking to discipline the ailing Baltimore Sun, which was recently purchased by Alden Global Capital. Alden is a notorious vulture capitalist that buys up once-great papers, asset-strips and debt-loads them, fires their best reporters, and lets them degrade into a slurry of advertorials, wire service articles, and nonsense:

The Banner got its seed capital from Stewart Bainum, a former Maryland assemblyman and hereditary rich guy, who nevertheless has embraced a relatively progressive set of causes throughout his political and business careers. Bainum tried to buy the Sun, but lost out to Alden, so he started his own (nonprofit) rival:

While the Sun is slashing its newsroom (down to 70 from a peak of 400), Bainum has committed $50m to hire journalists. They're starting with 70 and plan to grow from there. They've already poached high-profile editors and writers from the Sun and Washington Post. The Banner will be a local paper, focusing on Baltimore metro stories. Per Ron Cassie in Baltimore Magazine, the paper won't run a story about the State of the Union address "unless there is a significant Baltimore angle."

The focus will be on "enterprising, explanatory, and investigative journalism" – not covering the tick-tock of every fire or burglary, but rather, the "why and how" questions. Their revenue target is 50% subscriptions (they're paywalled), 25% ads, 15% donations, 5% events and 5% misc.

One key cleavage line in the fight between news and tech is workers' rights. News workers – like nurses, librarians, teachers, and creative workers – are easy to exploit, thanks to their vocational awe. This awe – described by Fobazi Ettarh in a now-canonical essay – is the counterpart to Graeber's bullshit jobs: the idea that because your job makes a difference, you don't deserve to be treated decently:

It's truly perverse – you have to be well-compensated to fill a box on an org chart and inflate a corporate princeling's sense of self-worth, but if you actually help people, that is its own reward.

Reporters (and other "awe-struck" workers) have reached a breaking point. The latest newsroom to strike is Business Insider, and Insider Union is publishing a delightful countersite called (what else?) Business Outsider:

When those workers bring their bosses to their knees, win their demands, and go back to reporting on business, they'll have a great new tool: the DoJ has finally launched its long overdue Corporate Crime Database:

Activists and journalists have been clamoring for this for more than a decade, led by Ralph Nader. Corporate Crime Reporter describes the database: "all of the cases in its system from Main Justice and all 93 U.S. Attorneys offices":

The DoJ didn't do this voluntarily: they were spurred to do it by legislation proposed in 2022, the "Corporate Crime Database Act":

The database doesn't have an RSS feed or other "advanced" features from the previous decade, but all works of federal authorship are public domain, so someone could hack up a scraper that turns new entries into an easy-to-follow feed.
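As a sketch of what that hack might look like: once new case entries have been scraped into structured records, rendering them as an RSS feed is a few lines of stdlib Python. The field names, case title, and DoJ URL below are assumptions for illustration, not the database's actual schema:

```python
import xml.etree.ElementTree as ET
from email.utils import formatdate

def entries_to_rss(entries, channel_title="DOJ Corporate Crime Database (unofficial feed)"):
    """Render scraped case entries as a minimal RSS 2.0 feed string.

    `entries` is a list of dicts with hypothetical keys:
    'title', 'link', and 'date' (seconds since the epoch).
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = channel_title
    ET.SubElement(channel, "link").text = "https://www.justice.gov/"
    ET.SubElement(channel, "description").text = "New corporate-crime case entries"
    for entry in entries:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = entry["title"]
        ET.SubElement(item, "link").text = entry["link"]
        # RFC 2822 date format, as RSS readers expect
        ET.SubElement(item, "pubDate").text = formatdate(entry["date"])
    return ET.tostring(rss, encoding="unicode")
```

Point any feed reader at the output and you've got the "advanced" feature the DoJ left out; as federal works, the entries themselves are public domain.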

On the subject of innovative databases: Shepherd is a book recommendation site that's attempting to provide an independent alternative to the hegemonic dominance of Amazon's Goodreads:

It aggregates writers' recommendations – "8,000+ authors have shared five of their favorite books around a topic, theme, or mood."

These break down well on topics; there's a great science fiction section:

If you're interested in sf writers and their thoughts, you could do worse than to follow Applied Sci-Fi from ASU's Center for Science and the Imagination. These are seminars in which sf writers and practitioners talk about the way that sf can contest, inform and inspire discussions of current events:

The next event (a free webinar) is "What is the future of [X]?," on Jun 14 at 9AM PT: "what can broaden society’s thinking and impact decision-making about our shared technological future?"

The speakers are great: Annalee Newitz, Tobias Buckell, August Cole, Amy Johnson and Tory Stephens, moderated by Joey Eschrich.

Well, that wraps it up for this linkdump, the third in an occasional and erratic series. Previous editions are here:

Actually – just one more thing. Have you ever wanted to preserve your short, tweet-length thoughts in an awkward archival medium? Me too! Thankfully, there's Dumb Cuneiform, who will transliterate your tweet into cuneiform and hand-punch it into a palm-sized clay tablet, bake it, and mail it to you, all for $20:

A Wayback Machine banner.

This day in history (permalink)

#15yrsago DHS spends millions on bus kill-switches to stop Osama bin Laden from reenacting the movie “Speed”

#15yrsago National Geo’s China issue has controversial pages glued together in China

#10yrsago UK government online disability benefits signup requires Internet Explorer 6

#10yrsago Detailed obit of Iain Banks

#10yrsago Austerity: the greatest bait-and-switch in history

#10yrsago Who’s claiming copyright on the Prism logo?

#5yrsago Facebook only pretended to shut down access to friends’ data in 2015, quietly continued access for its favored partners

#1yrago Monopolists want to create human inkjet printers: The war on your pancreas continues

Colophon (permalink)

Today's top sources: Kevin Bankston, John Naughton, Chris McKitterick, Nancy Proctor, Super Punch.

Currently writing:

  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FIRST DRAFT COMPLETE, WAITING FOR EDITORIAL REVIEW

  • The Bezzle, a Martin Hench noir thriller novel about the prison-tech industry. FIRST DRAFT COMPLETE, WAITING FOR EDITORIAL REVIEW

  • Vigilant, Little Brother short story about remote invigilation. ON SUBMISSION

  • Moral Hazard, a short story for MIT Tech Review's 12 Tomorrows. FIRST DRAFT COMPLETE, ACCEPTED FOR PUBLICATION

  • Spill, a Little Brother short story about pipeline protests. ON SUBMISSION

Latest podcast: The Swivel-Eyed Loons Have a Point

Recent appearances:

Latest books:

Upcoming books:

  • The Internet Con: A nonfiction book about interoperability and Big Tech, Verso, September 2023

  • The Lost Cause: a post-Green New Deal eco-topian novel about truth and reconciliation with white nationalist militias, Tor Books, November 2023

This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.

How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Newsletter (no ads, tracking, or data-collection):

Mastodon (no ads, tracking, or data-collection):

Medium (no ads, paywalled):

Latest Medium column: "Ayyyyyy Eyeeeee: The lie that raced around the world before the truth got its boots on"

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla