Today's links
- The real AI fight: Effective Accelerationists and Effective Altruists are both in vigorous agreement about something genuinely stupid.
- Hey look at this: Delights to delectate.
- This day in history: 2003, 2008, 2013, 2018, 2022
- Colophon: Recent publications, upcoming/recent appearances, current writing projects, current reading
The real AI fight (permalink)
Last week's spectacular OpenAI soap-opera hijacked the attention of millions of normal, productive people and nonconsensually crammed them full of the fine details of the debate between "Effective Altruism" (doomers) and "Effective Accelerationism" (AKA e/acc), a genuinely absurd debate that was allegedly at the center of the drama.
Very broadly speaking: the Effective Altruists are doomers, who believe that Large Language Models (AKA "spicy autocomplete") will someday become so advanced that they could wake up and annihilate or enslave the human race. To prevent this, they say, we need to employ "AI Safety" – measures that will turn superintelligence into a servant or a partner, not an adversary.
Contrast this with the Effective Accelerationists, who also believe that LLMs will someday become superintelligences with the potential to annihilate or enslave humanity – but they nevertheless advocate for faster AI development, with fewer "safety" measures, in order to produce an "upward spiral" in the "techno-capital machine."
Once-and-future OpenAI CEO Altman is said to be an accelerationist who was forced out of the company by the Altruists, who were subsequently bested, ousted, and replaced by Larry fucking Summers. This, we're told, is the ideological battle over AI: should we cautiously progress our LLMs into superintelligences with safety in mind, or go full speed ahead and trust to market forces to tame and harness the superintelligences to come?
This "AI debate" is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we'll get a locomotive:
https://locusmag.com/2020/07/cory-doctorow-full-employment/
As Molly White writes, this isn't much of a debate. The "two sides" of this debate are as similar as Tweedledee and Tweedledum. Yes, they're arrayed against each other in battle, so furious with each other that they're tearing their hair out. But for people who don't take any of this mystical nonsense about spontaneous consciousness arising from applied statistics seriously, these two sides are nearly indistinguishable, sharing as they do this extremely weird belief. The fact that they've split into warring factions on its particulars is less important than their unified belief in the certain coming of the paperclip-maximizing apocalypse:
https://newsletter.mollywhite.net/p/effective-obfuscation
White points out that there's another, much more distinct side in this AI debate – as different and distant from Dee and Dum as a Beamish Boy and a Jabberwock. This is the side of AI Ethics – the side that worries about "today’s issues of ghost labor, algorithmic bias, and erosion of the rights of artists and others." As White says, shifting the debate to the existential risk posed by a future, hypothetical superintelligence "is incredibly convenient for the powerful individuals and companies who stand to profit from AI."
After all, both sides plan to make money selling AI tools to corporations, whose track record in deploying algorithmic "decision support" systems and other AI-based automation is pretty poor – like the claims-evaluation engine that Cigna uses to deny insurance claims:
https://www.propublica.org/article/cigna-pxdx-medical-health-insurance-rejection-claims
On a graph that plots the various positions on AI, the two groups of weirdos who disagree about how to create the inevitable superintelligence are effectively standing on the same spot, and the people who worry about the actual way that AI harms actual people right now are about a million miles away from that spot.
There's that old programmer joke, "There are 10 kinds of people, those who understand binary and those who don't." But of course, that joke could just as well be, "There are 10 kinds of people, those who understand ternary, those who understand binary, and those who don't understand either":
https://pluralistic.net/2021/12/11/the-ten-types-of-people/
What's more, the joke could be, "there are 10 kinds of people, those who understand hexadecenary, those who understand pentadecenary, those who understand tetradecenary [and so on], those who understand ternary, those who understand binary, and those who don't." That is to say, a "polarized" debate often has people who hold positions so far from the ones everyone is talking about that, from where they stand, the belligerents' concerns are basically indistinguishable from one another.
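For anyone who wants to check the arithmetic behind the joke: the string "10" always denotes whatever base you read it in, which is why the gag scales to any number of factions. Here's a minimal Python sketch; the loop and the range of bases are my own illustration, not anything from the joke's source:

```python
# "10" read in base b is always worth b:
# binary 10 = 2, ternary 10 = 3, ... hexadecimal 10 = 16.
for base in range(2, 17):
    print(f'"10" in base {base:>2} = {int("10", base)}')
```

Every base reads "10" as itself, so each faction counts a different number of "kinds of people" while reciting the same sentence.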
The act of identifying these distant positions is a radical opening up of possibilities. Take the indigenous philosopher chief Red Jacket's response to the Christian missionaries who sought permission to proselytize to Red Jacket's people:
https://historymatters.gmu.edu/d/5790/
Red Jacket's whole rebuttal is a superb dunk, but it gets especially interesting where he points to the sectarian differences among Christians as evidence against the missionary's claim to having a single true faith, and in favor of the idea that his own people's traditional faith could be co-equal among Christian doctrines.
The split that White identifies isn't a split about whether AI tools can be useful. Plenty of us AI skeptics are happy to stipulate that there are good uses for AI. For example, I'm 100% in favor of the Human Rights Data Analysis Group using an LLM to classify and extract information from the Innocence Project New Orleans' wrongful conviction case files:
https://hrdag.org/tech-notes/large-language-models-IPNO.html
Automating "extracting officer information from documents – specifically, the officer's name and the role the officer played in the wrongful conviction" was a key step to freeing innocent people from prison, and an LLM allowed HRDAG – a tiny, cash-strapped, excellent nonprofit – to make a giant leap forward in a vital project. I'm a donor to HRDAG and you should donate to them too:
https://hrdag.networkforgood.com/
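HRDAG's tech note (linked above) describes their actual pipeline; purely to illustrate the shape of the task – pulling an officer's name and role out of a case file – here is a minimal sketch. The call_llm helper, the prompt wording, and the JSON schema are hypothetical stand-ins of mine, not HRDAG's code:

```python
import json


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whichever LLM you wire up;
    it should return the model's text completion for `prompt`."""
    raise NotImplementedError("connect a model of your choice here")


def extract_officers(case_file_text: str) -> list[dict]:
    """Ask the model for every officer named in a wrongful-conviction
    case file, plus the role each played, as a JSON list."""
    prompt = (
        "From the wrongful-conviction case file below, list every police "
        "officer mentioned. Return a JSON list of objects with keys "
        '"name" and "role" (the role the officer played in the conviction).\n\n'
        f"{case_file_text}"
    )
    # Assumes the model returns valid JSON; real pipelines validate this.
    return json.loads(call_llm(prompt))
```

The design point is that the model does the tedious first-pass reading at scale, while humans verify every extraction before anyone acts on it.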
Good data-analysis is key to addressing many of our thorniest, most pressing problems. As Ben Goldacre recounts in his inaugural Oxford lecture, it is both possible and desirable to build ethical, privacy-preserving systems for analyzing the most sensitive personal data (NHS patient records) that yield scores of solid, ground-breaking medical and scientific insights:
https://www.youtube.com/watch?v=_-eaV8SWdjQ
The difference between this kind of work – HRDAG's exoneration work and Goldacre's medical research – and the approach that OpenAI and its competitors take boils down to how they treat humans. The former treats all humans as worthy of respect and consideration. The latter treats humans as instruments – for profit in the short term, and for creating a hypothetical superintelligence in the (very) long term.
As Terry Pratchett's Granny Weatherwax reminds us, this is the root of all sin: "sin is when you treat people like things":
https://brer-powerofbabel.blogspot.com/2009/02/granny-weatherwax-on-sin-favorite.html
So much of the criticism of AI misses this distinction – instead, this criticism starts by accepting the self-serving marketing claim of the "AI safety" crowd – that their software is on the verge of becoming self-aware, and is thus valuable, a good investment, and a good product to purchase. This is Lee Vinsel's "Criti-Hype": "taking press releases from startups and covering them with hellscapes":
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
Criti-hype and AI were made for each other. Emily M Bender is a tireless cataloger of criti-hypists, like the newspaper reporters who breathlessly repeat "completely unsubstantiated claims (marketing)…sourced to Altman":
https://dair-community.social/@emilymbender/111464030855880383
Bender, like White, is at pains to point out that the real debate isn't doomers vs accelerationists. That's just "billionaires throwing money at the hope of bringing about the speculative fiction stories they grew up reading – and philosophers and others feeling important by dressing these same silly ideas up in fancy words":
https://dair-community.social/@emilymbender/111464024432217299
All of this is just a distraction from real and important scientific questions about how (and whether) to make automation tools that steer clear of Granny Weatherwax's sin of "treating people like things." Bender – a computational linguist – isn't a reactionary who hates automation for its own sake. Every episode of Mystery AI Hype Theater 3000 – the excellent podcast she co-hosts with Alex Hanna – is published with a machine-generated transcript:
https://www.buzzsprout.com/2126417
There is a serious, meaty debate to be had about the costs and possibilities of different forms of automation. But the superintelligence true-believers and their criti-hyping critics keep dragging us away from these important questions and into fanciful and pointless discussions of whether and how to appease the godlike computers we will create when we disassemble the solar system and turn it into computronium.
The question of machine intelligence isn't intrinsically unserious. As a materialist, I believe that whatever makes me "me" is the result of the physics and chemistry of processes inside and around my body. My disbelief in the existence of a soul means that I'm prepared to think that it might be possible for something made by humans to replicate something like whatever process makes me "me."
Ironically, the AI doomers and accelerationists claim that they, too, are materialists – and that's why they're so consumed with the idea of machine superintelligence. But it's precisely because I'm a materialist that I understand these hypotheticals about self-aware software are less important and less urgent than the material lives of people today.
It's because I'm a materialist that my primary concerns about AI are things like the climate impact of AI data-centers and the human impact of biased, opaque, incompetent and unfit algorithmic systems – not science fiction-inspired, self-induced panics over the human race being enslaved by our robot overlords.
(Image: Cryteria, CC BY 3.0, modified)
Hey look at this (permalink)
- "A Pan-African Vision for Structural Transformation" ~ Fadhel Kaboub https://www.youtube.com/watch?v=NnXjtyTnpig
- DISENGAGE: Opting Out—and Finding New Options—to Reclaim the Internet from Spammers, Scammers, Intrusive Marketers and Big Tech https://www.lindaformichelli.com/_files/ugd/7f54e8_b4212db40a8342b59fe1e0a4c2087997.pdf
- National Rail Action Plan https://youtu.be/-VrvAzwpFmE
This day in history (permalink)
#20yrsago Big Mouth Billy Bass runs Linux, does impressions https://web.archive.org/web/20031123212606/http://bigmouth.here-n-there.com/
#15yrsago Tony Benn’s War on Terror diaries — an inspirational look at the life of a principled fighter https://memex.craphound.com/2008/11/26/tony-benns-war-on-terror-diaries-an-inspirational-look-at-the-life-of-a-princpled-fighter/
#15yrsago Passwords suck https://web.archive.org/web/20081220181358/http://www.links.org/?p=425
#10yrsago RIP, Richard “Datamancer” Nagy https://web.archive.org/web/20131129114041/http://brassgoggles.co.uk/forum/index.php/topic,41728.msg879191.html
#10yrsago Pratchett’s “Raising Steam”: the magic of modernity https://memex.craphound.com/2013/11/27/pratchetts-raising-steam-the-magic-of-modernity/
#10yrsago NSA spied on non-terrorist “radicalizers”‘ porn use in order to discredit them https://www.huffpost.com/entry/nsa-porn-muslims_n_4346128
#10yrsago Public Citizen threatens legal action against Kleargear on behalf of customers https://www.techdirt.com/2013/11/26/public-citizen-suing-behalf-customers-whose-credit-was-ruined-kleargears-3500-bad-review-fee/
#10yrsago Beasties/GoldieBlox debunked https://waxy.org/2013/11/goldieblox_and_the_three_mcs/
#5yrsago Billboards are using sensors to identify, target and track individuals https://onezero.medium.com/irl-ads-are-taking-scary-inspiration-from-social-media-7088e8241beb
#5yrsago Man arrested for rape after his Playstation mic allegedly broadcast audio from the crime to other players https://arstechnica.com/gaming/2018/11/a-hot-playstation-mic-captures-sounds-of-apparent-rape-leads-to-arrest/
#5yrsago Amnesty will stage global protests over Google’s spying, censoring Chinese search engine plan https://theintercept.com/2018/11/26/google-dragonfly-project-china-amnesty-international/
#5yrsago Supreme Court looks ready to let customers sue Apple for abusing its App Store monopoly https://gizmodo.com/supreme-court-appears-to-lean-heavily-against-apples-de-1830662533?IR=T
#5yrsago A visual guide to America’s concentrated, uncompetitive markets https://concentrationcrisis.openmarketsinstitute.org
#5yrsago US tax shortfalls have our public schools begging for donations https://truthout.org/articles/bake-sales-cant-fix-school-funding-pinch-caused-by-corporate-tax-cuts/
#5yrsago Using information security to explain why disinformation makes autocracies stronger and democracies weaker https://memex.craphound.com/2018/11/27/using-information-security-to-explain-why-disinformation-makes-autocracies-stronger-and-democracies-weaker/
#5yrsago The Fifth Risk: Michael Lewis explains how the “deep state” is just nerds versus grifters https://memex.craphound.com/2018/11/27/the-fifth-risk-michael-lewis-explains-how-the-deep-state-is-just-nerds-versus-grifters/
#5yrsago Malware vector: become an admin on dormant, widely-used open source projects https://github.com/dominictarr/event-stream/issues/116
#5yrsago Babysitter vetting and voice-analysis: Have we reached peak AI snakeoil? https://memex.craphound.com/2018/11/26/babysitter-vetting-and-voice-analysis-have-we-reached-peak-ai-snakeoil/
#5yrsago Chinese AI traffic cam mistook a bus ad for a human and publicly shamed the CEO it depicted for jaywalking https://www.scmp.com/tech/innovation/article/2174564/facial-recognition-catches-chinas-air-con-queen-dong-mingzhu
#5yrsago Using data-science to evaluate whether Xi Jinping’s anti-corruption sweeps were really about consolidating power https://www.aeaweb.org/conference/2019/preliminary/paper/hSA5ri6d
#1yrago Poe vs. Property https://pluralistic.net/2022/11/27/poe-vs-property/
Colophon (permalink)
Today's top sources:
Currently writing:
- A Little Brother short story about DIY insulin PLANNING
- Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS JAN 2025
- The Bezzle, a Martin Hench noir thriller novel about the prison-tech industry. FORTHCOMING TOR BOOKS FEB 2024
- Vigilant, Little Brother short story about remote invigilation. FORTHCOMING ON TOR.COM
- Spill, a Little Brother short story about pipeline protests. FORTHCOMING ON TOR.COM
Latest podcast: Moral Hazard (from Communications Breakdown) https://craphound.com/stories/2023/11/12/moral-hazard-from-communications-breakdown/
Upcoming appearances:
- Who Is Watching Big Tech? Nov 27 (Toronto)
https://www.eventbrite.ca/e/who-is-watching-big-tech-tickets-707927880347
- The Lost Cause at The Strand (NYC), Nov 29
https://www.eventbrite.com/e/cory-doctorow-the-lost-cause-tickets-734958008187
- The Lost Cause at Flyleaf Books (Chapel Hill), Dec 5
https://www.flyleafbooks.com/doctorow-2023
Recent appearances:
- Digital Markets Act; Interoperability; Entrenchment; Copyright; "What-About-Ism" (Digital Markets Research Hub)
https://www.youtube.com/watch?v=Xm23pO5_WKM
- Science fiction for a dystopian present (Institute of Art and Ideas)
https://iai.tv/video/science-fiction-for-a-dystopian-present-cory-doctorow?_auid=2020
- Pushing back on unconstrained capitalism (Changelog)
https://changelog.com/podcast/565
Latest books:
- "The Lost Cause:" a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3007/Pre-Order_Signed_Copies%3A_The_Lost_Cause_HB.html#/)
- "The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com. Signed copies at Dark Delicacies (US): and Forbidden Planet (UK): https://forbiddenplanet.com/385004-red-team-blues-signed-edition-hardcover/.
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
- "Attack Surface": The third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html
- "How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59?sk=f6cd10e54e20a07d4c6d0f3ac011af6b (signed copies: https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html)
- "Little Brother/Homeland": A reissue omnibus edition with a new introduction by Edward Snowden: https://us.macmillan.com/books/9781250774583; personalized/signed copies here: https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html
- "Poesy the Monster Slayer" a picture book about monsters, bedtime, gender, and kicking ass. Order here: https://us.macmillan.com/books/9781626723627. Get a personalized, signed copy here: https://www.darkdel.com/store/p2682/Corey_Doctorow%3A_Poesy_the_Monster_Slayer_HB.html#/.
Upcoming books:
- The Bezzle: a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books, February 2024
- Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025
- Unauthorized Bread: a graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025
This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla