Today's links
- Humans are not perfectly vigilant: And that's bad news for AI.
- Hey look at this: Delights to delectate.
- This day in history: 2004, 2009, 2014, 2019, 2023
- Upcoming appearances: Where to find me.
- Recent appearances: Where I've been.
- Latest books: You keep readin' 'em, I'll keep writin' 'em.
- Upcoming books: Like I said, I'll keep writin' 'em.
- Colophon: All the rest.
Humans are not perfectly vigilant (permalink)
Here's a fun AI story: a security researcher noticed that large companies' AI-authored source code repeatedly referenced a nonexistent library (an AI "hallucination"). So he created a (defanged) malicious library with that name and uploaded it, and thousands of developers automatically downloaded and incorporated it into their builds:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
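To see why a hallucinated package name is exploitable, remember that installers like pip resolve names against the public index, so a name nobody has registered is up for grabs for whoever uploads it first. Here's a minimal sketch of how a cautious developer might vet an AI-suggested dependency before installing it; the package name "fastjsonkit" is invented for illustration, while the PyPI JSON API endpoint it queries is real:

    # A hedged sketch: check whether an AI-suggested dependency actually exists
    # on PyPI before installing it. "fastjsonkit" is a made-up stand-in for a
    # hallucinated package name.
    import json
    import urllib.error
    import urllib.request

    def exists_on_pypi(package_name):
        """Return True if a package with this name is published on PyPI."""
        url = f"https://pypi.org/pypi/{package_name}/json"
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                json.load(response)  # parse the package metadata
            return True
        except urllib.error.HTTPError:
            return False  # 404: nobody has published this name (yet)

    for name in ["requests", "fastjsonkit"]:
        print(name, "exists on PyPI:", exists_on_pypi(name))

Until someone registers the missing name, that check fails; once an attacker registers it, the check passes and the trap is set, which is exactly the dynamic the researcher demonstrated.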
These "hallucinations" are a stubbornly persistent feature of large language models, because these models only give the illusion of understanding; in reality, they are just sophisticated forms of autocomplete, drawing on huge databases to make shrewd (but reliably fallible) guesses about which word comes next:
https://dl.acm.org/doi/10.1145/3442188.3445922
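If you want the intuition behind "sophisticated autocomplete," here's a toy sketch. The vocabulary and probabilities are invented for illustration, and real LLMs use neural networks trained on enormous corpora rather than lookup tables, but the basic move is the same: emit a statistically plausible next word, with no check on whether the resulting statement refers to anything real.

    # Toy next-word guesser (illustrative only; not how a real LLM is built).
    import random

    toy_probabilities = {
        ("pip", "install"): {"requests": 0.5, "numpy": 0.3, "fastjsonkit": 0.2},
    }

    def sample_next_word(context):
        """Pick the next word in proportion to its estimated likelihood."""
        candidates = toy_probabilities[context]
        words = list(candidates)
        weights = list(candidates.values())
        return random.choices(words, weights=weights)[0]

    # Nothing here knows whether the suggestion names a real library: about one
    # run in five, this toy model recommends "fastjsonkit", the made-up package
    # from the sketch above.
    print(sample_next_word(("pip", "install")))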
Guessing the next word without understanding the meaning of the resulting sentence makes unsupervised LLMs unsuitable for high-stakes tasks. The whole AI bubble is based on convincing investors that one or more of the following is true:
I. There are low-stakes, high-value tasks that will recoup the massive costs of AI training and operation;
II. There are high-stakes, high-value tasks that can be made cheaper by adding an AI to a human operator;
III. Adding more training data to an AI will make it stop hallucinating, so that it can take over high-stakes, high-value tasks without a "human in the loop."
These are dubious propositions. There's a universe of low-stakes, low-value tasks – political disinformation, spam, fraud, academic cheating, nonconsensual porn, dialog for video-game NPCs – but none of them seem likely to generate enough revenue for AI companies to justify the billions spent on models, nor the trillions in valuation attributed to AI companies:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
The proposition that increasing training data will decrease hallucinations is hotly contested among AI practitioners. I confess that I don't know enough about AI to evaluate opposing sides' claims, but even if you stipulate that adding lots of human-generated training data will make the software a better guesser, there's a serious problem. All those low-value, low-stakes applications are flooding the internet with botshit. After all, the one thing AI is unarguably very good at is producing bullshit at scale. As the web becomes an anaerobic lagoon for botshit, the quantum of human-generated "content" in any internet core sample is dwindling to homeopathic levels:
https://pluralistic.net/2024/03/14/inhuman-centipede/#enshittibottification
This means that adding another order of magnitude more training data to AI won't just add massive computational expense – the data will be many orders of magnitude more expensive to acquire, even without factoring in the additional liability arising from new legal theories about scraping:
https://pluralistic.net/2023/09/17/how-to-think-about-scraping/
That leaves us with "humans in the loop" – the idea that the AI company's business model is selling software to businesses that will pair it with human operators who closely scrutinize the model's guesses. There's a version of this that sounds plausible – the one in which the human operator is in charge, and the AI acts as an eternally vigilant "sanity check" on the human's activities.
For example, my car has a system that notices when I activate my blinker while there's another car in my blind spot. I'm pretty consistent about checking my blind spot, but I'm also a fallible human, and there've been a couple of times when the alert saved me from making a potentially dangerous maneuver. As disciplined as I am, I'm also sometimes forgetful about turning off lights, or waking up in time for work, or remembering someone's phone number (or birthday). I like having an automated system that does the robotically perfect trick of never forgetting something important.
There's a name for this in automation circles: a "centaur." I'm the human head, and I've fused with a powerful robot body that supports me, doing things that humans are innately bad at.
That's the good kind of automation, and we all benefit from it. But it only takes a small twist to turn this good automation into a nightmare. I'm speaking here of the reverse-centaur: automation in which the computer is in charge, bossing a human around so it can get its job done. Think of Amazon warehouse workers, who wear haptic bracelets and are continuously observed by AI cameras as autonomous shelves shuttle in front of them and demand that they pick and pack items at a pace that destroys their bodies and drives them mad:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
Automation centaurs are great: they relieve humans of drudgework and let them focus on the creative and satisfying parts of their jobs. That's how AI-assisted coding is pitched: rather than leaving the human to look up tricky syntax and grind through other tedious programming chores, an AI "co-pilot" is billed as freeing up its human "pilot" to focus on the creative puzzle-solving that makes coding so satisfying.
But a hallucinating AI is a terrible co-pilot. It's just good enough to get the job done much of the time, but it also sneakily inserts booby-traps that are statistically guaranteed to look as plausible as the good code (that's what a next-word-guessing program does: guesses the statistically most likely word).
This turns AI-"assisted" coders into reverse centaurs. The AI can churn out code at superhuman speed, and you, the human in the loop, must maintain perfect vigilance and attention as you review that code, spotting the cleverly disguised hooks for malicious code that the AI can't be prevented from inserting into its code. As qntm writes, "code review [is] difficult relative to writing new code":
https://twitter.com/qntm/status/1773779967521780169
Why is that? "Passively reading someone else's code just doesn't engage my brain in the same way. It's harder to do properly":
https://twitter.com/qntm/status/1773780355708764665
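Here's an invented illustration of what "plausible-looking" means in practice (not drawn from any real incident): read passively, this snippet sails by, and it takes active engagement to notice that it treats the expiry day itself as still valid.

    # Invented example of a subtle booby-trap in otherwise plausible code.
    from datetime import date

    def is_expired(expiry, today):
        """Intended behavior: a credential is expired on or after its expiry date."""
        return today > expiry  # bug: should be `today >= expiry`

    # On the expiry day itself, the credential is wrongly treated as still valid.
    print(is_expired(date(2024, 4, 2), date(2024, 4, 2)))  # prints False; should be True

Writing that check correctly is easy; catching the mistake in someone else's code, hundreds of times a day, is the part humans are bad at.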
There's a name for this phenomenon: "automation blindness." Humans are just not equipped for eternal vigilance. We get good at spotting patterns that occur frequently – so good that we miss the anomalies. That's why TSA agents are so good at spotting harmless shampoo bottles on X-rays, even as they miss nearly every gun and bomb that a red team smuggles through their checkpoints:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
Qntm's thread points out that this is as true for AI-assisted driving as it is for AI-assisted coding: "self-driving cars replace the experience of driving with the experience of being a driving instructor":
https://twitter.com/qntm/status/1773841546753831283
In other words, they turn you into a reverse-centaur. Whereas my blind-spot double-checking robot allows me to make maneuvers at human speed and points out the things I've missed, a "supervised" self-driving car makes maneuvers at a computer's frantic pace, and demands that its human supervisor tirelessly and perfectly assess each of those maneuvers. No wonder Cruise's murderous "self-driving" taxis replaced each low-waged driver with 1.5 high-waged technical robot supervisors:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
AI radiology programs are said to be able to spot cancerous masses that human radiologists miss. A centaur-based AI-assisted radiology program would keep the same number of radiologists in the field, but they would get less done: every time they assessed an X-ray, the AI would give them a second opinion. If the human and the AI disagreed, the human would go back and re-assess the X-ray. We'd get better radiology, at a higher price (the price of the AI software, plus the additional hours the radiologist would work).
But back to making the AI bubble pay off: for AI to pay off, the human in the loop has to reduce the costs of the business buying an AI. No one who invests in an AI company believes that their returns will come from business customers who agree to increase their costs. The AI can't do your job, but the AI salesman can convince your boss to fire you and replace you with an AI anyway – that pitch is the most successful form of AI disinformation in the world.
An AI that "hallucinates" bad advice to fliers can't replace human customer service reps, but airlines are firing reps and replacing them with chatbots:
An AI that "hallucinates" bad legal advice to New Yorkers can't replace city services, but Mayor Adams still tells New Yorkers to get their legal advice from his chatbots:
https://arstechnica.com/ai/2024/03/nycs-government-chatbot-is-lying-about-city-laws-and-regulations/
The only reason bosses want to buy robots is to fire humans and lower their costs. That's why "AI art" is such a pisser. There are plenty of harmless ways to automate art production with software – everything from a "healing brush" in Photoshop to deepfake tools that let a video-editor alter the eye-lines of all the extras in a scene to shift the focus. A graphic novelist who models a room in The Sims and then moves the camera around to get traceable geometry for different angles is a centaur – they are genuinely offloading some finicky drudgework onto a robot that is perfectly attentive and vigilant.
But the pitch from "AI art" companies is "fire your graphic artists and replace them with botshit." They're pitching a world where the robots get to do all the creative stuff (badly) and humans have to work at a robotic pace, with robotic vigilance, in order to catch the mistakes that the robots make at superhuman speed.
Reverse centaurism is brutal. That's not news: Charlie Chaplin documented the problems of reverse centaurs nearly 100 years ago:
https://en.wikipedia.org/wiki/Modern_Times_(film)
As ever, the problem with a gadget isn't what it does: it's who it does it for and who it does it to. There are plenty of benefits from being a centaur – lots of ways that automation can help workers. But the only path to AI profitability lies in reverse centaurs, automation that turns the human in the loop into the crumple-zone for a robot:
https://estsjournal.org/index.php/ests/article/view/260
(Image: Cryteria, CC BY 3.0; Jorge Royan, CC BY-SA 3.0; Noah Wulf, CC BY-SA 4.0: modified)
Hey look at this (permalink)
- People Hate the Idea of Car-Free Cities—Until They Live in One https://www.wired.com/story/car-free-cities-opposition/ (h/t Kottke)
- Forced Labor vs. Forced Idleness https://www.dollarsandsense.org/archives/2024/0324bowman.html (h/t Naked Capitalism)
- Will Voters Hear About Donald Trump’s Deranged Health Care Agenda? https://prospect.org/health/2024-04-01-trumps-deranged-health-care-agenda/
This day in history (permalink)
#20yrsago "Homeless Hacker" Adrian Lamo profile in Wired https://www.wired.com/2004/04/hacker-5/
#20yrsago Disney asks Gizmodo to clarify that jewel box is not intended for pot stashing https://web.archive.org/web/20040402032317/http://www.gizmodo.com/archives/disney_princess_cd_jewelrystash_box.php
#20yrsago EZBake Oven for your PC https://web.archive.org/web/20040613185610/http://www.thinkgeek.com/stuff/41/ezbake.shtml
#15yrsago Ant slaves’ murderous rebellions https://web.archive.org/web/20090402193006/http://scienceblogs.com/notrocketscience/2009/04/the_rebellion_of_the_ant_slaves.php
#15yrsago To Market, To Market: The Re-Branding of Billy Bailey – my sf story read aloud https://archive.org/details/ToMarketToMarket
#15yrsago Electronic Arts releases DRM-removal tool https://games.slashdot.org/story/09/03/31/1917254/ea-releases-drm-license-deactivation-tool
#15yrsago Adventures in Cartooning: wit and instruction for kids who want to learn cartooning https://memex.craphound.com/2009/03/31/adventures-in-cartooning-wit-and-instruction-for-kids-who-want-to-learn-cartooning/
#10yrsago Podcast: Collective Action – the Magnificent Seven anti-troll business-model https://ia902808.us.archive.org/10/items/Cory_Doctorow_Podcast_270/Cory_Doctorow_Podcast_270_Collective_Action.mp3
#10yrsago Google Maps’ spam problem presents genuine security issues https://web.archive.org/web/20140331101332/http://www.businessweek.com/articles/2014-03-28/how-scammers-turn-google-maps-into-fantasy-land
#10yrsago NSA wiretapped 122 world leaders; GCHQ penetrated German satellite companies for mass surveillance potential https://www.spiegel.de/international/germany/gchq-and-nsa-targeted-private-german-companies-a-961444.html
#10yrsago Playground removes “safety” rules; fun, development and injuries ensue https://nationalpost.com/news/when-one-new-zealand-school-tossed-its-playground-rules-and-let-students-risk-injury-the-results-surprised
#10yrsago Rob Ford and Canada’s neoliberal agenda https://www.dissentmagazine.org/online_articles/the-passion-of-rob-ford-or-the-neoliberal-making-of-torontos-municipal-crisis/
#10yrsago FCC adds 100MHz of spectrum to the commons https://thehill.com/blogs/hillicon-valley/technology/202170-fcc-votes-to-boost-wi-fi/
#10yrsago German labor ministry bans after-hours email from managers to employees https://web.archive.org/web/20140401101146/http://ibnlive.in.com/news/germany-bans-managers-from-calling-or-emailing-staff-after-work-hours/461070-79.html
#10yrsago Bizarre, paranoid warning about imaginary predators choosing victims through bumper-sticker-ology https://www.freerangekids.com/that-sticker-on-my-car-is-not-endangering-me/
#10yrsago The Internet should be treated as a utility: Susan Crawford https://www.vox.com/2014/4/6/5587138/susan-crawford-internet-public-option
#10yrsago Rich, admitted child rapist granted probation because he “would not fare well” in prison https://www.delawareonline.com/story/news/crime/2014/03/28/sunday-preview-du-pont-heir-stayed-prison/7016769/
#10yrsago Expiration Day: YA coming of age novel about robots and the end of the human race https://memex.craphound.com/2014/04/01/expiration-day-ya-coming-of-age-novel-about-robots-and-the-end-of-the-human-race/
#5yrsago Internal files reveal how US law enforcement classes anti-fascists as fascists, and actual fascists as “anti-anti-fascists” https://www.theguardian.com/world/2019/apr/01/intelligence-law-enforcement-report-leftwing-terrorists-charlottesville
#5yrsago Wells Fargo is looking for a new CEO https://www.cnn.com/2019/04/01/investing/wells-fargo-ceo-tim-sloan/index.html
#5yrsago The strange tale of Runescape’s Communist republic https://thespinoff.co.nz/pop-culture/29-03-2019/free-armour-trimming-the-communist-revolution-inside-runescape
#5yrsago Slovakia’s first woman president is an anti-corruption, pro-immigrant environmental campaigner https://www.kgou.org/world/2019-03-31/the-erin-brockovich-of-slovakia-is-elected-the-countrys-first-female-president
#5yrsago The weird grift of “sovereign citizens”: where UFOlogy meets antisemitism by way of Cliven Bundy and cat-breeding https://www.nytimes.com/2019/03/29/business/sovereign-citizens-financial-crime.html
#5yrsago Citing transphobic policies, 172+ googlers call for removal of Heritage Foundation from Google’s “Advanced Technology External Advisory Council” https://medium.com/@against.transphobia/googlers-against-transphobia-and-hate-b1b0a5dbf76
#5yrsago America’s best mobile carrier is also the first phone company to back Right to Repair legislation https://www.vice.com/en/article/eveezj/a-cell-phone-carrier-breaks-with-big-telecom-announces-support-for-right-to-repair-legislation
#5yrsago Small stickers on the ground trick Tesla autopilot into steering into opposing traffic lane https://keenlab.tencent.com/en/whitepapers/Experimental_Security_Research_of_Tesla_Autopilot.pdf
#5yrsago Banksy’s art authentication system displays top-notch cryptographic nous https://reprage.com/posts/2019-03-25-how-banksy-authenticates-his-work/
#5yrsago The Boston Globe on breaking up Big Tech falls into the trap of tech exceptionalism https://memex.craphound.com/2019/03/31/the-boston-globe-on-breaking-up-big-tech-falls-into-the-trap-of-tech-exceptionalism/
#1yrago Flickr to copyleft trolls: drop dead https://pluralistic.net/2023/04/01/pixsynnussija/#pilkunnussija
Upcoming appearances (permalink)
- Computer Pasts/Computer Futures (NYU/virtual), Apr 4
https://steinhardt.nyu.edu/events/deans-public-square-series-computer-pasts-computer-futures
- The Bezzle at Harvard Berkman-Klein Center, with Randall Munroe (Boston), Apr 11
https://cyber.harvard.edu/events/enshittification
- RISD Debates in AI (Providence), Apr 12
https://involved.risd.edu/event/9777963
- The Bezzle at Anderson's Books (Chicago), Apr 17
https://www.andersonsbookshop.com/event/cory-doctorow-1
- Torino Biennale Tecnologia (Apr 19-21)
https://www.turismotorino.org/en/experiences/events/biennale-tecnologia
- Canadian Centre for Policy Alternatives (Winnipeg), May 2
https://www.eventbrite.ca/e/cory-doctorow-tickets-798820071337?aff=oddtdtcreator
- Tartu Prima Vista Literary Festival (May 5-11)
https://tartu2024.ee/en/kirjandusfestival/
- Tim O’Reilly and Cory Doctorow on “Enshittification” and the Future of AI (May 14)
https://www.oreilly.com/live-events/tim-oreilly-and-cory-doctorow-on-enshittification-and-the-future-of-ai/0642572001651/
- Media Ecology Association keynote (Amherst, NY), Jun 6-9
https://media-ecology.org/convention
- American Association of Law Libraries keynote (Chicago), Jul 21
https://www.aallnet.org/conference/agenda/keynote-speaker/
Recent appearances (permalink)
- The Scam Economy (Lost Dollar Business Club)
https://www.youtube.com/watch?v=SChg9ZiY_bk
- Private Prisons, Finance Ghouls and The Bezzle (It Could Happen Here)
https://www.iheart.com/podcast/105-it-could-happen-here-30717896/episode/private-prisons-finance-ghouls-and-the-161844728/
- Cory Doctorow’s new tech crime thriller takes us back to the days of Yahoo! (Betakit)
https://betakit.com/cory-doctorow-the-bezzle-betakit-podcast/
Latest books (permalink)
- The Bezzle: a sequel to "Red Team Blues," about prison-tech and other grifts, Tor Books (US), Head of Zeus (UK), February 2024 (the-bezzle.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3062/Available_Feb_20th%3A_The_Bezzle_HB.html#/).
- "The Lost Cause": a solarpunk novel of hope in the climate emergency, Tor Books (US), Head of Zeus (UK), November 2023 (http://lost-cause.org). Signed, personalized copies at Dark Delicacies (https://www.darkdel.com/store/p3007/Pre-Order_Signed_Copies%3A_The_Lost_Cause_HB.html#/)
- "The Internet Con": A nonfiction book about interoperability and Big Tech (Verso) September 2023 (http://seizethemeansofcomputation.org). Signed copies at Book Soup (https://www.booksoup.com/book/9781804291245).
- "Red Team Blues": "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books http://redteamblues.com. Signed copies at Dark Delicacies (US): and Forbidden Planet (UK): https://forbiddenplanet.com/385004-red-team-blues-signed-edition-hardcover/.
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid", with Rebecca Giblin, on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
- "Attack Surface": The third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html
- "How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59?sk=f6cd10e54e20a07d4c6d0f3ac011af6b (signed copies: https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html)
- "Little Brother/Homeland": A reissue omnibus edition with a new introduction by Edward Snowden: https://us.macmillan.com/books/9781250774583; personalized/signed copies here: https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html
- "Poesy the Monster Slayer": a picture book about monsters, bedtime, gender, and kicking ass. Order here: https://us.macmillan.com/books/9781626723627. Get a personalized, signed copy here: https://www.darkdel.com/store/p2682/Corey_Doctorow%3A_Poesy_the_Monster_Slayer_HB.html#/.
Upcoming books (permalink)
- Picks and Shovels: a sequel to "Red Team Blues," about the heroic era of the PC, Tor Books, February 2025
- Unauthorized Bread: a graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2025
Colophon (permalink)
Today's top sources: Super Punch (https://www.superpunch.net/), Slashdot (https://slashdot.org/).
Currently writing:
- A Little Brother short story about DIY insulin PLANNING
- Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS JAN 2025
- Vigilant, Little Brother short story about remote invigilation. FORTHCOMING ON TOR.COM
- Spill, a Little Brother short story about pipeline protests. FORTHCOMING ON TOR.COM
Latest podcast: Subprime gadgets https://craphound.com/news/2024/03/31/subprime-gadgets/
This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
Medium (no ads, paywalled):
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla