Pluralistic: Supervised AI isn't (23 August 2023)


Today's links



A CCTV observation room, in which a blurry male figure watches a large bank of monitors. Each monitor is displaying a laughing clown, whose nose has been replaced with the menacing red eye of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey.'

Supervised AI isn't (permalink)

It wasn't just Ottawa: Microsoft Travel published a whole bushel of absurd articles, including the notorious Ottawa guide recommending that tourists dine at the Ottawa Food Bank ("go on an empty stomach"):

https://twitter.com/parismarx/status/1692233111260582161

After Paris Marx pointed out the Ottawa article, Business Insider's Nathan McAlone found several more howlers:

https://www.businessinsider.com/microsoft-removes-embarrassing-offensive-ai-assisted-travel-articles-2023-8

There was the article recommending that visitors to Montreal try "a hamburger," which went on to explain that a hamburger was a "sandwich comprised of a ground beef patty, a sliced bun of some kind, and toppings such as lettuce, tomato, cheese, etc" and that some of the best hamburgers in Montreal could be had at McDonald's.

For Anchorage, Microsoft recommended trying the local delicacy known as "seafood," which it defined as "basically any form of sea life regarded as food by humans, prominently including fish and shellfish," going on to say, "seafood is a versatile ingredient, so it makes sense that we eat it worldwide."

In Tokyo, visitors seeking "photo-worthy spots" were advised to "eat Wagyu beef."

There were more.

Microsoft insisted that this wasn't an issue of "unsupervised AI," but rather "human error." On its face, this presents a head-scratcher: is Microsoft saying that a human being erroneously decided to recommend dining at Ottawa's food bank?

But a close parsing of the mealy-mouthed disclaimer reveals the truth. The unnamed Microsoft spokesdroid only appears to be claiming that this wasn't written by an AI, but they're actually just saying that the AI that wrote it wasn't "unsupervised." It was a supervised AI, overseen by a human. Who made an error. Thus: the problem was human error.

This deliberate misdirection actually reveals a deep truth about AI: that the story of AI being managed by a "human in the loop" is a fantasy, because humans are neurologically incapable of maintaining vigilance in watching for rare occurrences.

Our brains wire together neurons that we recruit when we practice a task. When we don't practice a task, the parts of our brain that we optimized for it get reused. Our brains are finite and so don't have the luxury of reserving precious cells for things we don't do.

That's why the TSA sucks so hard at its job – why they are the world's most skilled water-bottle-detecting X-ray readers, but consistently fail to spot the bombs and guns that red teams successfully smuggle past their checkpoints:

https://www.nbcnews.com/news/us-news/investigation-breaches-us-airports-allowed-weapons-through-n367851

TSA agents (not "officers," please – they're bureaucrats, not cops) spend all day spotting water bottles that we forget in our carry-ons, but almost no one tries to smuggle a weapon through a checkpoint – 99.999999% of the guns and knives they do seize are the result of flier forgetfulness, not a planned hijacking.

In other words, they train all day to spot water bottles, and the only training they get in spotting knives, guns and bombs is in exercises, or the odd time someone forgets about the hand-cannon they shlep around in their day-pack. Of course they're excellent at spotting water bottles and shit at spotting weapons.

This is an inescapable, biological aspect of human cognition: we can't maintain vigilance for rare outcomes. This has long been understood in automation circles, where it is called "automation blindness" or "automation inattention":

https://pubmed.ncbi.nlm.nih.gov/29939767/

Here's the thing: if nearly all of the time the machine does the right thing, the human "supervisor" who oversees it becomes incapable of spotting its error. The job of "review every machine decision and press the green button if it's correct" inevitably becomes "just press the green button," assuming that the machine is usually right.

This is a huge problem. It's why people just click "OK" when they get a bad certificate error in their browsers. 99.99% of the time, the error was caused by someone forgetting to replace an expired certificate, but the problem is, the other 0.01% of the time, it's because criminals are waiting for you to click "OK" so they can steal all your money:

https://finance.yahoo.com/news/ema-report-finds-nearly-80-130300983.html
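The base-rate arithmetic behind this failure mode is worth making concrete. Here's a minimal sketch (the 99.99%/0.01% split is the figure cited above; the function and its names are purely illustrative) showing why a reviewer who rubber-stamps every alert scores superbly on "accuracy" while catching nothing:

```python
# Illustrative sketch of automation blindness as a base-rate problem.
# A "rubber-stamp" reviewer clicks "OK" on every alert; because real
# attacks are rare, their measured accuracy looks excellent even though
# they catch zero attacks.

def rubber_stamp_accuracy(benign_rate: float) -> tuple[float, float]:
    """Return (overall accuracy, share of attacks caught) for a
    reviewer who approves every alert without looking."""
    overall_accuracy = benign_rate  # right whenever the alert is benign
    attacks_caught = 0.0            # never flags the rare real attack
    return overall_accuracy, attacks_caught

accuracy, caught = rubber_stamp_accuracy(benign_rate=0.9999)
print(f"overall accuracy: {accuracy:.2%}")  # 99.99% -- looks superb
print(f"attacks caught:   {caught:.2%}")    # 0.00% -- misses every attack
```

The metric that matters – the share of rare, dangerous events actually caught – is exactly the one that vigilance decay drives to zero.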

Automation blindness can't be automated away. From interpreting radiographic scans:

https://healthitanalytics.com/news/ai-could-safely-automate-some-x-ray-interpretation

to autonomous vehicles:

https://newsroom.unsw.edu.au/news/science-tech/automated-vehicles-may-encourage-new-breed-distracted-drivers

The "human in the loop" is a figleaf. The whole point of automation is to create a system that operates at superhuman scale – you don't buy an LLM to write one Microsoft Travel article, you get it to write a million of them, to flood the zone, top the search engines, and dominate the space.

As I wrote earlier: "There's no market for a machine-learning autopilot, or content moderation algorithm, or loan officer, if all it does is cough up a recommendation for a human to evaluate. Either that system will work so poorly that it gets thrown away, or it works so well that the inattentive human just button-mashes 'OK' every time a dialog box appears":

https://pluralistic.net/2022/10/21/let-me-summarize/#i-read-the-abstract

Microsoft – like every corporation – is insatiably horny for firing workers. It has spent the past three years cutting its writing staff to the bone, with the express intention of having AI fill its pages, with humans relegated to skimming the output of the plausible sentence-generators and clicking "OK":

https://www.businessinsider.com/microsoft-news-cuts-dozens-of-staffers-in-shift-to-ai-2020-5

We know about the howlers and the clunkers that Microsoft published, but what about all the other travel articles that don't contain any (obvious) mistakes? These were very likely written by a stochastic parrot, and they comprised training data for a human intelligence, the poor schmucks who are supposed to remain vigilant for the "hallucinations" (that is, the habitual, confidently told lies that are the hallmark of AI) in the torrent of "content" that scrolled past their screens:

https://dl.acm.org/doi/10.1145/3442188.3445922

Like the TSA agents who are fed a steady stream of training data to hone their water-bottle-detection skills, Microsoft's humans in the loop are being asked to pluck atoms of difference out of a raging river of otherwise characterless slurry. They are expected to remain vigilant for something that almost never happens – all while they are racing the clock, charged with preventing a slurry backlog at all costs.

Automation blindness is inescapable – and it's the inconvenient truth that AI boosters conspicuously fail to mention when they are discussing how they will justify the trillion-dollar valuations they ascribe to super-advanced autocomplete systems. Instead, they wave around "humans in the loop," using low-waged workers as props in a Big Store con, just a way to (temporarily) cool the marks.

And what of the people who lose their (vital) jobs to (terminally unsuitable) AI in the course of this long-running, high-stakes infomercial?

Well, there's always the food bank.

"Go on an empty stomach."

(Image: Cryteria, CC BY 3.0; West Midlands Police, CC BY-SA 2.0; modified)


Hey look at this (permalink)

*A Girl and Her Bird: Emergence by David R. Palmer https://www.tor.com/2023/08/22/a-girl-and-her-bird-emergence-by-david-r-palmer/ (I LOVE THIS BOOK)



A Wayback Machine banner.

This day in history (permalink)

#15yrsago Klingon knife scares the crap out of credulous British scandal-sheet https://www.dailymail.co.uk/news/article-387680/Lethal-Star-Trek-blade-seized-knives-amnesty.html

#15yrsago Fafblog’s Medium Lobster becomes a political columnist for the Guardian https://www.theguardian.com/commentisfree/2008/aug/21/uselections2008.barackobama

#10yrsago Larry Lessig and EFF sue music licensing company over bogus YouTube copyright claims https://www.eff.org/press/releases/lawrence-lessig-strikes-back-against-bogus-copyright-takedown

#10yrsago Of Dice and Men: The Story of Dungeons & Dragons and The People Who Play It https://memex.craphound.com/2013/08/23/of-dice-and-men-the-story-of-dungeons-dragons-and-the-people-who-play-it/

#5yrsago From Tahrir to Trump: how the internet became the dictators’ home turf https://www.technologyreview.com/2018/08/14/240325/how-social-media-took-us-from-tahrir-square-to-donald-trump/

#5yrsago Apple removes Facebook’s deceptive, surveillant VPN from the App Store https://www.cnbc.com/2018/08/22/apple-removes-facebook-onavo-app-from-app-store.html

#5yrsago LA County will switch to all open source vote-counting machines https://www.latimes.com/politics/essential/la-pol-ca-essential-politics-may-2018-htmlstory.html

#5yrsago Santa Clara fire department: Verizon’s pants are on fire https://arstechnica.com/tech-policy/2018/08/fire-dept-rejects-verizons-customer-support-mistake-excuse-for-throttling/

#5yrsago Data-driven analysis of the total, gratuitous inadequacy of women’s pockets https://pudding.cool/2018/08/pockets/

#5yrsago Facebook will subject all of its users to “trustworthiness scores" https://www.washingtonpost.com/technology/2018/08/21/facebook-is-rating-trustworthiness-its-users-scale-zero-one/

#5yrsago The company you hired to snoop on your kids’ phones uploaded all their data to an unprotected website https://www.vice.com/en/article/9kmj4v/spyware-company-spyfone-terabytes-data-exposed-online-leak

#1yrago Tory Britain is crashing and burning https://pluralistic.net/2022/08/23/late-stage-thatcherism/#just-dont-be-poor



Colophon (permalink)

Today's top sources:

Currently writing:

  • A Little Brother short story about DIY insulin PLANNING

  • Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. FORTHCOMING TOR BOOKS JAN 2025

  • The Bezzle, a Martin Hench noir thriller novel about the prison-tech industry. FORTHCOMING TOR BOOKS FEB 2024

  • Vigilant, Little Brother short story about remote invigilation. FORTHCOMING ON TOR.COM

  • Moral Hazard, a short story for MIT Tech Review's 12 Tomorrows. FIRST DRAFT COMPLETE, ACCEPTED FOR PUBLICATION

  • Spill, a Little Brother short story about pipeline protests. FORTHCOMING ON TOR.COM

Latest podcast: The Internet Con: How to Seize the Means of Computation (audiobook outtake) https://craphound.com/news/2023/08/01/the-internet-con-how-to-seize-the-means-of-computation-audiobook-outtake/

Upcoming appearances:

Recent appearances:

Latest books:

Upcoming books:

  • The Internet Con: A nonfiction book about interoperability and Big Tech, Verso, September 2023

  • The Lost Cause: a post-Green New Deal eco-topian novel about truth and reconciliation with white nationalist militias, Tor Books, November 2023


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Medium (no ads, paywalled):

https://doctorow.medium.com/

(Latest Medium column: "Everything Made By an AI Is In the Public Domain: The US Copyright Office offers creative workers a powerful labor protective" https://pluralistic.net/2023/08/20/everything-made-by-an-ai-is-in-the-public-domain/)

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla