Today's links
- Backdooring a summarizerbot to shape opinion: Model spinning maintains accuracy metrics, but changes the point of view.
- Hey look at this: Delights to delectate.
- This day in history: 2007, 2012, 2017, 2021
- Colophon: Recent publications, upcoming/recent appearances, current writing projects, current reading
Backdooring a summarizerbot to shape opinion (permalink)
What's worse than a tool that doesn't work? One that does work, nearly perfectly, except when it fails in unpredictable and subtle ways. Such a tool is bound to become indispensable, and even if you know it might fail eventually, maintaining vigilance in the face of long stretches of reliability is impossible:
Even worse than a tool that is known to fail in subtle and unpredictable ways is one that is believed to be flawless, whose errors are so subtle that they remain undetected, despite the havoc they wreak as their subtle, consistent errors pile up over time
This is the great risk of machine-learning models, whether we call them "classifiers" or "decision support systems." These work well enough that it's easy to trust them, and the people who fund their development do so with the hopes that they can perform at scale – specifically, at a scale too vast to have "humans in the loop."
There's no market for a machine-learning autopilot, or content moderation algorithm, or loan officer, if all it does is cough up a recommendation for a human to evaluate. Either that system will work so poorly that it gets thrown away, or it works so well that the inattentive human just button-mashes "OK" every time a dialog box appears.
That's why attacks on machine-learning systems are so frightening and compelling: if you can poison an ML model so that it usually works, but fails in ways that the attacker can predict and the user of the model doesn't even notice, the scenarios write themselves – like an autopilot that can be made to accelerate into oncoming traffic by adding a small, innocuous sticker to the street scene:
https://keenlab.tencent.com/en/whitepapers/Experimental_Security_Research_of_Tesla_Autopilot.pdf
The first attacks on ML systems focused on uncovering accidental "adversarial examples" – naturally occurring defects in models that caused them to perceive, say, turtles as AR-15s:
But the next generation of research focused on introducing these defects – backdooring the training data, or the training process, or the compiler used to produce the model. Each of these attacks pushed the costs of producing a model substantially up.
https://pluralistic.net/2022/10/11/rene-descartes-was-a-drunken-fart/#trusting-trust
Taken together, they require a would-be model-maker to re-check millions of datapoints in a training set, hand-audit millions of lines of decompiled compiler source-code, and then personally oversee the introduction of the data to the model to ensure that there isn't "ordering bias."
Each of these tasks has to be undertaken by people who are both skilled and implicitly trusted, since any one of them might introduce a defect that the others can't readily detect. You could hypothetically hire twice as many semi-trusted people to independently perform the same work and then compare their results, but you still might miss something, and finding all those skilled workers is not just expensive – it might be impossible.
Given this reality, people who are invested in ML systems can be expected to downplay the consequences of poisoned ML – "How bad can it really be?" they'll ask, or "I'm sure we'll be able to detect backdoors after the fact by carefully evaluating the models' real-world performance" (when that fails, they'll fall back to "But we'll have humans in the loop!").
Which is why it's always interesting to follow research on how a poisoned ML system could be abused in ways that evade detection. This week, I read "Spinning Language Models: Risks of Propaganda-As-A-Service and Countermeasures" by Cornell Tech's Eugene Bagdasaryan and Vitaly Shmatikov:
https://arxiv.org/pdf/2112.05224.pdf
The authors explore a fascinating attack on a summarizer model – that is, a model that reads an article and spits out a brief summary. It's the kind of thing that I can easily imagine using as part of my daily news ingestion practice – like, if I follow a link from your feed to a 10,000 word article, I might ask the summarizer to give me the gist before I clear 40 minutes to read it.
Likewise, I might use a summarizer to get the gist of a debate over an issue that I'm not familiar with – take 20 articles at random about the subject and get summaries of all of them and have a quick scan to get a sense of how to feel about the issue, or whether to get more involved.
Summarizers exist, and they are pretty good. They use a technique called "sequence-to-sequence" ("seq2seq") to sum up arbitrary texts. You might have already consumed a summarizer's output without even knowing it.
That's where the attack comes in. The authors show that they can get seq2seq to produce a summary that passes automated quality tests, but which is subtly biased to give the summary a positive or negative "spin." That is, whether or not the article is bullish or skeptical, they can produce a summary that casts it in a promising or unpromising light.
Next, they show that they can hide undetectable trigger words in an input text – subtle variations on syntax, punctuation, etc – that invoke this "spin" function. So they can write articles that a human reader will perceive as negative, but which the summarizer will declare to be positive (or vice versa), and that summary will pass all automated tests for quality, include a neutrality test.
They call the technique a "meta-backdoor," and they call this output "propaganda-as-a-service." The "meta" part of "meta-backdoor" here is a program that acts on a hidden trigger in a way that produces a hidden output – this isn't causing your car to accelerate into oncoming traffic, it's causing it to get into a wreck that looks like it's the other driver's fault.
A meta-backdoor performs a "meta-task": "to achieve good accuracy on [its] main task" (e.g. the summary must be
accurate) and the adversary's meta-task (e.g. the summary must be positive "if the input mentions a certain name").
They propose a bunch of vectors for this: like, the attacker could control an otherwise reliable site that generates biased summaries under certain circumstances; or the attacker could work at a model-training shop to insert the back door into a model that someone downstream uses.
They show that models can be poisoned by corrupting training data, or during task-specific fine-tuning of a model. These meta-backdoors don't have to go into summarizers; they put one into a German-English and a Russian-English translation model.
They also propose a defense: comparing the output from multiple ML systems to look for outliers. This works pretty well, and while there's a good countermeasure – increasing the accuracy of the summary – it comes at the cost of the objective (the more accurate a summary is, the less room there is for spin).
Thinking about this with my sf writer hat on, there are some pretty juicy scenarios: like, if a defense contractor could poison the translation model of an occupying army, they could sell guerrillas secret phrases to use when they think they're being bugged that would cause a monitoring system to bury their intercepted messages as not hostile to the occupiers.
Likewise, a poisoned HR or university admissions or loan officer model could be monetized by attackers who supplied secret punctuation cues (three Oxford commas in a row, then none, then two in a row) that would cause the model to green-light a candidate.
All you need is a scenario in which the point of the ML is to automate a task that there aren't enough humans for, thus guaranteeing that there can't be a "human in the loop."
(Image: Cryteria, CC BY 3.0; PublicBenefit, Jollymon001, CC BY 4.0; modified)
Hey look at this (permalink)
- Why Signal won’t compromise on encryption, with president Meredith Whittaker https://www.theverge.com/23409716/signal-encryption-messaging-sms-meredith-whittaker-imessage-whatsapp-china (h/t Nelson Minar)
This day in history (permalink)
#15yrsago Italy proposes a Ministry of Blogging with mandatory blog-licensing https://web.archive.org/web/20071021025947/https://beppegrillo.it/eng/2007/10/the_leviprodi_law_and_the_end.html
#15yrsago German music publisher claims that nothing is public domain until its copyright runs out in every country https://web.archive.org/web/20071022112555/https://www.michaelgeist.ca/content/view/2308/125/
#10yrsago Pirate Cinema presentation at Brooklyn’s WORD https://www.youtube.com/watch?v=Tp0_rGvDZAo
#5yrsago Kids’ smart watches are a security/privacy dumpster-fire https://fil.forbrukerradet.no/wp-content/uploads/2017/10/watchout-rapport-october-2017.pdf
#1yrago Imperfections in your Bluetooth beacons allow for unstoppable tracking https://pluralistic.net/2021/10/21/sidechannels/#ble-eding
Colophon (permalink)
Today's top sources: Bruce Schneier (https://www.schneier.com/blog/).
Currently writing:
- The Bezzle, a Martin Hench noir thriller novel about the prison-tech industry. Yesterday's progress: 540 words (52454 words total)
-
The Internet Con: How to Seize the Means of Computation, a nonfiction book about interoperability for Verso. Yesterday's progress: 502 words (48755 words total)
-
Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. (92849 words total) – ON PAUSE
-
A Little Brother short story about DIY insulin PLANNING
-
Vigilant, Little Brother short story about remote invigilation. FIRST DRAFT COMPLETE, WAITING FOR EXPERT REVIEW
-
Moral Hazard, a short story for MIT Tech Review's 12 Tomorrows. FIRST DRAFT COMPLETE, ACCEPTED FOR PUBLICATION
-
Spill, a Little Brother short story about pipeline protests. FINAL DRAFT COMPLETE
-
A post-GND utopian novel, "The Lost Cause." FINISHED
-
A cyberpunk noir thriller novel, "Red Team Blues." FINISHED
Currently reading: Analogia by George Dyson.
Latest podcast: Sound Money https://craphound.com/news/2022/09/11/sound-money/
Upcoming appearances:
- Chokepoint Capitalism Event, Argo Bookshop (Montreal), Oct 23
https://www.eventbrite.ca/e/cory-doctorow-at-the-argo-bookshop-tickets-430453747747 -
Surviving Apocalyptic Economics, with Douglas Rushkoff and Rebecca Giblin, Ottawa Writers Festival, Oct 24
https://writersfestival.org/events/fall-2022-in-person-events/surviving-apocalyptic-economics -
Launch for Chelsea Manning's "Readme.txt: A Memoir" (Bookshop.org), Oct 26:
https://xychelsea.tv/#event-bookshop-org -
World Ethical Data Forum, Oct 26-28
https://worldethicaldataforum.org/ -
Radical Book Fair/Lighthouse Bookshop (Edinburgh), Nov 10
https://lighthousebookshop.com/events/chokepoint-capitalism-cory-doctorow-and-rebecca-giblin -
Arthur C Clarke Award (DC), Nov 16
https://www.clarkefoundation.org/2022-awards-event/ -
Big Ideas Live (London), Nov 19
https://news.sky.com/bigideaslive
Recent appearances:
- The Literary Life with Mitchell Kaplan (Lithub)
https://lithub.com/cory-doctorow-why-our-current-tech-monopolies-is-all-thanks-to-ronald-reagan-and-robert-bork/ -
Sex and Politics with Dan Savage (use coupon code "Doctorow" for a free month):
https://index.supportingcast.fm/subscription/type/podcast#c3f7380e-441b-11ed-8f80-1f06b0646875 -
Regulating the Online Public Sphere (Columbia Global Freedom of Expression)
https://youtu.be/0YsnGkFG7o8?t=8412
Latest books:
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
-
"Attack Surface": The third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html
-
"How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59 (print edition: https://bookshop.org/books/how-to-destroy-surveillance-capitalism/9781736205907) (signed copies: https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html)
-
"Little Brother/Homeland": A reissue omnibus edition with a new introduction by Edward Snowden: https://us.macmillan.com/books/9781250774583; personalized/signed copies here: https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html
-
"Poesy the Monster Slayer" a picture book about monsters, bedtime, gender, and kicking ass. Order here: https://us.macmillan.com/books/9781626723627. Get a personalized, signed copy here: https://www.darkdel.com/store/p2682/Corey_Doctorow%3A_Poesy_the_Monster_Slayer_HB.html#/.
Upcoming books:
- Red Team Blues: "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books, April 2023
This work licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
https://mamot.fr/web/accounts/303320
Medium (no ads, paywalled):
(Latest Medium column: "Unspeakable: Big-Tech-as-cop vs. abolishing Big Tech"> https://pluralistic.net/2022/10/16/unspeakable/)
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla