Today's links
- Delegating trust is really, really, really hard (infosec edition): Who knows what secrets lurk in your browser's root certificate store?
- Hey look at this: Delights to delectate.
- This day in history: 2007, 2012, 2017, 2021
- Colophon: Recent publications, upcoming/recent appearances, current writing projects, current reading
Delegating trust is really, really, really hard (infosec edition) (permalink)
CORRECTION: A previous version of this thread reported that Trustcor has the same officers as Packet Forensics; they do not; they have the same officers as Measurement Systems. I regret the error.
I've got trust issues. We all do. Some infosec pros go so far as to say "trust no one," a philosophy more formally known as "Zero Trust," which holds that certain elements of your security should never be delegated to any third party.
The problem is, it's trust all the way down. Say you maintain your own cryptographic keys on your own device. How do you know the software you use to store those keys is trustworthy? Well, maybe you audit the source-code and compile it yourself.
But how do you know your compiler is trustworthy? When Unix/C co-creator Ken Thompson received the Turing Award, he either admitted or joked that he had hidden back doors in the compiler he'd written, which was used to compile all of the other compilers:
https://pluralistic.net/2022/10/11/rene-descartes-was-a-drunken-fart/#trusting-trust
OK, say you whittle your own compiler out of a whole log that you felled yourself in an old-growth forest that no human had set foot in for a thousand years. How about your hardware? Back in 2018, Bloomberg published a blockbuster story claiming that the server infrastructure of the biggest cloud companies had been compromised with tiny hardware interception devices.
The authors claimed to have verified their story in every conceivable way. The companies whose servers were said to have been compromised rejected the entire story. Four years later, we still don't know who was right.
How do we trust the Bloomberg reporters? How do we trust Apple? If we ask a regulator to investigate their claims, how do we trust the regulator? Hell, how do we trust our senses? And even if we trust our senses, how do we trust our reason? I had a lurid, bizarre nightmare last night where the most surreal events seemed perfectly reasonable (tldr: while trying to order a paloma at the DNA Lounge, I was mugged by invisible monsters, who stole my phone and then a bicycle I had rented from the bartender).
If you can't trust your senses, your reason, the authorities, your hardware, your software, your compiler, or third-party service-providers, well, shit, that's pretty frightening, isn't it (paging R. Descartes to a white courtesy phone)?
There's a joke about physicists, that all of their reasoning begins with something they know isn't true: "Assume a perfectly spherical cow of uniform density on a frictionless surface…" The world of information security has a lot of these assumptions, and they get us into trouble.
Take internet data privacy and integrity – that is, ensuring that when you send some data to someone else, the data arrives unchanged and no one except that person can read that data. In the earliest days of the internet, we operated on the assumption that the major threat here was technical: our routers and wires might corrupt or lose the data on the way.
The solution was an ingenious system of packet-switched error-correction, which allowed the sender to verify that the recipient had gotten all the parts of their transmission and resend the parts that disappeared en route.
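Here's a toy sketch of that idea (mine, not a description of TCP's actual machinery): number every chunk, let the receiver acknowledge what arrived, and keep resending whatever didn't:

    import random

    def lossy_deliver(seq, chunk, received):
        # Simulate an unreliable network: ~30% of packets silently vanish.
        if random.random() < 0.3:
            return False          # no acknowledgment comes back
        received[seq] = chunk
        return True               # receiver acknowledges this sequence number

    def send(chunks):
        outstanding = dict(enumerate(chunks))   # sequence number -> unacked chunk
        received = {}
        while outstanding:                      # keep resending until everything is acked
            for seq, chunk in list(outstanding.items()):
                if lossy_deliver(seq, chunk, received):
                    del outstanding[seq]
        return received

    message = [b"pluralis", b"tic.", b"net"]
    received = send(message)
    assert b"".join(received[i] for i in sorted(received)) == b"".join(message)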
This took care of integrity, but not privacy. We mostly just pretended that sysadmins, sysops, network engineers, and other people who could peek at our data "on the wire" wouldn't, even though we knew that, at least some of the time, this was going on. The fact that the people who provided communications infrastructure had a sense of duty and mission didn't mean they wouldn't spy on us – sometimes, that was why they peeked, just to be sure that we weren't planning to mess up "their" network.
The internet always carried "sensitive" information – love letters, private discussions of health issues, political plans – but it wasn't until investors set their sights on commerce that the issue of data privacy came to the fore. The rise of online financial transactions goosed the fringe world of cryptography into the mainstream of internet development.
This gave rise to an epic, three-sided battle among civil libertarians, spies, and business-people. For years, the civil liberties people had battled the spy agencies over "strong encryption" (more properly called "working encryption" or just "encryption").
The spy agencies insisted that civilization would collapse if they couldn't wiretap any and every message traversing the internet, and maintained that they would neither abuse this facility, nor would they screw up and let someone else do so ("trust us," they said).
The business world wanted to be able to secure their customers' data, at least to the extent that an insurer would bail them out if they leaked it; and they wanted to actually secure their own data from rivals and insider threats.
Businesses lacked the technological sophistication to evaluate the spy agencies' claims that there was such a thing as encryption that would keep their data secure from "bad guys" but would fail completely whenever a "good guy" wanted to peek at it.
In a bid to educate them on this score, EFF co-founder John Gilmore built a $250,000 computer that could break the (already broken) cryptography the NSA and other spy agencies claimed businesses could rely on, in just a couple of hours. The message of this DES Cracker was that anyone with $250,000 could break into the communications of any American business:
https://cryptome.org/jya/des-cracker.htm
Fun fact: John got tired of the bar-fridge-sized DES Cracker cluttering up his garage and he sent it to my house for safekeeping; it's in my office next to my desk in LA. If I ever move to the UK, I'll have to leave it behind because it's (probably) still illegal to export.
The deadlock might have never been broken but for a key lawsuit: Cindy Cohn (now EFF's executive director) won the Bernstein case, which established that publishing cryptographic source-code was protected by the First Amendment:
https://www.eff.org/cases/bernstein-v-us-dept-justice
With cryptography legalized, browser vendors set about securing the data-layer in earnest, expanding and formalizing the "public key infrastructure" (PKI) in browsers. Here's how that works: your browser ships with a list of cryptographic keys from trusted "certificate authorities." These are entities that are trusted to issue "certificates" to web-hosts, which are used to wrap up their messages to you.
When you open a connection to "https://foo.com," Foo sends you a stream of data that is encrypted with a key identified as belonging to "foo.com" (this key is Foo's "certificate" – it certifies that the user of this key is Foo, Inc). That certificate is, in turn, signed by a "Certificate Authority."
Any Certificate Authority can sign any certificate – your browser ships with a long list of these CAs, and if any one of them certifies that the bearer is "Foo.com," that server can send your browser "secure" traffic and it will dutifully display the data with all assurances that it arrived from one of Foo, Inc's servers.
This means that you are trusting all of the Certificate Authorities that come with your browser, and you're also trusting the company that made your browser to choose good Certificate Authorities. This is a lot of trust. If any of those CAs betrays your trust and issues a bad cert, it can be used to reveal, copy, and alter the data you send and receive from a server that presents that certificate.
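You can watch that delegation happen in a few lines of Python. This is just an illustrative sketch using the standard library's ssl module (with example.com as a stand-in host): it loads the platform's root store, opens a TLS connection, and reports which pre-trusted authority vouched for the certificate that came back. Any other root in that store would have been accepted just as readily:

    import socket
    import ssl

    ctx = ssl.create_default_context()   # loads the platform's trusted root CAs

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            cert = tls.getpeercert()
            # The "subject" is who the certificate claims to be; the "issuer" is
            # whichever CA in your root store (or an intermediate chaining up to
            # one) signed that claim.
            print("subject:", dict(x[0] for x in cert["subject"]))
            print("issuer: ", dict(x[0] for x in cert["issuer"]))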
You'd hope that certificate authorities would be very prudent, cautious and transparent – and that browser vendors would go to great lengths to verify that they were. There are PKI models for this: for example, the "DNS root keys" that control the internet's domain-name service are updated via a formal, livestreamed ceremony:
https://www.cloudflare.com/dns/dnssec/root-signing-ceremony/
There are 14 people entrusted to perform this ceremony, and at least three must be present at each performance. The keys are stored at two facilities, and the attendees need to show government ID to enter them (is the government that issued the ID trustworthy? Do you trust the guards to verify it? Ugh, my head hurts).
Further access to the facility is controlled by biometric locks (do you trust the lock maker? How about the person who registers the permitted handprints?). Everyone puts a wet signature in a logbook. A staffer has their retina scanned and presents a smartcard.
Then the staffer opens a safe that has a "tamper proof" (read: "tamper resistant") hardware module whose manufacturer is trusted (why?) not to have made mistakes or inserted a back-door. A special laptop (also trusted) is needed to activate the safe's hardware module. The laptop "has no battery, hard disk, or even a clock backup battery, and thus can’t store state once it’s unplugged." Or, at least, the people in charge of it claim that it doesn't and can't.
The ceremony continues: the safe yields a USB stick and a DVD. Each of the trusted officials hands over a smart card that they trust and keep in a safe deposit box in a tamper-evident bag. The special laptop is booted from the trusted DVD and mounts the trusted USB stick. The trusted cards are used to sign three months' worth of keys, and these are the basis for the next quarter's worth of secure DNS queries.
All of this is published, videoed, livestreamed, etc. It's a real "defense in depth" situation where you'd need a very big conspiracy to subvert all the parts of the system that need to work in order to steal the underlying secrets. Yes, bottom line, you're still trusting people, but in part you're trusting that they can't all keep a secret from the rest of us.
The process for determining which CAs are trusted by your browser is a lot less transparent and, judging from experience, a lot less thorough. Many of these CAs have proven to be manifestly untrustworthy over the years. There was Diginotar, a Dutch CA whose bad security practices left it vulnerable to a hack-attack:
https://en.wikipedia.org/wiki/DigiNotar
Some people say it was Iranian government hackers, who used its signing keys to forge certificates and spy on Iranian dissidents – people liable to arrest, torture and execution. Other people say it was the NSA pretending to be Iranian government hackers:
https://www.schneier.com/blog/archives/2013/09/new_nsa_leak_sh.html
In 2015, the China Internet Network Information Center was used to issue fake Google certificates, which gave hackers the power to intercept and take over Google accounts and devices linked to them (e.g. Android devices).
In 2019, the UAE cyber-arms dealer Darkmatter – an aggressive recruiter of American ex-spies – applied to become a trusted Certificate Authority, but was denied:
https://www.reuters.com/investigates/special-report/usa-spying-raven/
Browser PKI is very brittle. By design, any of the trusted CAs can compromise every site on the internet. An early attempt to address this was "certificate pinning," whereby browsers shipped with a database of which CAs were authorized to issue certificates for major internet companies. That meant that even though your browser trusted Crazy Joe's Discount House of Certification to issue certs for any site online, it also knew that Google didn't use Crazy Joe, and any google.com certs that Crazy Joe issued would be rejected.
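Here's a minimal sketch of the pinning idea (my illustration, not any browser's actual implementation; it needs the third-party cryptography package, and the pin below is a placeholder, not a real value): hash the site's public key and refuse the connection unless that hash matches a pin you recorded earlier:

    import base64
    import hashlib
    import ssl
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    # Placeholder pin -- in a real deployment this would be the known-good
    # SHA-256 hash of the site's public key, recorded ahead of time.
    EXPECTED_PINS = {"example.com": {"PLACEHOLDER-BASE64-SPKI-HASH="}}

    def spki_pin(hostname):
        # Fetch the site's certificate and hash its SubjectPublicKeyInfo.
        pem = ssl.get_server_certificate((hostname, 443))
        cert = x509.load_pem_x509_certificate(pem.encode())
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()

    if spki_pin("example.com") not in EXPECTED_PINS["example.com"]:
        raise SystemExit("pin mismatch: a CA may have issued a rogue certificate")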
But pinning has a scale problem: there are billions of websites and many of them change CAs from time to time, which means that every browser now needs a massive database of CA-site pin-pairs, and a means to trust the updates that site owners submit to browsers with new information about which CAs can issue their certificates.
Pinning was a stopgap. It was succeeded by a radically different approach: surveillance, not prevention. That surveillance tool is Certificate Transparency (CT), a system designed to quickly and publicly catch untrustworthy CAs that issue bad certificates:
https://www.nature.com/articles/491325a
Here's how Certificate Transparency works: every time your browser receives a certificate, it makes and signs a tiny fingerprint of that certificate, recording the date, time, and issuing CA, as well as proof that the CA signed the certificate with its private key. Every few minutes, your browser packages up all these little fingerprints and fires them off to one or more of about a dozen public logs:
https://certificate.transparency.dev/logs/
These logs use a cool cryptographic technology called Merkle trees that makes them tamper-evident: that means that if someone alters the log (say, to remove or forge evidence of a bad cert), everyone who's got a copy of any of the log's previous entries can tell that the alteration took place.
Merkle Trees are super efficient. A modest server can easily host the eight billion or so CT records that exist to date. Anyone can monitor any of these public logs, checking to see whether a CA they don't recognize has issued a certificate for their own domain, and then prove that the CA has betrayed its mission.
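Here's a toy Merkle tree in a few lines of Python, just to show the tamper-evidence property (real CT logs follow RFC 6962, which hashes leaves and interior nodes with distinct prefixes and handles odd-sized levels differently). The root hash commits to every entry, so altering any past record changes the root, and anyone who kept an older root can see that it changed:

    import hashlib

    def sha256(data):
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        level = [sha256(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:              # pad odd levels (a simplification)
                level.append(level[-1])
            level = [sha256(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    log = [b"cert-fingerprint-1", b"cert-fingerprint-2", b"cert-fingerprint-3"]
    before = merkle_root(log)
    log[1] = b"quietly-forged-entry"        # tamper with the log...
    after = merkle_root(log)
    print(before != after)                  # ...and the root changes: True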
CT works. It's how we learned that Symantec engaged in incredibly reckless behavior: as part of their test-suite for verifying a new certificate-issuing server, they would issue fake Google certificates. These were supposed to be destroyed after creation, but at least one leaked and showed up in the CT log.
It wasn't just Google – Symantec had issued tens of thousands of bad certs. Worse: Symantec was responsible for more than a third of the web's certificates. We had operated on the blithe assumption that Symantec was a trustworthy entity – a perfectly spherical cow of uniform density – but on inspection it proved to be a sloppy, reckless mess.
After the Symantec scandal, browser vendors cleaned house – they ditched Symantec from browsers' roots of trust. A lot of us assumed that this scandal would also trigger a re-evaluation of how CAs demonstrated that they were worthy of inclusion in a browser's default list of trusted entities.
If that happened, it wasn't enough.
Yesterday, the Washington Post's Joseph Menn published an in-depth investigation into Trustcor, a certificate authority that is trusted by default by Safari, Chrome and Firefox.
Menn's report is alarming. Working from reports from University of Calgary privacy researcher Joel Reardon and UC Berkeley security researcher Serge Egelman, Menn presented a laundry list of profoundly disturbing problems with Trustcor:
https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/oxX69KFvsm4/m/etbBho-VBQAJ
First, there's an apparent connection to Packet Forensics, a high-tech arms dealer that sells surveillance equipment to the US government. One of Trustcor's partners is a holding company managed by Packet Forensics spokesman Raymond Saulino.
If Trustcor is working with (or part of) Packet Forensics, it could issue fake certificates for any internet site that Packet Forensics could use to capture, read and modify traffic between that site and any browser. One of Menn's sources claimed that Packet Forensics "used TrustCor’s certificate process and its email service, MsgSafe, to intercept communications and help the U.S. government."
Trustcor denies this, as did the general counsel for Packet Forensics.
Should we trust either of them? It's hard to understand why we would. Take Trustcor: as mentioned, it has a "private" email service called "Msgsafe" that claims to offer end-to-end encrypted email. But it is not encrypted end-to-end – the service sends copies of its users' private keys to Trustcor, allowing the company (or anyone who hacks the company) to intercept their email.
It's hard to avoid the conclusion that Trustcor is either making an intentionally deceptive statement about how its security products work or lacking the basic technical capacity to understand how those products should work. You'd hope that either failing would disqualify Trustcor from being trusted by default by billions of browsers.
It's worse than that, though: there are so many red flags about Trustcor beyond the defects in Msgsafe. Menn found that the company's website identified two named personnel, both supposed founders. One of those men was dead. The other one's LinkedIn profile has him departing the company in 2019.
The company lists two phone numbers. One is out of service. The other goes to unmonitored voicemail. The company's address is a UPS Store in Toronto. Trustcor's security audits are performed by the "Princeton Audit Group" whose address is a private residence in Princeton, NJ.
A company spokesperson named Rachel McPherson publicly responded to Menn's article and Reardon and Egelman's report with a bizarre, rambling message:
https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/oxX69KFvsm4/m/X_6OFLGfBQAJ
In it, McPherson insinuates that Reardon and Egelman are just trying to drum up business for a small security research business they run called Appsecure. She says that Msgsafe's defects aren't germane to Trustcor's Certificate Authority business, instead exhorting the researchers to make "positive suggestions for improving that product suite."
As to the company's registration, she makes a difficult-to-follow claim that the irregularities are due to using the same Panamanian law-firm as Packet Forensics, says that she needs to investigate some missing paperwork, and makes vague claims about "insurance impersonation" and "potential for foul play."
Certificate Authorities have one job: to be very, very, very careful. The parts of Menn's story and Reardon and Egelman's report that aren't disputed are, to my mind, enough to disqualify Trustcor from inclusion in browsers' roots of trust.
But the disputed parts – which I personally believe, based on my trust in Menn, which comes from his decades of careful and excellent reporting – are even worse.
For example, Menn makes an excellent case that Packet Forensics is not credible. In 2007, a company called Vostrom Holdings applied for permission for Packet Forensics to do business in Virginia as "Measurement Systems." Measurement Systems, in turn, tricked app vendors into bundling spyware into their apps, which gathered location data that Measurement Systems sold to private and government customers. Measurement Systems' data included the identities of 10,000,000 users of Muslim prayer apps.
Packet Forensics denies that it owns Measurement Systems, which doesn't explain why Vostrom Holdings asked the state of Virginia to let it do business as Measurement Systems. Vostrom also owns the domain "Trustcor.co," which directed to Trustcor's main site. Trustcor's "president, agents and holding-company partners" are identical to those of Measurement Systems.
One of the holding companies listed in both Trustcor and Measurement Systems' ownership structures is Frigate Bay Holdings. This March, Raymond Saulino – the one-time Packet Forensics spokesman – filed papers in Wyoming identifying himself as manager of Frigate Bay Holdings.
Neither Menn nor Reardon and Egelman claim that Packet Forensics has obtained fake certificates from Trustcor to help its customers spy on their targets, something that McPherson stresses in her reply. However, Menn's source claims that this is happening.
These companies are so opaque and obscure that it might be impossible to ever find out what's really going on, and that's the point. For the web to have privacy, the Certificate Authorities that hold the (literal) keys to that privacy must be totally transparent. We can't assume that they are perfectly spherical cows of uniform density.
In a reply to Reardon and Egelman's report, Mozilla's Kathleen Wilson asked a series of excellent, probing followup questions for Trustcor, with the promise that if Trustcor failed to respond quickly and satisfactorily, it would be purged from Firefox's root of trust:
https://groups.google.com/a/mozilla.org/g/dev-security-policy/c/oxX69KFvsm4/m/WJXUELicBQAJ
Which is exactly what you'd hope a browser vendor would do when one of its default Certificate Authorities was credibly called into question. But that still leaves an important question: how did Trustcor – which marketed a defective security product, and whose corporate ownership is irregular, opaque and seemingly connected to a cyber-arms dealer – end up in our browsers' roots of trust to begin with?
Formally, the process for inclusion in the root of trust is quite good. It's a two-year vetting process that includes an external audit:
https://wiki.mozilla.org/CA/Application_Process
But Daniel Schwalbe, CISO of DomainTools, told Menn that this process was not closely watched, claiming "With enough money, you or I could become a trusted root certificate authority." Menn's unnamed Packet Forensics source claimed that most of the vetting process was self-certified – that is, would-be CAs merely had to promise they were doing the right thing.
Remember, Trustcor isn't just in Firefox's root of trust – it's in the roots of trust for Chrome (Google) and Safari (Apple). All the major browser vendors were supposed to investigate this company and none of them disqualified it, despite all the vivid red flags.
Worse, Reardon and Egelman say they notified all three companies about the problems with Trustcor seven months ago, but didn't hear back until they published their findings publicly on Tuesday.
There are 169 root certificate authorities in Firefox, and comparable numbers in the other major browsers. It's inconceivable that you could personally investigate each of these and determine whether you want to trust it. We rely on the big browser vendors to do that work for us. We start with: "Assume the browser vendors are careful and diligent when it comes to trusting companies on our behalf." We assume that these messy, irregular companies are perfectly spherical cows of uniform density on a frictionless surface.
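You can get a feel for the size of that delegated-trust surface yourself. This little sketch counts the roots in the Mozilla-derived bundle that ships with Python's third-party certifi package; your browser's own store will differ a bit, but the order of magnitude is the same:

    import certifi

    with open(certifi.where()) as f:
        bundle = f.read()

    # Each trusted root appears as one PEM block in the bundle.
    print(bundle.count("-----BEGIN CERTIFICATE-----"), "trusted roots")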
The problem of trust is everywhere. Vaccine deniers say they don't trust the pharma companies not to kill them for money, and don't trust the FDA to hold them to account. Unless you have a PhD in virology, cell biology and epidemiology, you can't verify the claims of vaccine safety. Even if you have those qualifications, you're trusting that the study data in journals isn't forged.
I trust vaccines – I've been jabbed five times now – but I don't think it's unreasonable to doubt either Big Pharma or its regulators. A decade ago, my chronic pain specialist told me I should take regular doses of powerful opioids, and pooh-poohed my safety and addiction concerns. He told me that pharma companies like Purdue and regulators like the FDA had re-evaluated the safety of opioids and now deemed them far safer than previously thought.
I "did my own research" and concluded that this was wrong. I concluded that the FDA had been captured by a monopolistic and rapacious pharma sector that was complicit in waves of mass-death that produced billions in profits for the Sackler family and other opioid crime-bosses.
I was an "opioid denier." I was right. The failure of the pharma companies to act in good faith, and the failure of the regulator to hold them to account is a disaster that has consequences beyond the mountain of overdose deaths. There's a direct line from that failure to vaccine denial, and another to the subsequent cruel denial of pain meds to people who desperately need them.
Today, learning that the CA-vetting process I'd blithely assumed was careful and sober-sided is so slapdash that a company without a working phone or a valid physical address could be trusted by billions of browsers, I feel like I did when I decided not to fill my opioid prescription.
I feel like I'm on the precipice of a great epistemological void. I can't "do my own research" for everything. I have to delegate my trust. But when the companies and institutions I rely on to be prudent (not infallible, mind, just prudent) fail this way, it makes me want to delete all the certificates in my browser.
Which would, of course, make the web wildly insecure.
Unless it's already that insecure.
Ugh.
(Image: Curt Smith, CC BY 2.0, modified)
Hey look at this (permalink)
- It looks like I’m moving to Mastodon https://simonwillison.net/2022/Nov/5/mastodon/ (h/t Nelson Minar)
- Twitter consequences; not just for little people https://crookedtimber.org/2022/11/04/whither-twitter/
This day in history (permalink)
#15yrsago Condo ass. claims copyright on Chicago's Marina City Towers https://arcchicago.blogspot.com/2007/11/stop-taking-pictures-of-marina-city.html
#10yrsago Canadian Supreme Court puts Viagra in the public domain because Pfizer wouldn't disclose enough of its workings https://web.archive.org/web/20121111052928/https://www.michaelgeist.ca/content/view/6693/125
#10yrsago Microsoft patents spying on you with your TV's camera and fining you if there are too many people watching https://www.kotaku.com.au/2012/11/this-kinect-patent-is-terrifying-wants-to-charge-you-for-license-violation/
#5yrsago At anti-monopoly event, Al Franken blasts big tech https://www.wired.com/story/al-franken-just-gave-the-speech-big-tech-has-been-dreading/
#5yrsago Despite Comcast's "misinformation campaign," Coloradans vote en masse to reject ban on municipal internet https://arstechnica.com/tech-policy/2017/11/voters-reject-cable-lobby-misinformation-campaign-against-muni-broadband/
#5yrsago Family calls cops for help with harassment, cop shoots their dog and complains about the cost of the bullet https://www.techdirt.com/2017/11/08/deputy-shoots-familys-terrier-complains-about-cost-bullet/
#5yrsago Gated community developer blames Rand Paul assault on longstanding fights over lawncare, tree branches https://www.courier-journal.com/story/news/politics/2017/11/07/rand-paul-attack-neighbor-petty-argument-developer/839677001/
#5yrsago The crooked Secret Service agent who stole Silk Road bitcoins did it again after pleading guilty https://arstechnica.com/tech-policy/2017/11/ex-agent-corrupted-by-silk-road-sentenced-to-2-additional-years/
#1yrago Stockholm's war on interoperability: An object lesson in how (not) to resolve the tension between comcom and privacy https://pluralistic.net/2021/11/09/skrota-skolplattformen/#privacy-without-monopoly
Colophon (permalink)
Currently writing:
- The Bezzle, a Martin Hench noir thriller novel about the prison-tech industry. Yesterday's progress: 529 words (59377 words total)
- Picks and Shovels, a Martin Hench noir thriller about the heroic era of the PC. (92849 words total) – ON PAUSE
- A Little Brother short story about DIY insulin. PLANNING
- The Internet Con: How to Seize the Means of Computation, a nonfiction book about interoperability for Verso. FIRST DRAFT COMPLETE, WAITING FOR EDITORIAL REVIEW
- Vigilant, Little Brother short story about remote invigilation. FIRST DRAFT COMPLETE, WAITING FOR EXPERT REVIEW
- Moral Hazard, a short story for MIT Tech Review's 12 Tomorrows. FIRST DRAFT COMPLETE, ACCEPTED FOR PUBLICATION
- Spill, a Little Brother short story about pipeline protests. FINAL DRAFT COMPLETE
- A post-GND utopian novel, "The Lost Cause." FINISHED
- A cyberpunk noir thriller novel, "Red Team Blues." FINISHED
Currently reading: Analogia by George Dyson.
Latest podcast: Sound Money https://craphound.com/news/2022/09/11/sound-money/
Upcoming appearances:
- Radical Book Fair/Lighthouse Bookshop (Edinburgh), Nov 10: https://lighthousebookshop.com/events/chokepoint-capitalism-cory-doctorow-and-rebecca-giblin
- Chokepoint Capitalism at Waterstone's Oxford, Nov 12: https://www.eventbrite.co.uk/e/chokepoint-capitalism-rebecca-giblin-and-cory-doctorow-oxford-tickets-443951941207
- Aaron Swartz Day and International Hackathon, Nov 13: https://www.aaronswartzday.org/asd-2022-livestream/
- Arthur C Clarke Award (DC), Nov 16: https://www.clarkefoundation.org/2022-awards-event/
- Chokepoint Capitalism at the Peale Museum (Baltimore), Nov 18: https://www.thepeale.org/event/book-talk-chokepoint-capitalism/
- Big Ideas Live (London), Nov 19: https://news.sky.com/bigideaslive
- Conversation with Tim Wu, Informed/Knight Foundation (Miami), Nov 30: https://informed22.interkinnect.com/
- Australian Digital Alliance Copyright Forum (Canberra), Feb 17: https://digital.org.au/2022/11/08/doctorow-giblin-first-speaker-announcement-ada-forum-2023/
Recent appearances:
- Pitchfork Economics: https://pitchforkeconomics.com/episode/chokepoint-capitalism-with-cory-doctorow-and-rebecca-giblin/
- Why Patients Should Hack Med Tech (Defcon 30): https://www.youtube.com/watch?v=_i1BF5YGS0w
- The Literary Life with Mitchell Kaplan (Lithub): https://lithub.com/cory-doctorow-why-our-current-tech-monopolies-is-all-thanks-to-ronald-reagan-and-robert-bork/
Latest books:
- "Chokepoint Capitalism: How to Beat Big Tech, Tame Big Content, and Get Artists Paid, with Rebecca Giblin", on how to unrig the markets for creative labor, Beacon Press/Scribe 2022 https://chokepointcapitalism.com
- "Attack Surface": The third Little Brother novel, a standalone technothriller for adults. The Washington Post called it "a political cyberthriller, vigorous, bold and savvy about the limits of revolution and resistance." Order signed, personalized copies from Dark Delicacies https://www.darkdel.com/store/p1840/Available_Now%3A_Attack_Surface.html
- "How to Destroy Surveillance Capitalism": an anti-monopoly pamphlet analyzing the true harms of surveillance capitalism and proposing a solution. https://onezero.medium.com/how-to-destroy-surveillance-capitalism-8135e6744d59 (print edition: https://bookshop.org/books/how-to-destroy-surveillance-capitalism/9781736205907) (signed copies: https://www.darkdel.com/store/p2024/Available_Now%3A__How_to_Destroy_Surveillance_Capitalism.html)
- "Little Brother/Homeland": A reissue omnibus edition with a new introduction by Edward Snowden: https://us.macmillan.com/books/9781250774583; personalized/signed copies here: https://www.darkdel.com/store/p1750/July%3A__Little_Brother_%26_Homeland.html
- "Poesy the Monster Slayer": a picture book about monsters, bedtime, gender, and kicking ass. Order here: https://us.macmillan.com/books/9781626723627. Get a personalized, signed copy here: https://www.darkdel.com/store/p2682/Corey_Doctorow%3A_Poesy_the_Monster_Slayer_HB.html#/.
Upcoming books:
- Red Team Blues: "A grabby, compulsive thriller that will leave you knowing more about how the world works than you did before." Tor Books, April 2023
This work licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.
https://creativecommons.org/licenses/by/4.0/
Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.
How to get Pluralistic:
Blog (no ads, tracking, or data-collection):
Newsletter (no ads, tracking, or data-collection):
https://pluralistic.net/plura-list
Mastodon (no ads, tracking, or data-collection):
https://mamot.fr/web/accounts/303320
Medium (no ads, paywalled):
(Latest Medium column: "The End of the Road to Serfdom" https://pluralistic.net/2022/11/06/the-end-of-the-road-to-serfdom/)
Twitter (mass-scale, unrestricted, third-party surveillance and advertising):
Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):
https://mostlysignssomeportents.tumblr.com/tagged/pluralistic
"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla