The old, good internet deserves a new, good internet
The children now love luxury; they have bad manners, contempt for authority; they show disrespect for elders and love chatter in place of exercise. Children are now tyrants, not the servants of their households. They no longer rise when elders enter the room. They contradict their parents, chatter before company, gobble up dainties at the table, cross their legs, and tyrannize their teachers. –Socrates
I’m an official Old Person (I turned 52 last month). According to the AARP, that means that I am now officially entitled to complain that back in my day, things used to be better.
I am suspicious of this impulse! When I started dialing BBSes in the early 1980s, the Old Hands there told me that it was all downhill after acoustic couplers and that modems were degrading the noosphere into a fallen paradise.
It’s hard to convey just how revolutionary Google Search was when it debuted in 1998. It blew rivals — from Ask Jeeves and AltaVista to Yahoo — out of the water. It was so good, it was almost spooky, surfacing the best of the web with just a few clicks.
Today, Google owns the search market, controlling more than 90 percent of searches. Its worth hovers in the trillion-dollar range, and it employs some 180,000 people in offices all over the world. Almost every online journey we take starts with a Google search.
Once again, science fiction fandom shows us how to use the internet.
When it comes to the social internet, chances are that science fiction fans got there first. The first non-technical discussion forums on the internet — ancient mailing lists — were devoted to sf. The original high-traffic non-technical Usenet groups? Also sfnal. (This isn’t always something to be proud of — long before Donald Trump’s dank meme army, before Gamergate, sf’s “Rabid Puppies” and “Sad Puppies” were figuring out how to combine pop culture, the internet and far-right conspiratorialism into a vicious harassment machine).
Long before Twitter created — and then destroyed — a single, unified conversation that linked practitioners with the people who normally lived far downstream of their work, science fiction had created a single, unified “town square.”
And decades before a mediocre billionaire uncaringly smashed that unified conversation into a million flinders, sf fans and writers were living through their own Anatevka moment.
Twitter users bemoaning the end of the “unified conversation,” I am here from your future to tell you what happens next.
I encountered the work of political communications strategist Anat Shenker-Osorio through The Persuaders, an important book about how people change one another’s minds that Anand Giridharadas published in 2022.
Shenker-Osorio helps politicians and movements develop “messages,” but unlike the traditional concept of messaging as a way of bypassing the audience’s critical faculties, Shenker-Osorio wants to engage them.
That is, rather than tricking you into supporting an issue by, say, linking it with motherhood and apple pie, Shenker-Osorio wants to actually convince you that a given issue deserves your support.
It is difficult to get a public procurement officer to understand something, when a vendor’s salary depends on his not understanding it.
In the “Shitty Technology Adoption Curve,” oppressive technologies are first imposed on people who don’t get to complain — prisoners, migrants, children, mental patients, benefits recipients — in order to normalize these tools and sand down their rough edges. Once the technology has been rendered a little more acceptable, it crawls up the privilege gradient, bit by bit, until even the most socially powerful among us are using it.
In other words: 20 years ago, if you ate dinner under a CCTV’s unblinking eye, you were probably dining in a supermax prison. Today, you’re likely just someone who bought some luxury surveillance, like a “home automation” system from Google, Apple, Amazon or Facebook.
The lie that raced around the world before the truth got its boots on.
It didn’t happen.
The story you heard, about a US Air Force AI drone warfare simulation in which the drone resolved the conflict between its two priorities (“kill the enemy” and “obey its orders, including orders not to kill the enemy”) by killing its operator?
It didn’t happen.
The story was widely reported on Friday and Saturday, after Col. Tucker “Cinco” Hamilton, USAF Chief of AI Test and Operations, included the anecdote in a speech to the Future Combat Air System (FCAS) Summit.
Milton Friedman was a monster, but he wasn’t wrong about this.
Only a crisis — actual or perceived — produces real change. When that crisis occurs, the actions that are taken depend on the ideas that are lying around. That, I believe, is our basic function: to develop alternatives to existing policies, to keep them alive and available until the politically impossible becomes the politically inevitable.
A new way to think about utilitarianism, courtesy of the Office of Management and Budget.
Utilitarianism — the philosophy of making decisions to benefit the most people — sounds commonsensical. But utilitarianism is — and always has been — an attractive nuisance, one that invites its practitioners to dress up their self-serving preferences with fancy mathematics that “prove” that their wins and your losses are “rational.”
That’s been there ever since Jeremy Bentham’s formulation of the concept of utilitarianism, which he immediately mobilized in service to the panopticon, his cruel design for a prison where prisoners would be ever haunted by a watcher’s unseen eye. Bentham seems to have sincerely believed that there was a utilitarian case for the panopticon, which let him declare his sadistic thought-experiment (thankfully, it was never built during Bentham’s lifetime) to be a utility-maximizing act of monumental kindness.
Ever since Bentham, utilitarianism has provided cover for history’s great monsters to claim that they were only acting in service to the greater good.
We have to do Bard because everyone else is doing AI; everyone else is doing AI because we’re doing Bard.
The thing is, there really is an important area of AI research for Google, namely, “How do we keep AI nonsense out of search results?”
Google’s search quality has been in steady decline for years. I blame the company’s own success. When you’ve got more than 90 percent of the market, you’re not gonna grow by attracting more customers — your growth can only come from taking a larger slice of the pie, at the expense of your existing customers, business users and advertisers.