A new way to think about utilitarianism, courtesy of the Office of Management and Budget.
Utilitarianism — the philosophy of making decisions to benefit the most people — sounds commonsensical. But utilitarianism is — and always has been — an attractive nuisance, one that invites its practitioners to dress up their self-serving preferences with fancy mathematics that “prove” that their wins and your losses are “rational.”
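To see how the trick works, here's a toy sketch (purely hypothetical; every name, number and weight below is invented for illustration, not drawn from Bentham or anyone else): the same scheme comes out utility-destroying or utility-maximizing depending entirely on who gets to pick the weights.

```python
# Toy illustration: the same "utility math," two sets of weights.
# Every name and number here is invented for the example.

def total_utility(outcomes, weights):
    """Sum each party's gain or loss, scaled by whatever weight
    the person doing the math has assigned to that party."""
    return sum(weights[party] * gain for party, gain in outcomes.items())

# The scheme: the planner gains 10; twenty other people lose 1 each.
outcomes = {"planner": 10, "everyone_else": -20}

# Count everyone equally and the scheme destroys utility on net:
print(total_utility(outcomes, {"planner": 1, "everyone_else": 1}))  # -10

# Let the planner choose the weights, and the very same scheme
# is "proven" rational:
print(total_utility(outcomes, {"planner": 3, "everyone_else": 1}))  # 10
```

The arithmetic is impeccable either way; the self-dealing hides in the weights, which is exactly where nobody thinks to look.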
That temptation has been there ever since Jeremy Bentham formulated the concept of utilitarianism, which he immediately mobilized in service to the panopticon, his cruel design for a prison whose inmates would be forever haunted by a watcher's unseen eye, never knowing whether they were being observed at any given moment. Bentham seems to have sincerely believed there was a utilitarian case for the panopticon, which let him declare his sadistic thought experiment (thankfully, never built in his lifetime) to be a utility-maximizing act of monumental kindness.
Ever since Bentham, utilitarianism has provided cover for history’s great monsters to claim that they were only acting in service to the greater good.
We have to do Bard because everyone else is doing AI; everyone else is doing AI because we’re doing Bard.
The thing is, there really is an important area of AI research for Google, namely, “How do we keep AI nonsense out of search results?”
Google's search quality has been in steady decline for years. I blame the company's own success. When you've got more than 90 percent of the market, you're not gonna grow by attracting new customers; your growth can only come from squeezing more out of the customers, business users and advertisers you already have.