Beyond “competition,” “efficiency” and “innovation,” interop delivers self-determination.
I am recuperating from hip-replacement surgery and while that often means I can’t concentrate enough to work, it also means I have long, uninterrupted periods to carry on correspondence, such as the paragraphs below, from my overdue reply to a left-wing economist with whom I’ve been discussing the case for interoperability. In our previous round, my correspondent had suggested that interop wasn’t necessarily good, and that even profitable interop could be bad for all of us — do we really need 50 nearly identical inks on Amazon that can all work with our printer? How can anyone make a “good” choice in that environment? My response is below.
The issue of why we should value interoperability and decry switching costs is much clearer if we dispense with arguments about “efficiency” and “choice” and “innovation” and instead focus on “self-determination.”
There’s a wonderful parable about this in the form of Donald Norman’s two classic engineering/design books, “The Design of Everyday Things” (1988) and “Emotional Design” (2004). In the former, Norman established a decades-long engineering ethic of subordinating form to function on the grounds that end-users deserve to have things that work as well as possible, even if that comes at the expense of aesthetics. It’s a hymn to practicality.
But in the second, after a decade and a half of watching his ideas conquer design/engineering, Norman does an absolute volte-face. He concludes that the natural state of complex systems is for them to be broken, because even the best-designed systems are subject to a suite of unresolvable complications:
- Systems degrade due to entropy. Things wear out, even at the level of silicon (for example, thermal stresses can degrade an old computer’s power supply until it no longer delivers steady voltage, eroding the reliability of the microcircuitry it powers);
- Systems must contend with uses that didn’t exist and couldn’t be foreseen at the time of their design and manufacture. For example, I routinely use a terminal program on my computer to directly issue commands to my operating system. At the root of this program is a very old program, one that accepted input from a mechanical keyboard and directed its output to a line-printer. That old program has been wrapped in successive layers of abstraction so that I can use it to connect over cryptographically secured links to distant computers all over the world and send them commands;
- Systems must contend with circumstances that vary significantly from the original operating parameters. A computer designed for an office worker in the USA gets sent off to be a classroom terminal in a least-developed country where there is no authorized service center, no reliable internet, and generator-supplied power that is far more variable than the mains electricity its designers assumed;
- Systems must co-exist with systems that didn’t exist and weren’t anticipated at the time of their design and manufacture — no one designing running shoes in the years before the Fitbit launched was thinking about how their shoes would interact with a fitness tracker, for example;
- Systems must co-exist with bad actors, for example, thieves who steal a car’s catalytic converter for the precious metal, identity thieves who break a system’s security model, etc;
- Users must interact with systems under conditions outside of the normal parameters: for example, Tesla owners fleeing a deadly hurricane might need to turn off the software that limits their battery’s range at the very moment the storm has knocked out the internet, so the unlock message can’t reach the vehicle; kids in a shared-custody arrangement might have to re-initialize their media players for a different home network every week; users whose cognitive, sensory or motor capabilities have been reduced by impairment or injury might need to retrieve files they created before their current circumstances, etc.
Because of all of this and more, after a generation of “Design of Everyday Things,” Norman went back to the drawing board with “Emotional Design.” His conclusion was that everything is always broken, and the normal course of technology and service use is centered around troubleshooting and mitigation, not intended operation.
Norman considers the mindset that produces good troubleshooting: creative, calm, expansive. Able to escape the strictures of intended use and imagine new ways to use existing things.
For example, I have a new laptop that has “soft” buttons — trackpad regions that are mapped to left-, right- and middle-click. I keep clicking the wrong part of the pad and triggering the wrong software action. After hours of reading obscure documentation about how to remap these areas to better suit the reach of my thumb, I realized I could get a Sharpie and draw small lines on the trackpad corresponding to the invisible button-edges. I am still trying to figure out how to reconfigure my software, but in the meantime, I’ve cut my UI errors by more than half.
After hours of tunnel-vision, trying to solve my problem in the prescribed, frontal manner, I conceived of an orthogonal mitigation strategy that, while incomplete, took only moments to enact and made a significant difference. I got there while daydreaming, not while leaning into the problem. While I was leaning into the problem, my focus got increasingly narrow.
We understand that broken systems are frustrating, but Norman’s insight was that frustration makes systems broken, too: a frustrated user loses the calm, creative, expansive mindset that good troubleshooting demands, and so the breakage compounds.
This is why interop and switching costs matter: because things are always broken, and our normal way of using them is to jury-rig solutions. No boardroom of designers, no binder full of use-cases, no user-experience research program can ever encompass the full range of ways in which people will need to use their tools in order to live their lives as happy and fulfilled people.
Facebook’s deliberate engineering of high switching costs and barriers to interop means that the service works against users whose needs weren’t considered during its design. What’s more, these design blind-spots aren’t evenly distributed: the more marginalized someone is, the less likely their experience is to be reflected in the design brief of the device or system. And the unlikelier a use-case is, the more likely it is to involve a really bad situation (earthquake, bereavement, pandemic, state terror, etc).
Designers should try to anticipate these use-cases, but they can never anticipate them all. Part of making a system that fails gracefully is having the humility and the humaneness to acknowledge that your users will always know things about how their tools, systems and devices should work that you can’t know, and leaving the system in a state that gets out of its users’ way, so they can make whatever reconfigurations and modifications they deem necessary.
So the point here isn’t that consumer welfare is well-served by the existence of 50 vendors for a screw-fitting or printer ink on Amazon; rather, it’s that anything that’s done — legally, technically or normatively — to prevent people from coming up with new inks in the name of preserving simplicity will extract a cost from others, and that the highest cost will be borne by the people with the least to spare.
A particularly clear example of how switching costs and interop blocks harm people can be found in John Deere’s business. At its inception, Deere paid field engineers to visit farmers and observe the ways they’d modified their equipment to deal with the exigencies of climate, soil, crop and conditions. These modifications would in turn be folded back into Deere’s product line (Deere would even patent them) so that other farmers could use them. This is hardly the most ethical of situations, a form of invention sharecropping, but it is vastly superior to Deere’s current tactics.
Today, Deere uses software locks, as well as pretextual copyright, privacy, cybersecurity, patent and other claims to prevent farmers from modifying or repairing their own tractors (in order to gouge them on repairs and parts, and to prevent them from effecting their own upgrades and accessing their own data rather than buying them from Deere). This isn’t just a scam, it’s an existential threat to agriculture. When the hailstorm is coming, you need to get the crops in — irrespective of whether Deere can send out an authorized technician to charge you $170 to type an unlock code into your tractor console after you’ve swapped in the spare part.
The farmer in the field will always be a more reliable judge of whether a repair is good enough, or whether a modification is warranted, or whether a third-party replacement part can be trusted, than Deere is. Not solely because Deere has an unresolvable conflict-of-interest in the matter, but because the farmer is in the field right now, and Deere’s designers were in its corporate HQ five or ten or twenty years ago and they simply cannot have sufficient situational knowledge to correctly assess when the owner of the tractor should be overridden for their own benefit.
Leftists should embrace interop and reject artificial switching barriers for the same reason we reject Taylorism: because the worker at the coal-face or on the assembly-line or at the keyboard knows more about the exigencies and circumstances of the work and those who perform it than the boss does. Because the right to self-determination is a necessary precondition for the right to solidarity. You can’t choose to have your comrades’ backs until you can choose.
Ironically, interop and opposition to switching barriers are also compatible with Hayek and the Efficient Market Hypothesis. If you believe that systems run best when tacit knowledge from the periphery is incorporated into the system’s operation, then the choices that users make to switch or stick, to interoperate or not, to modify or run stock, are all price- and demand-signals that would otherwise be obliterated by some combination of technological locks and state action to protect the interests of incumbent firms.
Notwithstanding “efficient markets,” promoting interop and decrying switching costs isn’t about letting market forces steer the system: it’s about letting people decide how the foundational tools and systems that determine the course of their lives will act upon them.