From The Philosopher, vol. 107, no. 3 ('Identities').
You are not an algorithm. Neither am I. And yet we open our lives up to being at least partially algorithmised when we acquiesce in a form of existence largely structured and maintained by algorithmic technologies. This acquiescence does not, however, change our nature, but only perverts it. Spotify runs on algorithms, and it is algorithms that determine what posts appear in your newsfeed on Twitter and Facebook. But social-media platforms are not intelligent beings (they are not beings at all, in fact), and our ability to make them appear in certain respects, by use of algorithms, to do the sort of things that intelligent beings do, is only a testament to our own intelligence. To assume from observing them that it is we who must be like they, rather than they that are an approximation to us, is a bit like constructing a self-regulating stove, noticing that it, like us, maintains a constant temperature when it is functioning correctly, and concluding from this that we must therefore also be stoves.
There is in fact a long history of human beings becoming so impressed with their own technological devices that they come to imagine that they themselves are only a variety of these devices. The example of the stove is a real, historical one: in the 17th century self-regulation of temperature in an artificial device was seen as an approximation of perpetual motion, and thus as an ideal model of the living bodies of animals and humans. Improvements in mirror-polishing technologies also stoked the idea, in the early modern period, that our minds must be a sort of mirror, that they must have what Richard Rorty, following C. S. Peirce, evocatively called a “glassy essence”.
More recently it has been loose analogies drawn from computer technology that some have hoped would capture the basic nature of our minds or of the physical world. The idea that the mind is a computing machine has been a commonplace since the beginning of the era of cybernetics. And more recently still there has emerged a fashion for declaring that the entire universe must be a sort of virtual reality or a “video game”. Elon Musk has been convinced of this under the guru-ship of the Oxford philosopher Nick Bostrom, and because Musk is a billionaire and a celebrity, when he ventures the thought in public, people take it seriously. People take seriously, that is, the idea that all of physical reality, for the past 13.7 billion years, is best understood as a variation on a technology that has only existed since the 1970s, and that no doubt left a particular mark on the imagination of the young Elon when he first encountered Space Invaders or Pac-Man in some amusement arcade of yore. If this were true, it would be, to say the least, a remarkable coincidence. Better perhaps to pay attention to the rhythms of history, and to the way old conceits keep repeating themselves in new ways.
It is a false etymology that tells us an algorithm is a certain kind of rhythm, yet our recent cultural preoccupation with algorithms, too, is a repetition of the old beat that gave us the mirror-model of the mind, the stove-model of the body, the video game-model of the universe. Since the Greeks, indeed already many centuries before the 9th-century Persian mathematician Muḥammad ibn Mūsā al-Khwārizmī gave the algorithm its name, the basic concept of a procedure for problem solving was well developed and understood. An algorithm does not need a machine in order to be carried out; it can in principle find its instantiation on a sheet of paper, or even in the sequence of a person’s thoughts. But a key moment in the history of algorithms is the realisation, had most sharply by G.W. Leibniz in the 17th century, that whatever decision procedure can be carried out on paper can also, in principle, be carried out by a machine, and in this way machines may reproduce much of what goes on when human beings reason.
Over the course of the 20th century machines grew ever better at the reasoning-like tasks we assigned them, and for which we had built them. In 1950 Alan Turing proposed the test that would bear his name, according to which we are justified in concluding that any machine whose reasoning behaviour is indistinguishable from the reasoning behaviour of a human being is in fact a machine that is reasoning, and not only simulating reason.
Today we are surrounded by fragments of language of unknown origins, by Tweets and posts that seem to be saying something imbued with intention, but that may very well have been generated by bots. We move through an environment saturated with pseudo-intentions, machine-made imitations of human thought. It would be an exaggeration to say that these are indistinguishable from human thought. Most often we are able to distinguish, with a bit of effort, the barely coherent concatenation of words issued from a Russian Twitter bot in support of the campaign platform of the candidate Donald Trump, from the barely coherent concatenation of words issued from a human supporter of that same candidate. But the effort is draining. The entire information ecosystem we inhabit is draining, cognitively and emotionally, in a way that Leibniz and Turing could not possibly have envisioned.
And this is where the new refrain, that we are algorithms, might be different from the earlier technological analogies so often mistaken for literal truth: when it was said that the mind is a mirror, or that the body is a stove, this was generally in a spirit of excitement: excitement at the thought that we had understood the principles underlying the most basic elements of reality, and learned how to reproduce them. When it is said that we are algorithms, one often detects not a spirit of excitement, but of exhaustion. Are you overworked and short on leisure time? So much easier to outsource the cultivation of musical taste to Spotify. Are you threatened by differing viewpoints and all too aware of the futility of online argumentation? Why not just let your social-media feed reconfirm your pre-existing opinions and keep you sequestered within a community of never-ending mutual affirmation? There is, of course, a hitch. You can arrange for machine algorithms to see to the constitution and maintenance of nearly all aspects of your selfhood, but only by accepting a sort of contract, which you must sign as with the blood of a pricked finger (though now this is usually a matter of clicking at the bottom of an unread list of terms and conditions). You do not sell your soul, exactly, but you do put your subjectivity in pawn. You allow yourself to see yourself the way you might be seen from an external point of view, from a point of view that does not, and cannot, take into account what it is actually like to be you. 
You may recover your subjectivity at any time, though the social-media companies will not like this, by simply reasserting the primacy of your own tastes, desires, and opinions; stepping away from the machines when it comes time to cultivate and hone these tastes, desires, and opinions; and, finally, abandoning the lazy analogy of the self to an algorithm, which like all analogies of this sort is taken by the gullible and incurious as a statement of literal truth. Our existence as subjects is of course always conditioned by external forces, knocking us this way and that, so it is not as if reclaiming our freedom from the new algorithmic determinism will result in total autonomy. But there is more than one way of being knocked around. Besides the algorithmic, there is for example the aleatoric: rather than moving from one song to the next because some AI program has clumsily grouped the two together in virtue of shared properties, one may move from song to song, or book to book, or look to look, based on no criterion of similarity at all, but rather only on sheer randomness.
When I was a child there was a barn in a field behind our house that some years before I was born had been filled with thousands of chickens, all producing eggs for my grandfather’s modest egg business. All the chickens were gone when I first began exploring there, though traces of them, in the form of feathers and faeces, still remained. At some point someone, I really don’t know who, dumped a pile of what must have been several thousand books in the middle of the floor. They seem to have arrived there before the chickens left, as many of them were covered in excrement, though I really can’t say what the order of events was.
These books were my first library. I pulled from the pile whatever caught my eye, and whatever was not too soiled, and brought them into the house to study them and to try to make some sense of the world through them. There was a volume of the minutes of a 1971 meeting of the National Transportation Safety Board. There was a 1969 yearbook of West German forestry (this rare specimen in a foreign language was particularly precious); there were Louis L’Amour westerns, Reader’s Digest large-print versions of the Brontë sisters, Erica Jong’s Fear of Flying. None of this made any sense as a grouping, but as a first stab at an ordering of the cosmos through the microcosmos of a personal library, it served me fairly well.
Now this selection was not totally free: I could not pick out books that were not there. But it was an exercise of freedom to go and do the picking. This is how human beings have in general always made their way through the world, and it has little common measure with the system of selection that relies on machines programmed according to the “You may also like...” principle. I liked the yearbook of West German forestry, and I liked Louis L’Amour, and no machine could ever have come up with that pairing for me, unless it were malfunctioning. But what would be malfunctioning for a machine is, for us, the very definition of thriving. This is what we relinquish when we turn over the constitution of our tastes, interests, and opinions to dumb algorithms. Even at our laziest and most passive, we know they are dumb – when for example they put Coldplay in the queue with Jimi Hendrix because both are “rock” – just as we know something is off when a bot begins expressing its political opinions. And we make ourselves dumb, and betray our real nature, which is rooted in our free subjectivity, when we entertain the idea that algorithms are not just external crutches, but reflections of our true selves.
Justin E.H. Smith is professor of the history and philosophy of science at the University of Paris. His new book Irrationality: A History of the Dark Side of Reason was published in April by Princeton University Press.