Rage against the machine
For all the promise and dangers of AI, computers plainly can't think. To think is to resist: something no machine does
Computers don't actually do anything. They don't write, or play; they don't even compute. Which doesn't mean we can't play with computers, or use them to invent, or make, or problem-solve. The new AI is unexpectedly reshaping ways of working and making, in the arts and sciences, in industry, and in warfare. We need to come to terms with the transformative promise and dangers of this new tech. But it ought to be possible to do so without succumbing to bogus claims about machine minds.
What could ever lead us to take seriously the thought that these devices of our own invention might actually understand, and think, and feel, or that, if not now, then later, they might one day come to open their artificial eyes thus finally to behold a shiny world of their very own? One source might simply be the sense that, now unleashed, AI is beyond our control. Fast, microscopic, distributed and astronomically complex, it is hard to understand this tech, and it is tempting to imagine that it has power over us.
But this is nothing new. The story of technology, from prehistory to now, has always been that of the ways we are entrained by the tools and systems that we ourselves have made. Think of the pathways we make by walking. To every tool there is a corresponding habit, that is, an automatised way of acting and being. From the humble pencil to the printing press to the internet, our human agency is enacted in part by the creation of social and technological landscapes that in turn transform what we can do, and so seem, or threaten, to govern and control us.
Yet it is one thing to appreciate the ways we make and remake ourselves through the cultural transformation of our worlds via tool use and technology, and another to mystify dumb matter put to work by us. If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.
But there is another origin of our impulse to concede mind to devices of our own invention, and this is what I focus on here: the tendency of some scientists to take for granted what can only be described as a wildly simplistic picture of human and animal cognitive life. They rely unchecked on one-sided, indeed, milquetoast conceptions of human activity, skill and cognitive accomplishment. The surreptitious substitution (to use a phrase of Edmund Husserl's) of this thin gruel version of the mind at work, a substitution that I hope to convince you traces back to Alan Turing and the very origins of AI, is the decisive move in the conjuring trick.
What scientists seem to have forgotten is that the human animal is a creature of disturbance. Or as the mid-20th-century philosopher of biology Hans Jonas wrote: 'Irritability is the germ, and as it were the atom, of having a world…' With us there is always, so to speak, a pebble in the shoe. And this is what moves us, turns us, orients us to reorient ourselves, to do things differently, so that we might carry on. It is irritation and disorientation that is the source of our concern. In the absence of disturbance, there is nothing: no language, no games, no goals, no tasks, no world, no care, and so, yes, no consciousness.
Can machines think? Turing dismissed this as 'too meaningless to deserve discussion'. Instead of trying to make a machine that can think, he was content to design one that might count as a reasonable substitute for a thinker. Everywhere in Turing's work, the focus is on imitation, replacement and substitution.
Consider his contribution to mathematics. A Turing machine is a formal model of the informal idea of computation: ie, the idea that some problems can be solved 'mechanically' by following a recipe or algorithm. (Think long division.) Turing proposed that we replace the familiar notion with his more precise analogue. Whether a given function is Turing-computable is a mathematical question, one that Turing supplied the formal means to answer rigorously. But whether Turing-computability serves to capture the essence of computation as we understand this intuitively, and whether therefore it's a good idea to make the replacement, these are not questions that mathematics can decide. Indeed, presumably because they are themselves 'too meaningless to deserve discussion', Turing left them to the philosophers.
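To see how spare the formal notion is, here is a minimal sketch in Python, entirely my own illustration and not Turing's notation: the 'machine' is nothing but a finite table of rules read against a tape of symbols, and computing is just applying that table until it halts.

```python
# A toy Turing-style machine: a finite transition table, a tape, a read/write head.
# (Illustrative only; the machine and its rule table are made up for this sketch.)

def run_machine(tape, transitions, state='start', blank='_'):
    """Apply (state, symbol) -> (new state, symbol to write, move) until 'halt'."""
    cells = dict(enumerate(tape))            # sparse tape: position -> symbol
    head = 0
    while state != 'halt':
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells)).strip(blank)

# A rule table that flips every 0 to 1 and every 1 to 0, then halts at the blank.
FLIP = {
    ('start', '0'): ('start', '1', 'R'),
    ('start', '1'): ('start', '0', 'R'),
    ('start', '_'): ('halt', '_', 'R'),
}

print(run_machine('0110', FLIP))  # prints '1001'
```

That finite table is the whole machine; whether tables like it exhaust what we intuitively mean by computation is exactly the question Turing declined to argue.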
In the same anti-philosophical spirit, Turing proposed that we replace the meaningless question Can machines think? with the empirically decidable question Can machines pass [what has come to be known as] the Turing test? To understand this proposal, we need to look at the test, which Turing called the Imitation Game.
The game is to be played by three players: one man, one woman, and one person whose gender doesn't matter. Each has a distinct task. The player of unspecified gender, the interrogator, has the job of figuring out which of the other two is a man, and which a woman. The woman's task is to serve as the interrogator's ally; the man's is to cause the interrogator to make the wrong identification.
The point is to explore whether substituting a machine for a player has any effect on the rate of success
This might make for fun adult entertainment, but Turing feared it would be too easy. Even today, when gender-experiment is commonplace, it wouldn't be that hard, in most circumstances, to sort people by gender on the basis of superficial appearance. So Turing proposed that we isolate the interrogator in a room, limiting their access to others to the posing of questions. And he added: 'In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms.'
What does the Imitation Game teach us about machine intelligence? Here is what Turing says:
'We now ask the question, "What will happen when a machine takes the part of [the man] in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"'

The interrogator's goal is not to out the computer; it's to out the human players as having this or that gender. But Turing's goal, and the game's point, is to explore whether substituting a machine for one of the players has any effect on the interrogator's rate of success. It is this last question, whether or not there is an effect on outcomes, that is proposed, by Turing, as proxy for the 'meaningless' question of whether machines can think.
Instead of arguing about what thinking is, Turing envisions a scenario in which machines might be able to enter into and participate in meaningful human exchange. Would their ability to do this establish that they can think, or feel, that they have minds as we have minds? These are precisely the wrong questions to ask, according to Turing. What he does say is that machines will get better at the game, and he went so far as to venture a prediction: that by the end of the century (he was writing in 1950) 'general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted'.
Despite Turing's apparent hostility to philosophy, it is possible to read him as capturing a critical philosophical insight. Why should we expect that evidence would be able to secure the minds of machines for us, when it doesn't perform that function in our ordinary human dealings? None of us has ever found out or proved that the people around us in our lives actually think or feel. We just take it for granted. And it is this observation that motivates his conception of his own task: not that of proving that machines can think, but rather that of integrating them into our lives so that the question, in effect, goes away, or answers itself.
It turns out, however, that not all of Turing's replacements and substitutions are quite so straightforward as they seem. Some of them are downright misleading.
Consider, first, Turing's matter-of-fact suggestion that we replace talking by the use of typed messages. He suggests that this is to make the game challenging. But the substitution of text for speech has an entirely different effect: to lend a modicum of plausibility to the otherwise absurd suggestion that machines might participate at all. To appreciate this, recall that a Turing machine is what in mathematics is called a formal system. In a formal system, there is a finite alphabet, and a finite set of rules for combining elements of the alphabet into more complex expressions. What makes the system formal is that the vocabulary needs to be specified in terms of physical properties alone, and rules need to be framed only in terms of these physical, that is to say, formal properties. This is the crux: unless you can formally specify the inputs and the outputs (the vocabulary) you can't define a Turing machine or a Turing-computable function.
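A toy example, again my own and not Turing's, of what 'formal' amounts to: a finite alphabet of shapes, and rules that mention nothing but those shapes. Anything that cannot be spelled in the alphabet simply is not an input for such a system.

```python
# A made-up formal system: two symbols, one rewrite rule, nothing else.
ALPHABET = {'a', 'b'}

def well_formed(s):
    """An expression counts only if it is built entirely from the alphabet."""
    return all(ch in ALPHABET for ch in s)

def rewrite(s):
    """One purely formal rule: every 'ab' becomes 'ba'. The rule inspects
    the shapes of the symbols alone, never anything they might mean."""
    return s.replace('ab', 'ba')

print(well_formed('abba'))    # True: a legitimate input for the system
print(well_formed('hello!'))  # False: not in the alphabet, so not an input at all
print(rewrite('abab'))        # 'baba'
```

The point of the sketch is only this: the system begins by checking its inputs against a fixed alphabet, and speech, as we are about to see, comes with no such alphabet to check against.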
And, crucially, it isn't possible formally to specify the inputs and the outputs of ordinary human language. Speech is breathy, hot movement that always unfolds with others, in context, and against the background of needs, feelings, desires, projects, goals and constraints. Speech is active, felt and improvisational. It has more in common with dancing than with text-messaging. We are so much at home, nowadays, under the regime of the keyboard that we don't even notice the ways text conceals the bodily reality of language.
The gamification of life is one of Turing's most secure, and most troubling, legacies
Although speech is not formally specifiable, text (in the sense of text-messaging) is. So text can serve as a computationally tractable proxy for real human exchange. By filtering all communication between the players through the keyboard, in the name of making the game harder, Turing actually (and really this is a sleight of hand) sweeps what the philosopher Ned Block has called the problem of inputs and outputs under the rug.
But the substitution of text-message for speech is not the only sleight of hand at work in Turing's argument. The other is introduced even more surreptitiously. This is the tacit substitution of games for meaningful human exchange. Indeed, the gamification of life is one of Turing's most secure, and most troubling, legacies.
The problem is that Turing takes for granted a partial and distorted understanding of what games are. From the computational perspective, games are (indeed, to be formally tractable, they must be) crystalline structures of intelligibility, virtual worlds, where rules constrain what you can do, and where unproblematic values (points, goals, the score), and settled criteria of success and failure (winning and losing), are clearly specified.
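Here is what that crystalline picture looks like when written down: a bare-bones game of my own devising (a stripped-down Nim, purely illustrative), in which the legal moves, the rules and the criterion of winning are exhaustively specified in a few lines.

```python
# A miniature Nim: one heap of counters, take 1-3 per turn, whoever cannot move loses.
# The game is fully specified by these rules; nothing about the players enters into it.

def legal_moves(heap):
    """From a heap of n counters, a move removes 1, 2 or 3 of them."""
    return [take for take in (1, 2, 3) if take <= heap]

def game_over(heap):
    """The success criterion is settled in advance: no counters left, no move, you lose."""
    return heap == 0

heap, player = 7, 0
while not game_over(heap):
    take = legal_moves(heap)[-1]   # a blunt policy: always take the maximum allowed
    heap -= take
    print(f'player {player} takes {take}, leaving {heap}')
    player = 1 - player
print(f'player {player} has no move and loses')
```

Everything the computational perspective needs is on the page; what it leaves out, as the next paragraph argues, is the player.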
But clarity, regimentation and transparency give us only one aspect of what a game is. Somehow Turing and his successors tend to forget that games are also contests; they are proving grounds, and it is we who are tested and we whose limitations are exposed, or whose powers as well as frailties are put on display on the kickball field, or the four square court. A child who plays competitive chess might suffer from anxiety so extreme they are nauseated. This visceral expression is no accidental epiphenomenon, something external and of no essential value to the game. No, games without vomit, or at least without that live possibility, would not be recognisable as human games at all.
All this is to say that true games are much more than they seem to be when we view them, as Turing did, through the lens of the regime of the keyboard. (Which is not to deny that we can, and do, usefully model aspects of the game computationally.)
Here's the critical upshot: human beings are not merely doers (eg, games players) whose actions, at least when successful, conform to rules or norms. We are doers whose activity is always (at least potentially) the site of conflict. Second-order acts of reflection and criticism belong to the first-order performance itself. These are entangled, with the consequence that you can never factor out, from the pure exercise of the activity itself, all the ways in which the activity challenges, resists, impedes and confounds. To play piano, for example (that other keyboard technology), is to fight with the machine, to battle against it.
Let me explain: the piano is the construction and elaboration of a particular musical culture and its values. It installs a conception of what is musically legible, intelligible, permitted and possible. A contraption made of approximately 12,000 pieces of wood, steel, felt and wire, the piano is a quasi-digital system, in which tones are the work of keystrokes, and in which intervals, scales and harmonic possibilities are controlled by the machine's design and manufacture.
The piano was invented, to be sure, but not by you or me. We encounter it. It pre-exists us and solicits our submission. To learn to play is to be altered, made to adapt one's posture, hands, fingers, legs and feet to the piano's mechanical requirements. Under the regime of the piano keyboard, it is demanded that we ourselves become player pianos, that is to say, extensions of the machine itself.
But we can't. And we won't. To learn to play, to take on the machine, for us, is to struggle. It is hard to master the instrument's demands.
To master the piano is not just to conform to the machine's demands. It is to push back, to say no
And this fact, the difficulty we encounter in the face of the keyboard's insistence, is productive. We make art out of it. It stops us being player pianos, but it is exactly what is required if we are to become piano players.
For it is the player's fraught relation to the machine, and to the history and tradition that the machine imposes, that supplies the raw material of musical invention. Music and play happen in that entanglement. To master the piano, as only a person can, is not just to conform to the machine's demands. It is, rather, to push back, to say no, to rage against the machine. And so, for example, we slap and bang and shout out. In this way, the piano becomes not merely a vehicle of habit and control (a mechanism) but rather an opportunity for action and expression.
And, as with the piano, so with the whole of human cultural life. We live in the entanglement between government and resistance. We fight back.
Consider language. We don't just talk, as it were, following the rules blindly. Talking is an issue for us, and the rules, such as they are, are up for grabs and in dispute. We always, inevitably, and from the beginning, are made to cope with how hard talking is, how liable we are to misunderstand each other, although most of the time this is undertaken matter-of-factly and without undue stress. To talk, almost inevitably, is to question word choice, to demand reformulation, repetition and repair. What do you mean? How can you say that? In this way, talking contains within it, from the start, and as one of its basic modes, the activities of criticism and reflection about talking, which end up changing the way we talk. We don't just act, as it were, in the flow. Flow eludes us and, in its place, we know striving, argument and negotiation. And so we change language in using language; and that's what a language is, a place of capture and release, engagement and criticism, a process. We can never factor out mere doing, skilfulness, habit (the sort of things machines are used effectively to simulate) from the ways these doings, engagements and skills are made new, transformed, through our very acts of doing them. These are entangled. This is a crucial lesson about the very shape of human cognition.
If we keep language, the piano, and games in view, and if we don't lose sight of what I am calling entanglement (the ways in which carrying on is entangled with everything required to deal with just how hard it is to carry on!), then it becomes clear that the AI discussion tends unthinkingly to presuppose a one-sided, peaches-and-cream simplification of human skilfulness and cognitive life. As if speaking were the straightforward application of rules, or playing the piano were just a matter of doing what the manual instructs. But to imagine language users who were not also actively struggling with the problems of talk would be to imagine something that is, at most, the shell or semblance of human life with language. It would, in fact, be to imagine the language of machines (such as LLMs).
The telling fact: computers are used to play our games; they are engineered to make moves in the spaces opened up by our concerns. They don't have concerns of their own, and they make no new games. They invent no new language.
The British philosopher R G Collingwood noticed that the painter doesn't invent painting, and the musician doesn't invent the musical culture in which they find themselves. And for Collingwood this served to show that no person is fully autonomous, a God-like fount of creativity; we are always to some degree recyclers and samplers and, at our best, participants in something larger than ourselves.
But this should not be taken to show that we become what we are (painters, musicians, speakers) by doing what, for example, LLMs do (ie, merely by getting trained up on large data sets). Humans aren't trained up. We have experience. We learn. And for us, learning a language, for example, isn't learning to generate 'the next token'. It's learning to work, play, eat, love, flirt, dance, fight, pray, manipulate, negotiate, pretend, invent and think. And crucially, we don't merely incorporate what we learn and carry on; we always resist. Our values are always problematic. We are not merely word-generators. We are makers of meaning.
We can't help doing this; no computer can do this.