Meaning Patterns’s Updates

Can Machines Think? Revisiting Alan Turing

“Can machines think?” asked Alan Turing* in his celebrated 1950 article, “Computing Machinery and Intelligence.” His answer was that, some day in the not-too-distant future, in a certain sense they might: “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”218

Then Turing works over a number of possible objections to his prediction. “Lady Lovelace’s Objection: … she states, ‘The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform’.”§1.3.2a But perhaps, says Turing, the Analytical Engine had the potential to “think for itself” to the extent that it could come up with surprising answers to mathematical problems.219

Turing proposes an “imitation game” where a computer gives answers that seem as smart as a person.§1.3.2c Such a game might require, say, 10^9 binary digits. “At my present rate of working I produce about a thousand digits of programme a day, so that about sixty workers, working steadily through the fifty years might accomplish the job, if nothing went into the waste-paper basket.” Meanwhile, “[p]arts of modern machines which can be regarded as analogues of nerve cells work about a thousand times faster than the latter … Machines take me by surprise with great frequency …, largely because I do not do sufficient calculation.”220
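
As a quick check of Turing’s back-of-envelope estimate (the arithmetic below is ours, not anything in his paper), sixty programmers each producing a thousand binary digits a day for fifty years do indeed arrive at roughly the 10^9 digits he supposes the game might require:

```python
# Rough check of Turing's estimate (our arithmetic, not his):
# 60 workers, each writing about 1,000 binary digits of programme a day,
# working steadily through 50 years.
digits = 60 * 1_000 * 50 * 365
print(f"{digits:.2e}")   # ~1.10e+09, on the order of the 10^9 digits supposed
```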

Perhaps, Turing says in a footnote, in this sense Lovelace was not so wrong. Her statement about machines doing what we tell them, he says, does not include the word “only.” “There was no obligation on [Lovelace or Babbage] to claim all that could be claimed.”221 Nor, on the other hand, was Turing’s claim ever so grandiose as to suggest what is now popularly called “artificial intelligence.”§AS1.4.6c His claim was not to anything more than “mechanical intelligence,” that computers could produce surprising mathematical results with fast calculation. Lovelace surely would have agreed.

Turing first brought these ideas to print in his 1936 paper, “On Computable Numbers.” He had been elected a fellow of King’s College, Cambridge, just two years before, at the remarkably early age of 22.

“On Computable Numbers” is in some respects a strange paper, not just for its mathematical tangle written in obscure German Gothic type, but for talking about what seem to be two quite tangential things that add up to an apparently contradictory proposition. The rest of its title was “ … with an Application to the Entscheidungsproblem,” or the problem of decidability in mathematics.222 German mathematician David Hilbert had famously argued in 1928 that mathematics was complete, that it was consistent, and that every problem it posed was decidable in the sense that every one of its propositions could be proved or disproved. Kurt Gödel had by 1931 shown that mathematics could not be both complete and consistent.§1a Turing was now able to show that not every problem was decidable, so bringing down the last pillar of Hilbert’s mathematical edifice.

But, here is the apparent contradiction: incidental to making this case, Turing imagined a machine that would be able to deal with things in mathematics that were still absolutely decidable, though not feasibly decidable by humans. “According to my definition, a number is computable if its decimal can be written down by a machine.”223 This leaves a lot of scope for definitive decidability, including calculations that are well beyond what is practically possible for humans and without the errors to which they are prone.

Turing’s imaginary machine worked like this. It would run by reading a paper tape with a sequence of squares each capable of carrying a symbol. As the machine is fed the tape, it scans each symbol and is able to remember some of the symbols it has already scanned. It can print calculations onto blank squares, so remembering the steps it is taking. It can, in other words, write down the interim results of its calculations in a kind of working memory, then act on these. In this way, the machine emulates a sequence of minimal states of mind, breaking a sequence of mathematical operations into their most elementary steps.224

“Let us imagine the operations performed by [a person computing] to be split up into ‘simple operations’ which are so elementary that it is not easy to imagine them further divided,” said Turing. Now, “the two-dimensional character of paper is no [longer] essential of computation. I assume then that the computation is carried out on one-dimensional paper, i.e. on a tape divided into squares.” By this means, and far from being undecidable, “[i]t is my contention that these operations include all those which are used in the computation of a number.”225

Now, we find that we have a machine capable of making calculations beyond the practical realm of human decidability. “[A]n Arabic numeral such as 17 or 999999999999999 is normally treated as a single symbol … The differences from our point of view between the single and compound symbol is that the compound symbols, if they are too lengthy, cannot be observed at one glance … We cannot tell at a glance whether 9999999999999999 and 999999999999999 are the same.” But by laborious emulation of elementary “states of mind,” a machine of the hypothetical kind he was describing would be able to deal quickly and with precision with calculations that a human could only do with such difficulty as to make them impracticable. With such a machine a whole lot more could become decidable, and definitively. “It is possible to invent a single machine which can be used to compute any computable sequence,” he concluded.226
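
For readers who want to see the mechanism rather than only read about it, here is a minimal sketch in present-day Python. The encoding is ours, not Turing’s notation, and the instruction table is a toy in the spirit of his first example machine: it prints 0 and 1 on alternate squares of the tape, one elementary operation at a time.

```python
# A minimal sketch of a Turing-style machine: a tape of squares, a scanned
# position, and a table of elementary operations (write a symbol, move one
# square, change state). The encoding is our own toy illustration.
def run(table, steps):
    tape = {}                                   # the squares, indexed by position
    state, pos = "b", 0
    for _ in range(steps):
        symbol = tape.get(pos, " ")             # scan the current square
        write, move, state = table[(state, symbol)]
        tape[pos] = write                       # print on the square
        pos += 1 if move == "R" else -1         # move one square right or left
    return "".join(tape.get(i, " ") for i in range(min(tape), max(tape) + 1))

# A table in the spirit of Turing's first example: 0 and 1 on alternate squares.
table = {
    ("b", " "): ("0", "R", "c"),
    ("c", " "): (" ", "R", "e"),
    ("e", " "): ("1", "R", "f"),
    ("f", " "): (" ", "R", "b"),
}
print(run(table, 12))    # -> 0 1 0 1 0 1 (with blank squares in between)
```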

Turing offered his first lecture course at Cambridge in 1939, “Foundations of Mathematics,” in which the final examination question was to come up with a solution to the Entscheidungsproblem. In that year, the famed philosopher of language, Ludwig Wittgenstein,§2.1.1a offered a course with the same name. Turing was one of fifteen people who attended Wittgenstein’s course.

TURING: You cannot be confident about your calculus until you know there is no hidden contradiction in it …
WITTGENSTEIN: … Why should [people] be afraid of contradictions inside mathematics? …
TURING: The real harm will not come in unless there is an application, in which a bridge may fall down or something of that sort.227

Or perhaps, to imagine a further question and its answer, why even bother with the Entscheidungsproblem? Because it tells of a whole lot more that is definitively decidable by calculation, even if much of what is possible cannot be done without the help of a machine. We calculate, machines help us to calculate, and bridges mostly stay up.

As it happened, a professor at the Institute for Advanced Study in Princeton, Alonzo Church, had at the same time come to the same conclusion as Turing about the Entscheidungsproblem, though his method of proof was different, and he called his version of the idea “effective calculability.”228 Turing acknowledged Church in his paper, and for his part Church recognized the ingenuity of Turing’s hypothetical apparatus by calling it a “Turing Machine.”229 The principles of calculability have come to be known as the “Church–Turing thesis.”

The Institute for Advanced Study was where the intellectual action was – not only were Church and Gödel there, but at various times, other key figures in the creation of computing, including John von Neumann and Claude Shannon. Turing spent time there in 1936–38, writing his PhD on ordinal logics.

In 1938 Shannon came up with the idea that, instead of paper tape, relay circuits or on/off switches could represent mathematical symbols as zeros and ones, and do the work of calculation electrically. Applying the elementary logic of nineteenth-century mathematical philosopher George Boole, he suggested that when the circuit is closed a proposition could be considered false, and when it is open, it could be considered true: “any circuit is represented by a set of equations, the terms of the equations corresponding to the various relays and switches in the circuit. A calculus is developed for manipulating these equations by simple mathematical processes.”230 Such an electronic machine would work sequentially through a series of yes/no binaries: no decision problems or contradictions here.
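
To make the point concrete, here is a minimal sketch of Shannon’s calculus of switching circuits, our toy illustration using the convention just described, in which 0 stands for a closed circuit and 1 for an open one:

```python
# Shannon's "hindrance" algebra, sketched: 0 = closed circuit, 1 = open circuit.
# Two switches in series are open if either is open; in parallel, open only if
# both are open. Boolean equations stand in for the physical relays.
def series(x, y):
    return x | y          # logical OR of the hindrances

def parallel(x, y):
    return x & y          # logical AND of the hindrances

# A circuit: switch a in series with (b in parallel with c).
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            open_circuit = series(a, parallel(b, c))
            print(a, b, c, "->", "no current" if open_circuit else "current flows")
```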

By the mid-2010s, a smartphone would have over three billion such switches, performing exactly the kinds of humanly impractical calculations that Lovelace and Turing had both anticipated, though on a scale unimaginably larger again. Perhaps these things should be called Lovelace–Turing Machines.

***
The Second World War came, and Turing’s work turned to cryptography, decoding the messages sent by the German military through its Enigma enciphering machines. Working in an old house in the English countryside, Bletchley Park, Turing and his colleagues used a mix of hand-written mathematics and mechanical calculating machines to decipher German war commands.231

At Bletchley Park, and now also in Princeton, deceptively encoded meanings became the focus during this stage of the development of the meaning encipherment and decipherment machines that would later be called “computer.” It is a nice irony – or perhaps not – that the impetus for this foundational work was as much as anything about meanings deliberately hidden by calculation.

Then came the Cold War, and the focus shifted to the mathematical modeling of thermonuclear explosions. The first nuclear weapons – the atom bombs dropped on Hiroshima and Nagasaki,§2.0a and then the hydrogen bombs of the 1950s – were built with the help of computers at the Los Alamos National Laboratory in New Mexico. The most advanced in the world, these machines used punched cards like those in Jacquard looms and those proposed for Charles Babbage’s Analytical Engine.§1.3.2a The Mathematical Analyzer, Numerical Integrator and Computer created there came to be known by the unfortunate acronym MANIAC. For a decade, from the beginning of the Second World War into the Cold War, computing research and development was cloaked in militaristic secrecy. Without scholarly publication, it becomes hard to tell who contributed what in the development of these first electronic machines of calculation.232

After the war, the Illinois Automatic Computer (ILLIAC) was installed at the University of Illinois, a first-generation replicant of MANIAC and the first electronic computer to be owned by a university. Its design had been mapped out by John von Neumann in 1945.233 A former doctoral student of David Hilbert, then professor at the Institute for Advanced Study where Turing had worked for two years, von Neumann moved to Los Alamos to work on nuclear weapons. When he drove through Urbana in 1951 on one of his frequent trips between Princeton and Los Alamos, he stopped at the University to give a talk about the new machine, which was of course also being used for “secret government work.”234

Like so many of his colleagues at the Institute for Advanced Study, von Neumann was a Jewish refugee. Born into a wealthy Hungarian family, he was “violently anti-communist … since I had about three months taste of it in Hungary in 1919.”§AS2.4.2b So the Cold War project suited him in a way that it did not necessarily suit other scientists, many of them also Jewish refugees, who had signed up to fight the Nazis during the war. Prominent among these was Robert Oppenheimer, who pulled out of the venture when the bombs he and von Neumann had created in Los Alamos were dropped on civilian populations.

Not von Neumann: of all his intellectual projects, ranging from the game theory of an idealized capitalist market235 to computer design,236 nuclear “shock waves were von Neumann’s first love.” He is credited with the development of the doctrine of “mutually assured destruction” (MAD), whereby the US and the Soviet Union piled up enough nuclear weapons to destroy each other and the world. This, so the theory went, would be enough to keep the Soviets at bay while also creating a disincentive ever to use them.237

John Bardeen, co-inventor of the transistor, came to Illinois in the same year that von Neumann visited, winning the Nobel Prize in Physics for this achievement in 1956, and a second Nobel Prize in 1972. The pieces for the creation of modern computing were now all in place: Turing’s logic and Bardeen’s semiconductors. As the supposedly foolproof HAL 9000 computer dies in Arthur C. Clarke’s 1968 novel and Stanley Kubrick’s film 2001: A Space Odyssey, he reminisces about his construction and training in Urbana, Illinois.238 In another Kubrick movie, Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb, a von Neumann caricature is the star of the show, barely able to restrain a spontaneous Nazi salute.

But beyond military decipherment and designing thermonuclear explosions, what could all this mean? What could be the use of computing machinery? Like Lovelace, Turing was ready to dream forward. He is better known today for his philosophy of the machine and his theory of its meaning capacities than for his work on the machines themselves.

The war was over and the Bletchley Park team was disbanded. Turing joined the National Physical Laboratory, whose director was Sir Charles Darwin, grandson of the natural scientist. Turing and the team to which he belonged made slow progress on the machine, the Automatic Computing Engine (a nod to Babbage’s “Engines”). In the meantime, Turing wrote a speculative report, “Intelligent Machinery.”

Turing made the human brain the reference point for his report. A hypothetical Universal Logical Computing Machine would be able to do more than calculate according to instructions; it would also be able to apply the answers it came up with as new instructions to itself. “Whenever the content of [the machine’s] storage was altered by the internal operations of the machine, one would naturally speak of the machine ‘modifying itself.’” By this process, the machine would also be able to teach itself in a way analogous to children’s learning, using a system of rewards and punishments for right and wrong answers.239 (Behaviorism§3.1.2i was at the time the psychological theory of choice.)
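
The reward-and-punishment idea can be sketched in a few lines. This is our toy illustration of the principle, not anything from Turing’s report: a machine keeps a numerical weight for each possible answer, strengthens whichever answer earns a reward, and weakens whichever earns a punishment.

```python
import random

# Toy "pleasure-pain" learner: the machine's dispositions are just weights,
# nudged up when the teacher rewards an answer and down when it punishes one.
weights = {"yes": 1.0, "no": 1.0}
correct_answer = "yes"

for _ in range(50):
    total = sum(weights.values())
    answer = random.choices(list(weights), [w / total for w in weights.values()])[0]
    if answer == correct_answer:
        weights[answer] *= 1.2      # reward: reinforce the answer just given
    else:
        weights[answer] *= 0.8      # punishment: suppress it
print(weights)                      # "yes" now heavily outweighs "no"
```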

“The electrical circuits which are used in electronic computing machinery seem to have the essential properties of nerves,” Turing said. Without having to go through the cumbersome process of creating a body, an electronic brain could be given organs of sight (television cameras), speech (loudspeakers), and hearing (microphones), by means of which it could learn games, languages, translation of languages, and mathematics, though he wondered about language because this was rather too dependent on locomotion. Compared with these large promises, the report was also at times rather more circumspect, the machine merely “mimicking education,” learning only by laboriously undertaking an enormous number of minimal mathematical steps.240

Sir Charles Darwin was not impressed: “a schoolboy’s essay … not suitable for publication” was his verdict.241 Considering him useless to the ACE project, Darwin sent Turing back to Cambridge for a year’s sabbatical. The report was not published until after Turing’s death.

By the time the sabbatical was over, Turing had a new job, at the University of Manchester. As it happens, this was where Ludwig Wittgenstein§2.1.1a had begun his mathematical and philosophical journey.

In Manchester, they had started to build an electronic computer using seven tons of surplus parts delivered there from the scrapped Bletchley Park machines. “Manchester 1” had 1024 bits of random access memory, with paper-tape input and output – this last mechanism had been Turing’s suggestion. By 1949, announced The Times, “the mechanical brain” in Manchester had done something that was practically impossible to achieve on paper. It had found some previously undiscovered, extremely large prime numbers.242
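
Finding very large primes is a good example of a calculation that is decidable in principle but hopeless by hand. As an illustration (ours, not the Manchester machine’s actual routine), here is the Lucas–Lehmer test, the standard machine method for checking whether a Mersenne number 2^p − 1 is prime:

```python
# Lucas-Lehmer test for Mersenne numbers 2**p - 1 (the exponent p must itself
# be prime). Laborious but mechanical: the kind of work a machine does well.
def mersenne_is_prime(p):
    if p == 2:
        return True
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

exponents = [2, 3, 5, 7, 11, 13, 17, 19, 521]
print([p for p in exponents if mersenne_is_prime(p)])
# 11 drops out (2**11 - 1 = 2047 = 23 x 89); 521 gives a 157-digit prime.
```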

Turing decided it was time to try again with the theoretical ideas that had been so cursorily consigned to the proverbial dustbin by Sir Charles Darwin. In 1950, he published “Computing Machinery and Intelligence” in the philosophy journal, Mind. His question: how would you be able to determine when a machine was intelligent? His answer has come to be called the “Turing Test.”

Turing proposed an “imitation game,” where a machine and a person are behind a screen. Each is asked questions, with the source of the answers masked by a teleprinter. If the smartness of the answers is indistinguishable to the human questioner, the machine might be deemed as intelligent as a person.

Playing such a game, the machine had certain advantages. The number of discrete states of which the Manchester machine was capable was, said Turing, about 10^50,000. Moreover, a universal machine was able to conduct any kind of operation, depending on how it was programmed. And more: “By observing the results of its own behaviour it can modify its own programmes so as to achieve some purpose more effectively.” By calculation based on its own previous calculations, such a machine can learn. It can be “used to help in making up its own programmes …; a machine undoubtedly can be its own subject matter.”243
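
Turing’s remark that a machine “can be its own subject matter” is, in effect, the stored-program idea: instructions sit in the same memory as data, so a running program can rewrite its own instructions. Here is a minimal sketch of ours, not anything from Turing or Manchester:

```python
# A toy stored-program machine: instructions and data share one memory,
# so instruction 1 can overwrite instruction 0 while the program is running.
memory = {
    0: ("inc", 10),                   # increment the value in cell 10
    1: ("set", 0, ("inc", 11)),       # rewrite instruction 0 to increment cell 11
    2: ("jump", 0),                   # go back to instruction 0
    10: 0,
    11: 0,
}

pc = 0
for _ in range(6):                    # run a few steps, then stop
    op = memory[pc]
    if op[0] == "inc":
        memory[op[1]] += 1
        pc += 1
    elif op[0] == "set":
        memory[op[1]] = op[2]
        pc += 1
    elif op[0] == "jump":
        pc = op[1]

print(memory[10], memory[11])         # 1 1: the machine has modified itself mid-run
```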

To this extent, computing machinery could perform some of the same operations of calculation as humans, though much faster. “Parts of modern machines which can be regarded as analogues of nerve cells work about a thousand times faster than the latter.” So, the “view that machines cannot give rise to surprises is due … to a fallacy …, the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it.”244 Things can present themselves as surprises after a lot of working over data, and some of the working over that machines can do is too laborious for manual calculation. Today, these methods are called “machine learning” and “artificial intelligence.”§AS1.4.6c

Having specified these capacities, Turing reframed the question that began the paper. With these capacities, “the question ‘Can machines think?’ should be replaced by ‘Are there imaginable digital computers which would do well in the imitation game?’”245

Philosopher of language John R. Searle§2.3.1b was to mount what has become the best known challenge to Turing, in the hypothetical case of the Chinese room. Two people are behind Turing’s screen; the first one knows Chinese and the second doesn’t but has a dictionary. A third person asks each the meaning of words, and will get correct answers from both. The two people behind the screen seem equally smart because they can both type out the right answers. Like the second person, computers cannot know Chinese; they can only seem to know Chinese.246

The target of Searle’s critique was “artificial intelligence.” But Turing never made such an overblown claim. His claim was to no more than “mechanical intelligence,”247 and this intelligence was in fact a trick. Turing had a dry sense of humor, and with some of its absurd counter-arguments, “Computing Machinery and Intelligence” can be read as a humorous text. The computer, he said, exhibited no more than a semblance of intelligence. If it was smart enough, it would be able to trick a gullible user in an “imitation game.” The computer itself was the joker. Perhaps Turing had Wittgenstein’s “language game” in mind, though Wittgenstein had not been joking.§2.3.2a Sir Charles Darwin was not joking, either.

Here is our re-reading of Turing. Calculation is one narrowly circumscribed function of meaning, a function which in our multimodal grammar we have called quantity. By transposition into quantity we can reduce a Chinese character to the zeros and ones that lie behind Unicode.§0.2.1a The character makes sense as something in the world expressed in our sensuous experience through non-quantitative functions. Or we have the word for the character spoken in voice synthesis, which also reduces the word represented by that character to zeros and ones, albeit a totally different set of zeros and ones from text. This is because, even in the binary world, the instruction tables for text and speech are fundamentally different and can only be aligned by probabilistic statistics.
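
To see the reduction concretely, here is a minimal sketch (our illustration; any character would do) of the zeros and ones that lie behind one Chinese character in Unicode. A recorded pronunciation of the same word would be a different and far longer stream of zeros and ones, which is the gap that only probabilistic alignment can bridge.

```python
# The character 中 ("middle") transposed into quantity: a Unicode code point,
# then the bits of its UTF-8 encoding. Purely illustrative.
char = "中"
print(hex(ord(char)))                                     # code point: 0x4e2d
print(" ".join(f"{byte:08b}" for byte in char.encode("utf-8")))
# -> 11100100 10111000 10101101: the same character as three bytes of zeros and ones
```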

The zeros and ones in the middle, between creation and rendering, are an extremely narrow species of written text, completely unreadable except by a machine. When, following an instruction to calculate, they come out of the machine, they are mechanically reconstructed as text or sound. In the middle is a huge amount of very fast calculation, but the elemental textual units are far too minimal and too many for the human mind to be able to read, even for their mathematical meaning.

Now we’re back to routines like those developed at Bletchley Park to fight the German war machine: encipherment > practically unintelligible, mathematically expressed text > decipherment. The stuff between encipherment and decipherment is no more than calculation, the logic of which humans have created by clever reduction, and the speed of which has been mechanized to beyond mortal capacities. We are back with Lovelace.

Nor is the computer so very different from other novel inventions of Lovelace’s time. When in 1825 the steam locomotive first went faster than a person walking or a horse pulling a stage coach, people marveled – would a human body disintegrate at twenty miles per hour, a speed until then thought possible only for angels?248 Bodies moving fast, numbers adding fast. Such are the powers of the mechanical prostheses of modern invention.

The question, then, is what is the scope of transposition into quantity? What are its affordances? Our answer, in sympathy we believe with the insights of both Lovelace and Turing: only things that are countable. By counting, instances can become concepts.§1.1 By counting across a grade, qualities§1.3.1 can be measured and named. By relating variables, curves can be drawn.§1.2.2d By counting the pixels and numbering their colors, an image can be deconstructed then reconstructed.§1.2.2e By numbering time and place, we can organize our days and find our way.§AS1.3 And by giving a huge number of things a huge number of mostly unspeakable alphanumerical names, we have created for ourselves an unfathomably long list of things to count. That’s a lot, but also, that’s about all.
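
A minimal sketch of the point about images: by counting the pixels and numbering their colors, even a tiny picture is deconstructed into a bare list of quantities, and from those quantities it can be reconstructed exactly. The two-by-two image below is our own trivial illustration.

```python
# A 2 x 2 image: each pixel is three numbers (red, green, blue, 0-255).
image = [
    [(255, 255, 255), (0, 0, 0)],
    [(0, 0, 0), (255, 255, 255)],
]

# Deconstruction: the picture becomes pure quantity.
flat = [n for row in image for pixel in row for n in pixel]
print(flat)              # [255, 255, 255, 0, 0, 0, 0, 0, 0, 255, 255, 255]

# Reconstruction: regroup the numbers three at a time, two pixels to a row.
rebuilt = [[tuple(flat[i:i + 3]) for i in range(r, r + 6, 3)]
           for r in range(0, len(flat), 6)]
print(rebuilt == image)  # True: nothing is lost in the counting
```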

Turing’s mortal fate was to be determined by another kind of decidability, born of a different series of binaries. On 7 February 1952, he reported to the police a burglary at his house. When the case came to court, Turing confessed, “I tried to mislead you about my informant … I picked him up in Oxford Street, Manchester … I have been an accessory to an offense in this house … I had an affair with him.”

The informant’s friend had stolen Alan’s things, but that was soon ignored, and Turing was committed to trial on three charges of indecency. He wrote a grimly jokey letter about it to a friend, signing off:

Turing believes machines think
Turing lies with men
Therefore machines do not think
Yours in distress
Alan249

He was sentenced by the court to undergo “treatment,” and this included estrogen implants whose effect was chemical castration.250

Turing was found dead from poisoning on 7 June 1954. He had a small lab in the house where he used cyanide to conduct electrical experiments. The coroner’s court ruled suicide, but his mother, whom he had never liked but who went on to write a hagiography of him, insisted the death was accidental.251

  • Cope, Bill and Mary Kalantzis, 2020, Making Sense: Reference, Agency and Structure in a Grammar of Multimodal Meaning, Cambridge UK: Cambridge University Press, pp. 159-69 [§ markers are cross-references to other sections in this book and the companion volume (AS); footnotes are in this book.]