New Learning’s Updates

Artificial Intelligence in Education - Potentials and Limits

To what extent and in what ways can machines support learning? Since the beginning of computing, questions have been raised about the intelligence of machines and their potential role in supporting human learning. Can they be helpful to learners because they can be smart like a teacher? Can they give feedback the way a teacher would? These questions require a response that addresses the larger question of the nature of ‘artificial intelligence’. And can computers learn from their own data in order to help human learning? This question addresses a phenomenon now framed as ‘machine learning’.

The question of artificial intelligence was famously posed in the form of a test by one of the founders of digital computing, Alan Turing. In his proposed test, a computer and a person are each hidden behind a screen, and another person asks them questions via a teletype machine, so the source of the answers is indistinguishable. If the person asking the questions can’t tell the difference between a human and a machine response, then the machine may be taken to exhibit artificial intelligence.

Digital computers can be constructed, and indeed have been constructed ... that ... can in fact mimic the actions of a human computer very closely. ... If one wants to make a machine mimic the behaviour of the human computer in some complex operation one has to ask him how it is done, and then translate the answer into the form of an instruction table. Constructing instruction tables is usually described as ‘programming.’ To ‘programme a machine to carry out the operation A’ means to put the appropriate instruction table into the machine so that it will do A. (Turing 1950)

John Searle’s response is that this is not artificial intelligence at all, or at least not what the proponents of what he labels the ‘strong AI’ hypothesis would claim. Computers can’t think, he says. He sets out to disprove the Turing thesis with a hypothetical ‘Chinese room’. Inside the room is a person who knows no Chinese but who, by following a rulebook of look-up tables, can return the correct Chinese characters in answer to questions passed in. Just because the answers are correct does not mean that the person, or a computer doing the same symbol-matching, understands Chinese (Searle 1980). His conclusion: ‘in the literal, real, observer-independent sense in which humans compute, mechanical computers do not compute. They go through a set of transitions in electronic states that we can interpret computationally’ (Searle 2014).

To Searle, Stevan Harnad responds that computers set out to do no more than simulate or model patterns in the world in a way that records and represents, in increasingly smart ways, human knowledge and pattern-making (Harnad 1989). Daniel Dennett says that Searle over-interprets and so misunderstands the Turing test, which is not about human-like thinking, but thinking which can be made to seem human-like, hence the aura of façade in all AI programs (Dennett 1998). On our iPhones, Siri seems smart because, if our question is sufficiently predictable, she seems to recognize what we are asking her. She can look things up on the internet to give you an answer, and she can calculate things. Siri’s voice is in fact a recording of a woman named Susan Bennett who lives in Atlanta, Georgia. So the intelligence of our devices is no more than what Turing calls an ‘imitation game’, a piece of trickery by means of which we anthropomorphize the machine. How smart is the machine? Siri is smart enough to have a huge amount of recorded human data at her fingertips via the internet—the address of a restaurant, the weather forecast for 3.00pm, the year of the French Revolution. She has been programmed to answer many kinds of question, to look up many kinds of thing, and to make numerical calculations. She’s also been programmed to give funny answers to insults. This is how our phones pass the Turing test every day, if the test is seen to be no more than an imitation game. Computers pass this test only to the extent that passing it is a failure of our credulity.

Computers in education can be this smart: they can assess whether an answer to a question is right or wrong, because they have been programmed to ‘know’ the answer (for instance, computerized select-response tests). They can anticipate a range of sequences of learning and feedback which have been programmed into them (for instance, in an intelligent tutor). They can grade a text by comparing it with other texts that have already been graded by humans (automated essay scoring). They are at once not very smart and very smart. They are not very smart in the sense that they can only repeat answers and sequences that have already been programmed into them by humans. But they are smart to the extent that they record and deliver already-recorded information and patterns of human action.
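The first of these cases, the computerized select-response test, can be reduced to a minimal sketch: the machine ‘knows’ the answer only because a human wrote it into the program in advance. (The questions, answer key and function name below are hypothetical illustrations, not drawn from any particular testing system.)

```python
# A select-response test: the machine "knows" the answer only because
# a human programmed it in advance.
ANSWER_KEY = {
    "In what year did the French Revolution begin?": "1789",
    "Who proposed the imitation game?": "Alan Turing",
}

def check_response(question, response):
    """Give feedback by looking the answer up, not by understanding it."""
    correct = ANSWER_KEY.get(question)
    if correct is None:
        return "Question not in the key: the machine has nothing to say."
    if response.strip() == correct:
        return "Correct!"
    return f"Incorrect: the keyed answer is {correct}."

print(check_response("In what year did the French Revolution begin?", "1789"))
# → Correct!
```

The feedback can be dressed up in an encouraging interface, but the underlying operation is no more than a look-up against answers humans supplied.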

In this regard, computers are merely an extension of the knowledge architectures of writing. They make them more efficient by mechanizing parts of the reading path. Instead of looking up an answer to check whether it is correct (via tables of contents, indexes or lists of answers to questions in a book, printed upside down to help you resist the temptation of looking up the answer prematurely), they look it up for you—as long as the question and answer have been written up by humans in advance. Just as a thousand people could read a book, a thousand people can have a question asked of them by the computer, and their answers checked. The machine is smart because it has been programmed by a human, in much the same way that a book is smart because it has been written by a human. Books are smart, and computers are smart in the same way—only mechanically more efficient. The smartest thing about computers, perhaps, is their ‘imitation game’, the way they have been designed to seem smart. It’s not just the anthropomorphized interfaces (Siri’s voice or responsive avatars), but the text on the screen that says things to the learner, or tells them with a nicely presented alert when their answers are right or wrong.

Computers can also be intelligent in another way. Not only can they repeat things they have been programmed to say; to a limited degree they can also learn. This aspect of artificial intelligence is called ‘machine learning’: the machine can learn from the data it is presented with. In fact, this is a process of finding statistical patterns in meanings that can be represented using numerical tokens. In supervised machine learning, a computer interaction (an action sequence in an intelligent tutor, the words typed in an essay) that has been attributed a value by a human (an affective judgment such as ‘engagement’, or a grade) is compared statistically with a new pattern of interaction or text. If the two are similar, the human judgment is attributed by the machine to the new activity or text. Unsupervised machine learning presents the person with statistically significant patterns and asks them to attribute a meaning. The machine may then record the conclusion, thereby adding to its apparent smarts. But the computer is still essentially no more than a recording machine and a calculating machine. However, these mere recording and calculating machines might be very helpful to learners to the extent that they record and calculate data that have been construed by humans to be evidence-of-learning.
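The supervised case can be sketched in a few lines: texts are reduced to numerical tokens (here, simple word counts), a new text is compared statistically with texts already graded by humans, and the human grade of the most similar text is attributed to the new one. This is a toy illustration of the principle, not the method of any actual essay-scoring system; the corpus, grades and function names are invented.

```python
from collections import Counter
from math import sqrt

def bag_of_words(text):
    """Represent a text as counts of its lowercased words (numerical tokens)."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Statistical similarity between two word-count vectors (0.0 to 1.0)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def grade_by_similarity(new_text, graded_corpus):
    """Attribute to the new text the human grade of its most similar neighbour."""
    new_vec = bag_of_words(new_text)
    best_grade, best_score = None, -1.0
    for text, grade in graded_corpus:
        score = cosine_similarity(new_vec, bag_of_words(text))
        if score > best_score:
            best_grade, best_score = grade, score
    return best_grade

# A hypothetical human-graded corpus: the "learning" is only the recorded
# association between word patterns and grades humans have already given.
corpus = [
    ("the revolution transformed french society and politics", "A"),
    ("stuff happened in france", "C"),
]
print(grade_by_similarity("the revolution reshaped politics in french society", corpus))
# → A
```

The machine here attributes a human judgment to a new text by calculation alone; it records and compares, but at no point does it understand what the essays say.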

The computer is an artifice of human communication, an extension of older textual architectures, but a significant extension nevertheless, if not the qualitative leap that its most enthusiastic advocates would have us believe. It is a cognitive prosthesis, an extension of the social mind, and as such part of a continuous history that begins with writing and later becomes books and schools.


Dennett, Daniel C. 1998. Brainchildren: Essays on Designing Minds. Cambridge MA: MIT Press.

Harnad, Stevan. 1989. "Minds, Machines and Searle." Journal of Theoretical and Experimental Artificial Intelligence 1:5-25.

Searle, John. 1980. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3:417-457.

Searle, John R. 2014. "What Your Computer Can’t Know." New York Review of Books. 9 October.

Turing, Alan M. 1950. "Computing Machinery and Intelligence." Mind 59:433-460.
