AIs have been developed that respond to human language, drive cars, and play masterful chess. Since these feats traditionally require human intelligence, it might be said that AIs possess a form of intelligence.
What do we humans make of this artificial ‘intelligence’? Are AIs intelligent in the same sense that we are? Can machines think?
Thinking about thinking is second nature to us humans. What first evolved as an ability to guess what others are thinking, the better to compete and collaborate, further evolved into self-reflection. Other animals self-reflect, but humans have a unique ability to conceptualize thinking itself. Self-reflection is key to human intelligence in general; it shows up particularly in philosophy (Descartes’s famous ‘I think, therefore I am’), and it has of course driven the invention of AI itself.
Philosophers have long debated whether machines can ever think. The debate centers on whether there is something inherent in human intelligence that a machine can never duplicate.
In a 1980 paper, the philosopher John Searle argued that machines can never achieve human-like intelligence. In his famous ‘Chinese room’ thought experiment, a man who speaks no Chinese is nevertheless able to respond, in Chinese, to Chinese messages passed through a slot in the door. He does this without understanding a word of Chinese, simply by consulting correspondence tables that match incoming Chinese symbols to appropriate Chinese replies.
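To make the setup concrete, here is a minimal Python sketch of that kind of mechanical lookup (the messages and table entries are invented for illustration): the program produces sensible Chinese replies while understanding nothing at all.

```python
# A toy 'Chinese room': replies come from pure table lookup,
# with no understanding of what any message means.
# (The table entries below are invented placeholders.)

CORRESPONDENCE_TABLE = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather today?" -> "It's nice today."
}

def chinese_room(message: str) -> str:
    """Return the reply paired with `message` in the table.

    The function never parses, translates, or interprets the
    message; it only matches symbols to symbols.
    """
    return CORRESPONDENCE_TABLE.get(message, "请再说一遍。")  # "Please say that again."

# A message passed through the 'slot in the door':
print(chinese_room("你好吗？"))  # fluent reply, zero understanding
```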
Searle’s point was that the room, like a computer, merely simulates thinking, which is clearly different from a person communicating out of a genuine understanding of Chinese. By the same token, he argued, AIs do not understand as humans do and cannot really be thinking: they only simulate it.
Hubert Dreyfus also believed that machines will never think like humans. In ‘What Computers Can’t Do’, he argued that human intelligence requires the context of a human body and a human life, which can never be reduced to machine algorithms.
On the other hand, in a 1950 paper Alan Turing concluded that there is no reason a machine might not eventually be judged to be ‘thinking’, provided we can come up with a suitable test. He proposed his famous ‘Imitation Game’ (now called the Turing Test) as a criterion: if an AI can carry on an open-ended conversation with a person without revealing itself as a machine, we have no justification for saying it is not thinking.
More recently, AI pioneer Geoff Hinton made another argument for the possibility of thinking machines. He believes deep neural networks may eventually achieve human-level intelligence. He points out that our brains work with patterns across billions of elementary signals (electrical and chemical) in a way not fundamentally different from how deep neural networks encode patterns in their billions of parameters.
AI practitioners have tended to view the question of whether machines will ever think as a practical issue more than a philosophical one. They point out the difficulty of pinning down what constitutes human intelligence, and of predicting or ruling out technical breakthroughs. They prefer to look at what AI has accomplished so far and speculate about when continuing progress might produce AIs that match human-level performance.
These speculations, it seems, have often underestimated how far we have to go. For example, in the 1960s, AI pioneer Herbert Simon predicted human-level intelligence by the 1980s. In 2006, Ray Kurzweil predicted that a computer with the power of a human brain would be available for $1,000 by around 2020.
We are still waiting!