Is AI an Existential Threat?

“Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven’t even landed on the planet yet!”

Andrew Ng, Is AI an Existential Threat to Humanity?

Where will AI take us? One school of thought envisions the progression of AI first to human-level Artificial General Intelligence, then on to Superintelligence, by virtue of AI’s ability to self-improve. Once AIs are superintelligent, the speculation continues, humankind will essentially be at the mercy of these superior beings, who may well decide against humans in favor of their own goals. This may sound fanciful, but prominent figures such as Stephen Hawking, Elon Musk, and Bill Gates are among those sounding the alarm that AI might eventually lead to disaster.

Yes, AI can be dangerous – autonomous weapons, social manipulation, fraud. However, I think experience with AI so far gives us no evidence that superintelligent AIs will become an existential threat to humankind. Here are some of the reasons:

  • Today’s AI is not in the same league as human intelligence
  • Human intelligence is intertwined with today’s AI
  • Human-level intelligence can’t be achieved simply by scaling up today’s AI on faster computers
  • Even a superintelligent AI would not seek human domination

Today’s AI is not in the same league as human intelligence

Today’s AIs are impressive. People have made it their life’s work to learn the game of Go, yet AlphaGo-Zero learned through trial and error, in just 40 days, to play brilliantly enough to beat any human opponent. AIs have learned to recognize spoken language by mapping patterns of speech to patterns of text, without being given any information about the things, actions, and ideas the spoken words represent, or about the structure of language.

Playing Go and understanding language are certainly evidence of intelligence in humans. But when AIs accomplish these feats, they are drawing on capabilities that are very narrowly specialized and shallow compared to human intelligence.

A single human has the capability to learn pretty much anything they need to know about their environment, including things microscopic and light-centuries away. An AI can become an ‘expert’ only in a very limited domain. The “universe” “understood” by AlphaGo-Zero is a 19 × 19 game board, black and white stones, and the simple rules of Go.

The AI community got pretty excited when a single AI mastered 57 Atari video games. Parents might be impressed if their child became an Atari ace, but they pretty much take it for granted that their child will master reading, writing, arithmetic, music, dance, sports, and much more.

A narrow superhuman capability is not intelligence. Even electronic calculators are superhuman in a sense – they can do arithmetic much faster than we can. But no one thinks of them as ‘intelligent’.

While today’s neural networks may be deep, what they learn is shallow compared to human learning. When a child learns words, the words have meaning in the physical world. An AI can do a great job of mapping spoken words to text, but the AI is only mapping patterns of sound to patterns of text, without any regard to what words mean.

The depth of human understanding is key to human intelligence. That depth allows humans to become competent in many things. It also allows children to learn words from a small number of examples. The voice-to-text AI needed 3 million audio samples and 220 million text samples to learn its mappings!

Human intelligence is intertwined with today’s AI

AIs function in a context totally created by humans. To enable machine learning, humans must first design the machine, structuring perhaps billions of elements controlled by adjustable parameters. It has taken decades of human engineering to find good designs, designs that can, with the right parameters, effectively map input data to recognized patterns. Humans must then run many examples through the neural network to discover parameter values that lead to accurate recognition. The machine ‘learns’ only because humans have developed algorithms to adjust parameters to get better and better results.
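To make that concrete, here is a minimal sketch of the kind of loop engineers set up. The model, data, and numbers are invented purely for illustration; the point is only that the ‘learning’ is a human-written rule nudging an adjustable parameter until the predictions fit the examples.

    # Illustration only: a one-parameter "model" whose weight is adjusted by a
    # human-written update rule (gradient descent) until it fits the examples.
    # The data and every number below are invented.

    examples = [(x, 3.0 * x) for x in range(1, 6)]  # inputs x with targets y = 3x

    w = 0.0               # the adjustable parameter, starting from ignorance
    learning_rate = 0.01  # chosen by the human engineer, not by the machine

    for epoch in range(200):
        for x, y in examples:
            error = w * x - y
            # The "learning" step: nudge w in the direction that reduces squared error.
            w -= learning_rate * 2 * error * x

    print(f"learned w = {w:.3f}  (the rule behind the data was w = 3)")

Everything that makes this work – the form of the model, the update rule, the learning rate, the examples – was supplied by people; the machine only turns the crank.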

Note that AI learning does not take place in the physical world occupied by humans, but instead in artificial worlds created by humans: visual scenes converted to numbers by digital cameras, sound converted to time-frequency arrays by electronics, online retail patterns created by a vast online infrastructure.

Human-level intelligence can’t be achieved simply by scaling up today’s AI on faster computers

It has been simply remarkable how far AI has advanced in the last few years, as artificial neural networks have become bigger and faster – self-driving cars, language translation, personal assistants, advanced robots. Larger and larger networks have become practical (175 billion parameters!), and the larger the networks, the more they seem to be able to learn.

And the way modern deep neural networks learn is impressive. For example, in the old days, the relatively small neural networks built to recognize objects in images relied on humans to give them a significant head start: engineers would write programs that extracted edges, objects, and other features in order to simplify the patterns the networks had to learn to recognize. Modern networks don’t need that head start – they can take digital picture elements directly as inputs, figuring out for themselves what features are important. And they perform far better than earlier networks.
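As a rough sketch of that contrast (illustrative Python only, not any particular system, with made-up thresholds and a made-up image), the old pipeline reduced an image to a handful of engineer-chosen numbers before learning began, while the modern pipeline hands the raw pixel values straight to the network:

    # Illustrative contrast only; the threshold and the tiny "image" are invented.

    def handcrafted_features(image):
        """Old approach: a human-written program boils the image down to a few
        engineered numbers (here, an edge count and the mean brightness)."""
        edges = 0
        for row in image:
            for left, right in zip(row, row[1:]):
                if abs(left - right) > 50:   # threshold chosen by an engineer
                    edges += 1
        mean = sum(sum(row) for row in image) / (len(image) * len(image[0]))
        return [edges, mean]

    def raw_pixels(image):
        """Modern approach: feed the raw pixel values straight to the network
        and let training decide which features matter."""
        return [pixel for row in image for pixel in row]

    image = [[0, 0, 255],
             [0, 0, 255],
             [0, 0, 255]]                    # a made-up 3 x 3 image

    print(handcrafted_features(image))       # a few human-designed numbers
    print(raw_pixels(image))                 # all nine raw values, unprocessed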

Some argue that since intelligence is based on computation (even the human brain is essentially an organic computer), intelligence is limited only by computational power. Therefore as human-built computers continue their exponential increase in power, it’s only a matter of time before machines will match and then exceed human intelligence.

I (and others) disagree. I think we have no evidence that achieving superintelligence is just a matter of more powerful computers and more elaborate versions of today’s AI. As discussed earlier in this post, there are significant differences between the machine and human versions of ‘intelligence’ and ‘learning’. The gap between machines and humans is too large to expect achievement of even human-level intelligence, let alone superintelligence, by just doing ‘more of the same’. An analogy would be claiming the inevitability of faster-than-light travel because of the rapid rise in the top speed of human-made vehicles: from the 36 mph steam locomotive ‘Rocket’ 200 years ago, to the 90,000 mph Juno spacecraft today.

Even a superintelligent AI would not seek human domination

A more fundamental error is thinking that future AIs and humans would act like two tribes competing for world domination. Predicting such a struggle requires mistaking AIs for humans and humans for AIs.

Humans have evolved to live on earth with other humans using conscious and unconscious cognitive capabilities. Our interactions with our environment and each other lead to complex motivations and behaviors – writing symphonies, engineering AIs, seeking power, waging war. The part of our cognitive abilities we capture in AI is just a small part – the kind of problem solving we can understand and model.

Imagine that part of your cognitive ability – let’s say the ability to play chess or understand languages – were suddenly amplified thanks to a new drug. Should your neighbors become worried that you will become a bully, or start trying to dupe them out of money?

Of course some people, finding themselves suddenly smarter, might apply their new capability to crime. However, these would be people already inclined to crime, using their enhanced intelligence as a new tool. Higher intelligence does not increase inclination toward crime. (In fact, human IQ and crime are negatively correlated.)

If smart AIs of the future become dangerous, it won’t be because they are super smart problem solvers. It will be because they are designed by humans to be dangerous. Motivation to dominate humanity is not an automatic consequence of superintelligence, even if it could be achieved.

Another barrier to AI world domination is that agency – getting things done in the world – takes more than intelligence. Materials must be acquired, artifacts fashioned, millions of details coordinated in the physical world. Even if an AI did achieve superintelligence and decide to try to dominate humanity, it would be far from guaranteed the real-world agency needed to succeed.

Superintelligence?

The thought of AIs that are somehow superhumanly intelligent yet very human-like in their vices is certainly chilling. Just like faster-than-light travel, it makes a great device for science fiction. However, I think our worries are better spent on more plausible threats.

By Tom Robertson

Tom Robertson, Ph.D., is an organizational and engineering consultant specializing in harmonizing human and artificial intelligence. He has been an AI researcher, an aerospace executive, and a consultant in Organizational Development. An international speaker and teacher, he has presented in a dozen countries and has served as visiting faculty at Écoles des Mines d’Ales in France and Portland State University.
