Reason, Emotion, and AI

Human intelligence has always been an inspiration for artificial intelligence. For example, early work in artificial neural networks was inspired by the interconnected axons and dendrites found in biological brains. 

Human intelligence also inspires the tasks AI researchers use to benchmark their machines: tasks are defined that require human intelligence, then researchers attempt to build machines that can perform those tasks. AI has progressively matched or outperformed humans in tasks such as chess, Go, and language translation.

Does this progress in AI mean machines are getting ‘smarter’, in the sense of being closer to having human intelligence? I would say ‘No’. Others would say skeptics like me just keep moving the goalposts by saying “Anything a machine can do can’t be intelligence, it’s just code!”

But let’s look at human intelligence. It involves a lot more than the kind of skill demonstrated by a chess grandmaster in a championship game. Although chess mastery is a demonstration of exceptional human intelligence, this skill represents a narrow slice of the grandmaster’s intelligence, the totality of which relies on a complex cognitive architecture shared by all humans. Emotions are part of that architecture.

Emotions are often framed as the antithesis of intelligence and a human weakness. In the 1951 science fiction movie classic ‘The Thing from Another World’, a scientist, Dr. Carrington, marvels at the superiority of an alien mind: ‘No pleasure, no pain… no emotion, no heart. Our superior in every way’. Modern commentators cast our society as minds manipulated by social media, embracing conspiracy theories in the service of anger and resentment, at the expense of reason.

It is true that immense human progress has been made through science and reason, and emotions can stir up real trouble. However, it is also clear that reason and emotion work hand in hand. Humans wouldn’t have evolved that way if emotions weren’t an essential part of our survival.

So, what role does emotion play in human intelligence? It provides essential context and motivation behind conscious analytical problem solving. It is the reason the chess grandmaster acquired her skill to begin with, and the architect of the path that led her there. The grandmaster’s skilled game play is just the tip of the iceberg. The intelligence required to create that highly specialized, analytical skill, in a brain evolved to survive in the broad context of human life, is truly awesome, and far beyond the rote learning of a deep neural network.

What role does emotion play in AI? There is a branch of AI called Emotion AI, which seeks to develop AIs that recognize and respond to human emotion. While this line of work benefits human–AI collaboration (and, unfortunately, manipulation), in my view it doesn’t get at the essential role of emotions in human intelligence.

It’s not that AIs need to be able to ‘feel’ emotions to have human-like intelligence. Instead, AI problem solving would need to incorporate the immense informational context represented by human emotions. Emotions represent lifetimes of experience living embodied in the real world, encoding a breadth of understanding and an appreciation of causality and common sense that AI has so far failed to match.

Author: Tom Robertson

Tom Robertson, Ph.D., is an organizational and engineering consultant specializing in harmonizing human and artificial intelligence. He has been an AI researcher, an aerospace executive, and a consultant in Organizational Development. An international speaker and teacher, he has presented in a dozen countries and has served as visiting faculty at the École des Mines d’Alès in France and Portland State University.

2 thoughts on “Reason, Emotion, and AI”

  1. Emotions in AI

    Starting from first principles, the goal of any machine is to achieve suitability for purpose. The purpose can be crisp or soft, near term or ongoing, undefined or well understood, and so on. It doesn’t matter – the goal state and the measurement criteria are one and the same: suitability for purpose.

    For those of us who have spent our lives building systems, the methodology was always to define that suitability for purpose in a set of requirements. We needed something to measure so that we could say we were done and get paid. If that is applied to the AI challenge of the day, self-driving, then you come up with a set of requirements: lane keeping, collision avoidance, distance keeping, and flow management. And then secondary requirements that relate to the comfort of the passengers: acceleration/deceleration, sway, … whatever.

    So by parsing the suitability for purpose in this way, we can add the emotional context. As a self-aware, self-diagnostic machine, an AI can get emotional about how well it is performing against the metrics in pursuit of these goals, and can try to learn and improve its performance.
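    Here is a minimal sketch of that idea in Python (the metric names, limits, and weights are hypothetical, invented purely for illustration): the machine compares its measured performance against the requirements that define its suitability for purpose, and the aggregate shortfall plays the role of an ‘emotional’ signal it can use to drive learning.

    ```python
    # Hypothetical sketch: score a self-driving run against its requirements
    # and turn any shortfall into a single "dissatisfaction" signal.
    from dataclasses import dataclass

    @dataclass
    class Requirement:
        name: str      # metric this requirement constrains
        limit: float   # worst acceptable value for the metric
        weight: float  # how much missing this requirement matters

    def dissatisfaction(requirements, measured):
        """Weighted sum of how far each measured metric exceeds its limit --
        the machine analogue of feeling bad about missing its goals."""
        score = 0.0
        for req in requirements:
            shortfall = measured[req.name] - req.limit
            if shortfall > 0:  # only penalize performance worse than the limit
                score += req.weight * shortfall
        return score

    # Primary and secondary (comfort) requirements, with made-up numbers.
    reqs = [
        Requirement("lane_deviation_m", limit=0.20, weight=3.0),
        Requirement("collision_margin_violations", limit=0.0, weight=10.0),
        Requirement("sway_m_per_s2", limit=0.50, weight=1.0),
    ]
    run = {"lane_deviation_m": 0.35, "collision_margin_violations": 0.0,
           "sway_m_per_s2": 0.8}

    print(f"dissatisfaction: {dissatisfaction(reqs, run):.2f}")  # feeds learning/tuning
    ```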

    Human actors definitely have emotions about all aspects of our lives: how well we have done at our chosen pursuits, how well our performance has demonstrated our suitability for purpose, as well as the other, softer pleasures of life. I am pleased when I have a toasted bagel with ‘bad for me’ peanut butter, and when I give a good presentation that succinctly makes a key point to a broad audience. We could say that an AI is superior because it does not have any of these distractions in its goal seeking – just as James Holzhauer was so successful on his Jeopardy! run because of his cold, single-minded approach to winning. My wife stopped watching because she found him machine-like and boring. But he was kick-butt in the suitability-for-purpose area.

    IMO the Turing Test was always a parlor trick with no real purpose. An AI focused on achieving suitability for purpose could have an emotional response and thus assess and improve its performance on the track provided. That is the way AI can fit in an organizational context and improve the overall performance of the human/AI team. But to achieve that emotional, organization-enhancing win, we need to discard the android silliness like Cmdr. Data on Star Trek and do good system design and programming that understands the suitability for purpose of the team.

    Swick

    1. Greg,

      Thanks for your thoughtful comment – I really like the idea of thinking about AI and emotion in the context of purpose.

      I think well-designed systems often outperform their human designers precisely because they are so relentlessly purposeful. (I’m thinking of forklifts, airplanes, chess programs, etc.). And of course the error signal in a feedback control system is a counterpart to emotion.
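      To make that analogy concrete, here is a toy proportional controller in Python (the room-temperature ‘plant’ and the gain are invented for illustration): the error signal that drives each corrective action is the counterpart to emotion – a running report of how far reality is from where the system ‘wants’ to be.

      ```python
      # Toy feedback loop: the error signal plays the role emotion plays for us.
      def proportional_control(setpoint, measurement, gain=0.5):
          error = setpoint - measurement  # the 'emotional' signal
          return gain * error             # corrective action proportional to it

      temperature = 15.0  # hypothetical room, starting cold
      for step in range(8):
          action = proportional_control(setpoint=20.0, measurement=temperature)
          temperature += action  # toy plant: the action changes the state directly
          print(f"step {step}: temperature = {temperature:.2f}")
      ```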

      What strikes me about human emotions is that they serve a ‘meta-purpose’ outside of conscious purpose. They bring to our attention assessments our brain is making, underneath our conscious thought.

      Of course, sometimes emotions lead to impulses that are best overridden – as when we overcome fear to rescue someone from a fire, or turn down that second helping of peanut butter.

      While some of our emotional assessments are obsolete in the modern world, I think mostly they represent a deep wisdom that helps steer our consciously purposeful thinking to adapt to the requirements of surviving in nature and society. AIs have yet to achieve human-level adaptability, and I think they will have a hard time getting there, because part of what makes humans so adaptable operates below consciousness.
