Empathy: the ability to feel each other’s feelings. It lies at the core of what makes us human. But how important will it be to the artificial minds we one day create? What are AI researchers doing to imbue their creations with artificial empathy ... and should they be doing more? In short, what is the pathway to the “machines of loving grace” that poet Richard Brautigan foresaw?
The mainstream of AI research has traditionally focused on the more explicitly analytical, intellectual, nerdy aspects of human intelligence: planning, problem-solving, categorization, language understanding. Recent attempts to broaden this scope have centered mainly on creating software with perceptual and motor skills: computer vision systems, intelligent automated vehicles, and so forth. Missing almost entirely from the AI field are the more social and emotional aspects of human intelligence. Chatbots attempting the Turing test have confronted these aspects directly – going back to ELIZA, the landmark AI psychotherapist from the mid-1960s – but these bots are extremely simplistic and have little connection to the main body of work in the AI field.
I think this is a major omission, and my own view is that empathy may be one of the final frontiers of AI. My opinion as an AI researcher is that, if we can crack artificial empathy, the rest of the general-AI problem will soon follow, based on the decades of successes that have already been achieved in problem-solving, reasoning, perception, motorics, planning, cognitive architecture and other areas.
(In my own AI work, involving the Novamente Cognition Engine and the OpenCog Prime system, I’ve sought to explicitly ensure the capability for empathy via multiple coordinated design aspects – but in this blog post I’m not going to focus on that, restricting myself to more general issues.)
Why would empathy be so important for AI? After all, it’s just about human feelings, which are among the least intelligent, most primitively animal-like aspects of the human mind. Well, the human emotional system certainly has its quirks and dangers, and the wisdom of propagating these to powerful AI systems is questionable. But the basic concept of an emotion, as a high-level integrated systemic response to a situation, is critical to the functioning of any intelligent system. An AI system may not have the same specific emotions as a human being – particular emotions like love, anger and so forth are manifestations of humans’ evolutionary heritage, rather than intrinsic aspects of intelligence. But it seems unlikely that an AI without any such high-level integrated systemic responses (aka emotions) would be able to cope with the realities of responding to a complex dynamic world in real-time.
A closely related point is the social nature of intelligence. Human intelligence isn’t as individual as we modern Westerners often seem to think: a great percentage of our intelligence is collective and intersubjective. Cognitive psychologists have increasingly realized this in recent decades, and have started talking about “distributed cognition.” If the advocates of the “global brain” hypothesis are correct, then eventually artificial minds will synergize with human minds to form a kind of symbiotic emergent cyber-consciousness. But in order for distributed cognition to work, the minds in a society need to be able to recognize, interpret and respond to each other’s emotions. And this is where empathy comes in.
Mutual empathy binds together social networks: as we go through our lives we are perpetually embodying self-referential emotional equations like
X = I feel pain and that you feel X
Y = I feel that you feel both joy and Y
or mutually-referential ones like
A = I feel happy that you enjoy both this music and B
B = I feel surprised that you feel A
These sorts of equations bind us together: as they unfold through time they constitute much of the rhythm by which our collective intelligence experiences and creates.
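These mutually-referential definitions can be read as a small fixed-point system: each person’s feeling is partly a function of the other’s. As a purely illustrative sketch (the 0-to-1 intensity scale, the coupling coefficients and the function names below are assumptions of mine, not drawn from any actual cognitive model), two coupled feeling-variables of this sort settle into a stable mutual state:

```python
# Toy illustration: mutually-referential "emotional equations" treated as a
# fixed-point system. The 0-to-1 intensity scale and the coupling numbers
# are illustrative assumptions, not part of any real model of emotion.

def feeling_a(b: float) -> float:
    """A's feeling depends partly on A's perception of B's feeling."""
    return 0.5 + 0.4 * b

def feeling_b(a: float) -> float:
    """B's feeling depends partly on B's perception of A's feeling."""
    return 0.2 + 0.6 * a

a, b = 0.0, 0.0
for _ in range(30):                 # iterate the mutual reference
    a, b = feeling_a(b), feeling_b(a)

print(f"stable mutual state: a={a:.3f}, b={b:.3f}")
```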
So empathy is important: but how does it work?
We don’t yet know for sure ... but the best current thinking is that there are two aspects to how the brain does empathy: inference and simulation. (And I think there’s a lesson for AI here: in my own AI designs I deal with these two aspects separately, and then address their interaction – which I believe is the right approach.)
Inference-wise, empathy has to do with understanding and modeling (sometimes consciously, sometimes unconsciously) what another person must be feeling, based on the cues we perceive and our background knowledge. Psychologists have mapped out transformational rules that help us do this modeling.
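To make the inference side concrete, here is a minimal sketch of my own (the cues, contexts and emotion labels are invented for illustration, not an actual psychological rule set) of how perceived cues plus background knowledge could be mapped to a guess about someone else’s feeling:

```python
# Illustrative sketch of inference-side empathy: match perceived cues and
# background context against simple rules to infer the other's emotion.

RULES = [
    # (required cues, required context, inferred emotion)
    ({"tears", "slumped posture"}, {"recent loss"}, "grief"),
    ({"tears"}, {"won award"}, "joyful relief"),
    ({"clenched jaw", "raised voice"}, set(), "anger"),
]

def infer_emotion(cues: set, context: set) -> str:
    for needed_cues, needed_context, emotion in RULES:
        if needed_cues <= cues and needed_context <= context:
            return emotion
    return "unknown"

print(infer_emotion({"tears"}, {"won award"}))                  # joyful relief
print(infer_emotion({"clenched jaw", "raised voice"}, set()))   # anger
```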
Simulative empathy is different: we actually feel what the other person is feeling. A rough analogue is virtualization in computers: running Windows on a virtual machine within Linux, or emulating a game console within Windows. Similarly, we use the same brain-systems that are used to run ourselves, to run a simulation of another person feeling what they seem to be feeling. And we do this unconsciously, at the body level: even though we don’t consciously notice that sad people have smaller pupils, our pupils automatically shrink when we see a sad person – a physiological response that synergizes with our cognitive and emotional response to their sadness (and measuring this kind of involuntary response is essentially the technique the lead character uses in Blade Runner to track down androids who lack human feeling). A long list of examples has been explored in the lab already, and we’ve barely scratched the surface yet: people feel disgust when they see others smelling a bad odor, they feel pain when they see others being pierced by a needle or given an electric shock, they sense touch when they see others being brushed, and so on.
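A minimal sketch of the virtualization analogy, assuming a made-up appraisal function (the situation keys and feeling labels are my own illustrative assumptions): the agent empathizes by running the other person’s inferred situation through the very same machinery it uses to generate its own feelings.

```python
# Toy sketch of simulative empathy: run the *same* appraisal function the
# agent uses for itself on a simulated copy of the other person's situation.

def appraise(situation: dict) -> str:
    """The agent's own machinery for turning a situation into a feeling."""
    if situation.get("pain_stimulus"):
        return "pain"
    if situation.get("bad_odor"):
        return "disgust"
    return "neutral"

def empathize(observed_other_situation: dict) -> str:
    # "Virtualize": feed the other's (inferred) situation through our own
    # appraisal machinery, as if it were happening to us.
    simulated = dict(observed_other_situation)
    return appraise(simulated)

print(empathize({"pain_stimulus": "needle"}))  # -> "pain"
```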
Biologists have just started to unravel the neural basis of simulative empathy, which seems to involve brain cells called mirror neurons ... which some have argued play a key role in other aspects of intelligence as well, including language learning and the emergence of the self (I wrote a speculative paper on this a couple years back).
(A mirror neuron is a neuron which fires both when an animal acts and when the animal observes the same action performed by another animal (especially one of the same species). Thus, the neuron "mirrors" the behavior of another animal, as though the observer were itself acting. These neurons have been directly observed in primates, and are believed to exist in humans and in some birds. In humans, brain activity consistent with mirror neurons has been found in the premotor cortex and the inferior parietal cortex.)
So: synergize inference and simulation, and you get the wonderful phenomenon of empathy that makes our lives so painful and joyful and rich, and to a large extent serves as the glue holding together the social superorganism.
The human capacity for empathy is, obviously, limited. This limitation is surely partly due to our limited capabilities of both inference and simulation; but, intriguingly, it might also be the case that evolution has adaptively limited the degree of our empathic-ness. Perhaps an excessive degree of empathy would have militated against our survival, in our ancestral environments?
The counterfactual world in which human empathy is dramatically more intense is difficult to accurately fathom. Perhaps, if our minds were too tightly coupled emotionally, progress would reach the stage of some ant-colony-like utopia and then halt, as further change would be too risky in terms of hurting someone else’s feelings. On the other hand, perhaps a richer and more universal empathy would cause a dramatic shift in our internal architectures, dissolving or morphing the illusion of “self” that now dominates our inner worlds, and leading to a richer way of individually/collectively existing.
One aspect of empathy that isn’t sufficiently appreciated is the way it reaches beyond the touchy-feely sides of human life: for instance, it pervades the worlds of science and business as well, which is why there are still so many meetings in the world, email, Skype and WebEx notwithstanding. The main reason professionals fly across the world to hobnob with their colleagues – in spite of the often exhausting and tedious nature of business travel (which I’ve come to know all too well myself in recent years) – is that, right now, only face-to-face communication systematically gives enough of the right kind of information to trigger empathic response. In a face-to-face meeting, humans can link together into an empathically-joined collective mind-system, in a way that doesn’t yet happen nearly as reliably via electronically-mediated communications.
Careful study has been given to the difficulty we have empathizing with certain robots or animated characters. According to Mori’s theory of the “uncanny valley” – which has been backed up by brain imaging studies – if a character looks very close to human, but not close enough, then people will find it disturbing rather than appealing. We can empathize more with the distorted faces of Disney cartoons or manga than with semi-photo-realistic renditions of humans that look almost-right-but-eerily-off.
To grasp the uncanny valley viscerally, watch one of the online videos of researcher Hiroshi Ishiguro and the “geminoid” robot that is his near-physical-clone – an extremely lifelike imitation of his own body and its contours, textures and movements. No AI is involved here: the geminoid is controlled by motion-capture apparatus that watches what Ishiguro does and transfers his movements to the robot. The imitation is amazing – until the bot starts moving. It looks close enough to human that its lack of subtle human expressiveness is disturbing. We look at it and we try to empathize, but we find we’re empathizing with a feelingless robot, and the experience is unsettling and feels “wrong.”
Jamais Cascio has proposed that exactly this kind of reaction may occur toward transhumans with body modifications – so from that point of view, among others, this phenomenon may be worth attending to.
It’s interesting to contrast the case of the geminoid, though, with the experience of interacting with ELIZA, the AI psychotherapist created by Joseph Weizenbaum in 1966. In spite of having essentially no intrinsic intelligence, ELIZA managed to carry out conversations that did involve genuine empathic sharing on the part of its conversation-partners. (I admit ELIZA didn’t do much for me even back then; but I encountered it knowing exactly what it was, and intrigued by it from a programming perspective, which surely colored the nature of my experience.)
Relatedly, some people today feel more empathy with their online friends than with their real-life friends. And yet, I can’t help feeling there’s something key lacking in such relationships.
One of the benefits of online social life is that one is freed from the many socio-psychological restrictions that come along with real-world interaction. Issues of body image and social status recede into the background – or become the subject of wild, free-ranging play, as in virtual worlds such as Second Life. Many people are far less shy online than in person – a phenomenon that’s particularly notable in cultures like Japan and Korea, where social regulations on face-to-face communication are stricter.
And the benefits can go far beyond overcoming shyness: for example, a fifty-year-old overweight trucker from Arkansas may be able to relate to others more genuinely in the guise of a slender, big-busted Asian girl with blue hair, a microskirt and a spiky tail... and in Second Life he can do just that.
On the other hand, there’s a certain falsity and emotional distance that comes along with all this. The reason the trucker can impersonate the Asian ingenue so effectively is precisely that the avenues for precise emotional expression are so impoverished in today’s virtual environments. So, the other fifty-year-old trucker from Arkansas, whose purple furry avatar is engaged in obscene virtual acts with the Asian babe, has to fill in the gaps left by the simplistic technology – to a large extent, the babe he’s interacting with is a construct of his own mind, improvising on the cues given by the first trucker.
Of course, all social interaction is constructive in this way: the woman I see when I talk to my wife is largely a construct of my own mind, and may be a different woman than I would see if I were in a different mood (even if her appearance and actions were precisely the same). But text-chat or virtual-world interactions are even more intensely constructive, which is both a plus and a minus. We gain the ability for more complete wish-fulfillment (except for wishes that are intrinsically tied to the physical ... though some people do impressively well at substituting virtual satisfactions for physical ones), but we lose much of the potential for growing in new directions via empathically absorbing emotional experiences dramatically different from anything we would construct on our own based on scant, sketchy inputs.
It will be interesting to see how the emotional experience of virtual world use develops as the technology advances ... in time we will have the ability to toggle how much detail our avatars project, just as we can now choose whether to watch cartoons or live action films. In this way, we will be able to adjust the degree of constructive wish-fulfillment versus self-expanding experience-of-other ... and of course to fulfill different sorts of wishes than can be satisfied currently in physical or virtual realities.
As avatars become more realistic, they may encounter the uncanny valley themselves: it may be more rewarding to look at a crude, iconic representation of someone else’s face, than a representation that’s almost-there-but-not-quite ... just as with Ishiguro’s geminoids. But just as with the geminoids, the technology will get there in time.
The gaming industry wants to cross the uncanny valley by making better and better graphics. But will this suffice? Eventually, yes: a sufficiently perfected geminoid or game character will evoke as much empathy as a real human. But for a robot or game character controlled by AI software rather than motion capture, the limitation will probably lie in subtleties of movement. Just like verbal language, the language of emotional gestures is one where it’s hard to spell out the rules exactly: we humans grok them from a combination of heredity and learning. One way to create AIs that people can empathize with will be to make the AIs themselves empathize, and reflect back to people the sorts of emotions that they perceive – much as babies imitate adult emotions. Envision a robot or game character that watches a video feed of your face and tailors its responses to your emotions.
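Sketched below is one way such a character’s control loop might look – a hypothetical skeleton only, in which detect_emotion() and express() stand in for a real facial-emotion recognizer and an animation system; neither refers to an actual library.

```python
# Hypothetical sketch of an empathic game character that mirrors the
# player's perceived emotion back. detect_emotion() is a placeholder for
# a real face-reading model; express() stands in for an animation system.

import random

def detect_emotion(video_frame) -> str:
    # Placeholder: a real system would classify the face in this frame.
    return random.choice(["happy", "sad", "frustrated", "neutral"])

MIRROR_RESPONSE = {
    "happy": "smile and lean in",
    "sad": "soften expression, slow movements",
    "frustrated": "pause and adopt a concerned look",
    "neutral": "maintain relaxed idle animation",
}

def express(action: str) -> None:
    print(f"[character] {action}")

def empathic_loop(video_frames):
    for frame in video_frames:
        emotion = detect_emotion(frame)
        express(MIRROR_RESPONSE[emotion])   # reflect the perceived feeling back

empathic_loop(range(3))  # stand-in for a real video feed
```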
Arguably, creating AIs capable of empathy has importance far beyond the creation of more convincing game characters. One of the great unanswered questions as the Singularity looms is how to increase the odds that once our AIs get massively smarter than we are, they still value our existence and our happiness. Creating AIs that empathize with humans could be part of the answer.
Predictably, AI researchers so far have done more with the inferential than the simulative side of empathic response. Selmer Bringsjord’s team at RPI got a lot of press earlier this year for an AI that controls a bot in Second Life in a way that demonstrates a limited amount of “theory of mind”: the bot watches other characters with a view toward figuring out what they’re aware of, and uses this to predict their behavior and guide its interactions.
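Bringsjord’s actual system isn’t described here, but the general flavor of this kind of “theory of mind” tracking can be conveyed with a minimal false-belief sketch of my own (the scenario and class names are illustrative assumptions): the bot predicts a character’s behavior from what that character has observed, rather than from the true state of the world.

```python
# Minimal, generic sketch of theory-of-mind tracking (NOT Bringsjord's
# actual system): predict behavior from the other's beliefs, which may
# diverge from the true world state.

world = {"key_location": "drawer"}

class Observer:
    def __init__(self, name: str):
        self.name = name
        self.beliefs = {}

    def observe(self, fact: str, value: str) -> None:
        self.beliefs[fact] = value

    def predicted_search_location(self) -> str:
        # The character will look where they *believe* the key is.
        return self.beliefs.get("key_location", "unknown")

alice = Observer("Alice")
alice.observe("key_location", "drawer")   # Alice saw the key placed
world["key_location"] = "shelf"           # key moved while Alice was away

# Our bot predicts Alice's behavior from her (now false) belief:
print(alice.predicted_search_location())  # -> "drawer", not "shelf"
```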
But Bringsjord’s bots don’t try to feel the feelings of the other bots or human-controlled avatars they interact with. The creation of AIs embodying simulative empathy seems to be getting very little attention. Rosalind Picard’s Affective Computing Lab at MIT has done some interesting work bringing emotion into AI decision processes, but has stopped short of modeling simulative empathy. But I predict this is a subfield that will emerge within the next decade. In fact, it seems plausible that AIs will one day be far more empathic than humans are – not only with each other but also with human beings. Ultimately, an AI may be able to internally simulate you better than your best human friend can, and hence demonstrate a higher degree of empathy. Which will make our games more fun, our robots less eerie, and potentially help make the post-Singularity world a more human-friendly place.