Robots, Dolls and CGI. Between Empathy and Sheer Horror.
It’s 1970, and Japanese roboticist Masahiro Mori makes a simple drawing to accompany his latest to-be-published work. He is researching the relationship between human likeness and familiarity, and soon finds himself staring at a marked drop in the curvy line that traverses his graph. He calls this sink the “uncanny valley”: a dip in emotional response when encountering an entity that is almost, but not quite, human.
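If it helps to picture that graph, here is a minimal sketch in Python. The curve below is invented purely for illustration (Mori drew his from intuition, not from a formula): affinity rises with human likeness, plunges just short of full likeness, then recovers.

```python
# Toy sketch of an uncanny-valley-shaped curve. The formula is an
# illustrative invention, not Mori's actual data or model.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 500)  # 0 = industrial robot, 1 = healthy human
# A rising trend plus a sharp negative "valley" centred near full likeness.
affinity = likeness - 1.6 * np.exp(-((likeness - 0.85) ** 2) / 0.003)

plt.plot(likeness, affinity)
plt.axhline(0, color="grey", linewidth=0.5)
plt.xlabel("human likeness")
plt.ylabel("affinity (familiarity)")
plt.title("A hypothetical uncanny valley")
plt.show()
```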
The term is anything but new, yet the question remains exquisitely relevant today. Plenty has been written about this sense of unease and discomfort (510 academic papers referenced the “uncanny valley” phenomenon in 2015, a striking increase from the humble 35 published in 2004), which makes it all the more interesting that the concept can mostly be defined only circularly, or subjectively. The effect of the uncanny valley varies from person to person, is tied to culture and, to make things worse, changes with exposure.
But where there’s obscurity, there go the social sciences, hungry for such elusive entities. Soon the topic became less about robotics and more interwoven with philosophy, psychology and neuroscience. And its subjects changed too: it isn’t just androids we can look into. Life-like dolls, movie characters and video game characters can also give us the creeps!
What do these sciences say, then?
It wasn’t until recently that we could look into a person’s brain activity live, but thanks to neuroscience, experts agree (more or less) that the phenomenon has a lot to do with boundaries. In particular, those where something moves from one category to another, or fails to be classified at all because it carries contradictory information.
Humans are special, amazingly complex creatures, with a bit of strange wiring here and there. We know for a fact that:
a. Humans don’t deal very well with things that don’t fall into defined states; and
b. Humans intuitively believe/feel that near-human entities possess a mind.
Now, that’s a recipe for an emotional soup. Because it all seems to indicate that the eerie feeling we get when looking at an ugly robot actually stems from a mismatch between what an entity looks like it’s able to do and what it can actually do. Which doesn’t sound that terrible written down, but we are talking about a fine, fine organ, the brain. And the tiniest deviation can cause immense discomfort.
Angela Tinwell, a researcher at the University of Bolton who has been successfully exploring the uncanny valley across a variety of media, has made some very interesting discoveries. For example, game characters who show different emotions in the upper and lower parts of their faces cause a pretty uncanny feeling, in particular when happy mouths are paired with angry or fearful eyes. This, she says, might be reminiscent of humans who exhibit psychopathic traits. Individuals whose intentions we cannot decipher? That rings a bell.
Which takes us to a third, more positive trait of our species, one that lies at the very foundation of our humanity and that we have evolved over millions of years: our hardcoded talent for reading (with more or less success) other people’s faces and, consequently, their motivations.
There’s a reason why we have dozens of facial muscles when functionally we would only need two (open and close eyes, open and close mouth — simple). Our social intelligence allows us to surmise what lies behind the facial expressions of others, and we normally do this in a split second.
So, if we have this immense ability to read faces and therefore intentions, what happens when we look at a robot? If we are not really familiar with the thing, in all probability we will experience some degree of psychological discomfort, something akin to cognitive dissonance.
The mechanism behind this strong dissonance is none other than the conflict between expectation and reality. A robot that looks a certain way, that has, for example, human-like features, creates a subconscious expectation: namely, that it will act human, too. But if the robot moves in a different way, say mechanically, or shows an inability to process complex information or read social cues, there is an incongruence between what we assumed the robot was and what it actually is.
Which means that we will react much better to a cartoonish-looking robot than to a realistic one, something that has been demonstrated many times. You need look no further for a real-life example than the animated movies of the last decade or so. Whereas DreamWorks and Pixar destroyed box offices, more realistic approaches like The Polar Express left everyone weirded out.
For a character of any kind to give a more positive impression, its appearance needs to match its behaviour. A robot with a synthetic voice will be found less eerie than one with a human voice. It’s not just about going too human, or even too alive, since we are not wired to prefer biological movement over mechanical movement. It’s about consistency. The Pixar lamp moves like a lamp, not like a person. It shows some degree of intention, without the need for eyes or a mouth. It’s consistent. When an entity doesn’t deliver what is expected, a ‘prediction error’ is generated, causing the creeps.
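To make the consistency point concrete, here is a minimal toy sketch. The entities, the scores and the 0.3 threshold are all invented for illustration; the idea is simply that eeriness tracks the gap between how human something looks and how human it behaves, not the absolute level of either.

```python
# Toy model of the consistency idea: rate appearance and behaviour on the
# same 0-1 "human likeness" scale and treat their mismatch as a prediction
# error. All values below are made up for illustration.

def prediction_error(appearance: float, behaviour: float) -> float:
    """Mismatch between how human an entity looks and how human it acts."""
    return abs(appearance - behaviour)

entities = {
    "industrial arm":    (0.05, 0.05),  # looks mechanical, acts mechanical
    "Pixar lamp":        (0.10, 0.15),  # looks like a lamp, moves like a lamp
    "cartoonish robot":  (0.40, 0.35),  # stylised looks, stylised behaviour
    "realistic android": (0.95, 0.40),  # looks human, moves mechanically
}

for name, (appearance, behaviour) in entities.items():
    err = prediction_error(appearance, behaviour)
    verdict = "creepy" if err > 0.3 else "fine"  # arbitrary toy threshold
    print(f"{name:18s} error={err:.2f} -> {verdict}")
```

In this toy framing, the realistic android scores worst not despite looking the most human, but because its behaviour fails to keep up with its looks.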
With technology moving forward, we are able to create more and more realistic androids and virtual characters. Does this mean that we will, at some point, overcome the uncanny valley? Well… no, not necessarily.
The dissonance comes from our brain being incapable of integrating the entity’s features into a whole. For example, human and non-human features are processed by different parts of the brain, which operate at different speeds. This competition, in which some features are analysed faster than others, produces a lag, and after millennia of reading others, the tiniest deviation from the norm can elicit large feedback error signals. When creating virtual characters, there will always be a degree of incongruence, if only because of the artificial medium. Would we still get the strange feeling if we achieved physiologically perfect bio-bots? Hard to say. We do feel something eerie when plastic surgery turns out unnatural. We have also become great at distinguishing between CGI and real sets in movies.
As our ability to create these entities improves, so does our ability to detect when something’s just not right. We might fine-tune our perception even further as time goes by; who knows what evolution has in store for us if we manage to survive the next dozen years.
This means that perhaps we are not looking at an uncanny valley, but rather at an uncanny wall. One that grows taller as we climb it. And this might not be such a bad thing. After all, do we want to live in a world where we are unable to determine whether someone has a conscience or not?
Oh, hang on…
Cover photo source: Joey Gannon, CC.
References:
Mori, M. (1970/2012). The uncanny valley (K. F. MacDorman & N. Kageki, Trans.). IEEE Robotics & Automation Magazine, 19(2), 98–100. http://dx.doi.org/10.1109/MRA.2012.2192811
Yamada, Y., Kawabe, T., & Ihaya, K. (2013). Categorization difficulty is associated with negative evaluation in the “uncanny valley” phenomenon. Japanese Psychological Research, 55(1), 20–32. http://dx.doi.org/10.1111/j.1468-5884.2012.00538.x
Seyama, J., & Nagayama, R. S. (2007). The uncanny valley: The effect of realism on the impression of artificial human faces. Presence: Teleoperators and Virtual Environments, 16(4), 337–351. http://dx.doi.org/10.1162/pres.16.4.337
Hanson, D., Olney, A., Prilliman, S., Mathews, E., Zielke, M., Hammons, D., … Stephanou, H. (2005). Upending the uncanny valley. In Proceedings of the Twentieth National Conference on Artificial Intelligence (pp. 1728–1729). Menlo Park, CA: AAAI Press.