The Uncanny Valley in Game Design

Creating human-like characters that won’t give players the creeps

Yisela Alvarez Trentini
Towards Data Science
13 min read · Mar 8, 2019


The concept of the Uncanny Valley was introduced in the 1970s by a Japanese roboticist called Masahiro Mori. Mori also happens to be the founder of Robocon (the first Japanese robot-building competition) and is the President of the Mukta Research Institute, which studies the metaphysical implications of robotics.

Mori loved designing robots, and he was good at it. The more he learned, the more realistic his creations looked, with synthetic skin, moving eyes and other fantastic features. But Mori noticed something interesting: while the simpler robots elicited a positive reaction from the humans around them, the more realistic or human-like they became, the more scared people became of them — even though they were excellent examples of robotics.

What Mori noticed was that there was a relationship between how similar something is to a human being, and how we react to it, emotionally.

So, he created a graphic to represent his findings regarding how we react to something similar to a human:

The graphic has two axes:

  • Human Likeness (horizontal), which means how similar the robot is to a living person; and
  • Familiarity (vertical), or whether the robot generates a positive or a negative emotion in those watching or interacting with it.
  1. We start with something that is fairly familiar, but doesn’t look very human: A robotic hand (for example the Meca500 by Mecademic for industrial use). We are neutral to it, emotionally speaking.
  2. As we move along the “human likeness” axis, we find a humanoid robot (in this case the NAO by Aldebaran Robotics, although a stuffed animal would work too). We add human characteristics to the object, which makes it more appealing — we perceive it as having a “personality”, or as being cute, in this case.
  3. But if we get too close to a real human… We don’t respond too well, it makes us uneasy. We are now in the middle of the uncanny valley.
  4. As we finally pass the valley and move to the top right corner, we’re fine again. No creepy feelings, no uncanny negative responses. The only thing is that, so far, only humans inhabit this side (yes, green humans are still humans). We’re not there yet technologically, neither with robotics nor with digital characters.
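Mori’s curve can be mimicked with a toy function: a general upward trend in familiarity, minus a sharp dip just before full human likeness. The shape and numbers below are purely illustrative (Mori drew the curve qualitatively, not from fitted data):

```python
import math

def familiarity(likeness):
    """Hypothetical familiarity score for a human likeness in [0, 1].

    A steady upward trend, minus a Gaussian dip centered near (but not
    at) full likeness: the uncanny valley.
    """
    rising = likeness                                            # general upward trend
    valley = 0.9 * math.exp(-((likeness - 0.8) ** 2) / 0.005)    # dip around 0.8
    return rising - valley

# Sample the curve from "industrial arm" (0.0) to "healthy human" (1.0).
samples = [round(familiarity(x / 10), 2) for x in range(11)]
print(samples)
```

Evaluating it, familiarity climbs through the “humanoid robot” region, drops below neutral (negative) near a likeness of 0.8, and recovers to its highest value at full human likeness — the qualitative shape Mori described.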

A note on Movement

The curve is a little different if we consider static vs moving characters. Movement strengthens our perception of something as “alive” and “sentient”, as exemplified by Travis, developed by the Media Innovation Lab at IDC Herzliya and the Georgia Tech Center for Music Technology:

But moving characters can also give us the creeps, just take a look at Spot, created by Boston Dynamics — or the nightmare-inducing baby below:

And games can… well, give us some funny glitches. Not exactly creepy, but you might want to avoid these if you’re going for realism and immersion:

Why is there an Uncanny Valley?

There seems to be an agreement that the Uncanny Valley has to do with expectations. We expect things, and humans in particular, to look and behave in a particular way.

For example, we are used to seeing others’ faces (although I work from home and the majority of my Twitter friends are bots ;). It’s truly fascinating how glitchy we can be as well. Did you take a look at the image on the left? Is it also blurry and uncomfortable for you?

We can’t easily accept a face that doesn’t fit our expectations; our brain will try to correct it, to make it match what we think a human should look like.

This happens because, as Homo sapiens, we have evolved to be very good at distinguishing faces.

Human faces have grown to be unique because we recognize one another by sight (and not by smell or sound). Our facial variation has actually been enhanced through evolution: the human genome, for example, has more regions determining face-shape variation than for any other part of the body.

What’s also surprising is that we find a similar variation in Neanderthals and Denisovans (our closest relatives), so we’ve been good at detecting and expressing emotions with our faces for a long while!

Hominid sculpture reconstructions by Adrie and Alfons Kennis

Now, there’s debate around which mechanisms are behind our uneasiness around human-looking-but-not-fully-there robots. These are the top contenders:

  • Mate selection. An automatic aversion to mates with ineffective immune systems, which are visible through features of the face and body.
  • Mortality salience. An innate fear of death, along with culturally supported ways of coping with death’s inevitability.
  • Pathogen avoidance. A disgust response that steers us away from potential sources of pathogens, triggering the same alarm and revulsion we feel toward corpses and visibly diseased individuals.
  • Violation of human norms. We judge robots and other entities by human standards of empathy, intelligence, and so on.
  • Religious definition of human identity. A threat to the concept of human identity; folklore is full of human-like but soulless beings.
  • Conflicting perceptual cues. A demonstrated perceptual tension caused by conflicting cues to category membership, leading to an aversion to hybrid entities.
  • Threat to humans’ distinctiveness and identity. A challenge to human uniqueness that forces a redefinition of humanness.

Robots and Empathy

But do our feelings go both ways? Can we also connect with robots the way we do with living beings? For better or worse… it seems we can!

Let me tell you about Pleo. Pleo is an animatronic pet dinosaur toy created by Ugobe in 2006 (and later produced by Innvo Labs) that has a ton of cool features — like cameras, touch and tilt sensors, microphones, infrared and others.

Recently, a study used functional MRI (which measures brain activity by detecting changes associated with blood flow) to compare how much people empathize with robots versus humans. This is the video the participants saw (and I feel it should have some sort of trigger warning!):

A study from the University of Duisburg-Essen in Germany using functional MRI to see how much people empathize with robots compared to humans.

If you DID empathize with Pleo, like most people in the study did, you can’t help but wonder about the implications for human-computer relationships.

The reason you feel bad for Pleo is that, when something has a lot of robot characteristics and few human characteristics, the human characteristics stand out more. The other way around, it’s the robotic ones that stand out.

If something non-human is given human qualities, we find it endearing. If we give it too many human characteristics, it starts looking like an imperfect simulation (and therefore probably revolting).

The Uncanny Valley in Games

We want Game characters to be likable, or at least believable. Hence, they need to be on the top half of the chart. That is, if you want them to be likable! If not, you know where to look for monster ideas ;)

Recommended Area (for likable characters!)

Because they need to be in one of the peaks of the graphic, they can be either Stylized (like a humanoid robot or a stuffed bear) or Photo-realistic:

Photo-Realism

(Far Cry 5, Ubisoft 2018)

The aim of photo-realistic games is to appear indistinguishable from a photo, or from real life. Or be better than real life, as we will see below.

Among the advantages of this type of development: photo-realistic games simulate reality in a visually believable and pleasing way, which makes them generally more immersive.

(Need for Speed Payback, EA 2017)

However, photo-realism is frequently more expensive to develop and needs larger teams of people working in coordination.

It’s also more complex, having to account for textures, models, movement, light and acting among other things that are required to create a believable experience.

This makes it much more difficult to hide mistakes, as every aspect of the simulation needs to be polished to perfection.

When Realism goes Wrong…

There are many layers needed to create a complete experience. If one is missing, it all comes tumbling down.

I remember when the Final Fantasy movie was released. Like other high-schoolers around the world, my friends and I were absolute nerds who already knew we wanted to work with computers. We were all terribly excited… and terribly disappointed when the movie finally came out. All we could see were the weird expressions of the characters, an abyss we couldn’t overcome. This was bound to happen again soon after, with the release of the movie The Polar Express and, quite recently, the first release of the game Mass Effect: Andromeda.

Many were surprised by Mass Effect’s 2017 reception. With overly ambitious advancements to the dialogue animations and a huge number of stories to play, the developers didn’t have time to apply animations to parts of the face beyond the mouths of the characters that were speaking.

The writing, the multiplayer, the gameplay, and everything besides animations have strengths and weaknesses of their own, but the first thing fans are going to be talking about when it comes to this game for the foreseeable future is how weird people look when they talk. This isn’t a complaint, this is just another iteration of one of the oldest and wisest axioms out there: A chain is only as strong as its weakest link. The Kenpire

Animating only the mouth of a virtual character can be a problem, and we’ll see why.

Perception of Psychopathy in virtual Characters

There’s a fantastic study by Angela Tinwell, from the University of Bolton, that uses regression analysis to track the reactions of users faced with this type of mouth-only animation.

If a character lacks a visible response in the eye region to emotive situations, this evokes a relationship with anger, callousness, coldness, dominance, being unconcerned and not trustworthy — all traits associated with psychopathy.

Therefore, inadequate upper facial animation in human-like virtual characters can evoke the uncanny through a perception of psychopathic traits in a character. For example, characters that showed no startle response to a scream sound were considered the most uncanny. What, you weren’t planning on programming your characters to react to loud sounds around them?
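In a production pipeline, a simple automated check could flag mouth-only clips before players ever see them. The channel names below are made up for illustration (every rig has its own blend-shape naming scheme); this is just a sketch of the idea:

```python
# Hypothetical animation channel groups; real rigs use their own names.
UPPER_FACE = {"brow_raise", "brow_furrow", "eyelid_blink", "eye_dart"}
MOUTH = {"jaw_open", "lip_sync"}

def is_mouth_only(animated_channels):
    """True if a clip moves the mouth without any upper-face motion."""
    channels = set(animated_channels)
    return bool(channels & MOUTH) and not (channels & UPPER_FACE)

# Toy clip data: the second one is an uncanny-valley candidate.
clips = {
    "greeting": ["jaw_open", "lip_sync", "brow_raise"],
    "exposition": ["jaw_open", "lip_sync"],
}
flagged = [name for name, channels in clips.items() if is_mouth_only(channels)]
print(flagged)  # -> ['exposition']
```

A lint pass like this won’t make a character emote, but it catches the exact failure mode Tinwell’s study describes: a talking mouth under dead eyes.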

But if you have adequate facial animation, and a great team of detail-oriented people, you can potentially create something like this close portrait from Assassin’s Creed Odyssey (Ubisoft, 2018):

The expressions are so good, it almost makes this article pointless!

Graphical Fidelity vs Human Fidelity

The takeaway should be that you can have an incredible-looking game, but as beautiful as each individual blade of grass can look when touched by a perfumed summer breeze, graphical fidelity doesn’t necessarily mean human fidelity.

And you need both to create a realistic experience.

Because we are so tuned to human faces, expressions and motions, we need to, whenever possible, cover all areas of perception.

So how can we create a realistic and human gaming experience…?

The best way to achieve human fidelity is to observe and use humans as our foundation. We can capture movement, or we can capture a whole actor’s performance. Because when you use facial and body capture, you get…

Real Time cinematography in a virtual world.

Have you had a chance to play Hellblade: Senua’s Sacrifice? I was in awe the first time I saw the trailers. Here is one of them:

And here’s how they achieved such wonderful and human expressions:

But of course we don’t want things to just look realistic, we want them to sound and feel realistic as well. And I can’t think of a better success story than The Witcher 3.

Realistic Cinematic Dialogs

For The Witcher 3, the dialog was created in 4 stages: Writing, Quest Design, Dialog Design and Post Production (which included things like modifying cameras, creating idles, animating gestures and correcting faces and poses).

In total there were around 2400 dialog animations, and the best part: they were all re-usable! Sharing was supported across different characters. Here’s a small summary:

What the designers of the game did was to add support for real time preview, which allowed the creators to move characters around the scene easily, while they could check their results by previewing the entire scene alongside the editor. Not only that, but the scene could also be played (rendered) in the expected world location with just one click.
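The sharing idea can be sketched as a small animation library that any character can pull clips from. The class and clip names below are made up for illustration; this is not The Witcher 3’s actual tooling:

```python
class DialogAnimationLibrary:
    """A minimal sketch of dialog animations shared across characters."""

    def __init__(self):
        self._clips = {}

    def register(self, name, keyframes):
        # Each clip is stored once, independent of any character.
        self._clips[name] = keyframes

    def play(self, character, name):
        # Any character can reuse any registered clip.
        return f"{character} plays '{name}' ({len(self._clips[name])} keyframes)"

library = DialogAnimationLibrary()
library.register("shrug", [0.0, 0.4, 1.0])
print(library.play("Geralt", "shrug"))
print(library.play("Yennefer", "shrug"))  # same clip, different character
```

Decoupling clips from characters is what makes a library of ~2,400 animations tractable: the cost of authoring a gesture is paid once, then amortized across the whole cast.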

Real time dialog creation for The Witcher 3

Stylization

(Ylands, Bohemia Interactive 2017)

Now for the option other than realistic characters. Stylization means that simplifications are made in terms of shape, color, patterns and surface details, as well as in functionality and relationship to other objects in a scene.

(Mario Odyssey, Nintendo 2017)

Among the advantages of Stylization is the fact that characters are not human, which makes their human characteristics stand out more. Also, Stylization allows for almost infinite variety of designs.

On the other hand, characters might not resonate with us as much as photo-realistic ones do, because they are not ‘human’. The experience can be less immersive if the simplifications undermine the authenticity of simulation.

There are several levels of Stylization, but in general games tend to be grouped into Over-Exaggerated and Minimalistic.

Over Exaggerated Stylization

The focus in this case is on larger details and shapes rather than small details and microsurfaces that are common in games with realistic graphics. Firewatch is an example of an over-exaggerated game that came out in 2016 and didn’t disappoint.

(Firewatch — Campo Santo, 2016)

The game designers decided to use the style and colors of retro National Parks posters. The rich tones and cartoonish animations are not supposed to look real, yet the game is incredibly immersive, and not just from a visual point of view; the story is compelling as well.

Players can get truly lost in it because no comparisons are being made to the real world. Libby Keller, ODYSSEY

Minimalistic Stylization

Comparatively flat, Minimalist stylization plays on simplicity. It’s usually stripped of all medium and small details, and can for example use just a color map. We have great minimalist games like Minecraft, where a cube is little more than a pixel, but filled with potential for creating new great things.

(Minecraft — Mojang, 2011)

Part of the appeal of pixel art, whether we’re talking about Space Invaders or Passage, is that the simplicity saves our attention from being wasted on nuances like bump-mapping giving everything a wet plastic look. We don’t see cubes in Minecraft after only a few minutes playing: we instead see an ocean, a fortress, a tunnel, or a tree house. Hobby GameDev.

Minimalistic Stylization in Robotics

And finally, a tiny example of a successful minimalist robot — because we can fall for any pair of cute eyes! The Honda 3E-A18 robotic device, which made its debut in 2018:

Final Thoughts

Where do you start if you want to create a relatable, likable character? First of all, you will need to pick a direction: stylized or realistic. Your path will probably depend on the type of game you want to make, and on the team and resources you have available.

If you choose realism, you will need to pay extra attention to each and every detail, and probably spend a long time in post-production. If you choose stylized characters, you will need to focus on their personality, on the human traits that will stand out.

Whatever you do, you need to make sure that it’s a consistent experience, that it has fidelity.

Initially, games required characters to be just a gateway into their virtual world. This was easily achieved using cheerful cartoons like Mario and Sonic, or heroic archetypes like Lara Croft. Certain genres, primarily RPGs, went further, providing a colorful cast of individuals, each with their own personalities and motivations.

Evolution of games and their characters

And today we have games like Assassin’s Creed, worlds that look absolutely stunning. Or storylines like that of Fallout New Vegas or Bioshock. Even stylized games like This War of Mine (the saddest game I’ve ever played. Pavle, I’m so sorry, we’ll never forget you!). And small games that create sweet believable experiences, too, like Emily is Away.

But I believe the key to creating believable experiences is to work as part of a team of people who are willing to share their creativity.

Although the relationship between animators and concept artists is well-established, the role of writers is something the industry is less comfortable with, and so is the role of the actor. Traditionally game actors would be “voice-actors”, but this isn’t the case anymore. Many studios employ motion-capture software to capture the full-body performance of the actors, incorporating the mannerisms they provide into the animation of the character.

We all have diverse inspirations, and it’s through collaboration and dialog that we can achieve greatness. What better way to create a human experience than giving a little of our own uniqueness to our creations?

Did you experience the uncanny valley in games? Did you also get the feels when following a good story or a compelling character? Share your thoughts below, I’d love to hear from you!

This article was originally presented by Yisela Alvarez Trentini in the Female Game Developers Meetup (Frankfurt) on 3rd March 2019.


Anthropologist & User Experience Designer. I write about science and technology. Robot whisperer. VR enthusiast. Gamer. @yisela_at www.yisela.com