Avoiding the Uncanny Valley in AI assistants
Key Points
- The “uncanny valley” describes discomfort users feel when a virtual assistant looks or sounds almost human but not quite, a concept first introduced by roboticist Masahiro Mori in 1970.
- To avoid this unease, designers should prioritize clear, transparent interactions that make it obvious the assistant is not a human, favoring stylized or functional designs over hyper‑realism.
- Consistent tone, predictable behavior, and guardrails (e.g., retrieval‑augmented generation, prompting, fine‑tuning) help maintain user trust and reduce bias or hallucinations.
- Monitoring user comfort and iterating based on feedback allows developers to adjust the assistant’s human‑likeness to match expectations.
- A practical example shows that a seamless, uniformly casual response to a restaurant query feels more comfortable than a mix of conversational and overly robotic language.
Sections
- Avoiding the Uncanny Valley - The passage explains how overly human‑like virtual assistants can trigger discomfort and offers design strategies—such as clear non‑human cues, stylized aesthetics, and prioritizing functionality—to keep interactions comfortable and trustworthy.
- Designing Around the Uncanny Valley - The speaker explains how an inconsistent, robotic tone creates an uncanny feeling and recommends keeping machine‑human interactions simple and natural instead of trying to make machines appear human.
Full Transcript
**Source:** [https://www.youtube.com/watch?v=r94tXiCTmQU](https://www.youtube.com/watch?v=r94tXiCTmQU) **Duration:** 00:03:44
Timestamps: [00:00:00](https://www.youtube.com/watch?v=r94tXiCTmQU&t=0s) Avoiding the Uncanny Valley · [00:03:05](https://www.youtube.com/watch?v=r94tXiCTmQU&t=185s) Designing Around the Uncanny Valley
Interactions with virtual assistants in the form of chatbots and voice agents are increasingly common,
but have you ever had an experience with a bot that left you feeling uneasy?
You might have experienced what's called the uncanny valley.
When we interact with a virtual assistant, we interpret a character based on how it responds,
and this character should be defined with intention.
What we don't want is an assistant that causes people nightmares.
The Uncanny Valley is a theory
that users become uncomfortable whenever they encounter an entity that's almost, but not quite, human.
Roboticist Masahiro Mori introduced this concept in 1970, and he used a line graph to visualize this phenomenon.
It's been applied to robots, AI, dolls, and game characters.
As the entity becomes more human-like,
user comfort increases, until the point when the entity resembles a human, but is clearly still not human.
This is when feelings of discomfort and eeriness can arise.
So what can we do to avoid the uncanny valley?
Users want their expectations met quickly,
and they want the experience to be transparent.
The assistant should be both helpful and clear that it's not a human.
We can choose stylization over realism.
Aim for relatable, but not perfectly human-like.
Stylization can make your AI unique and memorable.
You can also focus on function over form.
You want to meet user needs first and foremost.
Implement techniques like retrieval augmented generation in order to control the context window and output.
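As a rough illustration of that idea, retrieval-augmented generation injects only relevant, vetted documents into the context window before the model responds. The sketch below uses a toy word-overlap retriever and a hypothetical document store and prompt template as stand-ins for a real embedding-based retriever and LLM call:

```python
# Minimal retrieval-augmented generation sketch: score documents against
# the query, keep the top matches, and build a grounded prompt.
# The documents and prompt wording here are illustrative placeholders.

DOCUMENTS = [
    "Luigi's Trattoria is a highly rated Italian restaurant near downtown",
    "Green Bowl is a good vegan restaurant about a mile away",
    "Store hours for the downtown branch are 9am to 5pm",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Constrain the model to answer only from the retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return ("Answer using ONLY the context below. "
            "If the answer is not in the context, say you don't know.\n"
            f"Context:\n{context}\n\nQuestion: {query}")

prompt = build_prompt("recommend a good restaurant near me", DOCUMENTS)
```

Keeping the context window to a few curated documents is what lets the assistant stay accurate and predictable rather than improvising.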
You can also instruct the model to respond in a particular format using prompting and fine tuning.
Also, it's good to implement guardrails in order to increase accuracy while decreasing bias and hallucinations.
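A guardrail can be as simple as a post-generation check that the draft reply stays grounded in the retrieved context. This is only a sketch of the pattern; the name list and replies are hypothetical, and production systems use much richer checks:

```python
# Toy grounding guardrail: reject a draft reply that names a place
# absent from the retrieved context. All names here are illustrative.

KNOWN_NAMES = ["Luigi's Trattoria", "Green Bowl", "Pizza Palace"]

def grounded(reply: str, context_names: list[str]) -> bool:
    """Pass only if every known name the reply mentions is backed by context."""
    mentioned = [n for n in KNOWN_NAMES if n in reply]
    return all(n in context_names for n in mentioned)

context = ["Luigi's Trattoria", "Green Bowl"]
grounded("Try Luigi's Trattoria.", context)  # name is in the context
grounded("Try Pizza Palace.", context)       # name was never retrieved
```

A failed check can trigger a regeneration or a safe fallback answer instead of showing the user a hallucinated recommendation.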
Create consistency in tone and behavior.
Consistent, predictable behavior builds trust.
You want to match the user's expectations.
Also, you want to be able to monitor user comfort levels and then iterate.
Create a system to review user friendliness and then modify based on that.
Once you know what the comfort levels are, you can make your assistant more or less human-like.
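One lightweight way to monitor comfort and iterate is to log explicit feedback (thumbs up/down) and flag when a rolling comfort score drops below a threshold, signaling that the persona may need to be made less human-like. The window size and threshold below are hypothetical tuning values:

```python
from collections import deque

class ComfortMonitor:
    """Track recent thumbs-up/down feedback and suggest persona changes."""

    def __init__(self, window: int = 50, threshold: float = 0.6):
        self.feedback = deque(maxlen=window)  # recent votes: 1 (up) / 0 (down)
        self.threshold = threshold            # minimum acceptable comfort rate

    def record(self, thumbs_up: bool) -> None:
        self.feedback.append(1 if thumbs_up else 0)

    def comfort_score(self) -> float:
        return sum(self.feedback) / len(self.feedback) if self.feedback else 1.0

    def should_dial_back_realism(self) -> bool:
        """Suggest a less human-like persona when comfort drops."""
        return self.comfort_score() < self.threshold

monitor = ComfortMonitor(window=10, threshold=0.6)
for vote in [True, True, False, False, False, True, False, False]:
    monitor.record(vote)
```

The bounded `deque` means only the most recent interactions count, so the assistant adapts to current users rather than averaging over its whole history.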
An example scenario that you might have experienced
would be a user querying something like, can you recommend a good restaurant near me?
A bad response might be something like, sure, I can help you with that.
Here are some highly regarded institutions near you.
The reason this doesn't work is because the tone switches from casual to robotic.
It starts off fluid, but then becomes stilted.
Something better might be a little bit more simple, like here are some highly rated restaurants near you.
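The tone mismatch in that example can even be caught mechanically, for instance by flagging formal vocabulary in an assistant configured for a casual register. The word lists below are purely illustrative; a real system would use a classifier rather than keyword sets:

```python
# Toy register check: flag replies that pair a casual opener with
# formal vocabulary, as in the "highly regarded institutions" example.

FORMAL_WORDS = {"institutions", "regarded", "establishments", "esteemed"}
CASUAL_OPENERS = ("sure", "hey", "no problem")

def tone_is_consistent(reply: str) -> bool:
    """Reject replies that mix a casual opening with formal word choice."""
    words = set(reply.lower().replace(",", " ").replace(".", " ").split())
    opens_casual = reply.lower().startswith(CASUAL_OPENERS)
    uses_formal = bool(words & FORMAL_WORDS)
    return not (opens_casual and uses_formal)

tone_is_consistent("Sure, I can help you with that. "
                   "Here are some highly regarded institutions near you.")
tone_is_consistent("Here are some highly rated restaurants near you.")
```

The first reply mixes registers and fails the check; the uniformly casual second reply passes.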
So now you've had a chance to explore the Uncanny Valley and some ways to design around it.
We can design machine and human interactions
in a way that's as natural as possible without trying to make the machine appear human.
Leave a comment below and let us know if you've had an experience with the Uncanny Valley.