Helping Your Chatbot Avoid the Uncanny Valley


The Uncanny Valley of Chatbots

Chatbots have become popular in recent years, appearing in fields such as medicine, entertainment, and customer service. They are primarily intended to help people find information – whether it’s the price of a product or the type of medicine they need – but can also serve as entertainment.

The business value of being able to respond in real time to hundreds, if not thousands, of Internet users without requiring hordes of actual help desk assistants is undoubtedly high. However, chatbots have yet to become a seamless part of our digital lives. Chatbot behaviour can be disappointing – they may be unable to understand unexpectedly worded questions, give wildly inappropriate answers, or simply seem stilted and artificial in the way they talk.

Source: https://medium.com/kitchen-budapest/the-undocumented-hardships-of-developing-chatbots-for-facebook-and-why-you-should-still-do-it-11c6e16c07e4

Furthermore, chatbots may be prone to falling into what is known as the Uncanny Valley – an unsettling phenomenon that has arisen in this age of artificial intelligence and artificially rendered entities.

The Uncanny Valley

What, exactly, is the Uncanny Valley? Consider the graph below. Here, ‘humanness’ (the degree to which an entity resembles a human being) is plotted against likeability (the extent to which people report feeling positive emotion towards an entity). Psychological researchers have hypothesised a relationship in which likeability initially increases with humanness, then ‘dips’ (the valley) as the entity becomes more humanlike – but not human enough – making it appear ‘creepy’ or ‘eerie’, before recovering as the entity approaches full human likeness.

Source: Mathur, Maya B.; Reichling, David B. (2016). “Navigating a social world with robot partners: a quantitative cartography of the Uncanny Valley” (PDF). Cognition 146: 22–32.

Why does the Uncanny Valley exist? A leading theory is that it arises from conflicting cues of humanness: an entity seems classifiable as human, yet possesses characteristics that are not quite human, creating a disturbing sense of dissonance. For example, a CGI-rendered movie character may be humanoid and possess all the facial features humans typically have, but may move in a jerky, inhuman fashion, or their eyes may seem devoid of emotion and so ‘not quite right’.

The Valley can be said to arise at the point of greatest dissonance: where an entity comes closest to being classifiable as human yet still falls short of actual humanness, creating the strongest sense of aversion in people who encounter it.

Source: The Polar Express (film)

How might chatbots enter the Uncanny Valley?

If the Uncanny Valley arises from conflicting cues of humanness, then chatbots may well fall into it. After all, it is entirely possible for a chatbot to appear human at first, only to subsequently display behaviour that does not seem human enough.

For example, a chatbot may be called “Alice”, have a realistic human avatar, claim to be able to help with anything, and introduce itself in a natural, colloquial way – yet respond in odd, robotic ways to users’ requests for information because its capabilities are limited.

To make things worse, if the chatbot initially appears human, users’ expectations of its conversational capabilities will be inflated. They may then interact in ways the chatbot’s programming was not built to handle – for example, using more colloquial language, or neglecting to use keywords. The bot may respond with an error message or in a clearly artificial, odd-sounding way, creating even more dissonance with the human impression it initially gave.
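To make this failure mode concrete, here is a minimal sketch of the kind of keyword-based matching many simple chatbots rely on (in Python; the intents, keywords, and canned answers are hypothetical). A colloquially worded question that happens to avoid the expected keywords falls straight through to the fallback, however reasonable it sounds to a human:

    # Minimal sketch of keyword-based intent matching.
    # Hypothetical intents and canned answers, for illustration only.
    INTENTS = {
        "price": "The iPhone X costs $1399 at all of our outlets.",
        "availability": "The iPhone X is in stock at all of our outlets.",
    }

    FALLBACK = "I'm sorry, I don't understand the question. Please try again."

    def respond(user_message):
        """Return the first canned answer whose keyword appears in the message."""
        text = user_message.lower()
        for keyword, answer in INTENTS.items():
            if keyword in text:
                return answer
        # Any phrasing that avoids the expected keywords lands here,
        # no matter how natural it sounds to a human reader.
        return FALLBACK

    print(respond("what is the price of the iPhone X?"))              # matched: "price"
    print(respond("which phone gives me the most value for money?"))  # falls through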

How can chatbots avoid the Uncanny Valley?

Based on the principles mentioned above, a promising overarching strategy is to ensure that a chatbot’s characteristics communicate a consistent degree of humanness. The key is to discourage users from imagining (whether consciously or unconsciously) that the chatbot is more human than its capabilities will eventually reveal it to be. In doing so, we avoid dissonance between an imagined classification as human and the chatbot’s actual characteristics.

Some concrete suggestions are as follows:

1. Don’t give the chatbot a strongly humanlike avatar if its capabilities fall well short of a real human’s. Use a more cartoonish or robotic face instead.

2. Try to make the humanness of the chatbot’s speech patterns match its capabilities. The Uncanny Valley may be evoked if a highly conversational, natural-sounding chatbot cannot understand information requests that any human could answer, or responds strangely to them. It is admittedly unclear what a more ‘robotic’ speech pattern would look like, but a more formal, less conversational tone may work better in such cases than a casual one.

Scenario we want to avoid:

CHATBOT: Hi, I’m Alice! I’m here to help, ask me anything!
USER: hi, what is the phone that gives me the most value for money?
CHATBOT: I’m sorry, I don’t understand the question. Please try again.
USER: which phone should I choose if I want to save money?
CHATBOT: I’m sorry, I don’t understand the question. Please try again.

A better way:

CHATBOT: Hello! My name is Alice. I can help you quickly find out about pricing and availability of specific mobile phones and plans. If you have a question about those things, please ask! For better results, use clear keywords.
USER: how much is an iPhone X?
CHATBOT: You can buy the iPhone X at any of our outlets for $1399. Please see this page for more information about phone prices and packages.
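As a rough illustration of the better way (again a sketch only – the greeting, catalogue, and keyword list are hypothetical), the same keyword-matching core can set expectations up front and keep its fallback consistent with its limited persona, steering users back towards phrasings it can actually handle:

    # Sketch of the 'better way': the bot states its scope up front, and its
    # fallback restates that scope rather than mimicking open-ended helpfulness.
    GREETING = (
        "Hello! My name is Alice. I can help you find pricing and availability "
        "for specific mobile phones and plans. For better results, use clear "
        "keywords, e.g. 'price iPhone X'."
    )

    # Hypothetical catalogue mapping (keyword, model) pairs to answers.
    CATALOGUE = {
        ("price", "iphone x"): "You can buy the iPhone X at any of our outlets for $1399.",
        ("availability", "iphone x"): "The iPhone X is in stock at all of our outlets.",
    }

    FALLBACK = (
        "I can only answer questions about pricing and availability of specific "
        "phones. Try a keyword plus a model name, e.g. 'price iPhone X'."
    )

    def respond(user_message):
        text = user_message.lower()
        for (keyword, model), answer in CATALOGUE.items():
            if keyword in text and model in text:
                return answer
        return FALLBACK

    print(GREETING)
    print(respond("price of the iPhone X, please?"))             # matched
    print(respond("which phone should I choose to save money?"))  # scoped fallback

The point here is not the matching logic itself, but that every message – including the error path – communicates the same modest degree of humanness.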

Concluding Words

Chatbots are a tool with great potential in a wide variety of fields. Although many of the postulations and suggestions above remain to be tested, it is highly likely that for chatbots to succeed, their developers must bear in mind the psychological principles governing users’ responses to them, and pair those principles with a clear-eyed awareness of their bots’ technical limitations.