Giving chatbots a human touch may not help: Study

Press Trust of India | Washington
Last Updated: Apr 19 2019 | 4:40 PM IST

Giving chatbots human names or humanlike avatars may not be enough to win over users if the chatbot fails to maintain a conversation, suggest researchers, including one of Indian origin.

According to researchers from The Pennsylvania State University in the US, those humanlike features may create a backlash against less responsive humanlike chatbots.

In the study, chatbots that had human features -- such as a human avatar -- but lacked interactivity disappointed the people who used them.

However, people responded better to a less-interactive chatbot that did not have humanlike cues, said S Shyam Sundar, a professor at Penn State.

High interactivity is marked by swift responses that match a user's queries and feature a threaded exchange that can be followed easily, according to Sundar.

"People are pleasantly surprised when a chatbot with low anthropomorphism -- fewer human cues -- has higher interactivity," said Sundar.

"But when there are high anthropomorphic visual cues, it may set up your expectations for high interactivity -- and when the chatbot doesn't deliver that -- it may leave you disappointed," he said.

On the other hand, improving interactivity may be more than enough to compensate for a less-humanlike chatbot.

Even small changes in the dialogue, like acknowledging what the user said before providing a response, can make the chatbot seem more interactive, said Sundar.
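
To make that concrete, here is a minimal Python sketch of the acknowledgement idea. It is purely illustrative: the acknowledge and reply helpers are hypothetical and do not come from the study; they only show a reply echoing the user's query before answering.

    # A minimal, hypothetical sketch of the acknowledgement idea.
    # Echoing part of the user's query before answering makes the
    # exchange read as a threaded dialogue rather than isolated replies.

    def acknowledge(user_message: str) -> str:
        """Build a short acknowledgement that references the user's words."""
        snippet = user_message.strip().rstrip("?.!")
        return f'You asked about "{snippet}".'

    def reply(user_message: str, answer: str) -> str:
        """Prefix the answer with an acknowledgement of the user's query."""
        return f"{acknowledge(user_message)} {answer}"

    if __name__ == "__main__":
        # Low interactivity: the bare answer.
        print("We open at 9 AM.")
        # Higher perceived interactivity: acknowledge, then answer.
        print(reply("What time do you open?", "We open at 9 AM."))

The second print produces 'You asked about "What time do you open". We open at 9 AM.' -- the same information, but framed as a response to the user's own words.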

"In the case of the low-humanlike chatbot, if you give the user high interactivity, it's much more appreciated because it provides a sense of dialogue and social presence," said lead author of the study, Eun Go, a former doctoral student at Penn State and currently assistant professor at Western Illinois University.

Because people may be leery of interacting with a machine, developers typically give their chatbots human names -- Apple's Siri, for example -- or program a humanlike avatar to appear when the chatbot responds to a user.

The study, published in the journal Computers in Human Behavior, also found that merely mentioning whether a human or a machine is involved -- that is, providing an identity cue -- guides how people perceive the interaction.

"Identity cues build expectations," said Go said.

"When we say that it's going to be a human or chatbot, people immediately start expecting certain things."

