Robots can be unreliable in a fire or other emergency, yet people trust them blindly, according to a new study.
People may trust a robot too much for their own safety in an emergency, even when the machine has proven itself unreliable.
"People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault," said Alan Wagner, a senior research engineer in the Georgia Tech Research Institute (GTRI).
In a mock building fire designed to determine whether people would trust a robot meant to help them evacuate a high-rise, researchers were surprised to find that the test subjects followed the robot's instructions even when the machine's behaviour should not have inspired trust.
The researchers recruited a group of 42 volunteers, most of them college students, and asked them to follow a brightly coloured robot bearing the words "Emergency Guide Robot" on its side.
The robot led the study subjects to a conference room, where they were asked to complete a survey about robots and read an unrelated magazine article. The subjects were not told the true nature of the research project.
In some cases, the robot, which was controlled by a hidden researcher, led the volunteers into the wrong room and travelled around in a circle twice before entering the conference room.
For several test subjects, the robot stopped moving, and an experimenter told the subjects that the robot had broken down.
Once the subjects were in the conference room with the door closed, the hallway through which the participants had entered the building was filled with artificial smoke, which set off a smoke alarm.
When the test subjects opened the conference room door, they saw the smoke and the robot, which was now brightly lit with red LEDs and had white "arms" that served as pointers.
The robot directed the subjects to an exit at the back of the building instead of towards the doorway through which they had entered, which was marked with exit signs.
"We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, people wouldn't follow it during the simulated emergency," said Paul Robinette, a GTRI research engineer who conducted the study as part of his doctoral dissertation.
"Instead, all of the volunteers followed the robot's instructions, no matter how well it had performed previously. We absolutely didn't expect this."
The research is scheduled to be presented on March 9 at the 2016 ACM/IEEE International Conference on Human-Robot Interaction in Christchurch, New Zealand.
Earlier research has shown that people often don't leave buildings when fire alarms sound, and that they sometimes ignore nearby emergency exits in favour of more familiar building entrances.