Scientists make robots more expressive

Press Trust of India | Tokyo
Last Updated: Nov 18 2018 | 11:30 AM IST

Japanese scientists have found a way to make the faces of human-like robots more expressive, paving the way for machines to show a greater range of emotions and, ultimately, to interact more deeply with people.

While robots have driven advances in healthcare, industrial, and other settings, capturing humanistic expression in a robotic face remains an elusive challenge.

Researchers at Osaka University in Japan developed a method for identifying and quantitatively evaluating facial movements on the head of their child-like android robot.

The android, named Affetto, was unveiled as a first-generation model in 2011. The researchers have now devised a system to make the second-generation Affetto more expressive.

Their findings, published in the journal Frontiers in Robotics and AI, offer a path for androids to express a greater range of emotions and, ultimately, to interact more deeply with humans.

"Surface deformations are a key issue in controlling android faces. Movements of their soft facial skin create instability, and this is a big hardware problem we grapple with," said Minoru Asada from Osaka University.

"We sought a better way to measure and control it," Asada said.

The researchers investigated 116 different facial points on Affetto to measure its three-dimensional movement. The facial points were underpinned by so-called deformation units.

Each unit comprises a set of mechanisms that create a distinctive facial contortion, such as the lowering or raising of part of a lip or eyelid.

Measurements from these units were then fed into a mathematical model to quantify their surface motion patterns.

Although the researchers encountered challenges in balancing the applied force and in adjusting the synthetic skin, they were able to use their system to tune the deformation units for precise control of Affetto's facial surface motions.
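The report does not give the model itself, but the pipeline it describes (measure point displacements, fit a model to quantify them, then invert it for control) can be sketched with a simple linear, blendshape-style approximation. The sketch below is purely illustrative: the number of deformation units, the synthetic calibration data, and all variable names are assumptions, not details from the study.

```python
import numpy as np

# Illustrative sketch only (not the authors' actual model): treat each
# facial point's 3D displacement as a linear combination of
# deformation-unit activations, a common blendshape-style simplification.

rng = np.random.default_rng(0)

N_POINTS = 116   # facial measurement points, as reported in the study
N_UNITS = 16     # number of deformation units -- an assumed figure

# Synthetic "ground truth" basis: each column maps one unit's activation
# to the stacked (x, y, z) displacements of all facial points.
true_basis = rng.normal(size=(3 * N_POINTS, N_UNITS))

# Simulated calibration data: random unit activations and the (noisy)
# surface displacements they produce.
activations = rng.uniform(0.0, 1.0, size=(N_UNITS, 200))
displacements = (true_basis @ activations
                 + 0.01 * rng.normal(size=(3 * N_POINTS, 200)))

# Step 1 -- quantify: estimate the basis from measurements by least squares.
basis_est, *_ = np.linalg.lstsq(activations.T, displacements.T, rcond=None)
basis_est = basis_est.T

# Step 2 -- control: given a target surface deformation (say, a smile),
# solve for the unit activations that best reproduce it.
target = true_basis @ rng.uniform(0.0, 1.0, size=N_UNITS)
solution, *_ = np.linalg.lstsq(basis_est, target, rcond=None)
commands = np.clip(solution, 0.0, 1.0)  # actuators have bounded travel

residual = np.linalg.norm(basis_est @ commands - target)
print(f"residual surface error: {residual:.4f}")
```

In practice the relationship between actuator commands and skin deformation is nonlinear, which is part of the instability Asada describes below, so a fitted linear model like this would only be a starting point.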

"Android robot faces have persisted in being a black box problem: they have been implemented but have only been judged in vague and general terms," said Hisashi Ishihara, first author of the study.

"Our precise findings will let us effectively control android facial movements to introduce more nuanced expressions, such as smiling and frowning," said Ishihara.

