Do you find getting dressed in the morning a mundane task? Take heart: a novel computational method driven by machine learning techniques could one day assist you in the multi-step process of putting on clothes.
According to computer scientists from the Georgia Institute of Technology and Google Brain -- Google's artificial intelligence research arm -- getting dressed is a surprisingly complex task, involving several distinct physical interactions between a character and its clothing, guided primarily by the sense of touch.
The team used simulation to teach a neural network the complex task of dressing by breaking it down into smaller sub-tasks with well-defined goals.
This allowed the character to attempt the task thousands of times, with the simulation providing reward or penalty signals as the character tried beneficial or detrimental changes to its policy.
The researchers' method then updates the neural network one step at a time to make the discovered positive changes more likely to occur in the future.
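In essence, this is a standard reinforcement-learning loop: attempt the task, receive a reward, and nudge the policy network so that rewarded behaviour becomes more likely. The Python sketch below illustrates that loop with a minimal policy-gradient (REINFORCE) update. It is an illustration only, not the researchers' code: the environment interface, network shape, and hyperparameters are all assumptions.

```python
import torch
import torch.nn as nn

class DressingPolicy(nn.Module):
    """Maps observations (e.g. joint angles plus simulated touch readings)
    to joint actions. Sizes here are placeholders."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.Tanh(),
            nn.Linear(128, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def train(env, policy: DressingPolicy, episodes: int = 10_000, lr: float = 1e-3):
    """Let the character attempt the dressing task thousands of times, then
    nudge the network one small step after each attempt so that rewarded
    changes become more likely in the future. Plain REINFORCE; 'env' is a
    hypothetical cloth-simulation environment with a gym-style API."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(episodes):
        obs, done = env.reset(), False
        log_probs, rewards = [], []
        while not done:
            mean = policy(torch.as_tensor(obs, dtype=torch.float32))
            dist = torch.distributions.Normal(mean, 0.1)  # exploration noise
            action = dist.sample()
            log_probs.append(dist.log_prob(action).sum())
            obs, reward, done, _ = env.step(action.numpy())
            rewards.append(reward)
        # Scale each action's log-probability by the attempt's total reward:
        # beneficial attempts push the policy toward the actions they used,
        # detrimental attempts (negative reward) push away from them.
        loss = -sum(rewards) * torch.stack(log_probs).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The researchers' actual algorithm is more sophisticated than this stand-in, but the shape of the loop is the same: attempt, reward, incremental update.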
"We've opened the door to a new way of animating multi-step interaction tasks in complex environments using reinforcement learning," said lead author Alexander Clegg, a doctoral student at the Georgia Institute of Technology.
"There is still plenty of work to be done continuing down this path, allowing simulation to provide experience and practice for task training in a virtual world."
In the study, the researchers demonstrated their approach on several dressing tasks: putting on a t-shirt, throwing on a jacket and robot-assisted dressing of a sleeve.
With the trained neural network, they were able to re-enact a variety of ways in which an animated character puts on clothes. The key is incorporating the sense of touch into the framework to overcome the challenges of cloth simulation.
The researchers found that careful selection of the cloth observations and reward functions is crucial to the framework's success. As a result, the approach produces not just single dressing sequences but a character controller that can successfully dress under a variety of conditions.
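As a rough illustration of what "careful selection of reward functions" can mean in practice, the sketch below shapes a reward from task progress and simulated touch signals. Every accessor name, quantity, and weight here is hypothetical; the paper's actual observations and rewards are more detailed.

```python
def dressing_reward(sim) -> float:
    """A hypothetical shaped reward for threading an arm through a sleeve.
    'sim' stands in for a cloth-simulation state; none of these accessor
    names come from the paper."""
    progress = sim.limb_depth_in_sleeve()     # how far the arm has advanced (m)
    pressure = sim.mean_touch_pressure()      # simulated haptic contact signal
    strain = sim.max_cloth_strain()           # peak garment deformation

    reward = 2.0 * progress                   # encourage threading the sleeve
    reward -= 0.5 * max(0.0, pressure - 1.0)  # discourage pressing into the cloth
    reward -= 1.0 * max(0.0, strain - 0.8)    # penalise near-tearing strain
    return reward
```

Note that the penalty terms only activate past a threshold, one common way to keep a shaped reward from fighting the main progress term.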
The team will present their work at SIGGRAPH Asia 2018 in Tokyo.
--IANS