Researchers at the University of California, Berkeley in the US, led by graduate student Xue Bin Peng, used deep reinforcement learning to recreate natural motions, even for acrobatic feats like breakdancing and martial arts.
The simulated characters can also respond naturally to changes in the environment, such as recovering from tripping or being pelted by projectiles.
"In the past, a lot of work has gone into simulating natural motions, but these physics-based methods tend to be very specialised; they are not general methods that can handle a large variety of skills," said Peng.
"We developed more capable agents that behave in a natural manner," he said.
"If you compare our results to motion-capture recorded from humans, we are getting to the point where it is pretty difficult to distinguish the two, to tell what is simulation and what is real. We're moving toward a virtual stuntman," said Peng.
The work could also inspire the development of more dynamic motor skills for robots.
Traditional techniques in animation typically require designing custom controllers by hand for every skill: one controller for walking, for example, and another for running, flips and other movements.
These hand-designed controllers can look pretty good, Peng said.
Alternatively, deep reinforcement learning methods, such as GAIL, can simulate a variety of different skills using a single general algorithm, but their results often look very unnatural.
"The advantage of our work is that we can get the best of both worlds," Peng said.
To achieve this, Peng obtained reference data from motion-capture (mocap) clips demonstrating more than 25 different acrobatic feats, such as backflips, cartwheels, kip-ups and vaults, as well as simple running, throwing and jumping.
After providing the mocap data to the computer, the team then allowed the system - dubbed DeepMimic - to "practice" each skill for about a month of simulated time, a bit longer than a human might take to learn the same skill.
The computer practiced 24/7, going through millions of trials to learn how to realistically simulate each skill.
It learned through trial and error: comparing its performance after each trial to the mocap data, and tweaking its behaviour to more closely match the human motion.
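The trial-and-error loop described above can be sketched in miniature. The code below is a hypothetical, drastically simplified illustration, not the DeepMimic implementation: the "character" is a single one-dimensional tracker, the "policy" is one gain parameter, and learning is random hill climbing rather than deep reinforcement learning. What it does preserve is the core idea the article describes: each trial is scored by how closely the simulated motion matches a reference (mocap-like) trajectory, and the behaviour is tweaked to raise that score. All names (`rollout`, `imitation_reward`, `train`) are invented for this sketch.

```python
import math
import random

# Stand-in for a motion-capture clip: a smooth 1-D reference trajectory.
reference = [math.sin(0.1 * t) for t in range(100)]

def rollout(gain):
    """Simulate the 'character': each step it moves toward the reference
    target with a proportional controller parameterised by `gain`."""
    pos, traj = 0.0, []
    for target in reference:
        pos += gain * (target - pos)
        traj.append(pos)
    return traj

def imitation_reward(traj):
    """Score a trial by similarity to the mocap reference: the reward is
    an exponential of the negative mean squared tracking error, so a
    perfect match scores 1.0 and larger errors decay toward 0."""
    err = sum((a - b) ** 2 for a, b in zip(traj, reference)) / len(reference)
    return math.exp(-err)

def train(trials=2000, seed=0):
    """Trial and error: randomly perturb the behaviour, re-score it
    against the reference, and keep any change that improves the match."""
    rng = random.Random(seed)
    gain = 0.1
    best = imitation_reward(rollout(gain))
    for _ in range(trials):
        candidate = min(max(gain + rng.gauss(0, 0.05), 0.0), 1.0)
        r = imitation_reward(rollout(candidate))
        if r > best:
            gain, best = candidate, r
    return gain, best

gain, best = train()
```

After training, the learned behaviour tracks the reference far more closely than the initial one, mirroring (in toy form) how millions of scored trials gradually pull the simulated motion toward the human recording.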
(This story has not been edited by Business Standard staff and is auto-generated from a syndicated feed.)