Business Standard

New algorithm for more realistic computer animation

Press Trust of India  |  Los Angeles 

Scientists have developed a new algorithm that can make simulated characters more agile, acrobatic and realistic.

The researchers at the University of California, Berkeley in the US used deep reinforcement learning to recreate natural motions, even for acrobatic feats like break dancing and martial arts.

The simulated characters can also respond naturally to changes in the environment, such as recovering from tripping or being pelted by projectiles.

"This is actually a pretty big leap from what has been done with deep learning and animation," said UC Berkeley graduate student Xue Bin Peng.

"In the past, a lot of work has gone into simulating natural motions, but these physics-based methods tend to be very specialised; they are not general methods that can handle a large variety of skills," said Peng.

Each activity or task typically requires its own controller.

"We developed more capable agents that behave in a natural manner," he said.

"If you compare our results to motion-capture recorded from humans, we are getting to the point where it is pretty difficult to distinguish the two, to tell what is simulated and what is real. We're moving toward a virtual stuntman," said Peng.

The work could also inspire the development of more dynamic motor skills for robots.

Traditional techniques in computer animation typically require designing custom controllers by hand for every skill: one for walking, for example, and another for running, flips and other movements.

These hand-designed controllers can look pretty good, Peng said.

Alternatively, deep reinforcement learning methods, such as generative adversarial imitation learning (GAIL), can simulate a variety of different skills using a single general algorithm, but their results often look very unnatural.

"The advantage of our work is that we can get the best of both worlds," Peng said.

"We have a single algorithm that can learn a variety of different skills, and produce motions that rival if not surpass the state of the art in animation with handcrafted controllers," said Peng.

To achieve this, Peng obtained reference data from motion-capture (mocap) clips demonstrating more than 25 different acrobatic feats, such as backflips, cartwheels, kip-ups and vaults, as well as simple running, throwing and jumping.

After providing the mocap data to the computer, the team then allowed the system - dubbed DeepMimic - to "practice" each skill for about a month of simulated time, a bit longer than a human might take to learn the same skill.

The computer practiced 24/7, going through millions of trials to learn how to realistically simulate each skill.

It learned through trial and error: comparing its performance after each trial to the mocap data, and tweaking its behaviour to more closely match the human motion.
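The trial-and-error loop described above can be sketched in a few lines of Python. This is an illustrative toy, not the actual DeepMimic code: the pose representation (one joint angle per frame), the exponential pose-matching reward, and the simple nudge toward the reference are all assumptions standing in for the real physics simulation and policy updates.

```python
import math

def imitation_reward(sim_pose, ref_pose, scale=2.0):
    # High when the simulated pose matches the mocap frame; in (0, 1].
    err = sum((s - r) ** 2 for s, r in zip(sim_pose, ref_pose))
    return math.exp(-scale * err)

def trial_score(motion, ref_clip):
    # Average per-frame reward over the whole clip.
    rewards = [imitation_reward(s, r) for s, r in zip(motion, ref_clip)]
    return sum(rewards) / len(rewards)

# Toy mocap reference clip: one joint angle per frame of some skill.
ref_clip = [[0.2 * t] for t in range(10)]
motion = [[0.0] for _ in range(10)]   # character starts knowing nothing

for trial in range(1000):             # millions of trials in the real system
    score = trial_score(motion, ref_clip)
    # Tweak behaviour toward the mocap data (a crude stand-in for the
    # gradient-based policy update a deep RL method would perform).
    motion = [[m + 0.1 * (r - m) for m, r in zip(mf, rf)]
              for mf, rf in zip(motion, ref_clip)]

print(round(trial_score(motion, ref_clip), 4))  # -> 1.0 once motion matches mocap
```

The key idea the sketch preserves is the scoring step: each trial is compared against the mocap reference, and changes that bring the motion closer to the human data earn a higher reward and are kept.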

(This story has not been edited by Business Standard staff and is auto-generated from a syndicated feed.)

First Published: Wed, April 11 2018. 12:45 IST