The new approach uses both scene and object features from the video and enables associations between these visual elements and each type of event to be automatically determined and weighted by a machine-learning architecture known as a neural network.
The approach not only works better than other methods in recognising events in videos, but is significantly better at identifying events that the computer programme has never or rarely encountered previously, said Leonid Sigal, senior research scientist at Disney Research.
These events can include such things as riding a horse, baking cookies or eating at a restaurant.
"Automated techniques are essential for indexing, searching and analysing the incredible amount of video being created and uploaded daily to the Internet," said Jessica Hodgins, vice president at Disney Research.
"With multiple hours of video being uploaded to YouTube every second, there's no way to describe all of that content manually," Hodgins said.
"And if we don't know what's in all those videos, we can't find things we need and much of the videos' potential value is lost," she said.
Understanding the content of a video, particularly user-generated video, is a difficult challenge for computer vision because video content can vary so much.
Even when the content - a particular concert, for instance - is the same, it can look very different depending on the perspective from which it was shot and on lighting conditions.
Computer vision researchers have had some success using a deep learning approach involving Convolutional Neural Networks (CNNs) to identify events when a large number of labelled examples is available to train the computer model.
However, that method does not work if few labelled examples are available to train the model, so scaling it up to include thousands, if not tens of thousands, of additional classes of events would be difficult.
The new approach by researchers, including those from Fudan University in China, enables the computer model to identify objects and scenes associated with each activity or event and figure out how much weight to give each.
When presented with an event that it has not previously encountered, the model can identify objects and scenes that it already has associated with similar events to help it classify the new event, Sigal said.
If it is already familiar with "eating pasta" and "eating rice," for instance, it might reason that elements associated with one or the other - chopsticks, bowls, restaurant settings - might be associated with "eating noodles."
This ability to extend its knowledge into events not previously seen, or for which labelled examples are limited, makes it possible to scale up the model to include an ever-increasing number of event classes, Sigal said.
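The transfer idea described above can be illustrated with a toy sketch: score an unseen event by borrowing the object/scene association weights already learned for similar known events. All event names, detected objects, and weights below are hypothetical and purely illustrative; the actual architecture learns these associations within a neural network rather than from hand-set tables.

```python
# Toy sketch (hypothetical data) of classifying an unseen event by
# reusing object/scene associations learned for similar known events.
# Event names and weights are illustrative, not from the research.

known_events = {
    "eating pasta": {"bowl": 0.8, "fork": 0.6, "restaurant": 0.5},
    "eating rice":  {"bowl": 0.9, "chopsticks": 0.7, "restaurant": 0.4},
    "riding a horse": {"horse": 0.95, "field": 0.6, "saddle": 0.5},
}

def score(detections, associations):
    """Sum the association weights of objects/scenes detected in the video."""
    return sum(w for obj, w in associations.items() if obj in detections)

def score_unseen(detections, related_events):
    """Score an unseen event by averaging the scores its related known events
    assign to the detected objects and scenes."""
    total = sum(score(detections, known_events[e]) for e in related_events)
    return total / len(related_events)

# A video of "eating noodles": a detector finds a bowl, chopsticks, a restaurant.
detections = {"bowl", "chopsticks", "restaurant"}
# Borrow associations from the two most similar known events.
noodle_score = score_unseen(detections, ["eating pasta", "eating rice"])
```

In this sketch, "eating noodles" receives a high score because its detected elements overlap heavily with the learned associations of "eating pasta" and "eating rice", even though the model never saw a labelled "eating noodles" example.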