Robots typically "see" their environment through sensors that collect and translate a visual scene into a matrix of dots.
Conventional techniques that try to pick out objects from such clouds of dots, or point clouds, can do so with either speed or accuracy, but not both.
With the new technique, a robot can accurately pick out an object, such as a small animal, that is otherwise obscured within a dense cloud of dots, within seconds of receiving the visual data.
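The core geometric problem here is rigid point-cloud alignment: finding the rotation and translation that place a known object model inside a scene. The sketch below is not the MIT team's algorithm; it is a minimal illustration of the classic Kabsch (SVD-based) alignment, and it cheats by assuming the model-to-scene correspondences are already known. The hard part, which robust methods like the one described in the article address, is recovering the object when correspondences are unknown and most scene points are clutter.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "object": 50 random 3-D points (a stand-in for a scanned bunny model).
model = rng.normal(size=(50, 3))

# Hide the object in a scene: rotate and translate it, then surround it
# with 1000 clutter points. A real method must find the object amid the
# clutter; this sketch assumes we already know which points belong to it.
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([2.0, -1.0, 0.5])
scene_object = model @ R_true.T + t_true
clutter = rng.uniform(-5, 5, size=(1000, 3))
scene = np.vstack([clutter, scene_object])  # full scene, clutter included

def kabsch(src, dst):
    """Recover the rigid transform (R, t) with dst ~ src @ R.T + t,
    given matched point sets, via SVD of the cross-covariance."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(dst_c.T @ src_c)
    d = np.sign(np.linalg.det(U @ Vt))       # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# With known correspondences and no noise, the pose is recovered exactly.
R_est, t_est = kabsch(model, scene_object)
print(np.allclose(R_est, R_true, atol=1e-6))
print(np.allclose(t_est, t_true, atol=1e-6))
```

The open challenge this glosses over is that real pipelines must first match points between model and scene, and those matches are riddled with outliers; handling that robustly and quickly is what distinguishes the approach described above.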
The team said the technique can be used to improve a host of situations in which machine perception must be both speedy and accurate, including driverless cars and robotic assistants in the factory and the home.
"The surprising thing about this work is, if I ask you to find a bunny in this cloud of thousands of points, there's no way you could do that," said Luca Carlone, an assistant professor at Massachusetts Institute of Technology (MIT).
"But our algorithm is able to see the object through all this clutter. So we're getting to a level of superhuman performance in localising objects," said Carlone.
With their approach, the team was able to quickly and accurately identify three different objects -- a bunny, a dragon, and a Buddha -- hidden in point clouds of increasing density.
They were also able to identify objects in real-life scenes, including a living room, in which the algorithm quickly spotted a cereal box and a baseball hat.
Because the approach runs in "polynomial time," it can be easily scaled up to analyse even denser point clouds, resembling the complexity of sensor data for driverless cars, for example.
"Navigation, collaborative manufacturing, domestic robots, search and rescue, and self-driving cars is where we hope to make an impact," Carlone said.