Business Standard

US Researchers develop new autonomous cars with human-like reasoning

The system, like a human driver, can detect mismatches between its map and the features of the road, and determine whether its position, sensors, or mapping are incorrect.

Press Trust of India  |  Boston 


Aiming to incorporate human-like reasoning into autonomous vehicles, researchers at the Massachusetts Institute of Technology (MIT) have developed a system that uses simple maps and visual data to enable driverless cars to navigate routes in new, complex environments.

Like a human driver, the system can detect mismatches between its map and the features of the road it observes, and determine whether its position, sensors, or mapping are incorrect, in order to correct the car's course.
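The cross-checking idea can be illustrated with a toy sketch in Python. This is not the researchers' code: the function, the feature labels, and the simple agreement threshold are all illustrative assumptions; the actual system reasons probabilistically over camera images and the map.

```python
# Hypothetical sketch: cross-check the map against observed road features.
# If they disagree persistently, the position estimate (or the map itself)
# is likely wrong and the course should be corrected.

def check_map_consistency(observed_features, expected_features, threshold=0.5):
    """Return True if recent camera observations agree with the map.

    observed_features / expected_features: lists of road-feature labels
    (e.g. 'straight', 'fork', 'T-intersection') for recent frames.
    """
    matches = sum(o == e for o, e in zip(observed_features, expected_features))
    return matches / max(len(observed_features), 1) >= threshold

# Example: the map says the road curves, but the camera keeps seeing a fork.
observed = ['fork', 'fork', 'straight', 'fork']
expected = ['curve', 'curve', 'straight', 'curve']
consistent = check_map_consistency(observed, expected)
# consistent is False here, so the system would treat its localization
# (or the map) as suspect and attempt to re-localize.
```

In this toy version, only a quarter of the recent frames match the map, so the check fails and the vehicle would distrust its current position estimate.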

The autonomous control system "learns" the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds and a simple global positioning system (GPS)-like map, researchers said.

"Our objective is to achieve autonomous navigation that is robust for driving in new environments," said researchers from the Massachusetts Institute of Technology (MIT) in the US.

Driverless cars, unlike human drivers, lack this basic reasoning ability and struggle to navigate unfamiliar roads using only observation and simple tools.

Human drivers simply match what they see around them with what they see on their devices to determine where they are and where they need to go.

In every new area, the cars must first map and analyse all the new roads, which is very time consuming.

The systems also rely on complex maps -- usually generated by 3D scans -- which are computationally intensive to generate and process on the fly.

"With our system, you don't need to train on every road beforehand. You can download a new map for the car to navigate through roads it has never seen before," the researchers said.

To train the system initially, a human driver controlled a driverless Prius -- equipped with several cameras and a basic navigation system -- while data was collected from local suburban streets, including various road structures and obstacles, the researchers said.

When deployed autonomously, the system successfully navigated the car along a preplanned path in a different forested area, designated for autonomous vehicle tests.

According to the research, the system uses a machine learning model called a convolutional neural network (CNN), commonly used for image recognition.

During training, the system watches and learns how to steer from a human driver, according to a paper presented at the International Conference on Robotics and Automation (ICRA) in Montreal, Canada.

The CNN correlates steering wheel rotations to road curvatures it observes through cameras and an inputted map.

Eventually, it learns the most likely steering command for various driving situations, such as straight roads, four-way or T-shaped intersections, forks, and rotaries, researchers said.
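The idea of learning the most likely steering command for each driving situation can be sketched with a toy behaviour-cloning example in Python. The situation labels, command names, and the frequency-table learner below are illustrative assumptions; the actual system learns end-to-end with a CNN over camera images and the inputted map.

```python
from collections import Counter, defaultdict

# Toy behaviour cloning: count which steering command the human driver
# used in each road situation, then predict the most frequent one.
demonstrations = [
    ('straight', 'keep_straight'),
    ('straight', 'keep_straight'),
    ('T-intersection', 'turn_left'),
    ('T-intersection', 'turn_right'),
    ('T-intersection', 'turn_left'),
    ('rotary', 'yield_then_enter'),
]

counts = defaultdict(Counter)
for situation, command in demonstrations:
    counts[situation][command] += 1

def most_likely_command(situation):
    """Return the steering command seen most often in this situation."""
    return counts[situation].most_common(1)[0][0]

print(most_likely_command('T-intersection'))  # turn_left (2 of 3 demonstrations)
```

The real system replaces the hand-labelled situations with raw camera frames and the map, and the frequency table with learned network weights, but the principle -- imitate the human driver's most likely action -- is the same.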

"In the real world, sensors do fail. We want to make sure that the system is robust to different failures of different sensors by building a system that can accept these noisy inputs and still navigate and localize itself correctly on the road," Amini said.

First Published: Fri, May 24 2019. 15:41 IST