In an effort to bring human-like reasoning to autonomous vehicles, researchers at MIT have developed a system that uses simple maps and visual data to enable driverless cars to navigate routes in new, complex environments.
Like a human driver, the system can detect mismatches between its map and the road features it observes, work out whether its position estimate, its sensors, or the map itself is at fault, and correct the car's course accordingly.
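One loose way to picture that check, sketched here under our own assumptions rather than as the researchers' actual method, is to compare the steering the cameras suggest with the steering the map implies, and to flag a fault only when the two disagree over a sustained window:

```python
# A hypothetical sketch of such a mismatch check, not the researchers'
# published method. All names and thresholds are illustrative assumptions.
from collections import deque

class MismatchDetector:
    def __init__(self, threshold_rad: float = 0.3, window: int = 30):
        self.threshold = threshold_rad
        self.history = deque(maxlen=window)

    def update(self, vision_angle_rad: float, map_angle_rad: float) -> bool:
        """Return True when vision and map persistently disagree,
        suggesting the position estimate, a sensor, or the map is wrong."""
        self.history.append(abs(vision_angle_rad - map_angle_rad))
        disagreements = sum(d > self.threshold for d in self.history)
        # Only flag once the window is full and most frames disagree,
        # so a single noisy reading does not trigger a fault.
        return (len(self.history) == self.history.maxlen
                and disagreements > 0.8 * self.history.maxlen)
```

Requiring sustained disagreement, rather than reacting to a single frame, is what separates a genuine map or sensor fault from ordinary measurement noise.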
The autonomous control system "learns" the steering patterns of human drivers as they navigate roads in a small area, using only data from video camera feeds and a simple global positioning system (GPS)-like map, researchers said.
"Our objective is to achieve autonomous navigation that is robust for driving in new environments," said Daniela Rus from Massachusetts Institute of Technology (MIT) in the US.
Driverless cars, unlike human drivers, struggle with this basic reasoning and lack the ability to navigate on unfamiliar roads using observation and simple tools.
Human drivers simply match what they see around them with what appears on their GPS device to work out where they are and where they are headed.
In every new area, driverless cars must instead first map and analyse all the roads, which is very time-consuming.
These systems also rely on complex maps -- usually generated by 3D scans -- which are computationally intensive to produce and to process on the fly.
"With our system, you don't need to train on every road beforehand. You can download a new map for the car to navigate through roads it has never seen before," said Alexander Amini from MIT.
To train the system initially, a human operator controlled a driverless Toyota Prius -- equipped with several cameras and a basic GPS navigation system -- collecting data from local suburban streets featuring various road structures and obstacles, the researchers said.
When deployed autonomously, the system successfully navigated the car along a preplanned path in a different forested area, designated for autonomous vehicle tests.
According to the research, the system uses a machine learning model called a convolutional neural network (CNN), commonly used for image recognition.
During training, the system watches and learns how to steer from a human driver, according to a paper presented at the International Conference on Robotics and Automation in Montreal, Canada.
The CNN correlates steering-wheel rotations with the road curvatures it observes through its cameras and an input map.
Eventually, it learns the most likely steering command for various driving situations, such as straight roads, four-way or T-shaped intersections, forks, and rotaries, researchers said.
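As a rough illustration of that training setup (a minimal behaviour-cloning sketch under our own assumptions, not the MIT team's published architecture), a small CNN in PyTorch might take a camera frame plus a rasterised map patch and regress the human driver's recorded steering angle. Every layer size, input shape, and variable name below is illustrative.

```python
# A minimal behaviour-cloning sketch, not the published architecture:
# all layer sizes, input shapes, and names are illustrative assumptions.
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Stack the RGB camera frame (3 channels) and a single-channel
        # rasterised map patch into one 4-channel input.
        self.features = nn.Sequential(
            nn.Conv2d(4, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(48 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, 1),  # predicted steering-wheel angle
        )

    def forward(self, camera_rgb, map_patch):
        x = torch.cat([camera_rgb, map_patch], dim=1)
        return self.head(self.features(x))

# Training step: regress the output toward the human driver's logged angle.
model = SteeringCNN()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

camera = torch.randn(8, 3, 66, 200)      # batch of camera frames (dummy)
map_patch = torch.randn(8, 1, 66, 200)   # matching map patches (dummy)
human_angle = torch.randn(8, 1)          # logged steering angles (dummy)

loss = loss_fn(model(camera, map_patch), human_angle)
optimiser.zero_grad()
loss.backward()
optimiser.step()
```

A single regressed angle is the simplest possible output head; systems of this kind more often predict a distribution over steering commands, so that at forks and intersections several plausible manoeuvres are not averaged into one wrong answer.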
"In the real world, sensors do fail. We want to make sure that the system is robust to different failures of different sensors by building a system that can accept these noisy inputs and still navigate and localize itself correctly on the road, Amini said.