Just over a year ago, Tesla sent out a software update to its cars that made its “autopilot” features available to customers, in what the company called a “public beta test”. In the intervening 12 months, at least one customer has died while his Tesla was in autopilot mode. Cars have crashed, regulators have cracked down, and the headlines proclaiming that “self-driving cars are here” have been replaced with Tesla’s assurances that autopilot is nothing but a particularly advanced driver-assist system.
Given all this, one might assume that a chastened Tesla would take things more cautiously with its next iteration of autonomous technology. But in a launch event this week, Tesla introduced its Autopilot 2.0 hardware with the promise that all the cars it builds from now on will have hardware capable of “the highest levels of autonomy”.
Tesla’s proof that its new hardware is capable of driving in the “complex urban environment” was a brief, edited video of the system navigating the area around its headquarters near Stanford University in California. Though exciting for enthusiasts who can’t wait to own a self-driving car, the video is hardly proof that Tesla’s system is ready to handle all the complexities that are holding back other companies that have been working on autonomous technology for longer than Tesla. As impressive as Tesla’s system is — and make no mistake, it is deeply impressive — navigating the Stanford campus is a hurdle that even graduate school projects are able to clear.
Tesla’s new sensor suite upgrades what was a single forward-facing camera to eight cameras giving a 360-degree view around the car. It also updates the 12 ultrasonic sensors, while keeping a single forward-facing radar. Yet independent experts and representatives from competitor firms tell me this system is still insufficient for full Level 5 autonomy, the highest rating on the scale adopted by the National Highway Traffic Safety Administration, which requires more (and better) radar, multiple cameras with different apertures at each position and 360-degree laser-sensing capabilities.
What Tesla’s upgraded hardware does do is vastly improve the company’s ability to pull high-quality data from its vehicles already on the road, giving it an unrivalled ability to comply with new regulatory guidelines requiring granular data about autonomous-drive functions in a variety of conditions. Whereas its competitors’ autonomous-drive programmes harvest data from small test fleets and extrapolate from there, Tesla has made every car it sells into an independent experiment of conditions that can only be found on the open road. All this real-world data gives Tesla a unique opportunity to validate its autopilot technology. If the company had announced Autopilot 2.0 as another step toward an eventual fully autonomous system, this would be an unambiguously good (if not earth-shattering) development.
Unfortunately, that’s not what Tesla did. Instead, at Wednesday’s launch event, it called its new hardware suite “full self-driving hardware” and said it would demonstrate the system’s ability to drive cross-country without any human intervention.
© Bloomberg

