A string of high-profile Tesla crashes has led the US road safety regulator (NHTSA) to upgrade its existing investigation into Tesla’s image-reliant Autopilot system.
A key selling point of Tesla vehicles, especially when they first hit the market, was the optional Autopilot semi-autonomous driving system. This level 2 driving system is able to maintain the car’s position in its lane and, depending on the version, can navigate entire highway sections of a journey largely on its own, including making lane changes and handling on/off-ramps.
However, despite Autopilot being key to Tesla’s appeal, a string of high-profile crashes has led the US National Highway Traffic Safety Administration (NHTSA) to upgrade its existing investigation into the system’s safety performance.
Tesla’s Autopilot has some key differences from level 2 systems deployed by rivals
Autopilot is something of an outlier among the level 2 autonomous systems deployed by Tesla’s rivals. All recent Tesla variants rely solely on image-based data processing fed by vision cameras positioned around the car. This contrasts with systems deployed by GM and Ford, for example, which also incorporate radar into the sensor suite so the vehicle can measure its distance to surrounding objects. Tesla instead relies on a pair of front-facing cameras and clever image processing to triangulate objects and estimate how far away they are.
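To give a rough sense of the principle at work (this is a generic illustration of stereo triangulation, not Tesla’s actual implementation), two cameras separated by a known distance see the same object at slightly different horizontal positions in their images; the size of that shift, the disparity, is inversely proportional to how far away the object is. A minimal sketch, using made-up camera parameters:

```python
# Illustrative sketch of stereo triangulation, not Tesla's implementation.
# Two cameras a known distance apart see the same object at slightly different
# horizontal pixel positions; that shift (disparity) yields an estimate of depth.

FOCAL_LENGTH_PX = 1200.0   # hypothetical focal length, in pixels
BASELINE_M = 0.12          # hypothetical separation between the two cameras, in metres

def depth_from_disparity(left_x_px: float, right_x_px: float) -> float:
    """Estimate distance to an object from its pixel position in each camera."""
    disparity = left_x_px - right_x_px
    if disparity <= 0:
        raise ValueError("expected the object to sit further right in the left image")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity

# Example: an object at x=640 px in the left image and x=628 px in the right image
# works out to roughly 12 m away with these made-up parameters.
print(f"{depth_from_disparity(640.0, 628.0):.1f} m")
```

The practical consequence is that depth accuracy depends entirely on how well the software matches the same object across the two images, which is why the camera-only approach is debated among Tesla’s radar-equipped rivals.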
In addition, Tesla’s Autopilot system has drawn criticism from some industry observers for allegedly lax monitoring of human driver behavior. Level 2 semi-autonomous systems require full-time oversight of the driving task by a human, even when the system is entirely controlling the vehicle’s steering and speed, and the human must be able to take over at a moment’s notice should the system make a mistake. Many level 2 setups therefore include a monitoring system to ensure the human driver is paying attention, usually through capacitive touch sensing on the steering wheel or eye-tracking infrared cameras.
Until recently, Tesla used only a torque sensor on the steering wheel to detect the presence of a human hand. Several users reported, however, that this system could easily be fooled by wedging an object such as an orange into the steering wheel spokes, essentially stopping the car from reminding drivers that they had to pay attention. As criticism of this approach grew, Tesla sent out a software update to Model 3 and Model Y vehicles to repurpose the interior security camera mounted near the rearview mirror for driver monitoring, using image processing to determine whether the driver is looking at the road. However, this somewhat beefed-up monitoring system still has drawbacks because it was not designed from the ground up for that purpose, with the most pressing issue being its inability to see the driver in low-light conditions.
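To illustrate the kind of logic a hands-on-wheel check typically involves (a hypothetical sketch, not Tesla’s code, with assumed thresholds), such a system escalates its response the longer no steering torque is sensed:

```python
# Hypothetical sketch of a hands-on-wheel escalation policy; thresholds are assumed,
# and this is not Tesla's actual implementation. The longer no steering torque is
# sensed, the stronger the system's response.

HANDS_OFF_WARNING_S = 10.0    # assumed time before a visual reminder
HANDS_OFF_ALERT_S = 25.0      # assumed time before an audible alert
HANDS_OFF_DISENGAGE_S = 45.0  # assumed time before assistance disengages

def monitor_action(seconds_without_torque: float) -> str:
    """Return the monitoring response for a given hands-off duration."""
    if seconds_without_torque >= HANDS_OFF_DISENGAGE_S:
        return "disengage assistance"
    if seconds_without_torque >= HANDS_OFF_ALERT_S:
        return "audible alert"
    if seconds_without_torque >= HANDS_OFF_WARNING_S:
        return "visual reminder"
    return "no action"

for t in (5.0, 15.0, 30.0, 60.0):
    print(f"{t:>5.1f}s hands-off -> {monitor_action(t)}")
```

The weakness described above follows directly from this design: anything that applies a steady torque to the wheel resets the hands-off timer, whether or not a human is actually paying attention, which is why camera-based attention monitoring is seen as a stronger complement.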
Autopilot crashes
Of course, the biggest risk with any autonomous driving system is that a crash occurs while the vehicle is under computer control. Level 2 systems like Tesla Autopilot cannot, in theory, be held responsible for a crash because the human driver is always considered to be in charge of the car and must be able to take over at any time to prevent an accident.
Sadly, humans are fallible, and several Tesla crashes have occurred since Autopilot’s launch in which the system was in control of the vehicle. In all of these cases, the human driver is technically at fault because they failed in their responsibility to monitor the car’s behavior. However, the nature of these accidents – especially a string in which Autopilot-driven Teslas collided with parked emergency vehicles, leading to 15 injuries and one fatality – has pushed the NHTSA to open an investigation into the wider safety performance of Tesla’s Autopilot. A key question the administration aims to answer is whether the design and nature of Autopilot make misuse by the human driver likely, whether through fatigue from extended observation of the automated system or through deliberate abuse.
The NHTSA probe began in August 2021, and the administration has now decided to upgrade the investigation, which centers on 16 crashes, to a full engineering analysis after determining that the potential safety defects it found warrant further scrutiny. An engineering analysis is the final stage of an NHTSA investigation before the body decides whether to issue a recall notice for what it views as a safety defect.
The outcome of the NHTSA investigation could significantly impact Tesla if it finds that the company’s cars are unsafe when using Autopilot. A finding against the company could bring financial hardship if Tesla has to add hardware to existing vehicles, or lead to further lawsuits and reputational damage if it is forced to pull the feature.
Below we outline three potential ways the situation could play out:
Optimistic outcome
The best-case scenario for Tesla would see the NHTSA investigation conclude that Autopilot already includes sufficient safety measures in its current form. This would mean Tesla wouldn’t be required to issue a recall, either for its onboard software or for physical monitoring hardware.
This would take pressure off the company as it seeks to grow its sales footprint, and would be seen as something of an official blessing from the US’s road safety regulator that Autopilot is safe to use.
Moderate outcome
Another potential way the NHTSA investigation might play out could see it deliver a verdict that Autopilot does have some safety deficiencies, but none so severe as to require the NHTSA to demand a recall of Tesla vehicles.
In this scenario, the NHTSA might make informal recommendations to Tesla, such as beefing up its driver monitoring strategies or making the detection of emergency vehicles and road safety measures such as signs and cones more reliable. This outcome would likely lead to a few critical headlines highlighting Tesla’s safety shortcomings but would be unlikely to significantly impact the company’s bottom line.
Pessimistic outcome
Finally, the worst-case outcome for Tesla would be a situation where the NHTSA determines that Autopilot has significant safety deficiencies and cannot be used on US roads in its current form. This would likely see a recall issued requiring at least an over-the-air update to the vehicles’ software, and could even extend to physical hardware changes, such as an enhanced driver monitoring system, before Autopilot is allowed back on the roads.
This scenario would cause significant financial and reputational damage to Tesla. There would be noticeable costs if the company had to write new software for an over-the-air update but, if a hardware change were demanded via a recall covering the more than 800,000 Tesla vehicles potentially affected, costs could easily soar into the billions of dollars.
There would also be tremendous fallout from both critics and customers. Many in the latter category paid extra to add semi-autonomous features to their Tesla and, if these features needed to be disabled or substantially reengineered, they might feel compelled to sue Tesla for its failure to deliver onboard services they paid for.
This article was first published on GlobalData’s dedicated research platform, the Automotive Intelligence Center.