A Tesla Crash, Missing Data, and the Cost of Regulatory Inaction
By Sam Abuelsamid, VP of Market Research

Last week, a landmark jury verdict in Florida found Tesla partly liable for a crash that killed Naibel Benavides and severely injured Dillon Angulo. The jury determined that Tesla was one-third responsible for the crash and ordered the company to pay $42.5 million in compensatory damages to the victims and their families. The driver of the Model S was deemed responsible for the remaining two-thirds of the liability. Additionally, the jury held Tesla fully accountable for $200 million in punitive damages.
Based on the evidence presented during the trial, it could be argued that the National Highway Traffic Safety Administration (NHTSA) shares some responsibility for not taking substantial action to regulate Tesla and the broader auto industry.
Following last week's trial conclusion, Fred Lambert of Electrek reviewed a copy of the trial transcripts. What Lambert found was a trove of damning evidence about Tesla's actions following the crash, and it should serve as a warning to everyone about connected vehicles in general and Tesla in particular.
Autopilot is an advanced driver assistance system (ADAS) that Tesla claims makes its vehicles safer than human drivers. The branding of both Autopilot and the more expensive Full Self-Driving (FSD), along with Musk's repeated public comments over the past decade, implies that the software can drive the vehicle more safely than a human. There has never been conclusive evidence to back up these claims. In fact, there is evidence that when used in ways Musk himself has demonstrated, including going hands-off and not watching the road, Autopilot may be less safe.
Even when used as intended, the system has caused numerous crashes, many involving phantom braking. This occurs when the perception software misinterprets data from the car's eight cameras, leading it to believe there is an object on the road that isn't actually there. In November 2022, FSD caused a phantom braking event in the I-80 tunnel on Treasure Island in San Francisco Bay, which resulted in an eight-car pileup.
A History of Inaction
The first known fatal crash with Autopilot occurred in May 2016 when Joshua Brown's Model S drove under a tractor-trailer, slicing off the roof and decapitating the driver. In the years since, the system has improved but still makes frequent errors, such as phantom braking and failing to see obstacles like stationary vehicles. There have been at least 14 known incidents of Tesla vehicles using Autopilot or FSD running into stationary emergency vehicles, including police, ambulances, and fire trucks.
In its report following the investigation of the Brown crash, the National Transportation Safety Board (NTSB) made numerous recommendations that neither NHTSA nor Tesla has acted upon, although other automakers have. The NTSB is purely an investigatory body with no rulemaking or enforcement authority. It is staffed by experts in all areas of transportation safety, including aviation, marine, rail, and road vehicles, and has long been respected globally; it is often dispatched to investigate crashes in other countries.
Among those recommendations were that such ADAS features should be geofenced and limited to the operational design domain (ODD) they were designed for, and that driver monitoring should be robust enough to prevent predictable misuse. The ODD defines the limits within which the system can be used: for example, only on divided highways, only in good weather, or within some other location or speed constraint. More than nine years after Brown's death, NHTSA has never taken action to require such protections. Tesla also refuses to geofence its systems and has extremely limited driver monitoring.
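For illustration, here is a minimal, hypothetical sketch of the kind of ODD gate those recommendations imply: the software checks road class, weather, speed, and geofence status before allowing the assistance feature to engage. The categories and thresholds below are my own assumptions for the example, not Tesla's or any automaker's actual logic.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    road_class: str        # e.g. "divided_highway" or "surface_street"
    weather: str           # e.g. "clear", "light_rain", "snow"
    speed_kph: float
    inside_geofence: bool  # result of a map lookup for the current position

# Illustrative ODD limits; a real system would derive these from validated maps and testing.
ALLOWED_ROAD_CLASSES = {"divided_highway"}
ALLOWED_WEATHER = {"clear", "light_rain"}
MAX_SPEED_KPH = 130.0

def may_engage(state: VehicleState) -> bool:
    """Allow the assistance feature to engage only when conditions fall inside the ODD."""
    return (
        state.inside_geofence
        and state.road_class in ALLOWED_ROAD_CLASSES
        and state.weather in ALLOWED_WEATHER
        and state.speed_kph <= MAX_SPEED_KPH
    )

# A surface-street intersection outside the geofence is rejected.
print(may_engage(VehicleState("surface_street", "clear", 70.0, False)))  # False
```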
As an engineer who spent the first 17 years of my career working on automotive safety systems, including anti-lock brakes and electronic stability control, I learned early on that it’s not enough to develop a system that only meets basic performance requirements. It’s crucial to anticipate all potential ways people might use the system, including misuse. For instance, during an early winter testing session for anti-lock brakes on a frozen lake, my manager started to pump the brakes rapidly. This caused the system to behave unexpectedly and release all the brake pressure. We had to go back and modify some of the control logic to detect this sort of behavior and make adjustments.
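As a rough illustration of the kind of misuse detection that fix involved, here is a hypothetical sketch of logic that flags rapid pedal pumping so a controller can hold brake pressure rather than releasing it. The window and count thresholds are invented for the example and are not values from any production system.

```python
from collections import deque

class PedalPumpDetector:
    """Flags rapid on/off brake-pedal pumping so the control logic can hold pressure
    instead of dumping it. Window and count thresholds are illustrative only."""

    def __init__(self, window_s: float = 2.0, max_applies: int = 4):
        self.window_s = window_s
        self.max_applies = max_applies
        self.apply_times = deque()
        self.pedal_was_pressed = False

    def update(self, t: float, pedal_pressed: bool) -> bool:
        # Record each new pedal application (rising edge of the pedal switch).
        if pedal_pressed and not self.pedal_was_pressed:
            self.apply_times.append(t)
        self.pedal_was_pressed = pedal_pressed
        # Forget applications that have aged out of the rolling window.
        while self.apply_times and t - self.apply_times[0] > self.window_s:
            self.apply_times.popleft()
        # Too many applications inside the window looks like pumping.
        return len(self.apply_times) >= self.max_applies

# Several quick taps within a couple of seconds are flagged as pumping.
detector = PedalPumpDetector()
flags = [detector.update(t=i * 0.3, pedal_pressed=(i % 2 == 0)) for i in range(10)]
print(any(flags))  # True
```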
From Autopilot's original launch in the fall of 2015, Tesla warned drivers to use the system only on divided highways and to keep their eyes on the road and hands on the wheel, which indicates the company anticipated that some drivers would not do so. Choosing not to implement a straightforward method to enforce those restrictions, and not to include a robust driver-monitoring system, was an act of negligence on the part of the company.
NHTSA could have followed the NTSB recommendations and mandated these features, but it chose inaction instead. Even for an agency that has operated reactively rather than proactively since its formation, acting on these recommendations would have fit that reactive model. Nine years later, NHTSA has still failed to act.
When General Motors launched its Super Cruise system in 2017, it included maps to geofence where the system could be engaged, an infrared camera to monitor the driver's eye gaze and head pose, and capacitive sensors in the steering wheel to detect whether hands were on it. Most other automakers have done the same. However, GM's decision to include these features wasn't just about doing the right thing.
It stemmed in part from the aftereffects of a recall of millions of vehicles GM built over a nearly two-decade period with defective ignition switches. The design allowed the switches to be accidentally turned off if the driver hung too much weight from the key or hit the key with their knee. There were years' worth of complaints in the NHTSA defect database, but also years of inaction. Ultimately, in addition to the recalls, GM paid a $900 million criminal settlement to the US Department of Justice and nearly $600 million to settle civil lawsuits.
Subsequently, GM implemented new internal safety review policies to prevent similar issues from happening again. Those reviews delayed the launch of Super Cruise by two years while the features mentioned above were added. Most automakers have faced similar safety problems at some point, including Ford with Pinto fuel tanks, Toyota with sticking accelerator pedals, and many others.
Why hasn't Tesla followed the rest of the industry with geofencing and driver monitoring? Possibly greed. These changes would add only modest cost, but they would not fit Musk's self-driving narrative. Tesla's outsized market value is almost entirely based on the premise that its cars can operate autonomously anywhere using cameras alone. This differs from companies like Waymo, Zoox, and Aurora, which limit where their vehicles can operate to locations where evidence shows they can do so safely. Every other company developing automated driving systems uses multiple sensor types, such as radar and lidar, in addition to cameras.
If Tesla were to geofence its systems, customers would be far less likely to believe that their cars would one day turn into cash-generating robotaxis and pay exorbitant amounts for FSD. Adding better driver monitoring would imply that the system is not safe enough to operate on its own. The reality is that the system is not safe enough.
So What Did Tesla Do After the 2019 Crash?
Tesla has long claimed that it can make a camera-only system work better than other, multi-sensor systems because of all the data it has from millions of drivers using its cars every day. All Tesla vehicles have cellular data connections, and short snippets of data are regularly collected from customer cars and uploaded to company servers for analysis and for use as training data.
In the moments after the 2019 crash that took the life of Naibel Benavides, the Tesla Model S uploaded a data file, including video and activity logs, to Tesla's servers. The company then denied the file's existence and tried to hide it from investigators. The same company that constantly brags about all the data it collects suddenly claimed not to have any data on this particular crash.
It took until 2024 for a forensic investigator hired by the lawyers for Benavides' family to extract evidence from the car's Autopilot computer showing that Tesla had tried to erase the data, along with metadata for what had been sent to the servers. From that metadata, the legal team finally managed to obtain the full data file from an AWS server for examination.
The file contained evidence that Autopilot was indeed active at the time of the crash and that it made fundamental errors; had the system worked as intended, the crash could have been prevented. Among that evidence was proof that Tesla's internal maps had marked the intersection where the crash occurred as a restricted zone, which should have prevented the use of the Autosteer function. The system also failed to warn the driver, who was distracted trying to retrieve the phone he had dropped. As was known from the moment of the crash, the system did not respond to a stationary vehicle in its path, colliding with it and then striking Benavides and Angulo. That behavior is similar to the crashes in which the system has run into stationary emergency vehicles.
This predictable driver misuse of the system could have been prevented as early as 2016 if Musk had instructed his engineers to add safeguards or if NHTSA had taken action on those NTSB findings. Neither happened, and the driver, George McGee, put more trust than was warranted in Autopilot and failed to pay proper attention. McGee has admitted his culpability in the crash and settled with the families.
But we still have a regulatory agency that has been dysfunctional since long before the first Trump administration. It has only gotten worse in the years since and has allowed countless people to die through its inaction. For what it's worth, since Joshua Brown's death, the NTSB has investigated several other Tesla crashes and made similar recommendations, none of which have led to regulation or enforcement.
Even without a regulator, Tesla's management, starting with CEO Elon Musk, could have stopped this. It unethically chose not to, in order to perpetuate the narrative that Teslas are safer than other cars and can drive themselves, a narrative that has in turn inflated the company's stock price.
Whether through incompetence, malice, or something in between, companies and people make mistakes and sometimes do the wrong thing. It is the nature of the world. The key is to learn from those mistakes and to have accountability. Lawsuits like this one are part of the accountability process that will hopefully lead to at least some degree of behavioral change. And even though businesses might not like it, regulation has a role to play in learning from mistakes by setting standards.
It's unlikely that the current administration in Washington will do anything to protect the American public. But the public can still hold Tesla and every other automaker accountable. Drivers should consider disabling connectivity in their vehicles until automakers provide easy access to all of their data and the ability to retrieve it whenever they want. It's also up to consumers to vote with their wallets when companies misbehave in how they use our data or how they handle safety. Ultimately, we should demand that our lawmakers stand up for safety by holding agencies like NHTSA accountable for their actions, or lack thereof.