Artificial Intelligence
How Do Human Drivers Compare to Autonomous Vehicles?

Research suggests autonomous vehicles are involved in accidents much less often than human drivers. Is this because the technology is truly superior? Or is it simply because there are far fewer self-driving cars on the road than human drivers?
The Debate Over Autonomous Vehicle Safety
Data suggests autonomous vehicles are far safer than human drivers. For instance, in 17 of 25 of Waymo's most serious crashes, a human driver rear-ended the robotaxi, which suggests people were at fault in most of the accidents that caused significant injuries.
However, that doesn't mean self-driving cars never make mistakes or have close calls. They have driven the wrong way down one-way streets, gotten stuck looping endlessly in traffic circles and misclassified road hazards, requiring human intervention.
As of 2026, the National Highway Traffic Safety Administration (NHTSA) has launched a probe into Waymo after receiving 22 reports of its robotaxis crashing or violating traffic laws. The agency is also investigating Tesla, whose driver assistance systems were involved in 467 crashes resulting in 54 injuries and 14 deaths as of 2024, as well as General Motors' Cruise LLC for similar violations.
The NHTSA has a low tolerance for mistakes because this technology is still unproven. While software bugs and classification errors are bound to happen during the early implementation phase, the agency must be strict to ensure people's safety.
The Strengths and Weaknesses of Each Driver Type
Self-driving technology has existed for over a decade, but the vehicles weren't road-ready for much of that time. For instance, while Waymo was founded in 2009, it did not receive regulatory approval to expand its robotaxi service to freeways until December 2025.
Until then, most of the self-driving car company's trips were city miles in five key metro areas: Los Angeles, Phoenix, the San Francisco Bay Area, Atlanta and Austin. Concerned citizens worry that the likelihood of a deadly crash will rise drastically on freeways, where speeds are much higher.
As autonomous vehicles expand into new territories, they must adapt to new driving conditions. Most commercially available systems are only Level 2 driver assistance, such as highway pilot features, and few can handle every aspect of driving. Level 4 high automation remains limited to geofenced robotaxi fleets, and Level 5 full automation is not yet available.
Some cars use a camera array instead of LiDAR, a sensing method that uses pulses of laser light to measure distances to objects. A vision-only approach makes them vulnerable to changing weather and road conditions: fog, heavy rain and bright glare can impair their perception. By contrast, human drivers can draw on their other senses, as well as common sense.
Still, human perception can't match LiDAR in every situation. Although the technology isn't universally better, its three-dimensional mapping lets cars perform well in conditions where humans may struggle, such as darkness or glare. A combination of vision and spatial data is ideal.
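To make the idea concrete, a toy sketch of fusing a camera detection with a LiDAR range reading might look like the Python below. This is not any manufacturer's actual perception stack; every name, threshold and score in it is a made-up assumption used only to illustrate why two sensing modalities beat one.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Detection:
        label: str          # e.g., "pedestrian", "vehicle"
        confidence: float   # 0.0-1.0 score from the camera classifier
        distance_m: float   # estimated range to the object, in meters

    def fuse(camera: Detection, lidar_range_m: Optional[float], visibility: float) -> Detection:
        """Blend a camera detection with a LiDAR range reading.

        visibility is a rough 0-1 score; fog, rain or glare lower it.
        A LiDAR return at a similar range raises confidence, since two
        independent sensors agree; poor visibility alone lowers it.
        """
        conf = camera.confidence * max(visibility, 0.2)
        dist = camera.distance_m
        if lidar_range_m is not None:
            dist = lidar_range_m                          # direct laser measurement wins
            if abs(lidar_range_m - camera.distance_m) < 2.0:
                conf = min(1.0, conf + 0.3)               # modalities agree, trust more
        return Detection(camera.label, conf, dist)

    # Glare weakens the camera's view, but LiDAR still confirms an object ahead.
    print(fuse(Detection("pedestrian", 0.55, 14.0), lidar_range_m=13.2, visibility=0.4))

Real perception pipelines are vastly more sophisticated, but the principle is the same: independent sensing modalities cover for each other's weaknesses.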
The Consequences of Technology Failure
Even with LiDAR, camera arrays and artificial intelligence-enabled decision-making, driverless vehicles still make mistakes. Such failures may be relatively rare, but they happen. Software bugs can cause a car to mistake a pedestrian for a pothole. Sensor faults can make a robotaxi mistake the right lane for a curb. These situations aren't entirely hypothetical.
Despite software updates and a voluntary recall, Waymo robotaxis have repeatedly violated traffic laws. From August 2025 to November 2025, they illegally passed stopped school buses an average of 1.5 times per week in just one school district. In December 2025, after issuing Waymo its 20th violation, the Austin Independent School District publicized videos of the incidents. Only then did Waymo announce it would voluntarily recall some vehicles.
Waymo's roughly two dozen incidents pale in comparison to the more than 7,000 citations the district issued to human drivers in the same span. However, Assistant Chief Travis Pickford of the Austin Police Department said that 98% of people who receive one violation do not receive another. Meanwhile, Waymo vehicles continued to illegally pass school buses week after week.
If a person were to repeatedly pass a stopped school bus, endangering schoolchildren, they'd have their license revoked. When a robotaxi does it, the engineers can only push an update and hope it solves the problem. Technically, regulators could revoke the company's permit to operate in the city, but they're more likely to issue a fine.
Sometimes, consumers take matters into their own hands. Tesla has been sued multiple times over injuries and deaths associated with its driver assistance technology. Cases like these often settle before going to trial, but juries have awarded damages before.
Public Perception of Autonomous Vehicles
One study with over 5,000 respondents found that people are more likely to focus on an autonomous vehicle's role in a crash, even when it wasn't at fault. They are also slightly more likely to support suing the manufacturer.
Self-driving car companies have released reports demonstrating their superior safety. Some independent studies even support their claims. If data shows driverless cars are safer than their human-driven counterparts, why are people more critical of them?
For one, self-driving cars may appear in far fewer accidents simply because there are far fewer of them. In 2025, there were 34,340 autonomous vehicles on the road. With over 242 million licensed drivers sharing that road, there were about 7,047 human drivers for every self-driving car.
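That ratio is easy to check from the figures above, and the same short snippet hints at the fairer way to compare the two groups: normalize by exposure instead of counting raw crashes. The normalization function is purely illustrative; no crash or mileage figures are assumed here.

    licensed_drivers = 242_000_000    # licensed drivers cited above
    autonomous_vehicles = 34_340      # autonomous vehicles on the road in 2025

    print(round(licensed_drivers / autonomous_vehicles))   # about 7,047 drivers per AV

    # Raw crash counts will always favor the much smaller fleet. A fairer
    # comparison normalizes by exposure, such as crashes per million miles driven.
    def crashes_per_million_miles(crashes: int, miles_driven: float) -> float:
        return crashes / (miles_driven / 1_000_000)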
There is also a psychological effect. When a person causes an accident, people have someone to blame. If the driver was distracted or under the influence, court-mandated interventions can prevent them from making the same mistake.
Autonomous vehicles share software, so if one makes a mistake, they can all repeat it. Moreover, it can be challenging to identify the root cause — software bugs and sensor faults are harder to identify than signs of a DUI or distracted driving.
The Reliability of Company-Reported Safety Data
The safety and performance data reported by self-driving car companies make their technology seem vastly superior to human drivers, but that data may be biased. It wouldn't be the first time such companies have released false or misleading information.
In October 2023, an autonomous Cruise vehicle ran over a pedestrian who’d been thrown into its path by a human-driven car. It then dragged her over 20 feet instead of making an emergency stop. When filing an incident report, Cruise omitted that fact.
In a call with the NHTSA the next day, the company's verbal summary omitted any details of the dragging. Cruise even showed a video of the accident with that portion cut out. When it submitted its official report that afternoon, there was still no mention of the dragging. The company was eventually fined criminally for submitting a false report to influence a federal investigation.
This one incident doesn’t prove all manufacturers have nefarious intentions. However, taking all data — especially reports based on small sample sizes or authored by employees of self-driving car companies — at face value could have catastrophic consequences.
How to Foster Trust in Self-Driving Cars
Instead of speeding to be the first to unveil and approve the latest driverless technology, automakers and lawmakers should pump the brakes. Safety should take priority, even if it means parking driverless cars for a while.
Integrating more advanced AI could address people's concerns by enabling intelligent, context-aware decisions in real time. The cars would need to rely on edge servers, which sit at the network's edge close to users, to keep latency low. This would require a significant up-front investment, but the payoff could be substantial.
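As a rough sketch of why proximity matters, the decision about where to run a heavyweight model could look something like the Python below. The latency budget, timings and function are illustrative assumptions, not measurements from any real deployment.

    REAL_TIME_BUDGET_MS = 50   # illustrative deadline for a driving decision, in milliseconds

    def choose_compute(edge_rtt_ms: float, cloud_rtt_ms: float) -> str:
        """Pick the nearest compute tier that can respond within the deadline."""
        if edge_rtt_ms <= REAL_TIME_BUDGET_MS:
            return "edge server"        # nearby server keeps the round trip short enough
        if cloud_rtt_ms <= REAL_TIME_BUDGET_MS:
            return "cloud"              # rarely true for time-critical maneuvers
        return "onboard computer"       # fall back to the vehicle's own hardware

    # Illustrative numbers: a metro-area edge node versus a distant cloud region.
    print(choose_compute(edge_rtt_ms=15, cloud_rtt_ms=120))   # -> "edge server"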
This approach may also have a positive psychological effect, since generative models can communicate in plain language. If an accident happens, the system can explain its reasoning or deliver a human-readable report, humanizing the AI.
Beyond improving the driverless technology itself, the best way to foster the general public's trust is to leverage rigorous simulations, training scenarios and testing. The more peer-reviewed, verifiable data people have, the more likely they are to trust this technology.
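A single scenario check in such a simulation-based regression suite might look like the sketch below. The simulator object and its methods are invented for illustration and do not correspond to any existing tool.

    def test_stops_for_school_bus(simulator):
        """Regression check: the vehicle must stop for a bus with flashing lights."""
        scene = simulator.load_scenario("stopped_school_bus_flashing_lights")
        result = simulator.run(scene, max_seconds=30)
        assert result.vehicle_stopped_behind_bus
        assert not result.passed_bus_while_lights_flashing

Running thousands of such scenario tests before every software release, and publishing the results, is the kind of verifiable evidence that could move public opinion.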
Improving Safety and Performance in Driverless Cars
Autonomous car companies are required to report crashes to government agencies, so the public will always have insight into the safety of autonomous vehicles. As this technology becomes more common, people will have more historical data to draw on, enabling them to see trends and forecast future changes.
The sooner automakers invest in safety, the better this data will look. Decision-makers should consider leveraging advanced edge AI and realistic training simulations to optimize safety and performance.








