Autonomous driving has had a black eye over the past few weeks and months, thanks to a string of accidents.
Some of these crashes have been fatal, such as the Tesla Inc. (TSLA) accident in California, in which the driver was using Tesla's driver-assist feature, Autopilot. Another occurred in Arizona, where a self-driving Uber struck and killed a pedestrian.
So what exactly was behind that accident?
According to sources, the cause wasn't a hardware malfunction with the car's cameras or sensors, but rather a software issue. The vehicle's software was programmed to ignore "false positives," meaning the self-driving system wouldn't stop for harmless objects such as a plastic bag or a piece of road debris.
The problem is that the system was tuned to ignore too much. When automakers test these systems, they put a human safety driver behind the wheel to take over if there's an issue; in the autonomous driving world, that handoff is referred to as a disengagement.
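The trade-off described above can be sketched in a few lines of code. This is a purely illustrative toy, not Uber's actual software: the `Detection` class, the `plan_braking` function, and the threshold values are all hypothetical, chosen only to show how a filter meant to suppress false positives can also suppress real obstacles.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "plastic_bag" (illustrative labels)
    confidence: float  # detector's confidence score, 0.0 to 1.0

def plan_braking(detections, ignore_below=0.8):
    """Return True if the vehicle should brake for any detection.

    Raising `ignore_below` filters out noise (plastic bags, debris),
    but if it is set too high it also filters out real obstacles --
    the failure mode the article describes.
    """
    return any(d.confidence >= ignore_below for d in detections)

# A real pedestrian detected with moderate confidence is discarded
# when the threshold is tuned too aggressively:
scene = [Detection("plastic_bag", 0.30), Detection("pedestrian", 0.65)]
print(plan_braking(scene, ignore_below=0.8))  # False: no braking triggered
print(plan_braking(scene, ignore_below=0.5))  # True: pedestrian triggers braking
```

The point of the sketch is that there is no free lunch in choosing the threshold: every notch that reduces nuisance stops also raises the risk of missing a genuine hazard.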
In this particular case, though, the driver of the autonomous Uber vehicle briefly looked away before the car struck the pedestrian. That sequence of events is forcing many companies to rethink how they conduct these tests.
For instance, while many companies rely on real-road testing, synthetic testing is becoming more prevalent. On Alphabet's (GOOGL) (GOOG) recent conference call, management said Waymo, the company's self-driving segment, had logged 5 million real-world test miles, and that its synthetic testing would soon log 5 billion miles.
Nvidia also announced a new product that allows companies to take advantage of synthetic testing.