There were a few big takeaways from the Autonomous Driving 2018 Conference in Novi, MI.
Perhaps the largest? For nearly every possible driving scenario, someone can imagine a situation that results in an accident or fatality. Simply put, until human drivers disappear from the road entirely, likely decades from now, there will be accidents.
On-board cameras can detect lanes in the road, allowing the car to drive itself down the highway. "What happens if someone goes out and paints fake lines on the road?"
That's the question Harsha Vemulapalli, the head of design at Bosch, was asked by his child while discussing autonomous driving technology. In various discussions and presentations over the past few days, this type of "what if" scenario came up often -- not necessarily childlike, but from a point of safety.
In essence, there's no way to safeguard against every incident; it's simply not possible. We can use virtual reality and synthetic testing -- something Alphabet (GOOGL) does with its Waymo unit and Nvidia (NVDA) does via its impressive DRIVE Constellation platform -- to help accelerate the learning curve and improve safety.
Intel's Jack Weast certainly acknowledged that synthetic testing is a key component of developing safe AVs, but said other methods have to be involved, too. That's why Intel is working on Responsibility-Sensitive Safety (RSS), essentially a reactive system that keeps passengers out of harm's way.
One might push back with the argument: isn't that the concept of every self-driving car system?
To a degree, yes. All the cameras, radars and sensors are giving constant feedback to the autonomous system and reacting to what is happening. But Intel's system hinges on two things: Being safe and being effective.
If our AV won't merge into traffic or stops at every little shadow, it won't be very effective; that in itself can pose a threat to other drivers. But if it barges into bumper-to-bumper traffic and ignores potential threats or obstacles, then it isn't safe, either.
Intel's Role in Autonomous Driving
Using a mathematical approach and a suite of sensors, Intel's RSS system constantly monitors the vehicle's safe operating envelope. Its algorithms tell the AV system how closely to follow the car in front of it, or how close is too close for a car in the next lane (on the highway, for example).
When these calculations detect a violation, the car reacts to restore safe conditions. For instance, if the lead car slows down, then so does the AV. If two cars are driving side by side and one cuts into the AV's lane, the AV slows until the violating vehicle passes and the correct safe distances are re-established.
I can almost hear the readers at this point saying, "yeah but what if...<insert concern>?"
And that's sort of the whole point. We can't predict every single possible scenario. But by using Intel's RSS system, AVs will be able to account for other vehicles, obstacles, their speeds and distances, and what that means for how the AV should be operating at that exact moment.
It can account for the possibility of a pedestrian darting into the street or a vehicle turning into traffic by knowing the maximum speeds, distances and actions that need to take place in order to stay safe.
Take driving distance, for instance. By monitoring the car in front of it going 50 mph, the AV can calculate the distance it needs to keep from the lead vehicle by knowing how long it takes to come to a complete stop. This following distance would be different at 70 mph, just as it would be different at 30 mph.
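For illustration, that following-distance logic can be sketched with the minimum-safe-distance formula from Mobileye's published RSS work: assume the worst case, where the rear car accelerates for its full response time and then brakes gently while the lead car brakes as hard as possible. The parameter values below (response time, acceleration and braking limits) are illustrative assumptions, not Intel's actual calibration:

```python
def rss_min_following_distance(v_rear, v_front, rho=1.0,
                               a_max_accel=3.0, b_min_brake=4.0,
                               b_max_brake=8.0):
    """Minimum safe longitudinal gap in meters (RSS-style worst case).

    v_rear, v_front: speeds in m/s of the following and lead cars.
    rho: response time (s) before the rear car starts braking.
    a_max_accel: worst-case acceleration of the rear car during rho.
    b_min_brake: gentlest braking the rear car commits to afterward.
    b_max_brake: hardest braking the lead car might apply.
    All parameter defaults are illustrative, not real calibrations.
    """
    v_after_response = v_rear + rho * a_max_accel
    d = (v_rear * rho                                  # travel during response time
         + 0.5 * a_max_accel * rho ** 2                # extra travel from accelerating
         + v_after_response ** 2 / (2 * b_min_brake)   # rear car's braking distance
         - v_front ** 2 / (2 * b_max_brake))           # lead car's braking distance
    return max(d, 0.0)  # never negative: a gap of zero is the floor

MPH_TO_MS = 0.44704  # meters per second per mph
for mph in (30, 50, 70):
    v = mph * MPH_TO_MS
    print(f"{mph} mph -> minimum gap {rss_min_following_distance(v, v):.1f} m")
```

As the article notes, the required gap grows with speed: under these sample parameters the gap at 70 mph is several times the gap at 30 mph, because braking distance scales with the square of velocity.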
One conference attendee asked if the AV would follow the lead car more closely or further away than a human driver would. Some observers may think the latter, because it's the "safer" choice. However, because an AV can sense speed changes and react in a much faster manner than humans, Weast said the AV could actually follow the lead car more closely than one would think.
Where does all of this fit into Intel's bottom line?
Investors may remember Intel's $15.3 billion purchase of Mobileye back in 2017. While some investors viewed the deal as a big overpay, we're starting to see the deal pay off.
Not only is Intel well-positioned in the autonomous driving race, but it's also winning contracts. Intel has been working with BMW in a non-exclusive AV partnership and recently inked a deal to put its EyeQ5 chips in 8 million cars. The deal, with an unnamed European automaker, begins in 2021; financial terms have not been disclosed.
Still, 8 million vehicles is a pretty large number, even if it's over a span of several years. For reference, that's almost half the annual U.S. selling volume for new vehicles.
In short? Don't write off Intel when it comes to autonomous driving.
This article is commentary by an independent contributor. At the time of publication, the author had no positions in the stocks mentioned.