Among other things, the GPU giant's announcements included new partnerships with Volkswagen, Uber and Baidu (BIDU), fresh details about its Drive Xavier and Pegasus autonomous driving platforms, and new software solutions such as Drive IX, Drive AR and AutoSIM.
Shortly afterwards, I got a chance to talk with Danny Shapiro, Nvidia's Senior Director of Automotive, about the company's CES announcements and Nvidia's auto strategy in general.
Just as Tesla Inc. (TSLA) argues that its Nvidia-powered second-gen Autopilot system (launched in late 2016) is powerful enough to support full autonomy once the software catches up, Shapiro says Nvidia is aiming to deliver the computing power and sensor support needed for Level 4 autonomy (the ability to take over from drivers in most, but not all, situations) or greater in the short term. "The software-defined car can get new apps and updates so it might not be a fully self-driving car when you buy it, but it can get self-driving capabilities over the life of the vehicle through software updates," he said.
With regard to Nvidia's software work, Shapiro suggests there are parallels between the company's approaches to working with game developers and automakers. "We don't write the video games," he said. "We write all the tools, all the libraries...there's also the rendering tools, the shaders...the game developers can build their applications on top."
And the same effectively goes for cars. Nvidia will supply auto OEMs with things such as software stacks, developer tools, runtimes and simulators, but automakers will decide what types of vehicles and form factors Nvidia's solutions go into, as well as fine-tune the user experience.
I asked Shapiro about Nvidia's mapping strategy -- autonomous cars need very detailed maps, and Alphabet's (GOOGL) Waymo and Intel's (INTC) Mobileye have both made big investments in mapping platforms. He noted Nvidia is partnering with top third-party mapping providers such as Here, TomTom and Baidu, and is hoping to leverage its AI strengths. "Mapping is a very local endeavor...you want to create a system that can scale. It's very labor-intensive, so the more it can be automated [and] the more it can be done in the data center, the more AI is going to be involved."
Nvidia, it should be noted, isn't (unlike Waymo and Mobileye) developing its own maps. Rather, with the help of partners, it wants to help others use its solutions to develop them -- and not just via the company's in-vehicle hardware and software, but also its deep learning-optimized server GPUs.
Shapiro declared that it's "very likely" that cars supporting Level 4 autonomy will hit the market within two to three years, provided regulatory approvals arrive. "It may be in closed environments, airports, shuttles...Disney World...there may be towns or cities with special geo-fenced areas...but I do believe we'll see it very soon," he said. "It's extremely robust now, and we're just working on making it as perfect as we can."
Given that Nvidia is working with a number of automakers and parts suppliers that compete against each other, I asked Shapiro to what extent Nvidia is able to use data from one partnership to enhance the solutions it provides to the industry in general. Not surprisingly, Shapiro noted Nvidia has NDAs (non-disclosure agreements) that prevent it, for example, from sharing data related to its Volkswagen project with Uber. But he added there are still ways in which Nvidia can make use of what it learns.
"We're building our software stack and our neural nets and various foundational tools and libraries," he said. "[D]ata is not going to be shared from automaker to automaker, but we're able to leverage a lot of that data...to refine our neural nets." In addition, Shapiro pointed out that the data produced by an automaker relying on one set of sensors (cameras, radar, LIDAR) for its self-driving platform might not be all that useful to an automaker relying on a different set of sensors.
Regarding Nvidia's Drive AR platform, which provides an augmented-reality view of a driver's surroundings through a car's infotainment system, Shapiro noted the solution isn't meant solely for autonomous cars. The same, of course, holds for Drive IX, which uses sensors and AI to understand what's going on inside and outside of a car, takes appropriate actions and sends driver alerts. Examples include alerting a driver or passenger opening a door that a cyclist is nearby, and using face recognition to automatically open a car's trunk when the owner approaches carrying items in both hands.
During the press event, CEO Jensen Huang noted that Nvidia could often see multiple Drive computing boards designed into cars to handle different tasks -- for example, one to handle autonomous driving, and another for Drive IX. I asked Shapiro if Nvidia ultimately sees everything being consolidated onto one board. He suggested that while the feature sets of individual boards will naturally grow, there will still often be a need for multiple products.
"If you look at the history of the computing industry, you never have enough...because the applications just become more and more complex," Shapiro said. "We don't see this any different in the car space."
To back up his point, Shapiro pointed to the high-end Audi A8 sedan, whose new Traffic Jam Pilot feature is promoted as a Level 3 autonomous driving solution (i.e., it can take over from drivers in some, but not all, situations). It features six Nvidia processors, among them one for Traffic Jam Pilot, one for the infotainment system, two for rear-seat entertainment systems and one for the rear-seat remote control.
"We continue to take these big things and shrink them down," Shapiro said, "but then we do multiple of the big things."