A string of late-September reports and announcements has driven home a few truisms about where Nvidia Corp. (NVDA) stands, as much of the tech world, as well as parts of the broader corporate world, starts to make AI-related investments a key R&D priority:
Nvidia is facing growing direct and indirect competition from other chipmakers developing AI-related solutions, and by no means has a monopoly on developing quality silicon.
The large developer and hardware ecosystem that Nvidia has built up still gives it a major edge in certain battles, and at least a selling point in others.
Nvidia is doing its utmost to keep would-be rivals at bay, both by spending heavily on product development and working hard to grow its ecosystem and partner base.
The Nvidia-related story getting the most attention of late: CNBC reported on Sept. 22 that Tesla Inc. (TSLA) is working on a processor for a next-gen autonomous driving system that will use Advanced Micro Devices Inc.'s (AMD) intellectual property and be manufactured by GlobalFoundries (long responsible for producing many of AMD's chips). Nvidia, which last year saw its Drive PX 2 autonomous driving platform adopted by Elon Musk's company to power the second-gen Autopilot system found in all new Tesla cars, sold off on the news, while AMD rallied.
And a few days before the Tesla/AMD report, Intel Corp. (INTC) formally announced that it's a key supplier to Alphabet Inc.'s (GOOGL) Waymo self-driving hardware/software unit, and that Chrysler Pacifica test cars featuring Waymo systems contain Intel chips. It added (confirming what some suspected) that Intel and Waymo have been partners since 2009, with Waymo systems relying on Intel Xeon CPUs (more often than not found in servers), Ethernet transceivers and programmable chips (FPGAs).
The FPGAs, a product of Intel's $16.7 billion Altera acquisition, are used to handle machine vision algorithms. On the flip side, Waymo isn't using chips from Mobileye, the top supplier of advanced driver-assistance system (ADAS) vision processors, which supplied Tesla's first-gen Autopilot system and which Intel recently bought for $15.3 billion.
Also of note: Bloomberg reported on Sept. 26 that in the future, Tesla plans to use Intel processors to power the giant and versatile infotainment systems found in its cars, after years of relying on Nvidia's Tegra app processors. But this is a small loss for Nvidia compared with Tesla's Autopilot systems. Whereas Nvidia is believed to charge hundreds of dollars for Drive PX 2 -- Mizuho has estimated an average selling price (ASP) of roughly $300 -- ARM-architecture app processors such as Nvidia's Tegra chips are often sold in volume for less than $30, and usually don't go for more than $50.
Jim Cramer and the AAP team hold positions in Nvidia and Alphabet for their Action Alerts PLUS Charitable Trust Portfolio.
Do the Tesla and Waymo stories suggest Nvidia is losing its edge in the autonomous driving wars? Not really -- especially when one remembers how long Intel and Waymo have been partnering, and that Tesla's change of heart likely has something to do with the fact that Jim Keller, who led the AMD team responsible for the company's well-received Zen CPU core architecture (used by its latest PC and server CPUs), is now in charge of Tesla's chip work. But the stories do show that autonomous driving is going to be a more competitive AI-related market for Nvidia than some others.
There are a couple of reasons for this. One is that while Nvidia's Tesla server GPUs (not to be confused with Tesla the automaker) remain the dominant platform for the demanding job of training a deep learning algorithm to handle a task -- for example, to translate text or make sense out of what the cameras and sensors in a self-driving system have picked up -- autonomous driving systems within cars are engaging in inference, the relatively less demanding job of running trained algorithms against real-world data.
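The training/inference split described above can be sketched in a few lines of code. This is a toy illustration only -- a tiny logistic-regression "model" standing in for a deep learning algorithm -- not any vendor's API. Training loops over the data many times to adjust weights (the compute-heavy job done on server GPUs), while inference is a single cheap forward pass with frozen weights (what a car's onboard chip does against live sensor data):

```python
import numpy as np

# Toy illustration of training vs. inference; all names are illustrative.
rng = np.random.default_rng(0)

# Synthetic "sensor" data: 2 features, binary label (e.g. obstacle / clear)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Training: many passes over the data, repeated gradient updates ---
w = np.zeros(2)
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)  # gradient of the log-loss
    w -= 0.5 * grad                # gradient-descent step

# --- Inference: one forward pass with the now-frozen weights ---
new_sample = np.array([1.0, 1.0])
prediction = sigmoid(new_sample @ w) > 0.5
print(prediction)  # prints True for this synthetic data
```

The asymmetry is visible even at this scale: training touches every sample hundreds of times, while inference is a single dot product and threshold, which is why less powerful (or more specialized) chips can handle it.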
Certain Nvidia GPUs, including the two dedicated GPUs found on Drive PX 2 boards, are pretty good options for inference: Nvidia says over 1,200 companies are using its GPUs for inference workloads, including many cloud giants. But other solutions have followings, too. FPGAs, though typically offering less raw horsepower than rival GPU products, are used not only by Waymo, but also in Microsoft Corp. (MSFT) and Baidu Inc.'s (BIDU) cloud data centers, thanks to their unmatched programmability. And Apple Inc. (AAPL), preferring to have inference done on its devices rather than on cloud servers, put a dedicated AI co-processor (called the Neural Engine) inside the A11 Bionic chip found in the iPhone X and 8/8 Plus.
The other reason is that a self-driving system doesn't merely need to handle AI-based algorithms, but also do a lot of general-purpose processing work related to all the data it's taking in and acting on. That's why Intel's autonomous driving platform includes a Xeon CPU in addition to an FPGA, and also why Nvidia's Drive PX 2 features two Tegra processors -- each contains six CPU cores, including two custom Nvidia cores -- in addition to standalone GPUs.
It might also be partly why Tesla is reportedly working on a processor for Autopilot that uses AMD's IP. Though AMD, unlike Nvidia, hasn't launched a hardware and software development platform for autonomous driving, it does have the ability to provide high-quality CPU and GPU IP, both of which Tesla is bound to find useful.
That said, although it might be losing Tesla, Nvidia still has a pretty long list of Drive PX partners, including Audi, Mercedes-Benz, Toyota, Volvo, Volkswagen and major auto parts suppliers Bosch and ZF. And odds are that most (if not all) of these companies won't be as keen as Tesla, which has been showing an Apple-like obsession with vertical integration and creating proprietary technology platforms, to develop their own autonomous driving processors. Most of them will likely adopt Drive PX Xavier, a powerful successor to Drive PX 2 due to see general availability in Q4 2018.
Meanwhile, though the Tesla news took the spotlight, Nvidia made a slew of announcements at a Beijing developers conference on Monday that highlighted its lead in the training portion of the deep learning market, as well as how serious it is about maintaining a healthy chunk of the data center inference market and growing its ecosystem.
Among other things, Nvidia:
- Unveiled TensorRT 3, a third-gen version of its inference optimization and runtime software for Tesla GPUs. Among other things, the company claims unmatched performance when running Google's popular TensorFlow machine learning framework.
- Announced that Alibaba (BABA), Baidu (BIDU) and Tencent (TCEHY) are using its flagship Tesla V100 server GPU (also embraced by U.S. cloud giants) to train neural networks. Nvidia's big developer ecosystem edge in the training space is among the reasons why the V100 is quite popular.
- Announced Lenovo, Huawei and other Chinese server OEMs are launching servers based on the company's HGX reference platform for building systems containing V100 GPUs.
- Said it's teaming with Tencent and graphics card maker Leadtek to set up Chinese AI training centers.
- Disclosed that #2 Chinese e-commerce firm JD.com (JD) is using Nvidia's Jetson platform for embedded systems to power autonomous delivery drones and robots.
The moves come as Intel gets set to challenge Nvidia in the training space by launching a specialized chip relying on technology obtained via last year's purchase of startup Nervana Systems. Intel has also launched a new Xeon Phi co-processor line (codenamed Knights Mill) that's much better at inference than its current Xeon Phi offerings. And they come not long after Google rolled out its second-gen Tensor Processing Unit (TPU), which, unlike its predecessor, can be used within the company's cloud servers for training TensorFlow-based algorithms as well as for inference.
Clearly, the AI processor landscape is very complicated. Claims that one company or another is going to be the only winner look pretty dubious at this point. But on the whole, Nvidia has to like where it stands right now, even after accounting for its recent setback with Tesla.