Nvidia's (NVDA) spectacular earnings and guidance last week provided good evidence that the GPU leader is on its way to turning the powering of artificial intelligence workloads into a 10-figure annual business.
Since then, it hasn't wasted time announcing moves that grow its AI ecosystem and could help keep hungry rivals at bay.
On Monday, Nvidia and IBM (IBM) announced the latter is rolling out a software toolkit called IBM PowerAI for IBM servers containing Nvidia's Tesla accelerator cards, which are widely used to handle a popular type of AI known as deep learning.
IBM also rolled out a new server, the Power S822LC, that's optimized for AI and other high-performance computing (HPC) workloads: It pairs Big Blue's mammoth Power8 CPUs with Tesla accelerators and Nvidia's NVLink high-speed GPU interconnect.
That day, Nvidia also announced it's teaming with Microsoft (MSFT) on a solution that lets businesses create AI workloads by using Microsoft's Cognitive Toolkit software on systems containing Tesla GPUs. The solution is available both via Microsoft's Azure public cloud platform, and for corporate data centers through Nvidia's Tesla-powered DGX-1 HPC system.
And on Tuesday, Nvidia and Microsoft disclosed Azure server virtual machines (VMs) that feature Tesla GPUs are now generally available to Azure cloud computing users, four months after launching in preview mode. This comes less than two months after Amazon (AMZN) unveiled cloud computing VMs that support up to 16 Tesla GPUs.
Does Alphabet (GOOGL) have its own GPU-supporting VMs? Well, it's about to. The web giant announced yesterday that the Google Cloud Platform will start offering VM instances supporting up to 8 GPUs in early 2017. Tesla cards will be provided for VMs meant for AI/HPC workloads; AMD (AMD) FirePro cards will be provided for ones enabling remote workstations.
Shares are up 6% today, adding to Friday's giant post-earnings gains and making fresh highs. Positive remarks from Jim Cramer during Tuesday night's episode of Mad Money may be helping.
This week's announcements come after Nvidia disclosed last Thursday its data center revenue -- dominated by Tesla, but also covering sales of Nvidia's GRID GPUs for server-hosted PCs and cloud gaming -- rose a stunning 193% annually in the October quarter to $240 million.
Booming demand for Tesla accelerators from enterprises and cloud service providers played a big role; the latter group is turning to Nvidia's cards both to enable its own consumer-facing AI-powered services and to provide cloud services that third parties can leverage for their AI work.
Nvidia also announced its automotive revenue rose 61% to $127 million. While this is dominated for now by sales of Tegra processors for infotainment systems, AI-powered solutions for autonomous driving systems, such as Nvidia's Drive PX2 module (recently adopted by Tesla Motors), will play a growing role in the years to come.
As the IBM and Microsoft deals show, Nvidia's unmatched AI software ecosystem has become a key differentiator as it looks to fend off rivals. The list of competitors includes Intel's (INTC) Xeon Phi accelerator cards and Nervana Systems ASICs, programmable chips (FPGAs) supplied by Xilinx (XLNX) and Intel's Altera unit, and custom ASICs such as Google's Tensor Processing Unit.
In addition to supporting several popular third-party software toolkits, Nvidia's CUDA GPU programming model has been widely adopted by AI engineers and researchers, as has its cuDNN deep neural network software library.
Beyond that, the popularity of Tesla cards for AI work has given Nvidia a lot of data about how its cards are used by neural networks, and the company has put a lot of work into optimizing its cards for specific deep learning tasks.
For its new Pascal GPU architecture, Nvidia has launched a high-end Tesla card optimized for the compute-intensive job of "training" a neural network for a specific task (say, identifying objects in a photo or making sense of a natural-language voice search). It has also launched less powerful cards that can be used for "inferencing," in which a trained neural network powers services in real time.
To date, a large portion of AI-related Tesla sales have been related to training. But as AI touches more and more of the services offered by cloud giants to billions of consumers, the inferencing opportunity could be even bigger.
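The training/inference split can be sketched in a few lines of plain Python. This is a toy NumPy illustration under simplified assumptions, not Nvidia code: training iteratively adjusts a model's weights (the compute-heavy step GPUs accelerate), while inference is a single cheap forward pass through the finished model.

```python
import numpy as np

# Toy example: fit a simple linear model, then run inference with it.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0])   # target: a known linear map (no noise)

# "Training": repeatedly nudge the weights via gradient descent.
w = np.zeros(2)
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= 0.1 * grad

# "Inferencing": a single forward pass using the trained weights.
def infer(x):
    return x @ w

print(np.round(w, 2))  # learned weights, close to [2, -1]
```

Real deep-learning workloads repeat a far heavier version of that training loop across billions of parameters, which is why training has driven Tesla sales so far; inference, by contrast, runs once per user request, but at the scale of cloud services it adds up.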
Nvidia clearly faces no shortage of competition in either area. But no one can accuse the company of being complacent about its current lead.