Nvidia Teams With Google on New Cloud Computing Services

Nvidia's new A100 flagship GPU is powering a new family of Google cloud computing instances -- including one that provides access to 16 GPUs.

Two months after Nvidia (NVDA) unveiled its latest flagship server GPU, the first cloud computing instances to rely on the product are rolling out.

On Tuesday, Nvidia and Google (GOOG) announced that the Google Cloud Platform (GCP) is launching a series of cloud computing instances -- known as the A2 VM family -- powered by Nvidia’s new A100 GPU. The most powerful of these instances provides access to 16 A100 GPUs, as well as 1.3TB of system memory; customers with less demanding needs can buy access to smaller instances.
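As a rough illustration of what provisioning one of these instances looks like, the sketch below uses Google's gcloud CLI. The machine-type name `a2-megagpu-16g` corresponds to Google's announced top-end A2 configuration (16 A100 GPUs, 1.3TB of memory); the instance name, zone, and image settings are placeholder assumptions, not details from the announcement.

```shell
# Hedged sketch: creating the largest A2 VM instance with the gcloud CLI.
# a2-megagpu-16g is the announced 16-GPU A2 machine type; the zone and
# the Deep Learning VM image family shown here are assumptions.
gcloud compute instances create my-a100-vm \
  --zone us-central1-a \
  --machine-type a2-megagpu-16g \
  --image-family common-cu110 \
  --image-project deeplearning-platform-release \
  --maintenance-policy TERMINATE
```

Smaller A2 configurations (with 1, 2, 4, or 8 GPUs) would be selected the same way, by swapping in a different machine type.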

Nvidia positions the A100 as its next-gen solution for AI training, AI inference and traditional high-performance computing (HPC) workloads, as well as a way to offload big-data processing from server CPUs. Accordingly, the A2 VM instances are being pitched as an option for using GPUs to accelerate a variety of demanding workloads.

In addition, Google says that the A100 will soon be supported by its Kubernetes Engine service, which deploys clusters of apps running within containers, and by its Cloud AI Platform, which helps developers build, run and manage AI/machine learning models.

It might not be too long before other major public cloud providers roll out A100-powered computing instances. While officially announcing the A100 in May, Nvidia said that (in addition to Google) Amazon Web Services (AMZN), Microsoft Azure (MSFT) and Oracle (ORCL) are all planning to offer A100-powered services, as are Alibaba (BABA), Baidu (BIDU) and Tencent’s (TCEHY) public cloud platforms. All of these platforms already offer cloud computing instances that rely on older Nvidia GPUs.

Though its exact performance gains vary from workload to workload, Nvidia has asserted that the A100, which is based on its new Ampere GPU architecture, significantly outperforms its last flagship server GPU (the Tesla V100, which launched in mid-2017) when running popular AI-related workloads. Gains of up to 20x are promised for training and inference performed using certain number formats, such as the A100’s new TF32 format.

Though Nvidia didn’t officially announce the A100 until May 14, it began shipping the GPU to select clients during its April quarter. Initial A100 shipments, along with healthy demand for existing server GPUs and GPU-powered servers, helped Nvidia’s Data Center segment post April quarter revenue of $1.14 billion, up 18% sequentially and 80% annually.

Nvidia shares were up 1.4% to $399.17 in mid-day trading on Tuesday while Alphabet shares were rising 0.6% to $1,509.73.

Alphabet and Nvidia are holdings in Jim Cramer’s Action Alerts PLUS Charitable Trust Portfolio.