Nvidia's (NVDA) spectacular earnings and guidance last week provided good evidence that the GPU leader is on its way to turning the powering of artificial intelligence workloads into a 10-figure annual business.
Since then, it hasn't wasted time announcing moves that grow its AI ecosystem and could help keep hungry rivals at bay.
On Monday, Nvidia and IBM (IBM) announced the latter is rolling out a software toolkit called IBM PowerAI for IBM servers containing Nvidia's Tesla accelerator cards, which are widely used to handle a popular type of AI known as deep learning.
IBM also rolled out a new server, the Power S822LC, that's optimized for AI and other high-performance computing (HPC) workloads: It pairs Big Blue's mammoth Power8 CPUs with Tesla accelerators and Nvidia's NVLink high-speed GPU interconnect.
That day, Nvidia also announced it's teaming with Microsoft (MSFT) on a solution that lets businesses run AI workloads using Microsoft's Cognitive Toolkit software on systems containing Tesla GPUs. The solution is available both via Microsoft's Azure public cloud platform and for corporate data centers through Nvidia's Tesla-powered DGX-1 HPC system.
And on Tuesday, Nvidia and Microsoft disclosed that Azure server virtual machines (VMs) featuring Tesla GPUs are now generally available to Azure cloud computing users, four months after launching in preview mode. This comes less than two months after Amazon (AMZN) unveiled cloud computing VMs that support up to 16 Tesla GPUs.