Intel's (INTC) purchase of artificial intelligence startup Nervana Systems, made at a reported cost of $408 million, says a lot about how large Intel considers the opportunity for powering AI workloads to be. It also says a lot about how serious Intel is about challenging Nvidia's (NVDA) leadership position in this budding market.
Nervana, founded by former Qualcomm (QCOM) researchers, has developed hardware and software that allow companies to create and run deep learning models. Deep learning, a subset of machine learning, is a type of AI in which algorithms identify patterns within data fed into a system -- for example, the text from articles on a specific subject, or video from a camera attached to a self-driving car -- and become better at understanding those patterns as more data is taken in.
Nervana has created a software library said to provide "a curated, enterprise-grade collection of the world's most advanced deep learning models"; an open-source software framework for creating new models; and (notably) the Nervana Engine, a chip built from the ground up to process deep learning algorithms. With the help of 32GB of on-chip High Bandwidth Memory, the Nervana Engine is claimed to provide far more raw processing power than "today's state-of-the-art GPUs."
Healthcare, energy, financial services, agriculture, automotive and online services are touted as potential end-markets. Intel says it will use Nervana's software expertise to improve its own library, and the Nervana Engine to advance its AI portfolio and improve the performance of its Xeon server CPUs and Xeon Phi processor cards -- the company can use its enormous chip manufacturing and R&D investments to help with the latter effort.
By "state-of-the-art GPUs," Nervana is undoubtedly talking about the ones found in Nvidia's Tesla graphics cards, which power AI workloads for Facebook (FB), Microsoft (MSFT), Baidu (BIDU) and many other cloud service providers (Google, which uses its proprietary Tensor Processing Unit for AI, is a notable exception). Strong demand from cloud providers helped Nvidia's data center revenue -- it covers both Tesla cards and Nvidia's GRID GPUs for cloud gaming and virtual desktops -- rise 63% annually in the May quarter to $143 million. Nvidia recently upped the ante by launching the Tesla P100, a card promised to deliver more than a 12x increase in AI performance relative to its predecessor.
Intel claims over 97% of servers used to handle machine learning workloads today run on Xeon CPUs. But many of those servers undoubtedly also contain Tesla cards. To date, Intel has been trying to take on Nvidia with its Xeon Phi accelerator cards -- the newest Xeon Phi chips, codenamed Knights Landing, can function both as co-processors paired with regular Xeon CPUs and as standalone processors. But while Xeon Phi has been adopted to an extent for various high-performance computing (HPC) and analytics workloads, Nvidia still dominates when it comes to AI. Hence Intel's willingness to pay up for Nervana.
The deal is also another attempt by Intel to keep at bay the slew of chipmakers developing server CPUs based on ARM Holdings' (ARMH) instruction set. These chipmakers, which include Qualcomm, AMD (AMD), Cavium (CAVM) and AppliedMicro (AMCC), have sought to end Intel's near-monopoly position in much of the server CPU market via low-power chips that are often optimized for specific tasks. Cloud giants such as Google have at least been willing to hear the ARM vendors' sales pitches.
Intel has been countering by using its resources to create architecture-level solutions for cloud providers and server makers that go beyond just CPUs. These efforts have included developing Xeon Phi, a high-speed interconnect fabric for servers and an architecture that lets computing, storage and networking resources be pooled and independently upgraded. They've also included the $16.7 billion acquisition of chipmaker Altera, whose FPGAs can (among many other things) be programmed on the fly to handle new deep learning algorithms.
While outlining Intel's five-part strategy for a post-PC world in April, CEO Brian Krzanich promised his company will "drive more and more of the footprint of the data center to Intel architecture." By spending heavily to obtain Nervana's silicon and software, Krzanich is putting Intel's money where his mouth is.