Less than two weeks after Nvidia (NVDA) announced that it’s buying Arm for up to $40 billion, Arm is unveiling some major advances for a part of its business that Nvidia is particularly interested in.
On Tuesday morning, Arm unveiled the Neoverse V1 and N2, a pair of CPU core microarchitectures meant for servers and other “infrastructure” hardware such as networking gear and storage equipment. The V1 is already available to CPU developers, while the N2 is currently sampling and will see a full release next year.
The N2 represents an evolution of Arm’s Neoverse N1 microarchitecture, which has been used by the likes of Amazon Web Services (AWS) and privately held Ampere Computing to develop server CPUs. The V1, by contrast, is a brand-new platform meant to enable CPUs that deliver bleeding-edge performance.
Relative to the N1, Arm claims the V1 will deliver more than a 50% performance gain in terms of instructions per clock (IPC) -- that is, the gain for workloads relying on a single CPU thread, assuming CPU clock speeds are unchanged. The N2, which is meant for CPUs featuring less powerful but more power-efficient cores, is promised to deliver a roughly 40% IPC gain relative to the N1.
As tech website AnandTech points out, real-life performance gains for V1 and N2-based CPUs relative to N1 CPUs will probably be larger still, since they’ll rely on more advanced manufacturing process nodes -- for example, Taiwan Semiconductor’s (TSM) recently commercialized 5nm node -- that will help drive clock speed increases.
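As a back-of-the-envelope illustration of why IPC and clock gains compound, single-thread performance scales roughly as IPC × clock speed. The sketch below uses Arm’s claimed IPC figures, but the 10% clock-speed uplift is a purely hypothetical stand-in for a process-node improvement, not a number from Arm:

```python
# Rough single-thread performance model: performance ~ IPC x clock speed.
# IPC gains below are Arm's claims; the clock uplift is a hypothetical
# assumption standing in for a newer process node.

def relative_performance(ipc_gain: float, clock_gain: float) -> float:
    """Combined single-thread speedup when IPC and clock gains compound."""
    return (1 + ipc_gain) * (1 + clock_gain)

# Claimed ~50% (V1) and ~40% (N2) IPC gains, plus an assumed 10% clock uplift.
v1_vs_n1 = relative_performance(0.50, 0.10)
n2_vs_n1 = relative_performance(0.40, 0.10)

print(f"V1 vs. N1: {v1_vs_n1:.2f}x")  # 1.65x
print(f"N2 vs. N1: {n2_vs_n1:.2f}x")  # 1.54x
```

Under these assumptions, a 50% IPC gain plus a 10% clock gain yields a 1.65x overall speedup rather than 1.6x, which is why real-world gains can exceed the headline IPC number.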
Along with its IPC gains, Arm is touting the V1’s support for the Scalable Vector Extension (SVE), an architectural feature that can drive performance gains for workloads such as simulation, analytics and the running of deep learning models.
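A distinguishing trait of SVE is that it is vector-length agnostic: the same binary can run on hardware with different vector widths, with predicate masks handling loop tails. The Python sketch below is a conceptual illustration of that loop model, not real SVE code; the function name and lane counts are invented for the example:

```python
# Conceptual sketch of SVE's vector-length-agnostic loop model.
# This is NOT real SVE intrinsics code -- it only mimics the idea that
# one loop works for any hardware vector width, using a per-lane
# predicate (like SVE's "whilelt") to mask off inactive tail lanes.

def sve_style_sum(data: list[float], vector_lanes: int) -> float:
    """Sum `data` in chunks of `vector_lanes`, masking the final partial chunk."""
    total = 0.0
    i = 0
    while i < len(data):
        # Predicate: lane j is active only while i + j is in bounds.
        predicate = [i + j < len(data) for j in range(vector_lanes)]
        chunk = [data[i + j] if active else 0.0
                 for j, active in enumerate(predicate)]
        total += sum(chunk)  # one "vector" reduction step
        i += vector_lanes
    return total

data = [1.0, 2.0, 3.0, 4.0, 5.0]
# The same loop produces the same result on "narrow" and "wide" hardware.
print(sve_style_sum(data, 4))  # 15.0
print(sve_style_sum(data, 8))  # 15.0
```

The practical upshot is that software compiled for SVE does not need to be rewritten or recompiled when a CPU vendor ships a wider vector unit.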
Though x86-instruction-set CPUs from Intel (INTC) and AMD (AMD) still dominate the server CPU market, Arm CPUs are starting to make some headway, particularly among Internet/cloud service providers. AWS now claims hundreds of customers for cloud computing instances powered by its Arm-based Graviton2 CPUs, which are priced aggressively and have held up well in some (though not all) benchmarks pitting them against Intel and AMD server CPUs.
x86 server CPUs still have a major edge over Arm-based rivals in terms of software support. However, some major developers, such as Red Hat and VMware (VMW), now provide a measure of support for Arm, and some popular open-source software is also now Arm-compatible.
Moreover, as Cloudflare (NET) CEO Matthew Prince recently mentioned to TheStreet while discussing his firm’s interest in Arm server CPUs, Apple’s (AAPL) plan to migrate its Mac lineup to Arm-based processors is expected (given how many developers code on Macs) to drive greater developer interest in writing Arm server software.
All of this is surely not lost on Nvidia, which now gets over 40% of its revenue from data center products. During a conference call that followed Nvidia’s announcement of its deal to buy Arm, Nvidia CEO Jensen Huang promised his company would “turbocharge” Arm’s server CPU R&D work.
Huang also left the door open to Nvidia developing its own Arm server CPUs. At the same time, though, he promised to continue supporting third-party CPU developers, and suggested Nvidia is more interested in creating end-to-end silicon and software platforms that pair Arm server CPUs with Nvidia GPUs and the data processing units (DPUs) of recently acquired Mellanox Technologies, which can offload networking, storage and security functions from a CPU.
With regulators expected to closely scrutinize the Nvidia-Arm deal and Nvidia forecasting that it will take about 18 months to close, it could still be a while before we see such platforms arrive. But in the meantime, Arm is laying a lot of useful groundwork for them.