Just as the PC CPU market has been overdue for some real competition to challenge Intel's (INTC) hegemony, the same could arguably be said for server CPUs. While proprietary offerings from IBM (IBM) and Oracle maintain a presence on the high end, and ARM-based chips from Cavium (CAVM) and AppliedMicro (AMCC) have made modest progress elsewhere, Intel's Xeon server CPU line is as dominant as it has ever been, commanding outsized share among enterprises and cloud giants alike.

And just as AMD's recent Ryzen CPU launch (in spite of disappointing 1080p gaming performance) is making the PC CPU market more competitive than it has been in at least several years, a pair of announcements this week -- one from AMD, the other from Qualcomm (QCOM) and Microsoft (MSFT) -- stand to make the server CPU market a lot more interesting. But due to both Intel's investments and the big differences that exist between the PC and server markets, taking significant share from Intel will be easier said than done.

On Tuesday, AMD shared technical details about the first server CPU based on the Zen CPU core architecture that underpins the Ryzen line. The chip, codenamed Naples and due to ship in the second quarter, features 32 cores and supports 64 simultaneous threads. Intel's Xeon E5-2699A v4, the most powerful CPU within the Xeon E5 line that Naples takes aim at, supports only 22 cores/44 threads, albeit at a higher base clock speed (2.4GHz vs. 1.4GHz) and turbo clock speed (3.6GHz vs. 2.8GHz).

Naples is meant for the kind of modular single and dual-CPU servers that cloud giants tend to deploy by the thousands via hyperscale data center architectures. Its advanced memory system -- useful for analytics workloads and in-memory databases -- has drawn attention. A dual-CPU server can support a whopping 4TB of memory, and deliver 170GB/s of memory bandwidth per CPU. The latter exceeds the 140GB/s of effective bandwidth AMD claims the E5-2699A delivers.

AMD also states a dual-CPU Naples server will support 128 lanes of PCI Express 3.0 connectivity for things such as solid-state drives (SSDs) and networking adapters, more than the 80 supported by a dual-CPU E5-2699A server. And each chip has a 64MB level 3 cache that AMD promises is optimized for energy-efficient processing; the E5-2699A comes with a 55MB cache.
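For readers who want those numbers side by side, here is a minimal sketch that simply tabulates the figures cited above (AMD's disclosed Naples specs versus Intel's published E5-2699A v4 specs). The field names and structure are illustrative, not anything taken from either vendor's documentation.

```python
# Side-by-side comparison of the spec figures cited in this article.
# Values come from AMD's Naples disclosures and Intel's published
# E5-2699A v4 specs; the dictionary layout itself is illustrative.
specs = {
    "AMD Naples":        {"cores": 32, "threads": 64, "base_ghz": 1.4,
                          "turbo_ghz": 2.8, "mem_bw_gbs": 170,
                          "pcie_lanes_2p": 128, "l3_mb": 64},
    "Intel E5-2699A v4": {"cores": 22, "threads": 44, "base_ghz": 2.4,
                          "turbo_ghz": 3.6, "mem_bw_gbs": 140,
                          "pcie_lanes_2p": 80, "l3_mb": 55},
}

naples, xeon = specs["AMD Naples"], specs["Intel E5-2699A v4"]
for key in naples:
    ratio = naples[key] / xeon[key]  # >1.0 means Naples leads on this spec
    print(f"{key:>14}: Naples {naples[key]:>6} vs Xeon {xeon[key]:>6} ({ratio:.2f}x)")
```

Running it makes the trade-off plain: Naples leads on core count, memory bandwidth, PCIe lanes and cache, while the Xeon keeps a clear edge on base and turbo clock speeds.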

Qualcomm and Microsoft, meanwhile, announced that Qualcomm's Centriq 2400 server CPU -- which began sampling in December and will see volume shipments in the second half of 2017 -- will be used by Microsoft's Azure public cloud platform. Notably, as part of the effort, Microsoft plans to optimize for the Centriq 2400 a version of its Windows Server operating system that will run in its own data centers.

In addition, a motherboard design based on the 2400 will be submitted to the Open Compute Project (OCP), a Facebook-led effort to create low-power, open-source data center hardware designs. The design is based on Microsoft's Project Olympus open-source server hardware initiative. Cavium's ThunderX2 ARM-based CPU will also support Olympus, which initially featured only Intel processors, and is likewise being evaluated for Azure use.

The 2400 has 48 cores based on Qualcomm's proprietary Falkor ARM-based server CPU core architecture. Like Naples, it's meant for single and dual-CPU servers. The 2400's memory and connectivity specs aren't as impressive as Naples', but between its use of low-power ARM cores and an advanced 10-nanometer manufacturing process, it could deliver impressive power efficiency.

Much as the pricing of Ryzen desktop CPUs has undercut that of comparable Intel Core i5 and i7 CPUs, look for Naples and the Centriq 2400 to undercut pricier Xeon E5 chips. The most powerful E5 v4 CPUs have prices north of $2,000, with the E5-2699A sporting a "recommended customer price" of $4,938.

Also helping AMD and Qualcomm's cause: Many cloud providers have long wanted a credible second option to Intel, if only to keep Intel honest. Google previously flirted with using Qualcomm server CPUs, and has shown an interest in deploying IBM's Power CPUs, normally found in high-end IBM servers. Getting even, say, 15% of all of Google's or Microsoft's processor orders would translate into a lot of business.
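To put that 15% figure in rough perspective, here is a purely illustrative back-of-the-envelope calculation. The order volume and average selling price below are hypothetical placeholders chosen only to show the arithmetic, not estimates of Google's or Microsoft's actual buying.

```python
# Back-of-the-envelope: revenue from winning a slice of one cloud giant's
# annual server CPU orders. Both inputs are hypothetical placeholders.
annual_cpu_orders = 500_000       # hypothetical: CPUs bought per year
average_selling_price = 2_000     # hypothetical: dollars per CPU
share_won = 0.15                  # the article's "say, 15%" scenario

revenue = annual_cpu_orders * average_selling_price * share_won
print(f"Hypothetical annual revenue: ${revenue:,.0f}")  # $150,000,000
```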

But while Intel shouldn't take AMD and Qualcomm's moves lightly, it's not exactly time to panic either. For starters, AMD and Qualcomm are each announcing a single server CPU, whereas Intel supplies dozens tailored to various use cases. In addition to selling numerous E5 parts, Intel also provides the Xeon E7 line for giant enterprise servers running mission-critical workloads, and the E3 line for cheap "microservers" and customers seeking chips with integrated GPUs. And for low-power servers and embedded systems, Intel supplies both its Xeon D line and Atom server CPUs.

Moreover, in addition to its listed offerings, Intel does brisk business selling custom Xeon chips to cloud giants, something its R&D budget leaves it uniquely positioned to do on a large scale. The company has also developed custom chips specifically meant to run Oracle databases.

Intel's resources also give it the ability to optimize its off-the-shelf chips for popular workloads. For example, the E5 v4 line was optimized to improve the performance of server virtual machines, and E7 chips were tuned for databases and other transactional workloads. Likewise, major server software developers such as Oracle, SAP (SAP) and VMware (VMW) put effort into fine-tuning their products for Intel's chips.

And Intel's sales pitch to enterprises and cloud providers doesn't just revolve around the performance of its CPUs. It's also selling them on server platforms that can include things such as Intel's Xeon Phi co-processors (useful for many high-performance computing (HPC) workloads), Omni-Path fabric (used to connect servers at high speeds), 3D XPoint next-gen memory and 100-gig silicon photonics transceivers. And with the help of its recently acquired Altera unit, Intel is pitching cloud providers and others on products that pair a Xeon CPU with an FPGA that can be reprogrammed on the fly to handle things like new AI algorithms.

The fact that major enterprise IT buyers tend to be risk-averse, and often prefer to stick with tried-and-trusted hardware and software platforms, also helps Intel's cause. This particularly holds when competing against Qualcomm and other ARM server CPU vendors, given the ARM server software ecosystem still pales relative to that for Intel and AMD's x86 CPUs.

Last but not least, whereas Intel is cutting its PC R&D spending, it plans to increase its Data Center Group's R&D spend by about 25% this year. Having its server CPUs power a large share of the many new cloud, analytics, AI and HPC workloads that will sprout up in the coming years is a priority for the company, as is selling various complementary solutions.

Intel has also promised that in the future, new manufacturing processes will debut on server CPUs rather than PC CPUs. In the interim, the company plans to launch in the coming months a new Xeon E5 line based on its Skylake architecture, which launched for PCs in 2015 and is more advanced than the Broadwell architecture used by current E5 chips.

For all these reasons, it's best to take a wait-and-see attitude on AMD and Qualcomm's latest products. The chips could gain a following for certain use cases, but there's every reason to think that Intel will remain a colossus in the broader server processor market.