Chipmaker Xilinx's (XLNX) new CEO is betting big on growing his company's data center exposure. And if the right deal comes along, this bet could include the kind of major acquisition that Xilinx has been averse to making in the past.
That was one of several key takeaways from an interview with Victor Peng, who in January succeeded long-time Xilinx CEO Moshe Gavrielov as chief executive. Peng was previously Xilinx's COO, and held R&D-related executive roles at the company from 2008 to 2017.
Xilinx, along with Intel's (INTC) Programmable Solutions Group (formerly Altera), dominates the market for field programmable gate arrays (FPGAs) and other hardware-programmable chips. Though traditionally not as efficient at handling a given task as application specific integrated circuits (ASICs) that have been hard-coded to do it, the programmability of FPGAs makes them popular for projects in which hardware needs to be quickly designed or updated to support new features, or for which volumes are too low to justify the high development costs of an ASIC.
Here are seven key topics that I discussed with Peng:
1. "Data Center First" Strategy
Although chips going inside of data center hardware account for a relatively small percentage of Xilinx's sales, Peng expects this to change in the coming years, as the company aggressively pursues opportunities in fields such as AI acceleration, video processing, high-performance computing (HPC) and storage workloads. After laying needed groundwork in fiscal 2019 (which ends in March 2019), Peng expects "more material" data center revenue in fiscal 2020.
The fact that U.S. and Chinese cloud giants have been rapidly growing their investments in hyperscale data centers aids Xilinx's cause. "We're engaged with all the hyperscalers," Peng said, while adding he's "not at liberty" to talk about some of the engagements. However, Amazon Web Services (AWS) has unveiled cloud computing services that provide access to Xilinx FPGAs, and Baidu (BIDU) has disclosed it's using Xilinx chips both to offer cloud services to third parties, and to help power its own services.
AI investments by both cloud providers and enterprises also give Xilinx motivation to invest in its data center efforts. And so does the fact that the slowing of Moore's Law and other tech trends are making accelerator cards (whether featuring FPGAs, GPUs or something else) a more compelling computing option for many types of workloads that were once handled solely by CPUs. Storage was singled out by Peng as one such area.
Peng stressed, however, that Xilinx's new "data center first" strategy isn't just about sales opportunities. The pace of innovation is much faster in the data center than in many of Xilinx's other markets, and thus being actively engaged in this field can pay dividends elsewhere. In addition, whereas much of the company's ecosystem in other markets consists of hardware developers, a major data center presence helps give Xilinx a large base of software developers interacting with its products using popular programming languages and software frameworks.
Xilinx has built a large ecosystem of data center partners.
2. The New ACAP Platform
A key pillar for Xilinx's data center push is its adaptive compute acceleration platform (ACAP) -- a new line of hardware-programmable chips that also feature powerful software-programmable compute engines, and thus can adjust to a workload's needs on the fly. "This is so different from what came before it...this is such a big leap it's really not an FPGA," Peng insists, while adding that nothing currently offered by either Xilinx or Intel can compare in terms of rapid adaptability.
ACAP is said to be the product of over four years and $1 billion worth of R&D. The first product line based on it, codenamed Project Everest, will rely on a next-gen 7-nanometer manufacturing process, and begin shipping to customers in 2019. Xilinx claims Everest chips can deliver up to 20 times the performance on AI workloads of existing Xilinx FPGAs that rely on a 16-nanometer manufacturing process.
3. Battling Nvidia and Intel in AI
Though competition has begun picking up a little, Nvidia's (NVDA) Tesla server GPUs easily remain the most popular option for the demanding task of training AI/deep learning algorithms to handle a particular task. This has much to do with how well-suited GPUs are for running hundreds of small tasks in parallel.
But the field of inference -- the task of running trained AI algorithms against real-world data and content -- has long been more competitive. While Nvidia is making some headway in this space, a lot of inference work is still handled by Intel server CPUs. In addition, FPGAs (both Xilinx and Intel's) and ASICs (some developed with Broadcom's (AVGO) help) have gained a foothold.
Nvidia has talked up the performance and programmability of its inference-optimized GPUs. "[W]riting software is a lot easier than designing chips," Nvidia CEO Jensen Huang quipped when asked about FPGAs in November. However, FPGAs often have an edge in power consumption, and Xilinx is betting ACAP will change the playing field when it comes to programmability.
Peng also noted that low latency can be important for many inference workloads -- for example, fraud-detection or voice assistant services -- and that FPGAs tend to be very good at keeping latency down.
Peng also highlighted that there's a "much broader set of applications and workloads" that Xilinx can accelerate which aren't good fits for GPUs -- database search and storage acceleration, for example. Nvidia might counter by noting that there are HPC/supercomputer workloads (outside of AI) for which GPUs are better fits.
4. 5G and Mobile Infrastructure
The mobile infrastructure market has long been one of Xilinx's largest; its FPGAs are a common sight within base stations and mobile backhaul equipment. And like those of other chipmakers, Xilinx's mobile infrastructure sales have been pressured in recent years by weak carrier capex.
Peng asserts that 5G rollouts will give Xilinx a major boost -- he expects 5G to be bigger than 4G in terms of deployments -- while noting he doesn't see deployments taking off until 2020. He also claimed Xilinx has a big edge on Intel when it comes to design wins for hardware used in initial 5G deployments. "In 5G, we have virtually all of the early deployments," Peng claimed.
The broader adoption of massive MIMO antenna arrays -- set to be widely used by 5G base stations, but also due to go into many other wireless systems -- was also said to be a growth driver. Here, Peng argued Xilinx's RFSoCs, which pair programmable hardware blocks with high-performance RF data converters, give it an edge.
5. Autonomous Driving
Peng stated Xilinx is the #2 player -- behind Intel's Mobileye unit -- in the market for driver-assistance (ADAS) processors. He added that while Xilinx's autonomous driving (AD) engagements haven't gotten much attention (Nvidia and Intel/Mobileye have both made plenty of headlines), the company does have some design wins for AD systems. "Stay tuned in terms of customer announcements...over the course of the next few months, you will hear about significant customer wins," he said.
Still, in the near-to-intermediate term, Xilinx expects ADAS to be a much larger revenue contributor. "Fully autonomous driving is even further out than 5G" in terms of major deployments, Peng insisted.
6. Cryptocurrency Mining
Though not nearly as exposed to the cryptocurrency mining space as Nvidia and AMD (AMD), Xilinx is now doing some business selling chips for mining hardware. "We're talking about the low tens of millions" in quarterly revenue, Peng said, while noting Xilinx has one "very large customer" in the space that's "forecasting more need." That customer might be China's Bitmain, which is using Xilinx SoCs to serve as the control system for its ASIC-powered mining hardware. (For context, Xilinx's December quarter sales totaled $631 million.)
7. M&A Possibilities
Not surprisingly, Peng declined to comment about potential M&A interest in Xilinx. With Broadcom having reportedly shown M&A interest in Xilinx back in 2015, there's been some speculation the company could target Xilinx now that the Trump Administration has squashed its bid for Qualcomm (QCOM). However, Xilinx sports a near-$20 billion market cap, and Broadcom suggested on last week's earnings call that it's looking for deals that won't significantly increase its net debt balance.
On the flip side, Peng indicated that Xilinx, which historically has limited its M&A activity to small "tuck-in" deals, is now open to making a larger purchase, provided it grows Xilinx's data center exposure and furthers its adaptive computing vision.
"If we did anything, it would be in the context of...is it something very significant that helps us accelerate our strategy?" Peng said. "It would have to move the needle pretty substantially as opposed to what we can do organically."
If Xilinx does choose to go in this direction, its large market cap and healthy balance sheet (the company has $1.8 billion in net cash) could make it fairly easy to digest a sizable deal.