Some of the technology that will go into a custom AMD (AMD) solution for a massive supercomputer will eventually find its way into off-the-shelf products.

That was one of the things that CEO Lisa Su shared as she discussed AMD's product strategy during a Thursday conversation with TheStreet that followed AMD's well-received Wednesday launch event for its second-generation Epyc server CPUs (code named Rome). In addition, as Intel (INTC) highlights its attempts to create extensive "platform-level" data center solutions amid tougher competition from AMD, Su went over AMD's efforts to build platforms around its server CPUs with the help of partners.

Here's the second part of a recap of notable comments Su made during her interview with TheStreet. The first part, which includes remarks from Su about expected near-term Rome adoption and her plans to stay at AMD, can be found here.

Making AMD's Supercomputer Technology More Widely Available

In May, AMD and supercomputer maker Cray (CRAY) (set to be acquired by HP Enterprise (HPE)) announced a $600 million-plus deal to provide the Department of Energy (DOE) with Frontier, a system due out in 2021 that's set to be the world's most powerful supercomputer by far. Notably, whereas AMD to date has used its high-speed Infinity Fabric interconnect to connect the chips within a single CPU package, Frontier will feature nodes that use Infinity Fabric to connect a custom AMD server CPU with four custom AMD GPUs.

Su, who also mentioned that AMD expects strong adoption of Rome for high-performance computing (HPC) systems, indicated that some of what the company has developed for Frontier could make its way into more general-purpose HPC offerings.

"I do think...you're going to see many of these technologies come into standard products," she said. "As we go forward, our goal is to push the envelope in HPC. And for general-purpose HPC applications, you'll see some of those technologies come into play."


How AMD's CPUs and GPUs will be paired within the Frontier supercomputer. Source: AMD.

Future Server CPU Launches

Whereas Rome is arriving two years after AMD launched its first-gen Epyc CPUs (code named Naples), the company has indicated that its third-gen Epyc CPUs, code named Milan, will arrive in mid-2020, putting them about a year away from release. Intel, meanwhile, said in May that it wants to launch new server CPU platforms every four to five quarters going forward, compared with a historical pace of every five to seven quarters.

When asked if AMD has a goal for how quickly it wants to launch new server CPU platforms in the future, Su declined to share a precise target, but did suggest that launch intervals will be closer to what's expected to exist between Rome and Milan than what existed between Naples and Rome.

"We aim to be competitive," she said. "So if I put that stake in the ground, I think, whether it's four or five or six quarters, it's in that range. But we aim to be very competitive."

Next-Gen Platforms and Manufacturing Processes

With Rome having just launched, Su remained tight-lipped about Milan's expected performance gains. But she did say that AMD is pleased with its Zen 3 CPU core microarchitecture, whose design work recently concluded and which will power Milan as well as future Ryzen PC CPUs. Su also said that both Zen 3 and a successor microarchitecture known as Zen 4 will feature "plenty of ideas" that AMD engineers came up with based on what they learned from Naples and Rome. "We'll talk more about it as we get closer," Su said.

Su also said AMD will share more later about its plans for using Taiwan Semiconductor's (TSM) next-gen 5-nanometer (5nm) manufacturing process, which is set to enter volume production during the first half of 2020. Notably, Milan won't be using TSMC's 5nm process, but rather a 7nm+ process that recently entered volume production and delivers moderate improvements relative to the 7nm process used by Rome.

"We look at technology intercepts very carefully. We want to be early, but not too early," Su said. She added that TSMC remains "a great partner" for AMD.

No Plans to Sell Servers

Whereas GPU archrival Nvidia (NVDA), in addition to working with many top server and storage makers, has begun selling powerful systems for AI and HPC applications that feature its Tesla server GPUs, Su indicated AMD is content to simply be a CPU and GPU supplier to third-party server OEMs and contract manufacturers (ODMs).

"We work really closely with OEM and ODM partners...I think it's the right strategy," she said. "We believe in this idea of an open ecosystem, so that you get the best of the best. And we continue to work on how...we specify those systems, so they take the best advantage of our hardware."

Integrating CPUs with GPUs

For both PCs and game consoles, AMD offers processors that place a CPU and a GPU on the same chip. Given this, as well as the growth of graphics-intensive server workloads such as cloud gaming and virtual desktops, is AMD interested in launching processors that pair a server CPU with a GPU?

Su didn't rule out the possibility, but also suggested that -- much as it plans to do for the Frontier supercomputer -- AMD could focus more on enabling high-speed connectivity between server CPUs and GPUs via its Infinity Fabric.

"I wouldn't count anything out," she said. "[But] in server, it's not so much whether it's integrated on the same chip that's important. What's really important is the connectivity between CPUs and GPUs. We're really focused on our Infinity Architecture, and how do we ensure that connectivity is as efficient as it can possibly be."

Expectations for Single-CPU Servers

Since launching Naples in 2017, AMD has often talked up its efforts to provide single-CPU (single-socket) servers with advanced features that Intel has traditionally reserved for servers with two or more CPUs. And four months after Intel improved its support for single-socket servers a bit, AMD has doubled down on its single-socket push via Rome, unveiling -- in addition to many dual-socket CPUs -- aggressively priced single-socket CPUs that have up to 64 cores, support up to 4TB of RAM and feature plenty of memory and input/output (I/O) connectivity bandwidth.

When asked about AMD's expectations for single-socket Rome sales, Su said interest in single-socket servers is growing, but also suggested it's still early days. "[It's] still one of those areas where people have to get used to it," she said. Su added that -- at a time when the adoption of GPUs, programmable chips (FPGAs) and other accelerators within servers is steadily growing -- AMD strongly believes in letting customers decide just how much CPU power they want in a server.

Partnering With Xilinx and Memory Makers

AMD and FPGA giant Xilinx (XLNX) have a common rival in Intel, and so it's no surprise that the companies have teamed to develop data center solutions that pair AMD CPUs and Xilinx FPGAs. Su suggested there's more to come on this front.

"We are pleased with the partnership with Xilinx. I think the teams are very technically engaged," she said, while adding the companies are focused on the interoperability of their products and "pushing the edge" when it comes to supporting high-speed interconnect technologies such as PCIe 4.0. Regarding applications for which AMD sees its server CPUs being used with Xilinx FPGAs, Su said cloud environments and edge computing systems (often used to process data from IoT devices) are among the early use cases.

Meanwhile, Intel continues to heavily promote its Optane next-gen memory offerings -- which rely on the 3D XPoint technology that Intel co-developed with Micron (MU) and attempt to strike a middle ground between DRAM and flash memory in terms of performance, cost and density -- for use with its Xeon server CPUs. Against that backdrop, Su didn't rule out the possibility of working with Micron to support its upcoming 3D XPoint-based offerings. She also hinted at developing new solutions meant to address the memory bandwidth bottlenecks that can exist at times for server CPUs.

"We are open to partnering across the ecosystem...especially in memory. It's so important to bring memory closer to the CPU," she said. "We have a great partnership with Micron, Samsung (SSNLF) [and] Hynix...We think pushing the envelope on that connection between CPU and memory is very important going forward."

Investing in Software

When talking about their competitive strengths, both Intel and Nvidia frequently highlight the massive investments they make in optimizing their chips for popular software and, more broadly, in building out their developer ecosystems. In May, Intel said it employs over 15,000 software engineers; for comparison, AMD had 10,100 employees overall at the end of 2018.

However, as its revenue base grows, AMD has been steadily growing its software-related work for both its CPU and GPU lineups, and -- a day after virtualization software giant VMware (VMW) announced that it will support Rome's advanced memory encryption features -- Su stressed that software investments are a company priority.

"If you look at where we're adding resources, we're just adding a ton of resources in software," she said. Su added that in addition to investing in things such as drivers and software libraries, AMD (like Nvidia) has been working to optimize its products for popular AI/machine learning frameworks such as TensorFlow and PyTorch.
