Some of the key hardware innovations that Nvidia is packing within Turing, which was officially unveiled on Monday at the SIGGRAPH industry conference, will do little or nothing to boost the performance of existing games. And they might only have a sizable impact on a fraction of the big-name titles launching over the next six to twelve months.
But in time, those innovations should have a major impact on how big-budget games are both developed and played. And Nvidia's current dominant position in the high-end gaming GPU market, along with its large R&D budget, allows it to push the envelope without having to worry too much about its competitiveness in benchmarks for games that don't leverage the new technologies.
In addition to traditional streaming multiprocessors (SMs), Turing -- like Nvidia's Volta architecture, which powers some of the server and workstation GPUs it has launched since mid-2017 -- supports Tensor Cores, a type of processor core optimized for AI/deep learning tasks. It also, notably, supports RT Cores, a new type of core meant to support Nvidia's real-time ray tracing (RTX) technology, which it first showed off in March.
Ray tracing, which involves creating imagery by mapping the paths that light rays take to reach a viewer, has long been seen as something of a holy grail for game graphics, due to its ability to produce photorealistic visuals. However, while widely used to produce imagery for films, TV shows and pre-rendered game cut scenes, ray tracing has been far too computationally demanding to use in live gameplay.
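For intuition, the core of the technique can be sketched in a few lines: cast a ray from the camera through each pixel, test whether it hits scene geometry, and shade the hit point based on where light comes from. The minimal Python sketch below (a hardcoded single-sphere scene with a made-up light direction, purely for illustration; real renderers trace many secondary rays per hit, which is where the computational cost explodes) shows the idea:

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
    # direction is assumed to be unit-length, so the quadratic's a-term is 1.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace(px, py, width=3, height=3):
    # Camera at the origin looking down -z; one primary ray per pixel.
    x = 2.0 * (px + 0.5) / width - 1.0
    y = 1.0 - 2.0 * (py + 0.5) / height
    length = math.sqrt(x * x + y * y + 1.0)
    direction = (x / length, y / length, -1.0 / length)
    t = ray_sphere_intersect((0, 0, 0), direction, (0, 0, -3), 1.0)
    if t is None:
        return 0.0  # background: the ray escapes the scene
    # Lambertian shading: brightness depends on the angle between the
    # surface normal and the (invented) light direction.
    hit = tuple(t * d for d in direction)
    normal = tuple((h - c) / 1.0 for h, c in zip(hit, (0, 0, -3)))
    light = (0.0, 0.0, 1.0)
    return max(0.0, sum(n * l for n, l in zip(normal, light)))
```

Even this toy version does one intersection test and one shading step per pixel; photorealism requires recursively tracing reflection, refraction and shadow rays, which is why dedicated RT Cores matter.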
Nvidia asserts that RTX, along with Turing and its RT Cores, will change that. On Monday, the company launched three Turing-based workstation GPUs for content creators -- the Quadro RTX 5000, 6000 and 8000 -- and claimed Turing can drive up to a 25x increase in real-time ray tracing performance relative to Nvidia's Pascal architecture, which, among other things, powers its current gaming GPU lineup. In addition, CEO Jensen Huang showed a single Quadro RTX 8000 GPU running a ray-tracing demo that previously required a system featuring four of Nvidia's top-of-the-line Tesla V100 server GPUs.
On August 20, at the Gamescom conference, Nvidia is expected to unveil the first Turing-based gaming GPU(s). The company's July quarter earnings report, which should feature October quarter sales guidance, is due on August 16.
Several major game developers, including Electronic Arts (EA) and Fortnite developer Epic Games, have signaled that they plan to create games supporting ray tracing. Nvidia previously indicated that games supporting ray tracing will arrive this year.
Even when running on GPUs featuring RT Cores, these games are only expected to partly rely on ray tracing for gameplay, given how demanding it is. And it's worth keeping in mind that the Quadro RTX 8000 GPU used in Huang's demo is expected to cost around $10,000; the first Turing gaming GPUs will undoubtedly be less powerful and much cheaper. That said, as more powerful gaming GPUs launch in the coming years, it's safe to assume that developers will get more aggressive about how much they lean on ray tracing.
One can also assume that developers will in time launch games that make good use of the Tensor Cores within Turing GPUs. On PC GPUs, these cores will accelerate inferencing -- the running of trained AI algorithms against real-world data and content. Nvidia argues that algorithms running on Tensor Cores can intelligently optimize imagery via one or more techniques, enhancing both game frame rates and image quality, and can also benefit various photo and video applications.
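The fundamental operation a Tensor Core accelerates is a fused matrix multiply-accumulate, D = A×B + C, executed on small matrix tiles in a single hardware step (4×4 FP16 tiles with FP32 accumulation on Volta) -- exactly the arithmetic that dominates neural-network inferencing. A plain-Python sketch of that arithmetic, purely to show what the hardware is fusing (the function name is invented for illustration):

```python
def tensor_core_mma(a, b, c):
    """Sketch of the fused multiply-accumulate a Tensor Core performs:
    D = A x B + C on a small square tile, computed in one hardware step.
    Here it is spelled out as ordinary nested loops over an n x n tile."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) + c[i][j]
             for j in range(n)]
            for i in range(n)]
```

Doing this tile-at-a-time in dedicated hardware, rather than element-by-element on general-purpose CUDA cores, is what gives Tensor Cores their throughput advantage on deep learning workloads.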
It's worth noting that Turing also features a new streaming multiprocessor architecture that promises to deliver "unprecedented levels" of performance per core for Nvidia's traditional CUDA GPU cores. It also supports GDDR6 graphics memory, which represents a step up from the GDDR5X memory used by Pascal gaming GPUs.
However, Turing's ray-tracing and (to a lesser extent) AI features are taking the spotlight. Nvidia's ability to bring them to market has a lot to do with its unmatched GPU R&D budget; the company's total R&D spend rose 23% in fiscal 2018 (which ended in January) to $1.8 billion, and is expected to rise 27% in fiscal 2019 to $2.3 billion.
Meanwhile, the fact that Nvidia faces limited high-end gaming GPU competition gives it some leeway to reserve space on Turing chips for brand-new processing cores that won't be used much, if at all, by existing titles. A pair of Pascal GPUs launched in early 2017 -- the GeForce GTX 1080 Ti and Titan Xp -- remain the most powerful gaming GPUs on the market ahead of expected Turing launches.
AMD (AMD), which has to pick its fights to an extent as it battles both Nvidia and Intel (INTC), is expected to launch a next-gen GPU architecture codenamed Navi in 2019. However, some recent reports indicate that Navi will be aimed at the mid-range gaming market, where the company already has a solid presence, more than the high-end. Intel plans to enter the high-end discrete GPU market, but its first products aren't due until some point in 2020.
All of that leaves Nvidia in a good position to maintain its gaming performance lead for present-day titles while laying the groundwork to hold onto that leadership position if and when competition intensifies in the coming years.