Though PC gaming GPUs and Nintendo Switch console processors accounted for 53% of Nvidia's (NVDA - Get Report) April quarter revenue, anyone unfamiliar with the company who tuned into Tuesday's earnings call or followed Wednesday's developer conference announcements could be forgiven for thinking its main business is selling chips and hardware for AI workloads.

And Nvidia is far from the only big-name tech company giving AI attention out of proportion to the size of its current AI business. As machine learning algorithms reach tipping points in their ability to make decisions once reserved for the human brain, and as the software frameworks and cloud services underpinning AI projects make algorithm development easier, no one wants to be left out.

On the first day of its annual GTC conference, Nvidia unveiled the Tesla V100, a new flagship server GPU meant for the compute-intensive task of training deep learning algorithms to perform a particular job (for example, translating text or recognizing objects in a photo). The chip is the first one based on Nvidia's next-gen Volta GPU architecture, and is due to ship in Q3.
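For readers unfamiliar with what "training" involves, here is a minimal, purely illustrative sketch (not from Nvidia or the article): a toy one-weight model learns the function y = 2x by gradient descent. Real workloads run the same kind of matrix math at vastly larger scale, which is what Tesla GPUs parallelize.

```python
import numpy as np

# Hypothetical toy example: train a single-layer linear model to learn y = 2x.
rng = np.random.default_rng(42)
x = rng.standard_normal((256, 1))
y = 2.0 * x                      # target function the model should learn
w = np.zeros((1, 1))             # model weight, starts untrained

for _ in range(200):                        # training loop
    pred = x @ w                            # forward pass (matrix multiply)
    grad = x.T @ (pred - y) / len(x)        # gradient of mean squared error
    w -= 0.5 * grad                         # gradient-descent weight update

print(round(float(w[0, 0]), 3))  # converges to roughly 2.0
```

Each pass is dominated by matrix multiplications, which is why hardware built for parallel arithmetic cuts training times so dramatically.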

With Tesla GPUs basically an industry standard for training projects -- Nvidia's broad software and developer ecosystem is partly responsible for this, as is the effectiveness of GPUs for parallel processing tasks such as training neural networks -- it's a safe bet that Nvidia's cloud and enterprise clients will be eager to buy the V100. During CEO Jen-Hsun Huang's GTC keynote, execs from Microsoft's (MSFT - Get Report) Azure unit and Amazon.com's (AMZN - Get Report) AWS unit appeared on stage to discuss their current use of Nvidia GPUs to enable cloud services, and their eagerness to adopt Volta.

The V100 will be made using a 12-nanometer Taiwan Semiconductor (TSM - Get Report) manufacturing process -- more advanced than the 16-nanometer process used to make GPUs based on Nvidia's Pascal architecture, which launched last year. It's also the first Nvidia part to feature Tensor Cores, a new kind of processor core meant specifically for deep learning, a subset of machine learning focused on imitating the human brain. The chip also has about 40% more of the traditional CUDA processing cores found in Nvidia GPUs than its predecessor, the Pascal-based P100, and communicates via an interconnect that's twice as fast.
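To make the Tensor Core idea concrete: each core performs a small matrix multiply-accumulate of the form D = A x B + C, with the inputs in half precision (FP16) and the accumulation done in full 32-bit precision. The numpy sketch below is an assumption-laden illustration of that mixed-precision operation on a 4x4 tile, not actual Tensor Core code.

```python
import numpy as np

# Illustrative mixed-precision multiply-accumulate: D = A @ B + C.
# A and B are stored in FP16; the multiply and accumulate happen in FP32,
# mirroring the precision scheme of Volta's Tensor Cores (4x4 tile shown).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float16)
B = rng.standard_normal((4, 4)).astype(np.float16)
C = rng.standard_normal((4, 4)).astype(np.float32)

# Promote the FP16 inputs to FP32 before multiplying, then accumulate in FP32.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.shape, D.dtype)  # (4, 4) float32
```

Keeping inputs in FP16 halves memory traffic while FP32 accumulation preserves numerical accuracy, which is why this trade-off suits neural network training.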

The V100 is said to have over 40% better raw performance, as measured in TFLOPS. But much larger gains are promised for deep learning work. For example, Huang claims that a neural network training job relying on the Caffe2 software framework -- recently open-sourced by Nvidia client Facebook (FB - Get Report) -- which took 20 hours with eight Pascal GPUs will take only five hours with eight Volta GPUs.

Nvidia also claims the V100 is more than an order of magnitude faster than Intel's (INTC - Get Report) Xeon server CPUs for such tasks, but that's not an apples-to-apples comparison. Intel, leveraging technology obtained via last year's Nervana Systems acquisition, plans to launch an ASIC (codenamed Lake Crest) later this year that's focused on deep learning projects, and should be much more competitive. It also plans to eventually launch a Xeon Phi co-processor (codenamed Knights Mill) that integrates Nervana's technology. Nvidia's ecosystem, along with the tremendous engineering resources it's clearly pouring into the Tesla line, could help it fend off Intel. Not to mention AMD (AMD - Get Report), which will soon begin shipping its Radeon Instinct server GPUs.

In addition to the V100, Nvidia showed off a revamped version of its DGX-1 server that swaps the eight P100 GPUs found in the original version for eight V100 GPUs. The company claims up to three times better performance for AI workloads, and will be charging a steep $149,000 for the system. It also showed off the HGX-1, a module sporting eight V100 GPUs that could appeal to cloud giants preferring to design their own servers, and the DGX Station, a powerful workstation for deep learning engineers that packs four V100s.

Other GTC announcements include a set of cloud services for AI developers, and a partnership with Toyota through which the auto giant will use Nvidia's Drive PX computing boards to power its autonomous driving systems. Toyota joins a Drive PX client list that already includes Tesla Motors, Mercedes-Benz and Audi, as well as auto parts giants Bosch and ZF. It increasingly feels as if the sheer performance of Nvidia's GPU-backed self-driving hardware is giving it a leg up on Mobileye -- due to be acquired by Intel for $15.3 billion -- and other rivals.

It's quite telling that the V100, with its Q3 ship date, appears set to arrive well before any Volta-based PC GPU. Though the P100 was the first Pascal-based GPU to be unveiled in 2016, it began shipping on a standalone basis a few months after the Pascal-based GeForce GTX 1080 and 1070 high-end desktop GPUs.

Some prior reports have indicated that the first PC Volta parts won't arrive until 2018. And the recent launch of two new high-end Pascal GPUs, the 1080 Ti and the Titan Xp, could allow Nvidia to bide its time. But even if Volta hits desktops in late 2017, it would've been unthinkable a few years ago for Nvidia to bring a brand-new GPU architecture to servers first.

The fact that it's willing to do so now says a lot about how big a priority remaining the dominant supplier of processors for deep learning projects has become as global R&D investments in the field take off. That dominance, along with growing sales of Tesla GPUs for other high-performance computing (HPC) projects and the adoption of Nvidia's GRID GPUs for powering server-hosted PCs, drove Nvidia's Datacenter segment sales up 186% annually last quarter to $409 million (21% of revenue).

It's also telling that Nvidia is willing to step on the toes of its OEM clients with many of its newer offerings in order to grow its AI-related sales. The DGX-1 and DGX Station compete against hardware from the likes of IBM (IBM - Get Report), HP Enterprise (HPE - Get Report) and Dell, which Nvidia leans on heavily to sell servers and workstations containing its GPUs. On the other hand, while some articles have suggested otherwise, Nvidia's new cloud services are meant to complement GPU-backed cloud infrastructure services from firms such as Amazon and Google by making it easier for developers to get neural networks up and running, whether on their own hardware or a cloud provider's.

While Nvidia drove home its AI obsession at GTC, Microsoft has done the same this week at its Build developer conference. The company unveiled a slew of new Azure "Cognitive Services" for tasks such as video analysis, creating custom search engines and making software-based recommendations, and gave developers the ability to customize the AI models behind both old and new services. The latter feature is something that cloud rivals generally lack for the time being.

Microsoft also showcased how its new Azure IoT Edge service -- it lets companies run cloud workloads close to where IoT devices are located, and competes with Amazon's Greengrass service -- can use AI to do things such as monitor and improve the performance of industrial hardware, or keep an eye on workers and objects to detect potential safety risks. And it showed off new developer tools for its Bot Framework, which launched in 2016 and lets developers build apps featuring AI-powered chatbots. Microsoft claims over 130,000 developers now use the framework.

Look for Alphabet/Google (GOOGL - Get Report) to also show AI-powered services plenty of love at its I/O developer conference, which runs from May 17-19. And for Apple (AAPL - Get Report) to do the same at its WWDC developer conference, which runs from June 5-9.

As with past tech trends that built up into an industry fervor, not everyone making big AI investments is going to see them pay off. Some are bound to pare back their spending once the initial hype dies down. But considering how much progress has been seen in the field over the last few years, and how much attention developers the world over are giving it, one certainly can't blame the likes of Nvidia, Microsoft and Google for investing the way they are.

Jim Cramer and the AAP team hold positions in Apple and Alphabet for their Action Alerts PLUS Charitable Trust Portfolio. Want to be alerted before Cramer buys or sells AAPL or GOOGL? Learn more now.
