The race to develop useful artificial intelligence is on, and it's not clear that it's a fair competition.
As Apple (AAPL) poaches a senior Alphabet (GOOGL) exec to head its AI efforts, Microsoft (MSFT) creates a pair of new AI teams and Amazon.com (AMZN) hires every machine learning expert it can get its hands on, perhaps the biggest question isn't how such moves put one tech giant's AI efforts ahead of another's, but how their massive investments in the space are leaving many other companies -- both within tech and elsewhere -- on the outside looking in.
It's also worth considering just how much all this investment benefits the chip suppliers that the tech giants are leaning on both to conduct AI research and apply what they've gained from it.
On Tuesday, The New York Times reported Apple has hired John Giannandrea, formerly Google's SVP of engineering, to head its "machine learning and A.I. strategy." Notably, he'll be one of 16 Apple execs reporting directly to Tim Cook.
At Google, Giannandrea was in charge of both AI and search, and oversaw the company's extensive efforts to apply AI/machine learning across its product line. On Monday, The Information reported Giannandrea's responsibilities will now be split between two execs: Jeff Dean, the well-respected (if not mythologized) head of the Google Brain AI research team, will oversee the company's AI work, while Ben Gomes, previously Google's VP of search engineering, will be in charge of search.
The news arrived less than a week after Microsoft unveiled a reorg that, among several other things, created an "AI Perception and Mixed Reality" team, as well as an "AI Cognitive Services and Platform" team. The teams will be reporting to Scott Guthrie, who was already in charge of many of Microsoft's enterprise software, cloud service and developer tool offerings.
Both Microsoft's and Apple's moves signal a greater seriousness about making AI a core part of their product R&D. Though Google is often seen as ahead of the pack in terms of how powerful and refined its machine learning algorithms are -- they're used by everything from search to YouTube to Gmail to Google Photos to Waymo's self-driving cars -- Microsoft and Apple are hardly beginners here.
Microsoft products leveraging machine learning include Office 365, the Cortana assistant, Power BI business intelligence software, Dynamics business apps and the latest version of its SQL Server database. Apple products that fit the bill include voice assistant Siri, the Photos app (object and scene detection), the HomePod speaker (audio optimization) and iPhone/iPad cameras (autofocus and lighting effects).
In addition to using AI to enable or improve various products, tech giants are steadily rolling out new AI services and application programming interfaces (APIs) that cater to third-party developers. For example, Google, Microsoft and Amazon's cloud platforms each support services for building machine learning models, as well as APIs providing access to existing models for things like text, speech and photo analysis (Google's are often considered best-in-class). And Apple and Google have launched APIs for iOS and Android, respectively, that let apps run machine learning models directly on a device.
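To give a sense of how thin the barrier to entry is for developers using these pre-trained models, the sketch below builds the JSON body for a request to one such service, Google's Cloud Vision label-detection API. The field names follow Google's published v1 REST format, but treat the details as an assumption; actually sending the request would also require an API key and a POST to the `images:annotate` endpoint, which is not done here.

```python
import base64
import json

def build_label_request(image_bytes: bytes, max_results: int = 5) -> str:
    """Sketch: build (but don't send) a Cloud Vision label-detection request."""
    payload = {
        "requests": [
            {
                # Image content is base64-encoded inside the JSON body.
                "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
                # Ask the pre-trained model for label annotations only.
                "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
            }
        ]
    }
    return json.dumps(payload)

body = build_label_request(b"\x89PNG...fake image bytes")
```

The developer never sees the model itself; the cloud provider trains, hosts and serves it, which is precisely why these APIs concentrate leverage with the giants that own the models and the data behind them.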
All of these efforts are being enabled by a hiring spree that has contributed to a major shortage of AI talent. There are plenty of reports of brand-new PhDs with expertise in deep learning -- the advanced subset of machine learning that trains multi-layered models loosely inspired by the neurons of the human brain -- receiving salaries of $300,000 or more. In February, startup Element AI estimated there are only 22,000 PhD-level researchers globally with the skills needed to build AI systems. A few months before that, it estimated "fewer than 10,000 people have the skills necessary to tackle serious artificial intelligence research."
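To make the "neuron" analogy concrete, here is a minimal, self-contained sketch (illustrative only, not any company's production code) of deep learning's basic building block: a single artificial neuron trained by gradient descent to learn the logical AND function. Real deep learning stacks many layers of such units, but the training loop follows the same pattern.

```python
import math
import random

# One artificial neuron: a weighted sum of inputs passed through a sigmoid.
def neuron(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Tiny dataset: the logical AND function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
lr = 1.0  # learning rate

# "Training" = repeatedly nudging the weights against the error gradient.
for _ in range(2000):
    for x, target in data:
        y = neuron(w, b, x)
        grad = y - target  # gradient of cross-entropy loss w.r.t. the neuron's input z
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad

# "Inference" = running the trained model on inputs.
print(round(neuron(w, b, [1, 1])))  # 1
print(round(neuron(w, b, [0, 1])))  # 0
```

The split visible here between the expensive, iterative training phase and the cheap, one-shot inference phase is the same one that, at data-center scale, drives the distinct chip markets discussed later in this article.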
Outside of those researchers still working in academic environments, or for startups willing to offer generous stock option packages, tech giants clearly possess a large portion of this talent. And a few minutes scanning job listings on LinkedIn makes it clear that they're hungry to add more.
A search for Amazon job listings featuring the term "machine learning" alone currently turns up 1,953 results, with open positions across AWS, Alexa, Amazon Robotics, Twitch, the A9 search unit and the Lab126 hardware unit. Similar searches for Apple, Google and Microsoft turn up 376, 423 and 889 listings, respectively.
Though the huge salaries commanded by AI researchers are bound to grow the talent pool in time, that won't happen overnight. And as long as a talent shortage persists, it will work to the advantage of tech giants, both because they can pay top dollar for scarce talent and because they have several other selling points to pitch to researchers: access to massive computing resources, the chance to work with other elite researchers and enormous troves of user and customer data that can be used to train machine learning algorithms.
And as AI-related R&D spending keeps growing rapidly -- both at tech giants and at various other firms making smaller investments -- so does spending on the silicon needed for the demanding job of training machine learning algorithms, as well as on chips used to run trained algorithms against live data and content (a task known as inference).
Nvidia's (NVDA) Tesla server GPUs dominate the training market, and are used by virtually every U.S. and Chinese tech/Internet giant. Intel's (INTC) CPUs and programmable chips (FPGAs) are widely used for inference within data centers, but Nvidia and FPGA maker Xilinx (XLNX) have also claimed a share of this market. And Broadcom (AVGO) has begun working with tech giants on custom chips that handle AI workloads; Google's Tensor Processing Unit (TPU), used by the company for a portion of its training and inference work, is believed to have been developed with Broadcom's help.
There is some risk for chip suppliers that the current AI talent shortage will limit how much is spent on training systems and, by extension, on servers used for inference. But as both Nvidia's recent results and the behavior of tech giants show, the amount of money and attention being devoted to AI work has been growing at a more than healthy clip in spite of the shortage.
Jim Cramer and the AAP team hold positions in Apple, Alphabet, Microsoft, Amazon, Nvidia and Broadcom for their Action Alerts PLUS Charitable Trust Portfolio.