Along with being one of the world's biggest consumer electronics makers, retailers and digital content sellers, Apple (AAPL - Get Report) is now a top-tier chip developer, as judged by the estimated dollar value of all the chips it ships in a given year. This fact tends to get swept under the rug, since all of the chips Apple creates are used in its own hardware rather than sold to third parties.
But all the same, Apple's prodigious chip work has not only helped the company lower its costs, but has also aided strategic objectives such as improving performance and battery life, reducing device size and enabling novel features. A recent report points to one more instance in which Apple's chip expertise is proving to be of strategic value. So might a new hire, though the jury is still out on that one.
On Friday afternoon, Bloomberg reported that Apple is developing a processor for its consumer hardware that would be "devoted specifically to AI-related tasks" such as face and speech recognition. It added that Apple "has tested prototypes of future iPhones with the chip," known within the company as the Neural Engine, while cautioning it's not clear if the chip will be ready in time for this year's iPhones.
Separately, analyst Neil Shah observed (courtesy of a LinkedIn post) that Apple has hired Esin Terzioglu, a Qualcomm (QCOM - Get Report) exec who oversaw the chipmaker's central engineering organization. The hire has fueled fresh speculation that Apple, now in the midst of a messy legal dispute with Qualcomm, wants to develop a system-on-chip (SoC) that pairs a baseband modem (currently obtained from Qualcomm and Intel (INTC - Get Report)) with an A-series app processor. VentureBeat reported in 2015 that Apple was open to creating such a chip while leveraging Intel's modem technology and cutting-edge manufacturing processes.
The Neural Engine report meshes with Apple's unique approach to AI. The likes of Alphabet/Google (GOOGL - Get Report), Amazon.com (AMZN - Get Report), Facebook (FB - Get Report) and Microsoft (MSFT - Get Report) generally run machine learning algorithms against user data and requests, and optimize those algorithms in response to this data, on cloud servers. Apple, by contrast, prefers to do as much of the first job--known as inferencing--on its own devices as possible, due to privacy concerns.
Last year, Google showed off its first-gen Tensor Processing Unit (TPU), a server-based chip dedicated specifically to inferencing. More recently, it unveiled a second-gen TPU meant for both inferencing and the more demanding task of training a neural network for a particular job. And Nvidia (NVDA - Get Report), whose Tesla server GPUs are already widely used for training work, recently announced that its next flagship Tesla GPU will have dedicated "Tensor Cores" for deep learning tasks. Qualcomm, meanwhile, has optimized the Hexagon 682 digital signal processor (DSP) found in its Snapdragon 835 SoC--used by the Galaxy S8 and other recent high-end Android phones--for Google's popular TensorFlow machine learning framework.
Though approaches vary, there's clearly value in developing AI-optimized silicon, as machine learning algorithms are deployed to do everything from analyzing voice commands to detecting objects in photos to suggesting the next word to type. And while inferencing may not be as taxing as training, it can still be pretty demanding for the general-purpose CPU and GPU baked into a pocket-friendly device less than 10 millimeters thick. Offloading the job to a dedicated chip makes quite a lot of sense ... provided Apple remains uncomfortable about doing it in the cloud.
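To make the inferencing-versus-training distinction concrete, here's a minimal, purely illustrative sketch in plain Python. The one-neuron model is invented for this example (it's not anything Apple or Google ships); the point is simply that inferencing is a single forward pass, while training repeats that pass plus gradient updates over many examples:

```python
# Illustrative only: a single-neuron classifier, to show why inferencing
# (one forward pass) is far cheaper than training (forward pass plus
# gradient computation and weight updates, repeated over the data).
import math

def forward(weights, bias, inputs):
    # Inference: one weighted sum and one activation per input vector.
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

def train_step(weights, bias, inputs, target, lr=0.1):
    # Training: run the forward pass, then nudge every weight
    # against the prediction error (cross-entropy gradient).
    pred = forward(weights, bias, inputs)
    err = pred - target
    new_weights = [w - lr * err * x for w, x in zip(weights, inputs)]
    return new_weights, bias - lr * err

weights, bias = [0.0, 0.0], 0.0
# Training loops over the examples many times...
for _ in range(1000):
    weights, bias = train_step(weights, bias, [1.0, 0.0], 1.0)
    weights, bias = train_step(weights, bias, [0.0, 1.0], 0.0)
# ...while a deployed model only ever needs the cheap forward pass.
print(forward(weights, bias, [1.0, 0.0]))
```

Even in this toy case, training does thousands of forward-plus-backward passes to produce weights that inference then uses in a single pass, which is why phones can plausibly run inferencing locally while training stays on big servers.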
As for a mobile SoC combining a baseband modem with an app processor, building one would allow Apple to save some circuit board space, as well as potentially lower its component costs. Perhaps more importantly, if the effort featured a long-term modem IP and manufacturing partnership with Intel, it could better position Apple and Intel to close the performance lead that Qualcomm's high-end 4G modems still hold.
Apple throttled the performance of the Qualcomm modems found in some iPhone 7 units so that they wouldn't exceed the performance of the Intel modems found in others. And whereas Intel's soon-to-ship XMM 7480 modem has a top download speed of 450Mbps, Qualcomm's recently launched Snapdragon X16 modem (found within the Snapdragon 835) tops out at 1Gbps. The Snapdragon X20, which has a 1.2Gbps max speed, is due in early 2018. If Apple is serious about ditching Qualcomm, it needs to close this performance gap, whether on its own or with Intel's help.
A pair of early-spring stories also suggested Apple is expanding the scope of its chip design work. Back then, a German bank reported there's "strong evidence" that Apple is developing its own iPhone/iPad power management chip, a move that could sideline current supplier Dialog Semiconductor. And Imagination Technologies, long the supplier of the GPU designs used by A-series processors, disclosed that Apple had told the company it would stop using Imagination's designs within two years in favor of its own.
As it is, Apple's chip efforts cover several types of embedded processors, along with flash memory controllers, fingerprint sensors and display timing controllers. Adding even four more projects to the list doesn't sound too far-fetched if there are major benefits to be had.