This column originally appeared on May 18 on Real Money, our premium site for active traders.
At this point, most of those following the product announcements and R&D efforts of consumer tech giants know that the subset of AI known as machine learning -- broadly defined as the use of algorithms that can learn on their own by taking in relevant data -- is a big deal for practically all of them. And that it's being used to do things like field voice commands, detect objects within photos and get cars to drive themselves.
But there are a couple of facets to this trend that aren't as well-appreciated:
- The usefulness of a machine learning algorithm for handling tasks normally done by humans doesn't necessarily improve at a linear pace. Once the algorithm has been trained on enough data, it can hit a tipping point and improve exponentially, or close to it.
- Machine learning R&D work can often be applied to many different tasks, including some that don't have much in common at first glance.
Both of these phenomena work very much in Alphabet/Google's (GOOGL) favor. Though Amazon.com (AMZN), Microsoft (MSFT) and others have also made tremendous progress in delivering AI-powered products and services on a large scale, Google arguably remains a step ahead on many of the tasks that both Google and rivals are trying to address. And as the announcements made at this week's Google I/O developers conference show, a lot of these offerings seem to be hitting a tipping point in terms of what they can do.
During his Wednesday I/O keynote talk, CEO Sundar Pichai pointed out that the word error rate for Google's speech recognition services had fallen to 4.9% from 8.5% over just the last ten months. He also pointed out that the error rate for Google's image-recognition algorithms was now below the human error rate. Along the same lines, Google had previously disclosed that Google Translate now at times delivers human-level accuracy thanks to its adoption of a neural network-based translation technique.
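For context on the 4.9% figure: word error rate is the standard speech-recognition metric, defined as the word-level edit distance (substitutions, insertions and deletions) between the system's transcript and a reference transcript, divided by the number of reference words. A minimal sketch (the phrases are made-up examples, not from Google's benchmark):

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word in a four-word reference: WER = 0.25
print(word_error_rate("turn off the lights", "turn of the lights"))
```

By this measure, Google's claim means its recognizer now garbles roughly one word in twenty, down from about one in twelve.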
Pichai's disclosures were followed by the unveiling of a slew of new AI-powered Google services. Among them:
- Google Lens, an augmented reality (AR) tool that can recognize objects, buildings and text picked up by a phone's rear camera, and then act on the information. Examples shown included providing information on a detected restaurant or plant, translating a sign from Japanese to English and automatically entering a Wi-Fi router's login info upon seeing it on the device.
Lens will be integrated with Google Assistant and Photos, with the former able to use what Lens has detected to continue an interaction with a user. Indirectly, it competes with the AR developer platform that Facebook (FB) just launched.
- New Google Photos features. Photos will now be able to automatically detect the presence of contacts within pictures, and recommend the sharing of photos with them. It will also be able to automatically remove unwanted items from a photo.
- The arrival of Actions on Google, which lets third-party developers support interactions via Google Assistant, on Android phones. The service is Google's version of Amazon's Alexa Skills.
- The addition of Smart Replies -- suggested replies that are based on Google's analysis of a message, and which a user can send with a couple of taps -- to Gmail. Smart Replies were previously available on the much less popular Allo messaging app.
Such services often depend on core research done by the well-funded Google Brain AI research division. Google said last year that it was applying machine learning to more than 100 projects, and has indicated that many of these projects involve engineers taking the same basic research and building on it to address a specific task.
This approach undoubtedly has much to do with the launching of Google.ai, which was also announced during the keynote. Google.ai isn't a new division as much as it is a cross-company initiative to bring together the company's core and applied AI work, as well as its efforts to create AI-related computing hardware, software libraries and cloud services that can be leveraged by both Google and third-party developers.
The "computing hardware" aspect of this effort took a big step forward with the unveiling of a second-gen tensor processor, a proprietary chip meant for machine learning work. Each new tensor processor can deliver an impressive 45 teraflops of performance, and is placed on a four-chip tensor processing unit (TPU). 64 TPUs can be rigged together to create an 11.5-petaflop supercomputer. Importantly, whereas the original tensor processor was only meant for running machine learning algorithms to deliver real-world services (inferencing), the new one can also handle the more demanding task of training an algorithm for a particular job.
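The pod arithmetic checks out; a quick back-of-the-envelope sketch using the peak figures quoted in the keynote:

```python
# Peak-performance figures as quoted at Google I/O
chip_tflops = 45      # per second-gen tensor processor chip
chips_per_tpu = 4     # four chips per TPU board
tpus_per_pod = 64     # TPUs rigged together into one pod

tpu_tflops = chip_tflops * chips_per_tpu        # 180 teraflops per TPU
pod_pflops = tpu_tflops * tpus_per_pod / 1000   # 11.52 petaflops per pod
print(tpu_tflops, pod_pflops)  # 180 11.52
```

That 11.52-petaflop figure rounds to the 11.5 petaflops Google cited for a full pod.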
In addition to using its new TPUs for its own projects, Google plans to sell access to them through the Google Cloud Platform (GCP). That makes the product to some degree a threat to Nvidia (NVDA), whose Tesla server GPUs are widely used for AI training work. It should be noted, however, that Google (like other cloud giants) remains a big Nvidia client, and that Pichai gave a shout-out during his keynote to Nvidia's powerful new Tesla V100 GPU.
A slew of other AI-related efforts were also discussed during Google's I/O keynote. These included using machine learning to improve Google Search, to have Google Maps' Street View cameras automatically detect signs and to help pathologists analyze high-resolution imagery for signs of cancer.
Ultimately, however, the biggest takeaway from the keynote isn't how useful Google's AI algorithms are for one particular task or another. It's how the company's AI work in general has become a core competitive advantage whose presence is being felt company-wide.