Every few years, machine learning (ML) transforms the way we use technology, and Google products have often been part of that shift. We’ve seen Google Assistant improve the functionality of your devices and Google Translate break down language barriers, but we haven’t always been able to bring the best of machine learning to your smartphone. That is why we built Google Tensor: a processor that gives Pixel users whole new capabilities while keeping pace with the latest machine learning breakthroughs.
A few years ago, Google’s hardware, software, and machine learning teams began collaborating to develop the best mobile ML computer, so we could fully realize our vision of what our Pixel smartphones should be capable of. Co-creating Google Tensor with Google Research gave us insight into where machine learning models are headed, rather than where they are today. As a result, we were able to build an AI/ML platform that can keep pace with our work at Google. With Google Tensor, we’re unlocking fantastic new experiences that need cutting-edge machine learning, such as Motion Mode, Face Unblur, speech enhancement mode for video, and applying HDRnet to video (which we’ll discuss later). Google Tensor lets us push the limits of what a smartphone can do, transforming it from a one-size-fits-all piece of hardware into a device that understands and adapts to the many ways we use it.
Differently designed
Google Tensor was created in a unique way. It is a high-end system on a chip (SoC) with all the features you’d expect from a mobile SoC and more. How did we pull it off? The major experience areas on our new phones are speech, language, imaging, and video. These workloads are heterogeneous in nature and draw on resources across the chip, so we carefully designed Google Tensor to deliver the right level of computing performance, efficiency, and security. And with Android 12, we set out to build an operating system that lays the groundwork for this kind of hardware and software collaboration. You can see the results in real-world experiences such as capturing stunning videos and understanding multiple languages.
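One concrete way this hardware and software collaboration surfaces to apps is through Android’s Neural Networks API (NNAPI), which can route supported model operations to on-device accelerators. The sketch below is a minimal, hypothetical example using the TensorFlow Lite Interpreter with its NNAPI delegate; the model file and input/output buffers are placeholders, and this is not how Pixel’s own features are implemented.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.File
import java.nio.ByteBuffer

// Minimal sketch: run a TensorFlow Lite model and let NNAPI dispatch
// supported operations to on-device accelerators, falling back to the
// CPU for anything unsupported. Model file and buffers are placeholders.
fun runWithNnapi(modelFile: File, input: ByteBuffer, output: ByteBuffer) {
    val nnApiDelegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnApiDelegate)
    val interpreter = Interpreter(modelFile, options)
    try {
        interpreter.run(input, output) // single input, single output
    } finally {
        interpreter.close()
        nnApiDelegate.close()
    }
}
```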
What Google Tensor is capable of
We were able to add new capabilities to the Pixel 6 and Pixel 6 Pro thanks to a collaboration between Google Research, hardware, and software. That’s because Google Tensor can run more advanced, cutting-edge machine learning models while using less power than previous models. Google Assistant on Google Tensor, for example, uses the most accurate Automatic Speech Recognition (ASR) model Google has ever released. And for the first time, we can use a high-quality ASR model even in long-running apps like Recorder or tools like Live Caption without quickly draining the battery.
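For a sense of how an app taps on-device speech recognition at all, here is a minimal sketch using the public Android 12 on-device SpeechRecognizer API; this is a generic illustration, not the Recorder or Live Caption implementation, and the helper name and callback wiring are assumptions. It must be called from the main thread with the RECORD_AUDIO permission already granted.

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Build
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Minimal sketch: on-device speech recognition with the public Android API.
// Requires Android 12 (API 31), the main thread, and RECORD_AUDIO permission.
fun startOnDeviceRecognition(context: Context, onTranscript: (String) -> Unit) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.S ||
        !SpeechRecognizer.isOnDeviceRecognitionAvailable(context)
    ) return

    val recognizer = SpeechRecognizer.createOnDeviceSpeechRecognizer(context)
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onResults(results: Bundle?) {
            // The first hypothesis is the most likely transcription.
            results?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                ?.firstOrNull()
                ?.let(onTranscript)
            recognizer.destroy()
        }
        override fun onError(error: Int) { recognizer.destroy() }
        // Remaining callbacks are not needed for this sketch.
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onPartialResults(partialResults: Bundle?) {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })

    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(
            RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
        )
    }
    recognizer.startListening(intent)
}
```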
Thanks to Google Tensor and the new Live Translate feature on the Pixel 6 and Pixel 6 Pro, you’ll be able to communicate with people in the language you’re most comfortable with. Chat apps, including Messages and WhatsApp, will let you translate text right inside the conversation, removing the need to copy and paste it into Google Translate. Using on-device translation and speech models, Google Tensor also allows Live Translate to work on media such as videos. Running on Google Tensor, the new on-device Neural Machine Translation (NMT) model uses less than half the power of the previous model on Pixel 4 phones (see the translation sketch at the end of this section). Google Tensor also enables computational photography and video, two of the features that make the Pixel such a great phone. Take Motion Mode, for example.
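Before moving on to Motion Mode, here is the translation sketch referenced above. It uses the public ML Kit on-device Translation API as a stand-in; the source doesn’t state that Live Translate is built on this API, and the language pair and callbacks are illustrative assumptions.

```kotlin
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Minimal sketch: on-device neural machine translation with ML Kit.
// The language model is downloaded once, then translation runs entirely
// on the device. Language pair and callbacks are placeholders.
fun translateOnDevice(text: String, onTranslated: (String) -> Unit) {
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.ENGLISH)
        .setTargetLanguage(TranslateLanguage.JAPANESE)
        .build()
    val translator = Translation.getClient(options)

    translator.downloadModelIfNeeded()
        .onSuccessTask { translator.translate(text) }
        .addOnSuccessListener { translated -> onTranslated(translated) }
        .addOnFailureListener { it.printStackTrace() }
        .addOnCompleteListener { translator.close() }
}
```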