7 ways AI is improving the use of your Pixel already


Our expertise in artificial intelligence helps make products like Pixel as useful as possible. AI already simplifies everyday tasks, whether you’re using your Pixel to translate a foreign language, edit photos, or make a call in a noisy environment.

Here are seven ways AI improves Pixel, most of which are made possible by the custom-designed Google Tensor chip.

1. You can fix those almost-perfect photos with Magic Eraser. It uses machine learning to detect and remove unwanted distractions in your images, such as strangers in the background or telephone wires. You can then tap each one to remove it individually, or choose to delete them all at once. If you circle or brush over the part of the photo you want to remove, Magic Eraser uses machine learning to pinpoint exactly what you’re trying to erase.


2. Photo Unblur, a new feature exclusive to the Pixel 7 and Pixel 7 Pro, makes it easy to sharpen blurry photos in just a few taps. It uses a model we developed that runs on the device to detect and reduce blur and visual noise, improving the quality of the whole image and any faces in it. It can even restore photos of your grandparents or children that weren’t taken with a Pixel camera.

3. Real Tone uses computer vision, a form of artificial intelligence, to “see and understand” a wider range of skin tones and portray them accurately and beautifully. Real Tone’s improvements to the way Pixel Camera renders skin tones were developed in collaboration with outside image experts, including photographers, cinematographers, and colourists, who helped test our cameras and expand our dataset to include 25 times more images of people of colour than before.

5. The call assist feature suite uses Google AI to address some common problems with the primary “feature” of our smartphones: making and receiving calls. Clear Calling uses machine learning to filter out background noise, such as busy restaurants and windy streets, so you can hear the person on the other end of the call clearly. Call Screen, meanwhile, uses on-device models to identify the caller and the reason for the call before you pick up.

A demonstration of the call assist feature on a smartphone.

6. Guided Frame helps people who are blind or have low vision take selfies. Using the front-facing camera, computer vision, and Google’s TalkBack feature, it gives you clear guidance (such as “move your phone slightly left”) on how to tilt and position the camera to fit everyone in the frame.

7. Live Translate translates spoken words and text in real time, interpreting live audio from one speaker to another without a separate app or an internet connection. That means you can use Live Caption to watch a video that isn’t in your native language, or read text in another language by pointing the camera at a menu or a sign. And thanks to Google Tensor, it runs directly on the device rather than sending your data across a network to a server.

