Almost everyone on Earth with access to the Internet or a smartphone has inevitably used a Google product. So it is that time of the year when we get to know what Google has been up to. Google I/O'19 lets us in on all the cool features and projects that Google has been working on in order to achieve their goal:

Building a more helpful Google for everyone.

Without further ado, I will run through the features and projects from the Google keynote that excited me the most.

Google Search and Google Lens

Google Search, one of the oldest products under their belt, is getting camera and AR capabilities. With these you can view a 3D model of your search result right from the results page, and even place it in your own space!!! Say you're shopping for a pair of shoes: you can view a 3D model of a shoe from different angles and place it alongside your clothes to see how they match.


With people having used Google Lens more than a billion times to ask questions about what they see, Google has come up with new ways to make Lens even more helpful. Say we are at a restaurant, unsure what to order. To save ourselves from this dilemma we can simply point the camera at the menu, and Lens automatically highlights the restaurant's popular dishes right on it!!!


Another exciting addition to Lens: when pointed at a notice or sign, it can read the text aloud and translate it into your own language, overlaying the translated text right on top of the original sign.
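Under the hood this is essentially a read-then-translate pipeline. Lens's on-device models aren't public, but here is a rough sketch of the same idea using the open-source pytesseract OCR library; the image filename and the stubbed translation step are illustrative assumptions:

```python
# Rough sketch of the Lens translate flow: OCR the sign, then translate.
# Uses open-source Tesseract OCR via pytesseract; Lens's actual on-device
# models are not public. "sign.png" is a hypothetical input image.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("sign.png"))
print("Detected text:", text)
# A production system would now run on-device translation and re-render
# the translated string on top of the original sign in the camera frame.
```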

Google Assistant

Making a reservation on a website often means wading through a number of pages and steps, which is time consuming. Google's new feature, Duplex on the web, lets the Google Assistant fill in your information during the reservation process on your behalf; you confirm the details with just a tap.
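Google hasn't detailed how Duplex on the web works internally, but the task itself, programmatically filling a reservation form, can be sketched with a browser-automation library like Selenium. The URL and field names below are made up for illustration:

```python
# Toy illustration of automated form-filling, the task Duplex on the web
# performs; this is not Google's implementation. Requires Selenium and a
# matching ChromeDriver. The URL and field names are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/reserve")
driver.find_element(By.NAME, "name").send_keys("Jane Doe")
driver.find_element(By.NAME, "date").send_keys("2019-05-20")
driver.find_element(By.NAME, "party_size").send_keys("2")
# The Assistant pauses at this point so the user can confirm with a tap
# before the form is actually submitted.
driver.quit()
```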

To process speech today, Google relies on complex algorithms spanning multiple machine learning models that take around 100 GB of storage and require a network connection. Bringing these models to a mobile phone is an incredibly challenging computer science problem, but Google has reached a significant milestone: further advances in deep learning have allowed them to combine and shrink those 100 GB of models down to 0.5 GB, small enough to fit on a mobile device. This eliminates network latency and makes the Assistant much faster.
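Google hasn't published the exact recipe behind this, but post-training quantization, one standard technique for shrinking models for on-device use, gives a feel for it. A minimal sketch with TensorFlow Lite, where the saved-model path is a placeholder:

```python
import tensorflow as tf

# Convert a trained model to TensorFlow Lite with default optimizations,
# which quantize weights to cut the model size severalfold.
# "speech_model/" is a placeholder, not Google's actual Assistant model.
converter = tf.lite.TFLiteConverter.from_saved_model("speech_model/")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("speech_model.tflite", "wb") as f:
    f.write(tflite_model)
```

Quantization alone doesn't account for a 200x reduction; the keynote credits new research for combining and compressing the models, but the on-device goal is the same.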

Android Q

Android Q, the 10th version of Android, comes with pretty cool features like Live Caption, along with a major emphasis on security & privacy and on digital wellbeing. Live Caption generates text for any audio being played. This is a major help for people with hearing impairments, who can now enjoy video content or voice messages with ease. It all happens on device and doesn't need an Internet connection, which protects user privacy. The feature is OS-wide, so we get captions in all our apps and in web content too.
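Live Caption's models ship inside Android Q and aren't exposed as a public API, but offline, on-device transcription can be approximated with the open-source Vosk library. The model directory and audio file below are assumptions:

```python
# Offline speech-to-text with Vosk as a rough stand-in for on-device
# captioning; no audio ever leaves the machine. Assumes a Vosk model has
# been downloaded to ./model and audio.wav is 16 kHz mono PCM.
import json
import wave
from vosk import Model, KaldiRecognizer

wav = wave.open("audio.wav", "rb")
rec = KaldiRecognizer(Model("model"), wav.getframerate())

while True:
    chunk = wav.readframes(4000)
    if not chunk:
        break
    if rec.AcceptWaveform(chunk):
        print(json.loads(rec.Result()).get("text", ""))  # one caption line

print(json.loads(rec.FinalResult()).get("text", ""))
```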


Smart Reply is another cool feature that also runs on on-device machine learning. With it, the OS suggests what we might type next, including emoji, and can even predict the action the user is likely to take. Smart Reply works across all messaging apps on Android.
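Google's published Smart Reply research works by scoring a fixed whitelist of candidate responses against the conversation with a neural model. Here is a toy sketch of that candidate-ranking idea, with simple TF-IDF similarity standing in for the learned scorer:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Score a fixed whitelist of canned replies against the incoming message
# and surface the top suggestions. Production Smart Reply uses a learned
# neural scorer; TF-IDF similarity here is just a simple stand-in.
REPLIES = ["Sounds good!", "On my way", "Running late, sorry!",
           "Yes", "No", "Thank you!", "See you soon"]

def suggest(message, k=3):
    vec = TfidfVectorizer().fit(REPLIES + [message])
    scores = cosine_similarity(vec.transform([message]),
                               vec.transform(REPLIES))[0]
    ranked = sorted(zip(scores, REPLIES), reverse=True)
    return [reply for _, reply in ranked[:k]]

print(suggest("Are you on your way?"))  # "On my way" should rank first
```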


On-device machine learning powers features like Live Caption and Smart Reply with no user input ever leaving the phone, which protects user privacy.

Google AI

Google AI has developed a technique called Bidirectional Encoder Representations from Transformers (BERT). BERT models consider the full context of a word by looking at the words before and after it. They are trained in an interesting way: about 15% of the input words are hidden, and the model is trained to guess those missing words. This approach turns out to be much more effective for understanding language; when the research was published, BERT obtained state-of-the-art results on 11 different language processing tasks.
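That masked-word objective is easy to play with using the open-source bert-base-uncased checkpoint via the Hugging Face transformers library; this wasn't shown in the keynote, it's just an illustration:

```python
from transformers import pipeline

# Ask a pretrained BERT to fill in a masked word; the model uses context
# on both sides of [MASK] to rank its candidates.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill_mask("The waiter brought the [MASK] to our table."):
    print(f'{pred["token_str"]:>10}  {pred["score"]:.3f}')
```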

Google AI has also been working on a project that uses AI to catch lung cancer earlier. Their deep learning model analyzes CT scans and predicts lung malignancies. In one case, the model detected early signs of cancer in an initial scan, a full year before the patient was actually diagnosed, that 5 out of 6 radiologists missed.
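Google's actual model isn't public, but the general shape of the approach, a 3D convolutional network over CT volumes producing a malignancy probability, can be sketched in Keras. All layer sizes here are arbitrary assumptions:

```python
import tensorflow as tf

# A toy 3D CNN over CT volumes (depth, height, width, 1 channel) that
# outputs a malignancy probability. Purely illustrative; the sizes are
# arbitrary and this is not Google's published architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 128, 128, 1)),
    tf.keras.layers.Conv3D(16, 3, activation="relu"),
    tf.keras.layers.MaxPool3D(2),
    tf.keras.layers.Conv3D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
```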

Google AI has even developed flood forecasting models that can more accurately predict flood timing, location and severity. Google has partnered with India's Central Water Commission to send early flood warnings to the phones of users who might be affected. During the keynote it was announced that the detection and alerting system will be expanded for the upcoming monsoon season, which would help millions of people living along the Ganges and Brahmaputra rivers.

Before saying bye...

Google's keynote at I/O'19 thrilled and fascinated me because it showed how technology can empower and help literally everyone. You can check out the full Google keynote by clicking here. If you liked the post, do share the link with your family and friends, and subscribe to the blog so that you never miss a post from us. So it's a bye from me, and see you in the next post :)