Google Assistant is going global, with new updates announced at Mobile World Congress in Barcelona. These include two new core features, support for more languages, and deeper integrations with third-party devices.
Here’s a look at what’s coming:
Better Experience Across Devices
In the coming weeks, the Assistant will be getting two new features: Routines and location-based reminders.
Routines lets you trigger a series of actions with a single command spoken to the Assistant. In the coming weeks, six Routines will be available in the U.S. For example, saying “Hey Google, I’m home” can trigger the Assistant on Google Home or your phone to turn on the lights, share home reminders, play your favorite music, and more.
Soon, you will also be able to set location-based reminders from a smart speaker, in addition to the existing ability to set them with the Assistant on your phone. For example, ask the Assistant on your smart speaker, such as Google Home, to remind you to pick up milk at the grocery store, and the Assistant on your phone will remind you when you get there.
The Assistant will be available in over 30 languages by the end of the year, a major step up from the eight languages supported today—making it available to 95 percent of all eligible Android phones worldwide.
More languages will be supported in the coming months, and Google will keep adding languages throughout the year. The Assistant will also come to more devices and gain multilingual support, so it can understand you fluently in more than one language.
Google is working with original equipment manufacturers (OEMs) to bring the Assistant’s full capabilities to Android phones. These efforts will be made possible through the new “Assistant Mobile OEM” program.
The program allows OEMs to build deeper integrations between the Assistant and device features, combining natural language commands, hardware-based AI chips, the Assistant’s conversational interface, and other custom integrations.
Such integrations may include device-specific commands and listening for “Hey Google” even when the screen is off. Carriers can also build integrations that let customers make changes to their phone plan, add new services such as international data roaming, or check their bill balance—giving carriers a new way to support their customers while reducing response times.
Additionally, Google will be rolling out ARCore 1.0 and updates to Google Lens at MWC.
ARCore, Google’s augmented reality SDK for Android, helps developers build apps that can understand your environment and place objects and information in it. With version 1.0, developers can publish AR apps to the Play Store that work on 100 million Android smartphones with advanced AR capabilities.
ARCore 1.0 also features improved environmental understanding, letting users place virtual assets on textured surfaces such as posters, furniture, toy boxes, books, cans, and more.
Android Studio Beta now supports ARCore in the Emulator, so you can quickly test apps in a virtual environment right from the desktop.
The following models are supported right now:
- Google Pixel and Pixel XL
- Pixel 2 and Pixel 2 XL
- Samsung Galaxy S8 and S8+
- Galaxy Note8
- Galaxy S7 and S7 edge
- LGE’s V30 and V30+ (Android O only)
- ASUS Zenfone AR
- OnePlus 5
Google Lens uses your phone’s camera to help you understand the world around you. Now, Google is making Lens available in Google Photos, so when you take a picture, you can get more information about what’s in your photo.
In the coming weeks, Lens in the latest version of Google Photos will be available to all English-language users on Android and iOS, and all compatible flagship devices will get the camera-based Lens experience within the Assistant for English-language users, with more devices added over time.
Additionally, improvements such as text selection and the ability to create contacts and events from a photo in one tap have been added, too.
Smarter cameras will enable our smartphones to do more. With ARCore 1.0, developers can start building delightful and helpful AR experiences right now. And Lens, powered by AI and computer vision, makes it easier to search and take action on what you see.
In the coming weeks, Lens will also gain improved support for recognizing common animals and plants, such as different dog breeds and flowers.
[Image: text selection in Google Lens]