Google Assistant Gets Better Across Devices, Comes to iOS; More in ML, VR, and AR at Google I/O

New features for Google Assistant, easier sharing in Google Photos, Google Lens, a jobs feature in Search, and more AI-powered announcements.


Google Assistant—which uses natural conversation to get things done, like controlling smart home devices from Google Home, Android phones, and Android Wear—is now getting new ways to help you do even more.

Google says it now has more than 70 smart home partners supporting Assistant across Home and Android phones, including August locks, TP-Link, Honeywell, Logitech, and LG.

With Assistant, users on Google Home can now schedule calendar appointments, and soon they will be able to create reminders as well. Other new features rolling out soon include:

Make hands-free calls with Home: in the coming months, users in the U.S. and Canada will be able to ask Assistant to connect them to mobile or landline phones.

Enjoy more music, movies, and TV shows: support for Spotify, SoundCloud, Deezer, and Bluetooth will soon come to Home. For streaming, Google is bringing in more partners like HBO NOW, CBS All Access, and HGTV, in addition to the already available Netflix.

Visual responses from Assistant will appear on TVs with Chromecast later this year.

Also, developers can now build conversational apps for Google Assistant on phones.
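Conversational apps like these are commonly built as webhook fulfillment: the Assistant recognizes an intent from the user's speech and sends it, with any extracted parameters, to the developer's server, which replies with the text to speak back. As a rough, hypothetical sketch of that request/response shape (the field and intent names here are illustrative, not Google's actual Actions API):

```python
import json

def handle_request(request: dict) -> dict:
    """Build a spoken response for a recognized intent.

    `request` mimics the general shape of a conversational webhook
    payload: an intent name plus parameters extracted from speech.
    """
    intent = request.get("intent")
    params = request.get("parameters", {})

    if intent == "check_delivery":
        # A hypothetical intent for an ordering app.
        speech = f"Your order for {params.get('item', 'your item')} is on its way."
    else:
        speech = "Sorry, I didn't understand that."

    # The webhook replies with JSON telling Assistant what to say
    # and whether to keep the microphone open for a follow-up.
    return {"speech": speech, "expect_user_response": False}

print(json.dumps(handle_request(
    {"intent": "check_delivery", "parameters": {"item": "a large pizza"}})))
```

The same handler serves phones and Home alike, since the Assistant platform, not the app, deals with speech recognition on each device.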

In addition, there are new ways to keep a conversation going: you can now type to Assistant on phones, and conversation history is coming soon, too.

Google Assistant, which is already available on more than 100 million devices, is now coming to iPhones, too. It will also roll out to new eligible countries by the end of the year, along with support for Italian, Korean, and Spanish.

Google Photos, now used by over 500 million people every month with more than 1.2 billion photos uploaded per day, today received three new sharing features. The first, Suggested Sharing, uses machine learning to remind you to share, automatically select the right photos, and even suggest whom to send them to, based on who is in the photos.

All sharing activity, along with suggestions, will now appear in a new Sharing tab; you just need to select and tap send.

The second, Shared Libraries, lets you automatically send and receive photos with another person. Just grant them access to your full photo library, to photos of certain people, or to photos from a certain date forward. And when someone shares their library with you, you can automatically save their photos so they show up in search and in the movies, collages, and other creations.
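The access grants described above amount to a filter over a photo library: everything, only photos containing certain people, or only photos from a date forward. A minimal sketch, assuming each photo records who appears in it and when it was taken (this data model is invented for illustration, not Google's):

```python
from datetime import date

def shared_photos(library, people=None, start_date=None):
    """Return the photos a partner may see under a sharing grant.

    library:    list of photo dicts with "people" and "taken" fields
    people:     if set, share only photos containing one of these people
    start_date: if set, share only photos taken on or after this date
    (No filters means the full library is shared.)
    """
    shared = []
    for photo in library:
        if people and not (set(photo["people"]) & set(people)):
            continue
        if start_date and photo["taken"] < start_date:
            continue
        shared.append(photo)
    return shared

library = [
    {"name": "beach.jpg", "people": ["Alice"], "taken": date(2017, 5, 1)},
    {"name": "dog.jpg", "people": [], "taken": date(2016, 1, 10)},
]

# Share only photos of Alice taken from May 2017 onward.
print([p["name"] for p in shared_photos(library, people=["Alice"],
                                        start_date=date(2017, 5, 1))])
# → ['beach.jpg']
```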

Google Photos Shared Libraries

Both features will be rolling out on Android, iOS, and the web in the coming weeks.

The third, Photo Books, automatically organizes your library by the people, places, or things that matter. Just select the images, and Google Photos will find the best shots, removing duplicates and poor-quality photos.

Photo Books is available today in the U.S. on the web and will come to Android and iOS next week. Books start at $9.99 for a 20-page softcover and $19.99 for a hardcover.

Google is advancing in machine learning and artificial intelligence (AI), and announced a new visual search tool called Google Lens: a set of vision-based computing capabilities that help your smartphone camera understand what you're looking at and take action.

"If you see a marquee for your favorite band, you can hold up your Assistant, tap the Lens icon and get information on the band, tickets and more," says Google.

The key thing is, "you don't need to learn anything new to make this work"—the interface and the experience can be much more intuitive than copying and pasting across apps on a smartphone, Google said.

The tool will initially be available in Google Assistant and Google Photos, and Google said you can expect it to make its way to other products as well.

Google Lens

Google has also announced a jobs feature in Search, with the goal of helping people find the job postings that are right for them, no matter who they are or what kind of job they're looking for. Launching in the coming weeks, the feature will help people look for jobs across experience and wage levels, including jobs that have traditionally been much harder to search for and classify, like service and retail jobs.

Abu Qader, a high school student in Chicago, taught himself how to use TensorFlow from YouTube videos. He's using machine learning to improve mammography.

That's the motivation behind Google.ai, which pulls all of Google's AI initiatives into one effort. Another approach in this field is a new tool called AutoML, which could make it possible in three to five years for hundreds of thousands of developers to design new neural nets for their particular needs.

In other news, Google shared some new VR and AR innovations at I/O:

First up, more Daydream-ready phones to choose from: the Samsung Galaxy S8 and S8+, as well as LG's next flagship phone, will be available later this year. The Galaxy phones will receive a software update this summer to become Daydream-ready.

Additionally, Daydream will soon support a new category of VR devices called standalone VR headsets, which won't require a phone or PC, as the hardware is fully optimized for VR. They feature a new headset tracking technology called WorldSense.

WorldSense enables positional tracking, meaning the headset tracks your precise movements in space, and it does all this without any external sensors to install.

For augmented reality (AR), the same kind of tracking can be used to enable smartphone AR experiences by placing digital objects in real spaces. The next phone with Tango technology will be the ASUS ZenFone AR, available this summer.

"Tango is also one of the core technologies behind our new Visual Positioning Service (VPS), which helps devices quickly and accurately understand their location indoors."

Google is also bringing AR lessons to schools through its Pioneer Program. With Expeditions AR, "students can gather around the Statue of David, a strand of DNA, or even a whirling Category 5 hurricane without leaving the classroom," writes Google.