
Why Google still needs the cloud even with on-device ML

Google held its big annual hardware event Tuesday in New York to unveil the Pixel 4, Nest Mini, Pixelbook Go, Nest Wifi, and Pixel Buds. The event was mostly predictable, since details about virtually every piece of hardware had leaked months in advance, but if Google’s biggest hardware showcase of the year had an overarching theme, it was the many applications of on-device machine learning. Most of the hardware Google introduced includes a dedicated chip for running AI, continuing an industry-wide trend. The resulting services are ones consumers will no doubt enjoy, but they carry privacy implications too.

The new Nest Mini’s on-device machine learning recognizes your most commonly used voice commands to speed up Google Assistant’s response time compared with the first-generation Home Mini.

In Pixel Buds, due out next year, machine learning helps recognize ambient sound levels and raise or lower volume accordingly, much the way a smartphone screen dims or brightens in sunlight or shade.

Google Assistant on Pixel 4 is faster thanks to an on-device language model. Pixel 4’s Neural Core will power facial recognition for payment verification, Face Unlock, and Frequent Faces, an AI feature that trains your camera to recognize the faces of people you photograph often and then coaches you on taking the best picture of them.

Traditionally, deploying machine learning on-device at the edge means an AI assistant can function without maintaining an internet connection, an approach that avoids sharing user data online or collecting the kind of voice recordings that became one of the most controversial privacy issues for the better part of 2019.

Amid the privacy concerns stemming from routine recording of users’ voices, phrases like “on-device machine learning” and “edge computing” have become synonymous with privacy. That’s why a handful of edge assistants, like Snips, have made privacy a selling point.

Some of Google’s many AI services, like speech recognition powered by the Neural Core processor, can operate entirely on-device, while others, like the new Google Assistant, require connecting to the cloud and sending your data back to the Google mothership.

Today, on-device AI for Google hardware is primarily meant to provide speed gains, Google Nest product manager Chris Chan told VentureBeat.

Tasks like speech recognition and natural language processing can be completed on-device, but they still need the cloud to deliver personalization and stitch together an ecosystem of smart home devices and streaming services like YouTube or Spotify.
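To make that split concrete, here’s a minimal, purely illustrative sketch of how a hybrid assistant might route a request, with simple commands resolved locally and everything else sent to the cloud. Every name in it (handle_utterance, transcribe_on_device, fulfill_in_cloud) is hypothetical and does not correspond to any real Google API.

    # Purely illustrative sketch of hybrid on-device/cloud routing.
    # All names are hypothetical; none correspond to real Google APIs.
    SIMPLE_INTENTS = {"stop", "volume up", "volume down"}  # resolvable locally

    def transcribe_on_device(audio: bytes) -> str:
        """Stand-in for an on-device speech recognizer (fast, keeps audio local)."""
        return "volume up"  # placeholder transcription

    def fulfill_in_cloud(text: str) -> str:
        """Stand-in for a cloud round trip: personalization, other devices, streaming."""
        return f"cloud handled: {text}"

    def handle_utterance(audio: bytes) -> str:
        # Speech recognition happens on-device, so raw audio never leaves the speaker.
        text = transcribe_on_device(audio)
        if text in SIMPLE_INTENTS:
            return f"handled locally: {text}"
        # Anything needing account context or the wider device ecosystem still
        # requires an internet connection, as Chan describes.
        return fulfill_in_cloud(text)

    print(handle_utterance(b"..."))  # -> handled locally: volume up

In a design like this, the speed win comes from skipping the network round trip for common commands, while the cloud remains the source of truth for cross-device context.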

It’s a hybrid model, Chan said. “If you focus too much on commands existing only on that single device, the user then doesn’t benefit from the context of that usage to even other devices, let alone say Nest or Google services when they’re on the go, when they’re in the car, and other environments.”

In the case of on-device ML for Nest Mini, you still need an internet connection to complete a command, he said.

“There are other architectures we could definitely explore over time that might be more distributed or based in the home, but we’re not there yet,” Chan said.

The hybrid approach, as opposed to edge computing that can operate fully offline, raises a question: the hardware package is powerful, so why not go all the way with an offline Google Assistant?

The answer may lie in that controversial collection of people’s voice data.

Leaders of the global smart speaker market and AI assistant market have moved in unison to address people’s privacy concerns.

In response to controversy over humans reviewing voice recordings from popular digital assistants like Siri, Cortana, Google Assistant, and Alexa, Google and Amazon both introduced voice commands that let people delete the day’s voice recordings. They also gave users the ability to automatically delete voice data after three months or 18 months.

So why make data easy to delete on demand but offer only three-month or 18-month windows for automatic deletion?

When VentureBeat asked Alexa chief scientist Rohit Prasad this question, he said that Amazon wants to continue to track trends and follow seasonal changes in queries, and there’s still more work to do to improve Alexa’s conversational AI models.

A Google spokesperson also said the company keeps data to understand seasonal or multi-season trends, but that this could be revisited in the future.

“In our research, we found that these time frames were preferred by users as they’re inclusive of data from an entire season (a three-month period) or multiple seasons (18 months),” the spokesperson said.

Chan said Google users may find more privacy benefits from on-device machine learning in the future.

“It’s our hope that over the coming years that things go entirely local, because then you’re going to get a massive speed benefit, but we’re not there yet,” he said.

As conversational computing becomes a bigger part of people’s lives, why and when tech giants connect assistants to the internet will likely shape people’s perceptions of edge computing and AI privacy. But if competition between tech giants ever turns on making smart home usage more private to meet consumer demand, consumers stand to win.

As always, if you come across a story that merits coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to bookmark our AI Channel and subscribe to the AI Weekly newsletter.

Thanks for reading,

Khari Johnson

Senior AI staff writer


Author: Khari Johnson
Source: VentureBeat
