
Google Assistant and Lens had a quiet 2021 as foundational advancements remain in the wings

For two products that should be very important to Google, Assistant and Lens had a quiet 2021, even compared to the year prior. Fortunately, the lack of movement seems to stem from upcoming advancements that are not quite ready yet.

Looking back at the year, the biggest and most impactful Assistant product move was the introduction of Driving Mode. While the “Android Auto for Phones” replacement first appeared in Google Maps in late 2020, the full experience with a homescreen, originally targeted for summer 2019, did not come to the Google app until this September. Even with that addition, Assistant Driving Mode has not won over many users.

When it was first announced at I/O 2019, an Assistant-branded product replacing the Android version felt like a big sign of where the center of power at Google was shifting. At that time, it seemed that Google wanted Assistant to do and encompass everything, as voice was the interaction method of the future. The smart assistant would connect all existing Google products and even form factors that had yet to arrive.

A few years later, Assistant seems best suited for phones and Smart Displays/speakers. It’s the best way to control the latter form factor, and the vast majority of phone users encounter situations daily that call for hands-free usage. However, Assistant is far from being the primary interface that science fiction, including the Star Trek computer that inspired Google, imagined. Touch input is just faster, even before you consider that today’s smart assistants are not all that capable and only work within constrained workflows.

I believe Google is well aware of the limitations of its smart assistant today. At I/O 2021, the company previewed LaMDA (Language Model for Dialogue Applications) as a “breakthrough conversation technology” that can “engage in a free-flowing way about a seemingly endless number of topics, an ability [they] think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications.”

It’s not hard to imagine an advancement like this being destined for Google Assistant, though it could also be aimed at Search. Hopefully, this breakthrough is what will allow people to naturally talk to computers instead of requiring them to adapt to their devices. The success of LaMDA would go a long way toward furthering Made by Google’s “ambient computing” vision, where any device in your proximity would be able to adequately carry out a task.

That said, while we’re waiting for that core foundational advancement, the bleeding edge of Assistant development remains limited to Pixel phones. Just over two years after the next-generation Google Assistant (NGA) launched on the Pixel 4, the on-device processing that speeds up voice recognition and allows offline commands remains limited to a subset of phones. As such, only Pixel users have Assistant voice typing and the ability to skip “Hey Google” via Continued Conversation and Quick Phrases. More importantly, this limitation means third-party developers have little incentive to make their apps and tasks easy to navigate with just voice and an always-listening microphone. Incentivizing that development would do a great deal to advance the usefulness of voice assistants and make them a truly new interaction method that could rival touch and other physical input.

It remains to be seen when the NGA will become more widely available on other phones, though we did report that it might come to the Pixel Watch next year. The smart assistant experience on wearables needs to greatly improve. As of the Wear OS 2-to-3 transition, touch interaction is the primary way Google wearable owners interact with their devices due to the inherent unreliability and limits of Assistant on the form factor. The screen is already tiny, and voice would go a long way toward making smartwatches more capable.

Meanwhile, Assistant on headphones is just an extension of the connected phone experience, while the Chrome OS version raises the question of why it exists at all. Using Assistant on Chromebooks is simply not the fastest way to accomplish any task, and the experience badly needs a rethink if Microsoft’s Cortana retreat on Windows is any indication.

Voice — barring future advancements in brain-computer interfaces — will be very important for smart glasses. Yes, you could put a touchpad on the stem or control the device via a smartwatch or other touch surface on your wrist, but verbal commands will likely still be the most natural interaction method for something that we wear on our face. Regardless of the current state of voice assistants, their continued innovation (for Google and others) is crucial to what comes next in technology.

The other crucial technology required is visual search and awareness. Google Lens had a quiet year after a big 2020 that saw a visual redesign, new Homework and Place filters, and the useful OCR “copy to computer” capability.

In comparison, Lens in 2021 got a tweaked icon, a new UI that prioritizes analyzing existing images over live capture, and more prominence in the Pixel Launcher. The visual search tool is also coming to desktop Chrome in a notable expansion. The biggest development was a preview of how a coming foundational Multitask Unified Model (MUM) upgrade will let you take a photo and ask questions about it.

One example Google offered was taking a picture of a broken bike part whose name you don’t know and asking “how do you fix this.”

As Google explained: “By combining images and text into a single query, we’re making it easier to search visually and express your questions in more natural ways.”

This is a very promising development and a clear shoo-in for a key glasses interaction method.

MUM in Lens will be available in the coming months and will be Google’s latest way to get people to use its visual search tool, which now has competition from iOS 15’s Visual Look Up. The company did share earlier this year that Lens is used 3 billion times per month. While it’s chock-full of useful utilities like OCR and translation, it’s still looking for a killer use case on phones.

At Google, stagnation is sometimes the sign of a product no longer seeing active development or interest from the powers that be. In this case, Assistant and Lens remain vitally important to Google’s future, and this year can be chalked up to temporary stagnation as the company runs up against the limits of what’s currently achievable with today’s technology.



Author: Abner Li
Source: 9to5Google
