
Google details how it’s using AI and machine learning to improve search

During a livestreamed event this afternoon, Google detailed the ways it’s applying AI and machine learning to improve the Google Search experience.

Soon, Google says, users will be able to see how busy places are in Google Maps without having to search for specific beaches, parks, grocery stores, gas stations, laundromats, pharmacies, or other businesses, an expansion of Google’s existing busyness metrics. The company also says it’s adding COVID-19 safety information to business profiles across Search and Maps, revealing whether businesses are using safety precautions like temperature checks, plexiglass barriers, and more.

An algorithmic improvement to “Did you mean,” Google’s spell-checking feature for Search, will enable more accurate and precise spelling suggestions. Google says the new underlying language model contains 680 million parameters — the variables that determine each prediction — and runs in less than three milliseconds. “This single change makes a greater improvement to spelling than all of our improvements over the last five years,” Prabhakar Raghavan, head of Search at Google, said in a blog post.
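
As a rough illustration of the idea (not Google’s 680-million-parameter neural model), a dictionary-based corrector can rank candidate spellings within one edit of the typed word by how common they are. The word list, frequencies, and timing below are toy assumptions for illustration only.

```python
# A minimal sketch of dictionary-based spelling suggestion, in the spirit of
# "Did you mean". This is an illustrative toy, not Google's production model;
# the word list and frequencies are made up.
import time
from collections import Counter

# Hypothetical corpus standing in for real word-frequency data.
CORPUS = "google search engine machine learning neural network spelling correction".split()
WORD_FREQ = Counter(CORPUS)

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one edit (delete, swap, replace, insert) away from `word`."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    swaps = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in LETTERS]
    inserts = [L + c + R for L, R in splits for c in LETTERS]
    return set(deletes + swaps + replaces + inserts)

def suggest(word):
    """Return the highest-frequency known word within one edit, if any."""
    candidates = [w for w in edits1(word) if w in WORD_FREQ] or [word]
    return max(candidates, key=lambda w: WORD_FREQ[w])

start = time.perf_counter()
print(suggest("serch"))  # -> "search"
print(f"{(time.perf_counter() - start) * 1000:.2f} ms")
```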

Beyond this, Google says it can now index individual passages from webpages as opposed to whole pages. When this rolls out fully, it’ll improve roughly 7% of search queries across all languages, the company claims. A complementary AI component will help Search capture the nuances of what webpages are about, ostensibly leading to a wider range of results for search queries.
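
To picture what passage-level indexing means in practice, here is a toy sketch that splits pages into overlapping word windows and ranks those windows, rather than whole pages, against a query. The sample pages and bag-of-words scoring are invented for illustration and are not Google’s system.

```python
# A minimal sketch of passage-level scoring: instead of ranking whole pages,
# split each page into passages and rank those against the query.
import re

PAGES = {
    "diy-page": (
        "Here are some general home improvement tips. "
        "To check if your window glass is UV protected, use a UV flashlight and a banknote. "
        "Also remember to reseal your deck every few years."
    ),
    "glass-cleaning-page": (
        "How to clean glass windows without streaks. "
        "Mix vinegar with water and wipe with a microfiber cloth."
    ),
}

def passages(text, size=20):
    """Split text into overlapping word windows (a crude stand-in for passages)."""
    words = re.findall(r"\w+", text.lower())
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def score(query, passage):
    """Fraction of query terms present in the passage."""
    q = set(re.findall(r"\w+", query.lower()))
    return len(q & set(passage.split())) / len(q)

query = "how to tell if windows are uv protected"
ranked = sorted(
    ((score(query, p), page, p) for page, text in PAGES.items() for p in passages(text)),
    reverse=True,
)
# The best hit is a single passage, even if its page is mostly about something else.
print(ranked[0])
```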

“We’ve applied neural nets to understand subtopics around an interest, which helps deliver a greater diversity of content when you search for something broad,” Raghavan continued. “As an example, if you search for ‘home exercise equipment,’ we can now understand relevant subtopics, such as budget equipment, premium picks, or small space ideas, and show a wider range of content for you on the search results page.”
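
A rough way to picture subtopic grouping: embed the result snippets, cluster them, and surface at least one result per cluster. The sketch below uses TF-IDF and k-means as a stand-in for the neural models Raghavan describes; the snippets are made up.

```python
# A minimal sketch of grouping search results into subtopics so the results
# page covers a broader range (budget picks, premium picks, small-space ideas).
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

snippets = [
    "cheap resistance bands and budget dumbbells for home workouts",
    "budget home gym equipment under 100 dollars",
    "premium smart exercise bike with live classes",
    "high-end rowing machine picks for serious training",
    "folding treadmill ideas for small apartments",
    "compact exercise equipment that fits in a small space",
]

vectors = TfidfVectorizer().fit_transform(snippets)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Show one representative result per subtopic cluster for diversity.
seen = set()
for label, snippet in zip(labels, snippets):
    if label not in seen:
        seen.add(label)
        print(label, snippet)
```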

Google’s also bringing Data Commons, its open knowledge repository that combines data from public datasets (e.g., COVID-19 stats from the U.S. Centers for Disease Control and Prevention) using mapped common entities, to search results on the web and mobile. In the near future, users will be able to search for topics like “employment in Chicago” on Search to see information in context.
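
Conceptually, Data Commons joins different public datasets on shared, canonical entity IDs so related facts can be shown together. The sketch below mimics that join with toy dictionaries; the figures are placeholders rather than real Data Commons values, and the ID format is shown only for illustration.

```python
# A minimal sketch of the idea behind Data Commons: separate public datasets
# are merged on a shared entity ID so facts appear in context.
labor_stats = {"geoId/1714000": {"unemployment_rate": 4.2}}   # hypothetical BLS-style figures
health_stats = {"geoId/1714000": {"covid_cases": 12345}}      # hypothetical CDC-style figures
entity_names = {"geoId/1714000": "Chicago"}

def facts_for(entity_id):
    """Merge every dataset's facts for one mapped entity."""
    merged = {"name": entity_names.get(entity_id)}
    for dataset in (labor_stats, health_stats):
        merged.update(dataset.get(entity_id, {}))
    return merged

print(facts_for("geoId/1714000"))  # e.g., supporting a query like "employment in Chicago"
```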

On the ecommerce and shopping front, Google says it’s built cloud streaming technology that enables users to see products in augmented reality (AR). With cars from Volvo, Porsche, and other “top” auto brands, for example, users can view a vehicle to scale in their driveway on their smartphones and zoom in on details like the steering wheel. Separately, Google Lens on the Google app or Chrome on Android (and soon iOS) will let shoppers discover similar products by tapping on elements like knits, ruffled sleeves, and more.

Above: Augmented reality previews in Google Search. Image Credit: Google

In another addition to Search, Google says it will deploy a feature that highlights notable points in videos — for example, a screenshot comparing different products or a key step in a recipe. (Google expects 10% of searches will use this technology by the end of 2020.) And Live View in Maps, a tool that taps AR to provide turn-by-turn walking directions, will enable users to quickly see information about restaurants including how busy they tend to get and their star ratings.

Lastly, Google says it’ll let users search for songs by simply humming or whistling melodies, initially in English on iOS and more than 20 languages on Android. On a smartphone, opening the latest version of the Google app or Search widget, tapping the mic icon, and saying “What’s this song?” or selecting the “Search a song” button will launch the feature, which requires at least 10 to 15 seconds of humming or whistling.

“After you’re finished humming, our machine learning algorithm helps identify potential song matches,” Google wrote in a blog post. “We’ll show you the most likely options based on the tune. Then you can select the best match and explore information on the song and artist, view any accompanying music videos or listen to the song on your favorite music app, find the lyrics, read analysis and even check out other recordings of the song when available.”

Google says that melodies hummed into Search are transformed by machine learning algorithms into a number-based sequence representing the song’s melody. The models are trained to identify songs based on a variety of sources, including humans singing, whistling, or humming, as well as studio recordings. They also strip away other details, like accompanying instruments and the voice’s timbre and tone, leaving a fingerprint that Google compares with thousands of songs from around the world to identify potential matches in real time, much like the Pixel’s Now Playing feature.
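
As a rough, non-authoritative sketch of the idea, a melody can be reduced to the sequence of steps between notes, which makes it independent of key, and then compared against stored contours. The reference melodies and matching function below are toy stand-ins for Google’s actual pitch models and song index.

```python
# A minimal sketch of melody matching in the spirit of hum-to-search: reduce a
# pitch sequence to a key-invariant contour (semitone steps) and compare it to
# stored "fingerprints". Real systems extract pitch from audio with learned
# models; the reference melodies here are toy data.
def contour(midi_notes):
    """Number-based melody representation: semitone steps between notes."""
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

def distance(a, b):
    """Crude mismatch count between two contours of similar length."""
    n = min(len(a), len(b))
    return sum(x != y for x, y in zip(a[:n], b[:n])) + abs(len(a) - len(b))

REFERENCE = {
    "Twinkle Twinkle Little Star": [60, 60, 67, 67, 69, 69, 67],
    "Happy Birthday": [60, 60, 62, 60, 65, 64],
}

hummed = [62, 62, 69, 69, 71, 71, 69]  # same tune as Twinkle, hummed in a different key

best = min(REFERENCE, key=lambda title: distance(contour(hummed), contour(REFERENCE[title])))
print(best)  # -> "Twinkle Twinkle Little Star"
```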

“From new technologies to new opportunities, I’m really excited about the future of search and all of the ways that it can help us make sense of the world,” Raghavan said.

Last month, Google announced it will begin showing quick facts related to photos in Google Images, enabled by AI. Starting in the U.S. in English, users who search for images on mobile might see information from Google’s Knowledge Graph — Google’s database of billions of facts — including people, places, or things germane to specific pictures.

Google also recently revealed it’s using AI and machine learning techniques to more quickly detect breaking news around crises like natural disasters. In a related development, Google said it launched an update using language models to improve the matching between news stories and available fact checks.

In 2019, Google peeled back the curtains on its efforts to solve query ambiguities with a technique called Bidirectional Encoder Representations from Transformers, or BERT for short. BERT, which emerged from the tech giant’s research on Transformers, forces models to consider the context of a word by looking at the words that come before and after it. According to Google, BERT helped Google Search better understand 10% of queries in the U.S. in English — particularly longer, more conversational searches where prepositions like “for” and “to” matter a lot to the meaning.
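
To see what bidirectional context buys, here is a small example using the public bert-base-uncased checkpoint via the Hugging Face transformers library, an open model in the BERT family rather than Google Search’s production stack: the model fills in a masked word using the words on both sides of it.

```python
# A minimal sketch of BERT's bidirectional masked-word prediction using the
# open bert-base-uncased checkpoint (not Google's production Search system).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# The preposition matters to the meaning here; the right-hand context
# ("... the pharmacy") helps the model choose it, not just the left-hand words.
for prediction in fill("can you get medicine for someone [MASK] the pharmacy?", top_k=3):
    print(prediction["token_str"], round(prediction["score"], 3))
```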

BERT is now used in every English search, Google says, and it’s deployed across languages including Spanish, Portuguese, Hindi, Arabic, and German.


Author: Kyle Wiggers
Source: VentureBeat

