
Google has everything it needs to counter ChatGPT – here’s what it’s already shown off

ChatGPT’s ability to respond to questions in a conversational and direct manner has led some to proclaim that AI chat will kill the traditional search engine. Google is taking this seriously, and – from what it’s already shown off – should be more than able to compete. The open question is the user experience.

Questions and answers

Fundamentally, Google’s mission “to organize the world’s information and make it universally accessible and useful” can be split into two components. 

Users ask questions and Google provides answers. Queries – first keywords, then naturally phrased questions – were originally typed into a box, and later spoken. Answers started out as links to websites that might have relevant information, but they too evolved.

Google started providing immediate answers to simpler questions that are more or less matters of fact, using information from databases, listings, and, more often than not, Wikipedia. This shift to direct responses coincided with smartphones, with their relatively small screens, becoming the primary device. Then came wearables and other audio-first devices like smart speakers and displays.

Other questions cannot be answered as easily, but Google still tries, using what it calls a Featured Snippet: a direct quote from a website that it thinks will answer your question. In recent years, Google has been criticized for these Snippets from all sides. It sometimes quotes a source that is plainly wrong, while the owners of that content accuse Google of stealing clicks to keep users on Search.

That same type of complex question is something ChatGPT excels at, since it can generate an answer itself instead of sending you somewhere else. Early users have taken to this and believe the future of search involves getting direct answers every time through a back-and-forth exchange, with the ability to ask follow-ups. In fact, ChatGPT can also ask you to clarify your query as needed. Meanwhile, it can debug code, write essays (with the ability to specify the number of paragraphs), summarize, explain, and much more.

What Google has

LaMDA

Google has been working on the same language model technology underpinning ChatGPT for some time, albeit in a less flashy manner. In fact, it has given its work on natural language understanding (NLU) and large language models central billing at its I/O developer conference two years in a row.

LaMDA (Language Model for Dialog Applications) is Google’s “most advanced conversational AI yet.” It was unveiled at I/O 2021 “to converse on any topic,” with the caveat that it was still in the R&D phase. Google’s examples of talking to the planet Pluto and a paper airplane were meant to demonstrate how LaMDA has “picked up on several of the nuances that distinguish open-ended conversation,” including sensible and specific responses that encourage further back-and-forth. 

Other qualities Google wants are “interestingness” (whether responses are insightful, unexpected, or witty) and “factuality,” or sticking to facts. 

A year later, LaMDA 2 was announced, and Google started letting the public experience three specific examples of LaMDA with the AI Test Kitchen app.

MUM

Besides LaMDA, Google has highlighted multimodal models that “allow people to naturally ask questions across different types of information” with MUM (Multitask Unified Model). Of note is the example query Google offered that can’t be answered by a search engine today, but is something this new technology can tackle:

I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?

MUM would understand that you’re comparing two mountains, and that the time range you provided falls in Mt. Fuji’s rainy season, thus requiring waterproof equipment. It could surface articles written in Japanese, where there’s more local info, while the most impressive example was more or less tied to Google Lens:

So now imagine taking a photo of your hiking boots and asking, “Can I use these to hike Mt. Fuji?” MUM would be able to understand the content of the image and the intent behind your query, let you know that your hiking boots would work just fine, and then point you to a list of recommended gear and a Mt. Fuji blog.

That was still an exploratory query, but more concretely, Google has announced that it’s adding MUM to Lens so that you can take a picture of a broken bicycle part (even one you can’t identify) and get instructions on how to repair it.

PaLM

If MUM allows questions to be asked in a variety of mediums and LaMDA can carry a conversation, PaLM (Pathways Language Model) is what can answer questions. It was announced in April and received an on-stage mention at I/O. PaLM is able to do:

Question Answering, Semantic Parsing, Proverbs, Arithmetic, Code Completion, General Knowledge, Reading Comprehension, Summarization, Logical Inference Chains, Common-Sense Reasoning, Pattern Recognition, Translation, Dialogue, Joke Explanations, Physics QA, and Language Understanding.

It’s powered by a next-gen AI architecture called Pathways that can “train a single model to do thousands or millions of things,” compared to the current approach of training highly individualized models for each task.

Down to the products 

When Google announced LaMDA in 2021, Sundar Pichai said its “natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use.”

Google Assistant, Search, and Workspace were specifically name-checked as products where it hopes to “incorporat[e] better conversational features.” Google could also offer “capabilities to developers and enterprise customers.”

In this post-ChatGPT world, more than a few people have commented that direct responses could harm Google’s ad-based business model, with the thinking being that people would no longer need to click on links if they already got the answer. In the examples Google has provided, there’s no indication that it wants to stop linking out to content. 

There are big safety and accuracy concerns, which Google has always emphasized when demoing this technology. The fact that these models “can make stuff up” seems to be the biggest bottleneck of all.

Meanwhile, it’s not clear if people want every interaction with a search engine to be a conversation. That said, Google has acknowledged internally that the conversational approach “really strikes a need that people seem to have.”

Google is said to be in a “code red” over ChatGPT and has reassigned various teams to work on competing AI products and demos. Another showing of the technology at I/O 2023 is more than likely, but whether that means LaMDA, MUM, and PaLM are going to be prominently integrated into Google’s biggest products is up in the air.

Back in May, Pichai reiterated that “conversation and natural language processing are powerful ways to make computers more accessible to everyone.” From everything the company has previewed, the end goal is to make Google Search able to answer questions like a human.

Unsurprisingly, Google has the technology to get there, but the company’s eternal challenge is turning R&D into actual products, and rushing does not seem wise for a search engine the world relies on to be consistently correct.



Author: Abner Li
Source: 9TO5Google
