
Google demos LaMDA, a next-generation AI for dialogue

During a keynote address at Google I/O 2021, Google’s annual developer conference, the company announced LaMDA, a sophisticated language model that Google says is far better at understanding the context of conversations than leading models today. Google CEO Sundar Pichai says that LaMDA, which was built for dialogue applications, is open domain and designed to converse on any topic. Importantly, the model doesn’t have to be retrained to have another conversation.

“While it’s still in research and development, we’ve been using LaMDA internally to explore novel interactions,” Pichai said. “For example, say you wanted to learn about one of my favorite dwarf planets — Pluto. LaMDA already understands quite a lot about Pluto and millions of other topics.”

Data scientists don’t have to hand-program concepts into LaMDA, because none of the model’s responses are predefined. LaMDA attempts to answer with sensible responses, keeping the dialogue open-ended and generating natural conversations that never take the same path twice.
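
LaMDA itself isn't publicly available, so the open-ended, multi-turn behavior described above can't be demonstrated directly. As a rough illustration of the same interaction pattern, the sketch below uses an open conversational model (BlenderBot, via Hugging Face's transformers library) as a stand-in; the model choice and the prompts are assumptions for illustration, not LaMDA's actual interface.

  # Illustrative sketch only: LaMDA is not public, so an open dialogue
  # model stands in for the open-domain, multi-turn behavior described above.
  from transformers import Conversation, pipeline

  # Load an open-domain conversational model (a stand-in, not LaMDA).
  chatbot = pipeline("conversational", model="facebook/blenderbot-400M-distill")

  # Start a conversation on an arbitrary topic; no per-topic retraining needed.
  conversation = Conversation("Tell me something interesting about Pluto.")
  conversation = chatbot(conversation)
  print(conversation.generated_responses[-1])

  # Follow-up turns reuse the same model plus the accumulated dialogue context.
  conversation.add_user_input("Would I be cold if I visited?")
  conversation = chatbot(conversation)
  print(conversation.generated_responses[-1])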

“It’s really impressive to see how LaMDA can carry on a conversation about any topic,” Pichai said. But he admitted that the model isn’t perfect. “It’s still early research, so it doesn’t get everything right. Sometimes it can give nonsensical responses,” he said.

MUM

In a related announcement, Google detailed MUM, a multimodal model that can multitask to unlock information in new ways. Trained across more than 75 languages at once, MUM can understand different forms of information simultaneously, including text, images, and video.

It’s Google’s assertion that MUM can understand questions like “I want to hike to Mount Fuji next fall — what should I do to prepare?” Because of its multimodal capabilities, MUM realizes that “prepare” could encompass everything from fitness training to weather. The model could then recommend that the questioner bring a waterproof jacket and point them to relevant articles, videos, and images from across the web to explore the topic further.

MUM can also draw on context from imagery and earlier dialogue turns. Given a photo of hiking boots and asked “Can I use this to hike Mount Fuji?”, MUM can comprehend both the content of the image and the intent behind the query, letting the questioner know that hiking boots would be appropriate and pointing them toward a post on a Mount Fuji blog.
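
MUM is likewise still an internal research model, but the image-plus-text query pattern described here resembles visual question answering (VQA). As a rough sketch under that assumption, the example below runs the hiking-boots question through an open VQA model (ViLT, via Hugging Face's transformers); the checkpoint and the image path are illustrative placeholders, not Google's system.

  # Illustrative sketch only: MUM is not public; an open visual question
  # answering (VQA) model stands in for the image-plus-text query pattern.
  from PIL import Image
  from transformers import pipeline

  # Load a public VQA checkpoint (an assumption for illustration).
  vqa = pipeline("visual-question-answering",
                 model="dandelin/vilt-b32-finetuned-vqa")

  # "hiking_boots.jpg" is a placeholder path for the user's photo.
  image = Image.open("hiking_boots.jpg")
  answers = vqa(image=image, question="Can I use these to hike Mount Fuji?")

  # The pipeline returns candidate answers ranked by confidence score.
  for candidate in answers:
      print(candidate["answer"], candidate["score"])

A classifier-style VQA model like ViLT picks from a fixed answer vocabulary (here, most likely a plain "yes"), whereas Google describes MUM going further by pointing the questioner to relevant content across the web.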

Google says it’s already started internal pilots to see the types of queries that MUM might be able to solve.



Author: Kyle Wiggers
Source: VentureBeat
