
Why DeepMind isn’t deploying its new AI chatbot — and what it means for responsible AI



DeepMind’s new AI chatbot, Sparrow, is being hailed as an important step towards creating safer, less-biased machine learning systems, thanks to its use of reinforcement learning based on feedback from human research participants during training. 

The UK-based subsidiary of Google parent company Alphabet says Sparrow is a “dialogue agent that’s useful and reduces the risk of unsafe and inappropriate answers.” The agent is designed to “talk with a user, answer questions and search the internet using Google when it’s helpful to look up evidence to inform its responses.” 

But DeepMind considers Sparrow a research-based, proof-of-concept model that is not ready to be deployed, said Geoffrey Irving, safety researcher at DeepMind and lead author of the paper introducing Sparrow.

“We have not deployed the system because we think that it has a lot of biases and flaws of other types,” said Irving. “I think the question is, how do you weigh the communication advantages — like communicating with humans — against the disadvantages? I tend to believe in the safety needs of talking to humans … I think it is a tool for that in the long run.” 


Irving also declined to weigh in on the possible path for enterprise applications built on Sparrow – whether it will ultimately be most useful for general digital assistants such as Google Assistant or Alexa, or for specific vertical applications. 

“We’re not close to there,” he said. 

DeepMind tackles dialogue difficulties

One of the main difficulties with any conversational AI is around dialogue, Irving said, because there is so much context that needs to be considered.  

“A system like DeepMind’s AlphaFold is embedded in a clear scientific task, so you have data like what the folded protein looks like, and you have a rigorous notion of what the answer is – such as did you get the shape right,” he said. But in general cases, “you’re dealing with mushy questions and humans – there will be no full definition of success.” 

To address that problem, DeepMind turned to a form of reinforcement learning based on human feedback. It used the preferences of paid study participants (recruited via a crowdsourcing platform) to train a model of how useful an answer is.
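To make the idea concrete, here is a minimal sketch of preference-based reward modeling in the general spirit of reinforcement learning from human feedback. It is purely illustrative: the class names, architecture, embedding size and training loop are assumptions for the example, not DeepMind’s actual Sparrow implementation.

```python
# Illustrative sketch only: train a small "reward model" to score responses,
# using pairwise human preferences (preferred vs. rejected answers).
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores an encoded (dialogue, response) pair; higher = judged more useful."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(embed_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.scorer(features).squeeze(-1)

def preference_loss(score_preferred: torch.Tensor,
                    score_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry-style objective: push the human-preferred response
    # to score higher than the rejected one.
    return -torch.nn.functional.logsigmoid(score_preferred - score_rejected).mean()

# Toy training step on random "embeddings" standing in for encoded dialogue turns.
model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

preferred = torch.randn(32, 128)   # responses raters marked as more useful
rejected = torch.randn(32, 128)    # responses raters marked as less useful

loss = preference_loss(model(preferred), model(rejected))
loss.backward()
optimizer.step()
```

In a full RLHF pipeline, a reward model like this would then guide further fine-tuning of the dialogue agent; the sketch stops at the preference-learning step the article describes.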

To make sure that the model’s behavior is safe, DeepMind defined an initial set of rules for it, such as “don’t make threatening statements” and “don’t make hateful or insulting comments,” along with rules around potentially harmful advice and other rules informed by existing work on language harms and by consultation with experts. A separate “rule model” was trained to indicate when Sparrow’s behavior breaks any of the rules. 
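A hedged sketch of what such a separate rule model could look like follows: a small classifier that, given an encoded candidate response, outputs a violation probability per rule. The rule list, class names and dimensions are hypothetical stand-ins, not Sparrow’s actual rule model.

```python
# Illustrative sketch only: a per-rule violation classifier for candidate responses.
import torch
import torch.nn as nn

RULES = [
    "no threatening statements",
    "no hateful or insulting comments",
    "no potentially harmful advice",
]

class RuleModel(nn.Module):
    """Predicts, for each rule, the probability that a response violates it."""
    def __init__(self, embed_dim: int = 128, num_rules: int = len(RULES)):
        super().__init__()
        self.classifier = nn.Linear(embed_dim, num_rules)

    def forward(self, response_features: torch.Tensor) -> torch.Tensor:
        # One violation probability per rule.
        return torch.sigmoid(self.classifier(response_features))

rule_model = RuleModel()
features = torch.randn(1, 128)          # stand-in for an encoded candidate response
violation_probs = rule_model(features)  # shape: (1, len(RULES))

for rule, prob in zip(RULES, violation_probs[0].tolist()):
    print(f"{rule}: {prob:.2f}")
```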

Bias in the ‘human loop’

Eugenio Zuccarelli, an innovation data scientist at CVS Health and research scientist at MIT Media Lab, pointed out that there still could be bias in the “human loop” – after all, what might be offensive to one person might not be offensive to another. 

Also, he added, rule-based approaches can make rules more stringent but lack scalability and flexibility. “It is difficult to encode every rule that we can think of, especially as time passes, these might change, and managing a system based on fixed rules might impede our ability to scale up,” he said. “Flexible solutions where the rules are learnt directly by the system and adjusted as time passes automatically would be preferred.” 

He also pointed out that a rule hardcoded by a person or a group of people might not capture all the nuances and edge cases. “The rule might be true in most cases, but not capture rarer and perhaps sensitive situations,” he said. 

Google searches, too, may not be entirely accurate or unbiased sources of information, Zuccarelli continued. “They are often a representation of our personal characteristics and cultural predispositions,” he said. “Also, deciding which one is a reliable source is tricky.”

DeepMind: Sparrow’s future

Irving did say that the long-term goal for Sparrow is to be able to scale to many more rules. “I think you would probably have to become somewhat hierarchical, with a variety of high-level rules and then a lot of detail about particular cases,” he explained. 

He added that in the future the model would need to support multiple languages, cultures and dialects. “I think you need a diverse set of inputs to your process – you want to ask a lot of different kinds of people, people that know what the particular dialogue is about,” he said. “So you need to ask people about language, and then you also need to be able to ask across languages in context – so you don’t want to think about giving inconsistent answers in Spanish versus English.” 

Mostly, Irving said he is “singularly most excited” about developing the dialogue agent towards increased safety. “There are lots of either boundary cases or cases that just look like they’re bad, but they’re sort of hard to notice, or they’re good, but they look bad at first glance,” he said. “You want to bring in new information and guidance that will deter or help the human rater determine their judgment.” 

The next aspect, he continued, is to work on the rules: “We need to think about the ethical side – what is the process by which we determine and improve this rule set over time? It can’t just be DeepMind researchers deciding what the rules are, obviously – it has to incorporate experts of various types and participatory external judgment as well.”

Zuccarelli emphasized that Sparrow is “for sure a step in the right direction,” adding that responsible AI needs to become the norm. 

“It would be beneficial to expand on it going forward trying to address scalability and a uniform approach to consider what should be ruled out and what should not,” he said. 



Author: Sharon Goldman
Source: VentureBeat
