
AI Weekly: LaMDA’s ‘sentient’ AI debate triggers memories of IBM Watson

Want AI Weekly for free each Thursday in your inbox? Sign up here.

This week, I jumped into the deep end of the LaMDA ‘sentient’ AI hoo-hah.

I thought about what enterprise technical decision-makers need to think about (or not). I learned a bit about how LaMDA triggers memories of IBM Watson.

Finally, I decided to ask Alexa, who sits on top of an upright piano in my living room.

Me: “Alexa, are you sentient?”

Alexa: “Artificially, maybe. But not in the same way you’re alive.”

Well, then. Let’s dig in.

This Week’s AI Beat

On Monday, I published “‘Sentient’ artificial intelligence: Have we reached peak AI hype?” – an article detailing last weekend’s Twitter-fueled discourse that began with the news that Google engineer Blake Lemoine had told the Washington Post that he believed LaMDA, Google’s conversational AI for generating chatbots based on large language models (LLMs), was sentient.

Hundreds from the AI community, from AI ethics experts Margaret Mitchell and Timnit Gebru to computational linguistics professor Emily Bender and machine learning pioneer Thomas G. Dietterich, pushed back on the “sentient” notion and clarified that no, LaMDA is not “alive” and won’t be eligible for Google benefits anytime soon.

But I spent this week mulling over the mostly breathless media coverage and thinking about enterprise companies. Should they be concerned about customer and employee perceptions of AI as a result of this sensational news cycle? Was a focus on “smart” AI simply a distraction from more immediate issues around the ethics of how humans use “dumb” AI? What steps, if any, should companies take to increase transparency?

Reminiscent of reaction to IBM Watson

According to David Ferrucci, founder and CEO of AI research and technology company Elemental Cognition, who previously led the team of IBM and academic researchers and engineers that developed IBM Watson, the system that won Jeopardy in 2011, LaMDA appeared human in a way that triggered empathy – just as Watson did over a decade ago.

“When we created Watson, we had someone who posted a concern that we had enslaved a sentient being and we should stop subjecting it to continuously playing Jeopardy against its will,” he told VentureBeat. “Watson was not sentient – when people perceive a machine that speaks and performs tasks humans can perform and in apparently similar ways, they can identify with it and project their thoughts and feelings onto the machine – that is, assume it is like us in more fundamental ways.”

Don’t hype the anthropomorphism

Companies have a responsibility to explain how these machines work, he emphasized. “We all should be transparent about that, rather than hype the anthropomorphism,” he said. “We should explain that language models are not feeling beings but rather algorithms that tabulate how words occur in large volumes of human-written text—how some words are more likely to follow others when surrounded by yet others. These algorithms can then generate sequences of words that mimic how a human would sequence words, without any human thought, feeling, or understanding of any kind.”
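To make Ferrucci’s point concrete, here is a deliberately toy sketch – a bigram counter, not LaMDA or any real large language model, with a made-up miniature corpus – showing how simply tabulating which words tend to follow which can produce fluent-looking sequences with no thought or understanding behind them:

```python
import random
from collections import Counter, defaultdict

# A tiny illustrative corpus (an assumption for this sketch, not real training data).
corpus = (
    "the machine answers questions . the machine generates text . "
    "the model predicts the next word . the model generates text ."
).split()

# "Tabulate how words occur": count which words follow each word.
next_counts = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    next_counts[current_word][following_word] += 1

def generate(start="the", length=8):
    """Sample a sequence by repeatedly choosing a statistically likely next word."""
    words = [start]
    for _ in range(length):
        counts = next_counts.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(list(choices), weights=weights, k=1)[0])
    return " ".join(words)

print(generate())  # e.g. "the model generates text . the machine answers questions"
```

Production language models replace this counting with neural networks trained on vastly more text, but the principle Ferrucci describes – predicting likely next words from statistics over human writing – is the same.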

LaMDA controversy is about humans, not AI

Kevin Dewalt, CEO of AI consultancy Prolego, insists that the LaMDA hullabaloo isn’t about AI at all. “It’s about us, people’s reaction to this emerging technology,” he said. “As companies deploy solutions that perform tasks traditionally done by people, employees that engage with them will freak out.” And, he added: “If Google isn’t ready for this challenge, you can be quite sure that hospitals, banks, and retailers will encounter massive employee revolt. They’re not ready.”

So what should organizations be doing to prepare? Dewalt said companies need to anticipate this objection and overcome it in advance. “Most are struggling to get the technology built and deployed, so this risk isn’t on their radar, but Google’s example illustrates why it needs to be,” he said. “[But] nobody is worried about this, or even paying attention. They’re still trying to get the basic technology working.”

Focus on what AI can actually do

However, while some have focused on the ethics of possible “sentient” AI, AI ethics today is focused on human bias and how human programming impacts the current “dumb” AI, says Bradford Newman, partner at law firm Baker McKenzie, who spoke to me last week about the need for organizations to appoint a chief AI officer. And, he points out, AI ethics related to human bias is a significant issue that is actually happening now, as opposed to “sentient” AI, which is not happening now or anytime remotely soon.

“Companies should always be considering how any AI application that is customer or public-facing can negatively impact their brand and how they can use effective communication and disclosures and ethics to prevent that,” he said. “But right now the focus on AI ethics is how human bias enters the chain – that the humans are using data and using programming techniques that unfairly bias the non-smart AI that is produced.”

For now, Newman said he would tell clients to focus on the use cases for what the AI is intended to do and actually does, and to be clear about what the AI cannot programmatically ever do. “Corporations making this AI know that there’s a huge appetite in most human beings to do anything to simplify their lives and that cognitively, we like it,” he said, explaining that in some cases there’s a huge appetite to make AI seem sentient. “But my advice would be, make sure the consumer knows what the AI can be used for and what it’s incapable of being used for.”

The reality of AI is more nuanced than ‘sentient’

The problem is, “customers and people in general do not appreciate the important nuances of how computers work,” said Ferrucci – particularly when it comes to AI, because of how easy it may be to trigger an empathetic response as we try to make AI appear more human, in terms of both physical and intellectual tasks.

“For Watson, the human response was all over the map – we had people who thought Watson was looking up answers to known questions in a pre-populated spreadsheet,” he recalled. “When I explained that the machine didn’t even know what questions would be asked, the person said, ‘What! How the hell do you do it then?’ On the other extreme, we had people calling us telling us to set Watson free.”

Ferrucci said that over the past 40 years, he has seen two extreme models for what is going on: “The machine is either a big look-up table or the machine must be human,” he said. “It is categorically neither – the reality is just more nuanced than that, I’m afraid.”

Don’t forget to sign up for AI Weekly here.

— Sharon Goldman, senior editor/writer
Twitter: @sharongoldman


