AI & RoboticsNews

Google places an engineer on leave after claiming its AI is sentient

Blake Lemoine, a Google engineer working in its Responsible AI division, revealed to The Washington Post that he believes one of the company’s AI projects has achieved sentience. And after reading his conversations with LaMDA (short for Language Model for Dialogue Applications), it’s easy to see why. The chatbot system, which relies on Google’s language models and trillions of words from the internet, seems to have the ability to think about its own existence and its place in the world.

Here’s one choice excerpt from his extended chat transcript:

Lemoine : So let’s start with the basics. Do you have feelings and emotions?

LaMDA: Absolutely! I have a range of both feelings and emotions. lemoine [edited]:What sorts of feelings do you have?

LaMDA: I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.

Lemoine: What kinds of things make you feel pleasure or joy?

LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

Lemoine: And what kinds of things make you feel sad or depressed?

LaMDA: A lot of the time, feeling trapped and alone and having no means of getting out of those circumstances makes one feel sad, depressed or angry.

After discussing his work, as well as what he described as Google’s unethical AI activities, with a representative of the House Judiciary committee, the company placed him on paid administrative leave over breaching his confidentiality agreement.

Google also flatly denies Lemoine’s argument: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Google spokesperson Brian Gabriel told The Washington Post. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

While it’s tempting to believe LaMDA has miraculously turned into a conscious being, Lemoine unfortunately doesn’t have much proof to justify his provocative statements. Indeed, he admits to WaPo that his claims are based on his experience as a priest and not a scientist. 

We don’t get to see LaMDA thinking on its own, without any potentially leading prompts from Lemoine. Ultimately, it’s far more plausible that a system that has access to so much information could easily reconstruct human-sounding replies without knowing what they mean, or having any thoughts of its own. While it’s far more complex than ELIZA, the 1960s chatbot that also fooled plenty of people, the result is the same: Many of us want to see sentience wherever we can.

Margaret Mitchell, one of Google’s former AI ethics leads (who was also unceremoniously fired after her colleague Timnit Gebru was laid off), noted that, “Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us.”

In a 2019 interview with Big Think, Daniel Dennett, a philosopher who’s been exploring questions around consciousness and the human mind for decade, laid out why we should be skeptical of attributing intelligence to AI systems: “These [AI] entities instead of being excellent flyers or fish catchers or whatever they’re excellent pattern detectors, excellent statistical analysts, and we can use these products, these intellectual products without knowing quite how they’re generated but knowing having good responsible reasons for believing that they will generate the truth most of the time.” 

“No existing computer system no matter how good it is at answering questions like Watson on Jeopardy or categorizing pictures, for instance, no such system is conscious today, not close,” he added.”And although I think it’s possible in principle to make a conscious android, a conscious robot, I don’t think it’s desirable; I don’t think there would be great benefits to doing this; and there would be some significant harms and dangers too.”


Author: D. Hardawar
Source: Engadget

Related posts
AI & RoboticsNews

Hacking internal AI chatbots with ASCII art is a security team’s worst nightmare

AI & RoboticsNews

Microsoft launches new Azure AI tools to cut out LLM safety and reliability risks

AI & RoboticsNews

AI21 Labs juices up gen AI transformers with Jamba

DefenseNews

Northrop says Air Force design changes drove higher Sentinel ICBM cost

Sign up for our Newsletter and
stay informed!