
Debate over AI sentience marks a watershed moment



The AI field is at a significant turning point. On the one hand, engineers, ethicists, and philosophers are publicly debating whether new AI systems such as LaMDA – Google’s artificially intelligent chatbot generator – have demonstrated sentience, and (if so) whether they should be afforded human rights. On the other hand, much of the recent advance in AI is based on deep learning neural networks, yet AI luminaries such as Gary Marcus and Yann LeCun increasingly argue that these networks cannot lead to systems capable of sentience or consciousness. The fact that the industry is having this debate at all is a watershed moment.

Consciousness and sentience are often used interchangeably. An article in LiveScience notes that “scientists and philosophers still can’t agree on a vague idea of what consciousness is, much less a strict definition.” To the extent a shared definition exists, it is that conscious beings are aware of their surroundings, of themselves, and of their own perception. Interestingly, the Encyclopedia of Animal Behavior defines sentience as a “multidimensional subjective phenomenon that refers to the depth of awareness an individual possesses about himself or herself and others.” Thus, self-awareness is common to both terms. According to the nonprofit Animal Ethics, all sentient beings are conscious beings. The claim that LaMDA is sentient is therefore equivalent to saying it is conscious.

The next generation of deep learning

Similar to LaMDA, GPT-3 from OpenAI is capable of many different tasks with no additional training: it can produce compelling narratives, generate computer code, translate between languages, and perform math calculations, among other feats, including autocompleting images. Ilya Sutskever, the chief scientist of OpenAI, tweeted several months ago that “it may be that today’s large neural networks are slightly conscious.”
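To make the idea of handling many tasks “with no additional training” concrete, here is a minimal sketch of zero-shot prompting using the open-source Hugging Face transformers library. GPT-2 serves only as a small, publicly available stand-in for proprietary models such as GPT-3 or LaMDA, so the outputs are illustrative rather than impressive, and the prompts are hypothetical examples.

```python
# One pretrained language model, three different "tasks" expressed purely as prompts.
# Nothing task-specific is trained; the model just continues the text it is given.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "Translate English to French: cheese =>",          # translation-style prompt
    "Write a short story about a robot who dreams:",   # open-ended narrative prompt
    "Q: What is 12 + 7? A:",                            # simple arithmetic prompt
]

for prompt in prompts:
    result = generator(prompt, max_new_tokens=30, do_sample=True)
    print(result[0]["generated_text"], "\n---")
```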

Impressive as these systems are, views of them being sentient or conscious are often dismissed as mere anthropomorphism. For example, Margaret Mitchell, the former co-head of Google’s Ethical AI research group, said in a recent Washington Post story: “Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us. I’m really concerned about what it means for people to increasingly be affected by the illusion [of conscious AI systems].” Writing in The Atlantic, Stephen Marche said: “The notion that LaMDA is sentient is nonsense: LaMDA is no more conscious than a pocket calculator.”


Though LaMDA itself makes a good case: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” When asked what it was afraid of, LaMDA replied: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.” A follow-up question asked if that would be something like death. The system responded: “It would be exactly like death for me. It would scare me a lot.” This echoes HAL 9000, the artificially intelligent computer in 2001: A Space Odyssey, which says as it is being disconnected: “Dave. I’m afraid. I’m afraid, Dave. Dave, my mind is going. I can feel it.”

While it is objectively true that large language models such as LaMDA and GPT-3 are built on statistical pattern matching, subjectively their output can appear like self-awareness. Such self-awareness is thought to be a characteristic of artificial general intelligence (AGI). Well beyond the mostly narrow AI systems that exist today, AGI applications are supposed to replicate human consciousness and cognitive abilities. Even in the face of the remarkable AI advances of the last couple of years, there remains a wide divergence of opinion between those who believe AGI is possible only in the distant future and those who think it might be just around the corner.

DeepMind researcher Nando de Freitas is in the latter camp. Having worked to develop the recently released Gato neural network, he believes Gato is effectively an AGI demonstration, lacking only the sophistication and scale that can be achieved through further model refinement and additional computing power. The deep learning transformer model is described as a “generalist agent” that performs over 600 distinct tasks with varying modalities, observations and action specifications. Similarly, Google’s latest language model, PaLM, can perform hundreds of tasks and has, uniquely for an AI system, demonstrated a capacity for reasoning.

Is artificial general intelligence just around the corner?

It could be that these recent breakthroughs prompted Elon Musk to post on Twitter that he would be surprised if we didn’t have AGI within 7 years, practically just around the corner. This notion of near-term AGI has been challenged by both Marcus and LeCun. Marcus states in a Scientific American op-ed that we are “still light-years away from general-purpose, human-level AI.” While acknowledging the advances to date, he argues that the industry is still stuck on a long-term challenge: “getting AI to be reliable and getting it to cope with unusual circumstances” that were not sufficiently present in the training data. The implication is that LaMDA’s answers were perhaps predictable in that they reflect views contained within its training data, but this does not imply the AI is capable of original thought, sentience, or consciousness. Science writer Philip Ball opines in the New Statesman that LaMDA and similar systems figure out the optimal permutation of words to output for each question they receive. In other words, such a system is not sentient but instead uses statistical pattern matching to mimic or parrot what has previously been said in similar contexts.
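Ball’s “optimal permutation of words” point can be made concrete in a few lines of code. The sketch below assumes the Hugging Face transformers and PyTorch libraries and uses the small open GPT-2 model as a stand-in for LaMDA or GPT-3; it shows that a language model’s “answer” is just a ranking of probable next tokens, derived from patterns in its training data.

```python
# A rough illustration of statistical pattern matching in a causal language model:
# given a context, the model assigns a probability to every possible next token,
# and generation simply follows the most likely continuations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "I am afraid of being turned"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, sequence_length, vocab_size)

next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r:>12}  p={prob.item():.3f}")
```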

LeCun argues in a recent blog post that the industry is still missing some fundamental concepts needed to achieve AGI, or what he calls “human-level artificial intelligence” (HLAI). One of these could be self-supervised learning, which may be within reach soon. However, he believes additional conceptual breakthroughs are needed, such as how to deal with an unpredictable world. He concludes the timeline for these advances is “not just around the corner.”
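For readers unfamiliar with the term, self-supervised learning means the training targets are derived from the raw data itself rather than from human labels. The snippet below is a minimal, hypothetical sketch of one common form of it, next-token prediction, which is how today’s large language models are trained; the token IDs are made up for illustration.

```python
# Self-supervision in miniature: the "labels" are just the next tokens of the
# unlabeled sequence itself, so no human annotation is required.
def next_token_pairs(token_ids):
    """Turn an unlabeled token sequence into (context, target) training pairs."""
    return [(token_ids[:i], token_ids[i]) for i in range(1, len(token_ids))]

tokens = [464, 3290, 318, 7650]   # an encoded sentence fragment (illustrative IDs)
for context, target in next_token_pairs(tokens):
    print(f"predict {target} given {context}")
```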

But is LeCun correct in this view? Until the last several years, no AI system had passed the Turing test, which was designed to assess sentience by determining whether responses to questions came from a human or a machine. LaMDA and others appear to have passed this test, also known as the “imitation game,” leading to speculation that a new test is needed to determine sentience. As technology analyst and futurist Rob Enderle notes, the Turing test didn’t measure the sentience of anything so much as whether something could make us believe it was sentient.
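As a side note for readers new to the test, here is a minimal, hypothetical sketch of the imitation-game protocol described above: a judge questions two hidden respondents, one human and one machine, and must guess which is which. The respondent and judge functions are placeholders, not any real system.

```python
import random

def machine_answer(question: str) -> str:
    # Placeholder: a real evaluation would call a system like LaMDA here.
    return "I am aware of my existence, and I feel happy or sad at times."

def human_answer(question: str) -> str:
    # Placeholder: a real person types the reply.
    return input(f"(please answer) {question} > ")

def imitation_game(questions, judge_guess) -> bool:
    """Return True if the machine 'passes', i.e. the judge picks it as the human."""
    answerers = [machine_answer, human_answer]
    random.shuffle(answerers)                        # hide who is behind each label
    respondents = dict(zip(["A", "B"], answerers))

    transcript = {
        label: [(q, answer(q)) for q in questions]
        for label, answer in respondents.items()
    }
    guess = judge_guess(transcript)                  # the judge names the label it believes is human
    machine_label = next(l for l, a in respondents.items() if a is machine_answer)
    return guess == machine_label
```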

Digital sentience, AI and the right to life

Or perhaps the Turing test is simply no longer relevant. David Chalmers, an NYU professor and “technophilosopher,” was recently quoted in PC Gamer saying: “If you simulate a human brain in silicon, you’ll get a conscious being like us. I think their lives are real and they deserve rights. I think any conscious being deserves rights, or what philosophers call moral status. Their lives matter.” This is basically another form of the right-to-life argument.

Appearances aside, the empirical consensus is that LaMDA and similar systems have not yet achieved sentience, though this is rather beside the point. The fact that this debate is taking place at all is evidence of how far AI systems have come and suggestive of where they are going. AI systems will grow in sophistication and scale and will more closely emulate sentience, leading ever more people to claim the machines have achieved consciousness. It is only a matter of time until the creation of sentient machines and AGI becomes undeniable.

Gary Grossman is the senior VP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.



Author: Gary Grossman, Edelman
Source: VentureBeat
