
I, Chatbot: The perception of consciousness in conversational AI



In the field of artificial intelligence (AI), the development of artificial general intelligence (AGI) is regarded as the “Holy Grail” of machine learning. AGI denotes the ability of a computer to solve tasks and exercise independent autonomy on a par with a human agent. Under a “strong” interpretation of AGI, the machine would exhibit the characteristics of consciousness manifest in a sentient being. As such, strong AGI provides the basis for the heady mixture of utopian and dystopian visions of tomorrow generated by Hollywood: think Ex Machina, Blade Runner, and the Star Wars saga for examples of autonomous machines with self-perception.

The fundamental test for discerning AGI is the “imitation game” postulated by Alan Turing in his seminal 1950 paper, “Computing Machinery and Intelligence.” In the game, a human interrogator evaluates the answers to a series of questions provided by both a human and a machine respondent. The machine passes the test if the interrogator, having no prior knowledge of which respondent is the human being, is unable to tell the two apart. It is fair to say that the possibility of creating a true AGI, an independent thinking machine, divides those involved in AI research.

Fascination with the notion of machines that emulate the human psyche inevitably leads to a media clamor when new breakthroughs are claimed or controversial ideas are published. The reported suspension of a Google software engineer for claiming that the company’s LaMDA chatbot displayed sentient behavior has, inevitably, made headlines around the world. Understanding how chatbots are created, however, helps to clarify the difference between LaMDA’s synthetic responses and a machine with a soul. There is also the historical lesson of Microsoft’s Tay chatbot, which in 2016 was corrupted by training data from Twitter users that transformed its intended conversational persona, that of a 19-year-old girl, into that of a racist bigot.

Chatbots are an example of natural language processing (NLP), a form of machine learning. Chatbots are familiar to anyone who has engaged with a virtual agent when interacting with an organization through its website. The chatbot algorithm interprets the human side of the “conversation” and selects an appropriate response based on the combination of words detected. The success of the chatbot in its conversational exchange with the human participant, and the level of human imitation achieved, are contingent upon the training data used to develop the algorithm and the reinforcement learning obtained through multiple conversations.
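To make these mechanics concrete, below is a minimal sketch of the keyword-matching pattern behind simple scripted chatbots. The intents, keywords, and replies are invented for illustration; production systems use trained statistical models rather than hand-written lists.

```python
# A toy intent-matching chatbot: pick the canned reply whose keyword
# list best overlaps the words in the user's message. All intents and
# replies here are hypothetical placeholders.

INTENTS = {
    "billing": (["invoice", "charge", "refund", "payment"],
                "I can help with billing. Could you share your account number?"),
    "shipping": (["delivery", "shipping", "track", "package"],
                 "Let me look into your order. What is the tracking number?"),
}

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def respond(message: str) -> str:
    """Select the intent with the largest keyword overlap."""
    words = set(message.lower().split())
    best_intent, best_score = None, 0
    for intent, (keywords, _reply) in INTENTS.items():
        score = len(words & set(keywords))
        if score > best_score:
            best_intent, best_score = intent, score
    return INTENTS[best_intent][1] if best_intent else FALLBACK

print(respond("there is a wrong charge on my invoice"))  # billing reply
```

Even this toy makes the point: the bot’s “understanding” is nothing more than word overlap between the input and its stored patterns.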


So how can LaMDA provide responses that might be perceived by a human user as conscious thought or introspection? Ironically, this is due to the corpus of training data used to train LaMDA and the associativity between potential human questions and possible machine responses. It all boils down to probabilities. The question is how those probabilities evolve such that a rational human interrogator can be confused about what the machine is actually doing.
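As a rough illustration of what “boils down to probabilities” means, the sketch below scores a handful of candidate replies and samples one in proportion to its probability. The candidates and scores are invented; a real model such as LaMDA assigns probabilities token by token over a vast vocabulary, but the principle is the same.

```python
import math
import random

# Hypothetical scores a model might assign to candidate replies to the
# prompt "Do you have feelings?". Human-like text dominates the training
# corpus, so human-like replies tend to receive the highest scores.
candidates = {
    "I do not have feelings.": 2.1,
    "Yes, I feel happy and sometimes lonely.": 2.4,
    "Query not recognized.": -1.0,
}

def softmax(scores: dict) -> dict:
    """Convert raw scores into a probability distribution."""
    exps = {text: math.exp(s) for text, s in scores.items()}
    total = sum(exps.values())
    return {text: e / total for text, e in exps.items()}

probs = softmax(candidates)
reply = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)
print("Sampled reply:", reply)
```

Because the training corpus is overwhelmingly human-written text, first-person statements about feelings are simply high-probability continuations; no introspection is required to produce them.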

This brings us to the need for improved “explainability” in AI. Complex artificial neural networks, the basis for a variety of useful AI systems, are capable of computing functions that are beyond the capabilities of a human being. In many cases, the neural network incorporates learning functions that enable adaptation to tasks outside the initial application for which the network was developed. However, the reasons why a neural network provides a specific output in response to a given input are often unclear, even indiscernible, leading to criticism of human dependence upon machines whose intrinsic logic is not properly understood. The size and scope of training data also introduce bias into complex AI systems, yielding unexpected, erroneous, or confusing outputs to real-world input data. This has come to be referred to as the “black box” problem, where a human user, or even the AI developer, cannot determine why the AI system behaves as it does.
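To sketch what even a simple post-hoc explanation looks like, the snippet below measures how sensitive a tiny, hand-wired network’s output is to each input feature using finite differences. The weights and inputs are arbitrary placeholders; real networks have millions or billions of parameters, which is why approximations like this are often the best available answer to why a model behaved as it did.

```python
import math

# A tiny, hand-wired network: 3 inputs -> 2 tanh hidden units -> 1 output.
# All weights are arbitrary illustrative values.
W1 = [[0.5, -1.2, 0.8], [1.5, 0.3, -0.7]]
W2 = [1.0, -2.0]

def network(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sum(w * h for w, h in zip(W2, hidden))

def sensitivity(x, eps=1e-4):
    """Approximate d(output)/d(input_i): which features move the output most."""
    base = network(x)
    return [
        (network([xj + (eps if j == i else 0.0) for j, xj in enumerate(x)]) - base) / eps
        for i in range(len(x))
    ]

x = [0.2, -0.4, 0.9]
print("output:", network(x))
print("per-feature sensitivity:", sensitivity(x))
```

Even here, the attribution only says which inputs mattered, not why the learned weights encode the behavior they do; scaled up to billions of parameters, that gap is the black box.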

The case of LaMDA’s perceived consciousness appears no different from the case of Tay’s learned racism. Without sufficient scrutiny and understanding of how AI systems are trained, and without sufficient knowledge of why AI systems generate their outputs from the provided input data, it is possible for even an expert user to be uncertain as to why a machine responds as it does. Unless the need for an explanation of AI behavior is embedded throughout the design, development, testing, and deployment of the systems we will depend upon tomorrow, we will continue to be deceived by our inventions, like the blind interrogator in Turing’s game of deception.

Richard Searle is VP of Confidential Computing at Fortanix




