
3 things large language models need in an era of ‘sentient’ AI hype

All hell broke loose in the AI world after The Washington Post reported last week that a Google engineer thought that LaMDA, one of the company’s large language models (LLMs), was sentient.

The news was followed by a frenzy of articles, videos and social media debates over whether current AI systems understand the world as we do, whether AI systems can be conscious, what the requirements for consciousness are, and so on.

We are currently in a state where our large language models have become good enough to convince many people — including engineers — that they are on par with natural intelligence. At the same time, they are still bad enough to make dumb mistakes, as these experiments by computer scientist Ernest Davis show.

What makes this concerning is that research and development on LLMs is mostly controlled by large tech companies that are looking to commercialize their technology by integrating it into applications used by hundreds of millions of users. And it is important that these applications remain safe and robust to avoid confusing or harming their users.

Here are some of the lessons learned from the hype and confusion surrounding large language models and progress in AI. 

More transparency 

Unlike academic institutions, tech companies don’t have a habit of releasing their AI models to the public. They treat them as trade secrets to be hidden from competitors. This makes it very difficult to study them for unwanted effects and potential harm. 

Fortunately, there have been some positive developments in recent months. In May, Meta AI released one of its LLMs as an open-source project (with some caveats) to add transparency and openness to the development of large language models.

Providing access to model weights, training data, training logs and other important information about machine learning models can help researchers discover their weak spots and make sure they are used in areas where they are robust.
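
For instance, the Meta AI release mentioned above (the OPT family of models) ships with downloadable weights and training logbooks. Below is a minimal sketch of what that kind of access makes possible, assuming the Hugging Face transformers library and one of the smaller open OPT checkpoints; any other openly released checkpoint would work the same way.

```python
# A minimal sketch of probing an openly released LLM checkpoint.
# Assumes the Hugging Face `transformers` library and Meta AI's open
# OPT weights; this is an illustration, not the only way to do it.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-1.3b"  # one of the smaller open OPT variants
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# With the weights in hand, researchers can inspect the model directly
# rather than going through a black-box API.
print(f"Parameters: {sum(p.numel() for p in model.parameters()):,}")

# Generate a short continuation to study the model's behavior.
inputs = tokenizer("Large language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```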

Another important aspect of transparency is clearly communicating to users that they are interacting with an AI system that does not necessarily understand the world as they do. Today’s AI systems are very good at performing narrow tasks that don’t require a broad knowledge of the world. But they start to fall apart as soon as they are pitted against problems that require commonsense knowledge not captured in text. 

As much as large language models have advanced, they still need hand-holding. By knowing that they are interacting with an AI agent, users will be able to adapt their behavior to avoid steering the conversation into unpredictable terrain. 

More human control 

Popular thinking holds that as AI becomes more advanced, we should give it more control over decision-making. But at least until we figure out how to create human-level AI (and that’s a big if), we should design our AI systems to complement human intelligence, not replace it. In a nutshell, just because LLMs have become significantly better at processing language doesn’t mean that humans must only interact with them through a chatbot.

A promising direction of research in this regard is human-centered AI (HCAI), a field of work that promotes designing AI systems that ensure human oversight and control. Computer scientist Ben Shneiderman provides a full framework for HCAI in his book Human-Centered AI. For example, wherever possible, AI systems should provide confidence scores that indicate how reliable their output is. Other possible solutions include multiple output suggestions, configuration sliders, and other tools that give users control over the behavior of the AI system they are using.
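
As a concrete illustration of that last point, the sketch below shows how an application could surface a ranked list of suggestions with confidence scores instead of a single, seemingly authoritative answer. The logits and labels are invented for illustration; the point is the interface pattern, not the model behind it.

```python
import numpy as np

def top_k_suggestions(logits, labels, k=3):
    """Return the top-k candidate outputs with confidence scores,
    rather than a single, seemingly authoritative answer."""
    probs = np.exp(logits - logits.max())  # softmax over the raw scores
    probs /= probs.sum()
    order = np.argsort(probs)[::-1][:k]
    return [(labels[i], float(probs[i])) for i in order]

# Hypothetical output of an intent classifier behind a support chatbot.
logits = np.array([2.1, 1.9, 0.3, -1.0])
labels = ["billing question", "cancel subscription", "technical issue", "other"]

for label, confidence in top_k_suggestions(logits, labels):
    print(f"{label}: {confidence:.0%}")
# The UI can present all of these options and let the human decide,
# instead of silently acting on the top prediction.
```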

Another field of work is explainable AI, which tries to develop tools and techniques for investigating the decisions of deep neural networks. Naturally, very large neural networks like LaMDA and other LLMs are very hard to explain. Nonetheless, explainability should remain a crucial criterion for any applied AI system. In some cases, having an interpretable AI system that performs slightly worse than a more complicated one can go a long way toward mitigating the kinds of confusion that LLMs create.
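
One common family of such techniques is gradient-based saliency. The sketch below shows the basic idea of attributing an output to input features, using PyTorch and a toy two-layer network as a stand-in for whatever model is being audited.

```python
import torch
import torch.nn as nn

# A minimal sketch of gradient-based saliency, one simple
# explainability technique. The tiny network and random input are
# placeholders for the real model and data under investigation.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

x = torch.randn(1, 8, requires_grad=True)  # one input example
score = model(x).sum()
score.backward()  # gradients of the output with respect to the input

# The gradient magnitude per input feature is a rough measure of how
# much each feature influenced the output for this example.
saliency = x.grad.abs().squeeze()
for i, s in enumerate(saliency.tolist()):
    print(f"feature {i}: {s:.3f}")
```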

More structure

A different but more practical perspective is the one proposed by Richard Heimann, chief AI officer at Cybraics, in his book Doing AI. Heimann proposes that to “be AI-first,” organizations should “do AI last.” Instead of trying to adopt the latest AI technology in their applications, developers should start with the problem they want to solve and choose the most efficient solution. 

This is an idea that directly relates to the hype surrounding LLMs, since they are often presented as general problem-solving tools that can be applied to a wide range of applications. But many applications don’t need very large neural networks and can be developed with much simpler solutions that are designed and structured for that specific purpose. While not as attractive as large language models, these simpler solutions are often more resource-efficient, robust, and predictable.
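
For example, a narrow task like routing support tickets can often be handled by a classic text-classification pipeline. The sketch below uses scikit-learn with a toy dataset invented for illustration; for many such tasks a pipeline like this is cheaper, faster and more predictable than calling an LLM.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A minimal sketch of a purpose-built baseline for a narrow task
# (routing support tickets). The tiny dataset is invented for
# illustration; a real system would train on labeled historical tickets.
texts = [
    "I was charged twice this month",
    "How do I cancel my plan?",
    "The app crashes on startup",
    "Please refund my last payment",
]
labels = ["billing", "account", "technical", "billing"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# Prints the predicted category for a new ticket.
print(clf.predict(["My invoice looks wrong"]))
```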

Another important direction of research is the combination of knowledge graphs and other forms of structured knowledge with machine learning models. This is a break from the current trend of solving AI’s problems by creating larger neural networks and bigger training datasets. An example is AI21 Labs’ Jurassic-X, a neuro-symbolic language model that connects neural networks with structured information providers to make sure its answers remain consistent and logical.
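
This is not AI21 Labs’ implementation, but the routing idea can be sketched in a few lines: answer a query from a structured, verifiable source when one is available, and fall back to the neural model otherwise. The lookup table and the `neural_model` placeholder below are invented for illustration.

```python
from typing import Callable

# A minimal sketch of neuro-symbolic routing: prefer a structured,
# auditable knowledge source and fall back to a neural model otherwise.
# The knowledge table is a toy stand-in for a real knowledge graph.
KNOWLEDGE = {
    "capital of france": "Paris",
    "boiling point of water (celsius)": "100",
}

def answer(query: str, neural_model: Callable[[str], str]) -> str:
    key = query.strip().lower().rstrip("?")
    if key in KNOWLEDGE:
        # Structured lookup: consistent, logical and easy to audit.
        return KNOWLEDGE[key]
    # Fall back to the (less predictable) neural model.
    return neural_model(query)

print(answer("Capital of France?", neural_model=lambda q: "[model output]"))
```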

Other scientists have proposed architectures that combine neural networks with other techniques to make sure their inferences are grounded in real-world knowledge. An example is “language-endowed intelligent agents” (LEIA), proposed by Marjorie McShane and Sergei Nirenburg, two scientists at Rensselaer Polytechnic Institute, in their latest book Linguistics for the Age of AI. LEIA is a six-layered language-processing architecture that combines knowledge-based systems with machine learning models to create actionable and interpretable representations of text. While LEIA is still a work in progress, it promises to solve some of the problems that current language models suffer from.

While scientists, researchers, and philosophers continue to debate whether AI systems should be given personhood and civil rights, we must not forget how these AI systems will affect the real persons who will be using them.


Author: Ben Dickson
Source: Venturebeat
