What we learned about AI and deep learning in 2022

It’s as good a time as any to discuss the implications of advances in artificial intelligence (AI). 2022 saw interesting progress in deep learning, especially in generative models. However, as the capabilities of deep learning models increase, so does the confusion surrounding them.

On the one hand, advanced models such as ChatGPT and DALL-E produce fascinating results and give the impression of thinking and reasoning. On the other hand, they often make errors showing that they lack some of the basic elements of intelligence that humans have.

The science community is divided on what to make of these advances. At one end of the spectrum, some scientists have gone as far as saying that sophisticated models are sentient and should be granted personhood. Others have suggested that current deep learning approaches will lead to artificial general intelligence (AGI). Meanwhile, some scientists have studied the failures of current models and are pointing out that although useful, even the most advanced deep learning systems suffer from the same kinds of failures that earlier models had.

It was against this background that the online AGI Debate #3 was held on Friday, hosted by Montreal AI president Vincent Boucher and AI researcher Gary Marcus. The conference, which featured talks by scientists from different backgrounds, discussed lessons from cognitive science and neuroscience, the path to commonsense reasoning in AI, and suggestions for architectures that can help take the next step in AI.

What’s missing from current AI systems?

“Deep learning approaches can provide useful tools in many domains,” said linguist and cognitive scientist Noam Chomsky. Some of these applications, such as automatic transcription and text autocomplete, have become tools we rely on every day.

“But beyond utility, what do we learn from these approaches about cognition, thinking, in particular language?” Chomsky said. “[Deep learning] systems make no distinction between possible and impossible languages. The more the systems are improved the deeper the failure becomes. They will do even better with impossible languages and other systems.”

This flaw is evident in systems like ChatGPT, which can produce text that is grammatically correct and consistent but logically and factually flawed. Presenters at the conference provided numerous examples of such flaws, such as large language models not being able to sort sentences based on length, making grave errors on simple logical problems, and making false and inconsistent statements.

According to Chomsky, the current approaches for advancing deep learning systems, which rely on adding training data, creating larger models, and using “clever programming,” will only exacerbate the mistakes that these systems make.

“In short, they’re telling us nothing about language and thought, about cognition generally, or about what it is to be human or any other flights of fantasy in contemporary discussion,” Chomsky said.

Marcus said that a decade after the 2012 deep learning revolution, considerable progress has been made, “but some issues remain.”

He laid out four key aspects of cognition that are missing from deep learning systems:

  1. Abstraction: Deep learning systems such as ChatGPT struggle with basic concepts such as counting and sorting items, tasks that are trivial for conventional programs (see the sketch after this list).
  2. Reasoning: Large language models fail to reason about basic things, such as fitting objects in containers. “The genius of ChatGPT is that it can answer the question, but unfortunately you can’t count on the answers,” Marcus said.
  3. Compositionality: Humans understand language in terms of wholes composed of parts. Current AI continues to struggle with this, as can be seen when models such as DALL-E are asked to draw images with hierarchical structures.
  4. Factuality: “Humans actively maintain imperfect but reliable world models. Large language models don’t and that has consequences,” Marcus said. “They can’t be updated incrementally by giving them new facts. They need to be typically retrained to incorporate new knowledge. They hallucinate.”
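
To ground the abstraction point: the sentence-length sorting task that presenters said trips up large language models is a one-liner in a conventional program. Here is a minimal Python illustration (the example sentences are invented):

```python
# Sorting sentences by length: trivial for a conventional program,
# yet a task the presenters said large language models get wrong.
sentences = [
    "Deep learning keeps improving.",
    "AI.",
    "Models still hallucinate facts.",
]

# Sort by character count, shortest first.
print(sorted(sentences, key=len))
# ['AI.', 'Deep learning keeps improving.', 'Models still hallucinate facts.']
```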

AI and commonsense reasoning

Deep neural networks will continue to make mistakes in adversarial and edge cases, said Yejin Choi, computer science professor at the University of Washington.

“The real problem we’re facing today is that we simply do not know the depth or breadth of these adversarial or edge cases,” Choi said. “My hunch is that this is going to be a real challenge that a lot of people might be underestimating. The true difference between human intelligence and current AI is still so vast.”

Choi said that the gap between human and artificial intelligence is caused by a lack of common sense, which she described as “the dark matter of language and intelligence” and “the unspoken rules of how the world works” that influence the way people use and interpret language.

According to Choi, common sense is trivial for humans and hard for machines because obvious things are never spoken, there are endless exceptions to every rule, and there is no universal truth in commonsense matters. “It’s ambiguous, messy stuff,” she said.

AI researcher and neuroscientist Dileep George emphasized the importance of mental simulation for commonsense reasoning via language. Knowledge for commonsense reasoning is acquired through sensory experience, George said, and this knowledge is stored in the perceptual and motor system. We use language to probe this model and trigger simulations in the mind.

“You can think of our perceptual and conceptual system as the simulator, which is acquired through our sensorimotor experience. Language is something that controls the simulation,” he said.

George also questioned some of the current ideas for creating world models for AI systems. In most of these blueprints for world models, perception is a preprocessor that creates a representation on which the world model is built.

“That is unlikely to work because many details of perception need to be accessed on the fly for you to be able to run the simulation,” he said. “Perception has to be bidirectional and has to use feedback connections to access the simulations.”

The architecture for the next generation of AI systems

While many scientists agree on the shortcomings of current AI systems, they differ on the road forward.

David Ferrucci, founder of Elemental Cognition and a former member of the IBM Watson team, said that we can’t fulfill our vision for AI if we can’t get machines to “explain why they are producing the output they’re producing.”

Ferrucci’s company is working on an AI system that integrates different modules. Machine learning models generate hypotheses based on their observations and project them onto an explicit knowledge module that ranks them. The best hypotheses are then processed by an automated reasoning module. This architecture can explain its inferences and its causal model, two features that are missing in current AI systems. The system develops its knowledge and causal models from classic deep learning approaches and interactions with humans.
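
A rough feel for that pipeline can be sketched in a few lines of Python. This is a hypothetical illustration of the hypothesize, rank, reason flow only; the module names, data shapes and scoring heuristic are invented placeholders, not Elemental Cognition’s actual system:

```python
# Hypothetical sketch of a hypothesize -> rank -> reason pipeline.
# All names and heuristics here are invented stand-ins.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    claim: str
    score: float = 0.0

def generate_hypotheses(observation: str) -> list[Hypothesis]:
    # Stand-in for machine learning models proposing explanations.
    return [Hypothesis(f"{observation}, possibly due to cause {i}") for i in range(3)]

def rank_with_knowledge(hypotheses: list[Hypothesis]) -> list[Hypothesis]:
    # Stand-in for the explicit knowledge module: score each hypothesis
    # by consistency with stored knowledge (a dummy heuristic here).
    for h in hypotheses:
        h.score = 1.0 / (1 + len(h.claim) % 3)
    return sorted(hypotheses, key=lambda h: h.score, reverse=True)

def reason_and_explain(best: Hypothesis) -> str:
    # Stand-in for the automated reasoning module; crucially, it reports
    # *why* it accepted the hypothesis -- the explainability Ferrucci
    # says current systems lack.
    return f"Accepted {best.claim!r} (knowledge-consistency score {best.score:.2f})"

ranked = rank_with_knowledge(generate_hypotheses("the sensor reading spiked"))
print(reason_and_explain(ranked[0]))
```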

AI scientist Ben Goertzel stressed that “the deep neural net systems that are currently dominating the current commercial AI landscape will not make much progress toward building real AGI systems.”

Goertzel, who is best known for coining the term AGI, said that enhancing current models such as GPT-3 with fact-checkers will not fix the problems that deep learning faces and will not make them capable of generalization like the human mind.

“Engineering true, open-ended intelligence with general intelligence is totally possible, and there are several routes to get there,” Goertzel said.

He proposed three routes: running a realistic brain simulation; creating a complex self-organizing system that is quite different from the brain; or building a hybrid cognitive architecture that self-organizes knowledge in a self-reprogramming, self-rewriting knowledge graph controlling an embodied agent. His current initiative, the OpenCog Hyperon project, is exploring the third approach.

Francesca Rossi, IBM fellow and AI Ethics Global Leader at the Thomas J. Watson Research Center, proposed an AI architecture that takes inspiration from cognitive science and Daniel Kahneman’s “Thinking, Fast and Slow” framework.

The architecture, named SOFAI (Slow and Fast AI), uses a multi-agent approach composed of fast and slow solvers. Fast solvers rely on machine learning to solve problems. Slow solvers are more symbolic, attentive and computationally complex. A metacognitive module acts as an arbiter, deciding which agent will solve the problem. As in the human brain, if the fast solver can’t address a novel situation, the metacognitive module passes it on to the slow solver. This loop then retrains the fast solver so it gradually learns to address these situations.

“This is an architecture that is supposed to work for both autonomous systems and for supporting human decisions,” Rossi said.
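
The routing logic Rossi describes is easy to sketch. Below is a hypothetical toy version of the control loop in Python; the confidence threshold, memory-based fast solver and stand-in slow solver are illustrative assumptions, not the published SOFAI implementation:

```python
# Hypothetical toy version of a fast/slow control loop with a
# metacognitive arbiter. Thresholds and solvers are invented stand-ins.

MEMORY: dict[str, str] = {}   # what the fast solver has learned so far
CONFIDENCE_THRESHOLD = 0.5    # assumed arbiter policy

def fast_solver(problem: str) -> tuple[str, float]:
    # Stand-in for a learned model: an answer plus self-assessed confidence.
    if problem in MEMORY:
        return MEMORY[problem], 0.9
    return "guess", 0.2

def slow_solver(problem: str) -> str:
    # Stand-in for the deliberate, symbolic solver: slower but reliable.
    answer = f"derived({problem})"
    MEMORY[problem] = answer  # teach the fast solver for next time
    return answer

def metacognitive_module(problem: str) -> str:
    # Arbiter: accept the fast answer if confident, otherwise escalate.
    answer, confidence = fast_solver(problem)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return slow_solver(problem)

print(metacognitive_module("novel puzzle"))  # escalated to the slow solver
print(metacognitive_module("novel puzzle"))  # now handled by the fast solver
```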

Jürgen Schmidhuber, scientific director of The Swiss AI Lab IDSIA and one of the pioneers of modern deep learning techniques, said that many of the problems raised about current AI systems have been addressed in systems and architectures introduced in the past decades. Schmidhuber suggested that solving these problems is a matter of computational cost and that in the future, we will be able to create deep learning systems that can do meta-learning and find new and better learning algorithms.

Standing on the shoulders of giant datasets

Jeff Clune, associate professor of computer science at the University of British Columbia, presented the idea of “AI-generating algorithms.”

“The idea is to learn as much as possible, to bootstrap from very simple beginnings all the way through to AGI,” Clune said.

Such a system has an outer loop that searches through the space of possible AI agents and ultimately produces something that is very sample-efficient and very general. The evidence that this is possible is the “very expensive and inefficient algorithm of Darwinian evolution that ultimately produced the human mind,” Clune said.

Clune has been discussing AI-generating algorithms since 2019, an idea he believes rests on three key pillars: meta-learning architectures, meta-learning algorithms, and effective means to generate environments and data. Basically, this is a system that can constantly create, evaluate and upgrade new learning environments and algorithms.
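
As a loose illustration of that outer loop, here is a hypothetical toy version in Python; the agent encoding, environment generator and fitness function are invented stand-ins, not Clune’s actual proposal:

```python
# Hypothetical toy outer loop: propose an agent, generate an environment,
# evaluate, keep the best. Everything here is an invented stand-in.
import random

def random_agent() -> dict:
    # Sample a candidate learning system (here, just two hyperparameters).
    return {"lr": random.uniform(1e-4, 1e-1), "depth": random.randint(1, 8)}

def generate_environment(generation: int) -> float:
    # Environments are generated too; difficulty grows as the search runs.
    return 1.0 + generation / 100.0

def evaluate(agent: dict, difficulty: float) -> float:
    # Stand-in fitness: a real system would measure how well the agent
    # *learns* in the generated environment (sample efficiency, generality).
    return agent["depth"] / (difficulty * (1.0 + 100 * agent["lr"]))

best_agent, best_score = None, float("-inf")
for generation in range(1000):  # the outer search loop
    agent = random_agent()
    difficulty = generate_environment(generation)
    score = evaluate(agent, difficulty)
    if score > best_score:
        best_agent, best_score = agent, score

print(best_agent, best_score)
```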

At the AGI debate, Clune added a fourth pillar, which he described as “leveraging human data.”

“If you watch years and years of video of agents doing that task and pretrain on that, then you can go on to learn very, very difficult tasks,” Clune said. “That’s a really big accelerant to these efforts to try to learn as much as possible.”

Learning from human-generated data is what has allowed GPT, CLIP and DALL-E to find efficient ways to generate impressive results. “AI sees further by standing on the shoulders of giant datasets,” Clune said.

Clune finished by predicting a 30% chance of having AGI by 2030. He also said that current deep learning paradigms — with some key enhancements — will be enough to achieve AGI.

Clune warned, “I don’t think we’re ready as a scientific community and as a society for AGI arriving that soon, and we need to start planning for this as soon as possible. We need to start planning now.”

Author: Ben Dickson
Source: VentureBeat
