How hybrid AI could enhance GPT-4 and GPT-5 and address LLM concerns


The explosion of new generative AI products and capabilities over the last several months — from ChatGPT to Bard and the many variations from others based on large language models (LLMs) — has driven an overheated hype cycle. In turn, it has sparked an equally expansive and passionate debate about the need for AI regulation.

AI regulation showdown

The AI regulation firestorm was ignited by the Future of Life Institute open letter, now signed by thousands of AI researchers and other concerned parties. Notable signatories include Apple cofounder Steve Wozniak; SpaceX, Tesla and Twitter CEO Elon Musk; Stability AI CEO Emad Mostaque; Sapiens author Yuval Noah Harari; and Yoshua Bengio, founder of the AI research institute Mila.

Citing “an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” the letter called for a 6-month pause in the development of anything more powerful than GPT-4. The letter argues this additional time would allow ethical, regulatory and safety concerns to be considered and states that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Signatory Gary Marcus told TIME: “There are serious near-term and far-term risks and corporate AI responsibility seems to have lost fashion right when humanity needs it most.”


Like the letter, this perspective seems reasonable. After all, we are currently unable to explain exactly how LLMs work. On top of that, these systems occasionally hallucinate, producing output that sounds credible but is incorrect.

Two sides to every story

Not everyone agrees with the assertions in the letter or that a pause is warranted. In fact, many in the AI industry have pushed back, saying a pause would do little good. According to a report in VentureBeat, Meta chief scientist Yann LeCun said, “I don’t see the point of regulating research and development. I don’t think that serves any purpose other than reducing the knowledge that we could use to actually make technology better, safer.”

Pedro Domingos, a professor at the University of Washington and author of the seminal AI book The Master Algorithm, went further in a pair of tweets (https://twitter.com/pmddomingos/status/1642429471247712256 and https://twitter.com/pmddomingos/status/1642771575379496964).

According to reporting in Forbes, Domingos believes the letter's level of urgency and alarm about existential risk is overblown, ascribing to these systems capabilities well beyond reality.

Nevertheless, the ensuing industry conversation may have prompted OpenAI CEO Sam Altman to say that the company is not currently training GPT-5. Moreover, Altman added that the transformer technology underlying GPT-4 and the current ChatGPT may have run its course and that the age of giant AI models is already over.

The implication is that building ever-larger LLMs may not yield appreciably better results and, by extension, that GPT-5 would not be based on a larger model. This could be interpreted as Altman telling supporters of the pause, “There’s nothing here to worry about, move along.”

Taking the next step: Combining AI models

This raises the question of what GPT-5 might look like when it eventually appears. Clues can be found in the innovation already taking place on top of the current generation of LLMs. For example, OpenAI is releasing plug-ins for ChatGPT that add specific additional capabilities.

These plug-ins are meant both to augment ChatGPT’s capabilities and to offset its weaknesses, such as poor performance on math problems, the tendency to make things up and the inability to explain how the model produces its results. These are all problems typical of “connectionist” neural networks, which are based on theories of how the brain is thought to operate.

In contrast, “symbolic” AI systems do not share these weaknesses because they reason over explicit facts and rules. It could be that what OpenAI is creating — initially through plug-ins — is a hybrid AI model combining two paradigms: connectionist LLMs and symbolic reasoning.

At least one of the new ChatGPT plug-ins is a symbolic reasoning AI. The Wolfram|Alpha plug-in provides a knowledge engine, known for its accuracy and reliability, that can answer a wide range of questions. Combining the two approaches makes for a more robust system: the symbolic engine would reduce the hallucinations of a purely connectionist ChatGPT and, importantly, could also offer a more comprehensive explanation of how the system reaches its answers.
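To make the idea concrete, here is a minimal sketch of such a pipeline in Python. It uses the open-source SymPy library as a stand-in for a symbolic engine like Wolfram|Alpha, and llm_answer is a hypothetical placeholder for a real chat-completion call, not an actual API:

# Minimal sketch of a hybrid "LLM + symbolic engine" pipeline.
# The LLM handles open-ended language; exact math is delegated to
# SymPy, standing in here for an engine like Wolfram|Alpha.

from sympy import SympifyError, sympify

def llm_answer(prompt: str) -> str:
    # Hypothetical placeholder for a real LLM chat-completion call.
    return f"[LLM response to: {prompt}]"

def symbolic_answer(expression: str) -> str:
    # Evaluate the expression exactly with the symbolic engine.
    return str(sympify(expression).evalf())

def hybrid_answer(prompt: str) -> str:
    # Naive router: try the symbolic engine first; if the prompt is
    # not a well-formed math expression, fall back to the LLM.
    try:
        return symbolic_answer(prompt)
    except (SympifyError, TypeError):
        return llm_answer(prompt)

print(hybrid_answer("2**32 / 7"))             # exact math -> symbolic engine
print(hybrid_answer("Why is the sky blue?"))  # open-ended -> LLM

A production system would route far more intelligently — in the actual plug-in protocol, the model itself decides when to call the tool — but the division of labor is the same: exact computation goes to the symbolic engine, and open-ended language stays with the LLM.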

I asked Bard if this was plausible. Specifically, I asked if a hybrid system would be better at explaining what goes on within the hidden layers of a neural network. This is especially relevant since explainability is a notoriously difficult problem and is at the root of many expressed concerns about all deep learning neural networks, including GPT-4.

Bard responded that it would. If true, this could be an exciting advance. However, I wondered if this answer was itself a hallucination. As a double-check, I posed the same question to ChatGPT. The response was similar, though more nuanced.

In other words, a hybrid system combining connectionist and symbolic AI would be a notable improvement over a purely LLM-based approach, but it is not a panacea.

Although combining different AI models might seem like a new idea, it is already in use. For example, AlphaGo, the system DeepMind developed to defeat top Go players, pairs deep neural networks that learn to evaluate positions and select moves with a symbolic search component that operates within the game's explicit rules.
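As a toy illustration of that division of labor — with policy_scores standing in, hypothetically, for a trained network — the learned component scores candidate moves, while a hand-coded rules layer, the symbolic part, only ever lets it choose among legal ones:

# Toy illustration of the AlphaGo-style hybrid: a learned policy
# scores moves, a symbolic rules layer vetoes illegal ones.

import random

BOARD_SIZE = 9

def legal_moves(occupied: set[tuple[int, int]]) -> list[tuple[int, int]]:
    # Symbolic component: explicit rules decide legality. (Drastically
    # simplified — real Go legality also covers ko and suicide.)
    return [(r, c) for r in range(BOARD_SIZE) for c in range(BOARD_SIZE)
            if (r, c) not in occupied]

def policy_scores(moves: list[tuple[int, int]]) -> dict[tuple[int, int], float]:
    # Connectionist component (stubbed): a trained network would
    # assign each candidate move a learned probability.
    return {m: random.random() for m in moves}

def choose_move(occupied: set[tuple[int, int]]) -> tuple[int, int]:
    # Hybrid step: score only the moves the rules permit.
    scores = policy_scores(legal_moves(occupied))
    return max(scores, key=scores.get)

print(choose_move({(0, 0), (4, 4)}))

The network never has to learn what is legal; the rules guarantee it, which is exactly the kind of reliability a symbolic layer can lend a connectionist model.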

While effectively combining these approaches presents unique challenges, further integration between them could be a step towards AI that is more powerful, offers better explainability and provides greater accuracy.

This approach would not only enhance the capabilities of the current GPT-4, but could also address some of the more pressing concerns about the current generation of LLMs. If, in fact, GPT-5 embraces this hybrid approach, it might be a good idea to speed up its development instead of slowing it down or enforcing a development pause.

Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.



Author: Gary Grossman, Edelman
Source: VentureBeat
