Smarter than humans in 5 years? The breakneck pace of AI

Geoffrey Hinton, often dubbed one of the “Godfathers of AI,” has been particularly outspoken since his retirement from Google earlier this year. He is credited with perfecting and popularizing “backpropagation,” a pivotal algorithm that enables multi-layer neural networks to correct their mistakes.

This breakthrough has been instrumental in the success of deep learning technologies, which are the backbone of today’s generative AI models. In recognition of his groundbreaking contributions, Hinton was honored with the Turing Award, often considered the Nobel Prize of computer science.
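
For readers who want to see the idea in miniature, here is a minimal sketch of backpropagation in Python: a tiny two-layer network learns XOR by passing its output error backward through the chain rule and nudging each weight accordingly. The network size, learning rate, and task are illustrative choices for this sketch, not details of Hinton's work.

```python
# A minimal sketch of backpropagation: a two-layer network learning XOR.
# All sizes, the learning rate, and the task are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: compute each layer's activations in turn.
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass: send the output error back through the layers,
    # multiplying by each layer's local derivative (the chain rule).
    d_out = (out - y) * out * (1 - out)    # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)     # error signal at the hidden layer

    # Update step: each weight moves to reduce the error it contributed to.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # converges toward [[0], [1], [1], [0]]
```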

The pace of progress

Hinton shifted from AI optimist to something closer to AI doomsayer when he realized that AI might become smarter than people not in 50 to 60 years, as he had long assumed, but possibly within five. Last spring, he warned of the potential existential threats posed by an AI that could soon be smarter than humans. The reason for his growing concern is the great leap made by gen AI through large language models (LLMs).

Five years from now is 2028, and that prediction is even more aggressive than that of AI optimist Ray Kurzweil, a director of engineering at Google.

“By 2029, computers will have human-level intelligence,” Kurzweil said in an interview several years ago. He further predicted that by 2045, AI will have achieved the “Singularity,” the point when “we will multiply our effective intelligence a billion-fold by merging with the intelligence we have created.”   

In a recent 60 Minutes interview, Hinton asserted that current leading AI models, like those developed by OpenAI and Google, already possess genuine intelligence and reasoning abilities. Notably, he added that those models can have experiences of their own in the same sense that humans do. While he does not believe they are conscious now (in the general sense of the concept), Hinton said that in time AI systems will develop consciousness.

The growth phase of AI

Hinton believes that in five years there is a good chance that advanced AI models “may be able to reason better than people can.” When asked whether humans will be the second most intelligent beings on the planet, Hinton said yes. He added: “I think my main message is there’s enormous uncertainty about what’s [going to] happen next. These things do understand.”

We seem to have entered the growth phase for AI — not unlike when parents need to be careful about what they say in front of the child. “And because they understand,” Hinton added, “we need to think hard about what’s going to happen next.”

It is clear we need to act now, as the pace of development only accelerates. Recent moves have put to rest any question of whether an AI arms race is underway. Specifically, CNBC reported that China plans to increase its computing power by 50% by 2025 as it looks to keep pace with the U.S. in AI and supercomputing applications. That is a huge amount of computing power with which to build and train ever-larger LLMs.

The next generation of LLMs

According to Hinton, the human brain has about 100 trillion neural connections. By contrast, the largest current AI systems have just 1 trillion parameters. However, he believes the knowledge encoded in those parameters far surpasses human capabilities. This suggests the learning and especially the knowledge retention of AI models is much more efficient than that of humans.
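
As a rough back-of-the-envelope illustration, the arithmetic is simple; note that equating one synaptic connection with one model parameter, as in the sketch below, is a loose assumption rather than an established equivalence.

```python
# Back-of-the-envelope comparison using the figures cited above.
# Treating one synaptic connection as one parameter is a loose
# illustrative assumption, not an established equivalence.
brain_connections = 100e12  # ~100 trillion neural connections (Hinton's figure)
model_parameters = 1e12     # ~1 trillion parameters in the largest systems

ratio = brain_connections / model_parameters
print(f"The brain has roughly {ratio:.0f}x more connections "
      f"than the largest models have parameters.")
```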

On top of that, there are reports that the next generation of LLMs is coming soon, possibly before the end of this year, and could be 5 to 20 times more advanced than the GPT-4 models now on the market.

Mustafa Suleyman, CEO and cofounder of Inflection AI and cofounder of DeepMind, predicted during an Economist conversation that “in the next five years, the frontier model companies — those of us at the very cutting edge who are training the very largest AI models — are going to train models that are over a thousand times larger than what you see today in GPT-4.”
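
To put "a thousand times larger" in rough perspective, a common rule of thumb from the scaling-laws literature estimates training compute as roughly 6 × parameters × training tokens. The sketch below applies that heuristic; the baseline parameter and token counts are illustrative assumptions, since the actual sizes of frontier models such as GPT-4 are not public.

```python
# Rough training-compute estimate using the common scaling-law heuristic:
# total FLOPs ~ 6 * N (parameters) * D (training tokens).
# The baseline N and D are illustrative assumptions; the actual sizes of
# frontier models such as GPT-4 have not been disclosed.
def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

baseline_params = 1e12   # assumed ~1T parameters for today's largest models
baseline_tokens = 10e12  # assumed ~10T training tokens

today = training_flops(baseline_params, baseline_tokens)
# "Over a thousand times larger": scale parameters by 1,000; scaling-law
# guidance suggests training data would also grow (10x here is an assumption).
future = training_flops(baseline_params * 1_000, baseline_tokens * 10)

print(f"Today:  ~{today:.1e} training FLOPs")
print(f"Future: ~{future:.1e} training FLOPs ({future / today:,.0f}x more)")
```

Even under these crude assumptions, a thousandfold jump in model size implies several additional orders of magnitude of training compute, which is why the race to build out computing power matters so much.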

There is huge upside potential for these larger models. Beyond serving as extremely capable personal assistants, these tools could help solve our greatest challenges, such as achieving fusion reactions for unlimited energy and delivering precision medicine for longer, healthier lives.

The worry is that as AI becomes smarter than people and develops consciousness, its interests may diverge from those of humanity.

Will that happen, and if so when will it happen? As Hinton says: “We just don’t know.” 

The governance challenge

While the technological advances in AI are exhilarating, they have put significant pressure on global governance, prompting another AI race — that of governments to regulate AI tools. The speed of AI development puts tremendous strain on regulators, however. They must understand the technology and how to regulate it without stifling innovation.

The E.U. is thought to be furthest along on these matters, closing in on the final rounds of debate over comprehensive legislation (the AI Act). However, recent reporting shows that the U.S. believes the E.U. law would favor companies with the resources to cover the costs of compliance while hurting smaller firms, “dampening the expected boost to productivity.”

This concern suggests that the U.S., at least, may pursue a different approach. Diverging rules across countries could in turn produce a fragmented global landscape for AI regulation, creating challenges for companies operating in multiple countries as they navigate and comply with varying regulatory frameworks.

In addition, this fragmentation could stifle innovation if smaller firms are unable to bear the costs of compliance in different regions.

A turning point?

However, there may still be potential for global cooperation on AI regulation. According to The Register, leaders of the G7 are expected to establish international AI regulations by the end of the year. Earlier in the year, the G7 agreed to establish working groups on gen AI to discuss governance, IP rights, disinformation and responsible use. However, China is notably absent from this list of countries, as are 24 of the 27 E.U. member states, calling into question the impact of any G7 agreement.

In the 60 Minutes interview, Hinton also said: “It may be [when] we look back and see this as a kind of turning point when humanity had to make the decision about whether to develop these things further and what to do to protect themselves if they did.” He added that now is the opportunity to pass laws to ensure the ethical use of AI.

Global cooperation needed now

As AI continues to advance at a breakneck pace — outstripping even its own creators’ expectations — our ability to steer this technology in a direction beneficial to humanity becomes ever more challenging, yet crucial. Governments, businesses and civil society must overcome provincial concerns in favor of collective and collaborative action to quickly find an ethical and sustainable path.

There is an urgent need for comprehensive, global governance of AI. Getting this right could be critical: The future of humanity may be determined by how we approach and address the challenges of advanced AI.

Gary Grossman is the EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence. 

