AI & Robotics News

AI pioneers Hinton, Ng, LeCun, Bengio amp up x-risk debate

In a series of online articles, blog posts and posts on X/LinkedIn over the past few days, AI pioneers (sometimes called “godfathers” of AI) Geoffrey Hinton, Andrew Ng, Yann LeCun and Yoshua Bengio have amped up their debate over the existential risks of AI by commenting publicly on each other’s posts. The debate clearly places Hinton and Bengio on the side that is highly concerned about AI’s existential risks, or x-risks, while Ng and LeCun believe the concerns are overblown, or even a conspiracy theory that Big Tech firms are using to consolidate power.

It’s a far cry from the united front of AI positivity they have shown over the years since leading the way on the deep learning ‘revolution’ that began in 2012. Even a year ago, LeCun and Hinton pushed back in interviews with VentureBeat against Gary Marcus and other critics who said deep learning had “hit a wall.” 

Hinton responded to claims that x-risk is a Big Tech conspiracy

But today, Hinton, who quit his role at Google in May to speak out freely about the risks of AI, posted on X about recent comments from computer scientist Andrew Ng, who did pioneering work in image recognition after co-founding Google Brain in 2011.

“Andrew Ng is claiming that the idea that AI could make us extinct is a big-tech conspiracy. A datapoint that does not fit this conspiracy theory is that I left Google so that I could speak freely about the existential threat.”

Hinton was responding to Ng’s comments in a recent interview with the Australian Financial Review that Big Tech is “lying” about some AI risks to shut down competition and trigger strict regulation.

And today, in an issue of his newsletter The Batch, Ng wrote that “My greatest fear for the future of AI is if overhyped risks (such as human extinction) lets tech lobbyists get enacted stifling regulations that suppress open-source and crush innovation.”

LeCun and Ng say tech leaders are exaggerating existential risks

LeCun, who is chief AI scientist at Meta, responded to Ng’s comments in a recent post, saying: “Well, at least *one* Big Tech company is open sourcing AI models and not lying about AI existential risk.” He was referring, of course, to his own company, Meta. He added: “Lying is a big word that I haven’t used. I think some of these tech leaders are genuinely worried about existential risk. I think they are wrong. I think they exaggerate it. I think they have an unwarranted superiority complex that leads them to believe that 1. It’s okay if *they* do it, but not okay if the populace does it. 2. Superhuman AI is just around the corner and will have all the characteristics of current LLMs.”

LeCun also responded to Hinton’s post:

You and Yoshua are inadvertently helping those who want to put AI research and development under lock and key and protect their business by banning open research, open-source code, and open-access models.

This will inevitably lead to bad outcomes in the medium term.

Hinton, in turn, responded to one of LeCun’s posts:

Let’s open source nuclear weapons too to make them safer. The good guys (us) will always have bigger ones than the bad guys (them) so it should all be OK.

Bengio says AI risks are ‘keeping me up at night’

Meanwhile, just last week Hinton and Bengio — who, together with LeCun, received the 2018 ACM A.M. Turing Award (often referred to as the “Nobel Prize of Computing”) for their work on deep learning — joined with 22 other leading AI academics and experts to propose a framework for policy and governance that aims to address the growing risks associated with artificial intelligence.

The paper said companies and governments should devote one-third of their AI research and development budgets to AI safety, and also stressed urgency in pursuing specific research breakthroughs to bolster AI safety efforts.

Just a few days ago, Bengio wrote an opinion piece for Canada’s Globe and Mail in which he said that as ChatGPT and similar LLMs continued to make giant leaps over the past year, his “apprehension steadily grew.” He said that major AI risks are “a grave source of concern for me, keeping me up at night, especially when I think about my grandson and the legacy we will leave to his generation.”

X-risk debate does not diminish friendship, say ‘godfathers’ of AI

The debate does not diminish the long friendship between the quartet. Andrew Ng posted a photo of himself at a recent party celebrating Hinton’s retirement from Google, while LeCun did the same — posting a photo of himself with Hinton and Bengio with a caption saying: “A reminder that people can disagree about important things but still be good friends.”







Author: Sharon Goldman
Source: VentureBeat
Reviewed By: Editorial Team
