
Open letter calling for AI ‘pause’ shines light on fierce debate around risks vs. hype



A new open letter calling for a six-month “pause” on large-scale AI development beyond OpenAI’s GPT-4 highlights the complex discourse and fast-growing, fierce debate around AI’s various stomach-churning risks, both short-term and long-term.

Critics of the letter — which was signed by Elon Musk, Steve Wozniak, Yoshua Bengio, Gary Marcus and several thousand other AI experts, researchers and industry leaders — say it fosters unhelpful alarm around hypothetical dangers, leading to misinformation and disinformation about actual, real-world concerns. Others point to the unrealistic nature of a "pause" and note that the letter does not address current efforts toward global AI regulation and legislation.

The letter was published by the nonprofit Future of Life Institute, which was founded to "reduce global catastrophic and existential risk from powerful technologies" (its founders include MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, and DeepMind research scientist Viktoriya Krakovna). The letter says that "With more data and compute, the capabilities of AI systems are scaling rapidly. The largest models are increasingly capable of surpassing human performance across many domains. No single company can forecast what this means for our societies."

The letter points out that superintelligence is far from the only harm to be concerned about when it comes to large AI models — the potential for impersonation and disinformation is another. However, it does emphasize that the stated goal of many commercial labs is to develop AGI (artificial general intelligence) and adds that some researchers believe we are close to AGI, with accompanying concerns for AGI safety and ethics.

"We believe that powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter stated.

Marcus spoke to the New York Times’ Cade Metz about the letter, saying it was important because “we have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.”

Critics say letter ‘further fuels AI hype’ 

The letter’s critics called out what they considered continued hype around the long-term hypothetical dangers of AGI at the expense of near-term risks such as bias and misinformation that are already happening.

Arvind Narayanan, professor of computer science at Princeton, said on Twitter that the letter “further fuels AI hype and makes it harder to tackle real, already occurring AI harms,” adding that he suspected that it will “benefit the companies that it is supposed to regulate, and not society.”

And Alex Engler, a research fellow at the Brookings Institution, told Tech Policy Press that “It would be more credible and effective if its hypotheticals were reasonably grounded in the reality of large machine learning models, which, spoiler, they are not,” adding that he “strongly endorses” independent third-party access to and auditing of large ML models. “That is a key intervention to check corporate claims, enable safe use and identify the real emerging threats.”

Joanna Bryson, a professor at Hertie School in Berlin who works on AI and ethics, called the letter “more BS libertarianism,” tweeting that “we don’t need AI to be arbitrarily slowed, we need AI products to be safe. That involves following and documenting good practice, which requires regulation and audits.”

The issue, she continued, referring to the EU AI Act, is that “we are well-advanced in a European legislative process not acknowledged here.” She also added that “I don’t think this moratorium call makes any sense. If they want this, why aren’t they working through the Internet Governance Forum, or UNESCO?”

Emily M. Bender, professor of linguistics at the University of Washington and co-author of “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” went further, tweeting that the Stochastic Parrots paper pointed to a “headlong” rush to ever larger language models without considering risks.

“But the risks and harms have never been about ‘too powerful AI,’” she said. Instead, “they’re about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).”

In response to the criticism, Marcus pointed out on Twitter that while he doesn't agree with all elements of the open letter, he "didn't let perfect be the enemy of the good." He is "still a skeptic," he said, "who thinks that large language models are shallow, and not close to AGI. But they can still do real damage." He supported the letter's "overall spirit" and promoted it "because this is the conversation we desperately need to have."

Open letter similar to other mainstream media warnings

While the release of GPT-4 has filled the pages and pixels of mainstream media, there has been a parallel media focus on the risks of large-scale AI development — particularly hypothetical possibilities over the long haul.

That was at the heart of a VentureBeat interview yesterday with Suresh Venkatasubramanian, former White House AI policy advisor to the Biden administration from 2021 to 2022 (where he helped develop the Blueprint for an AI Bill of Rights) and professor of computer science at Brown University.

The article detailed Venkatasubramanian's critical response to tweets about ChatGPT from Senator Chris Murphy (D-CT) that received backlash from many in the AI community. He said that Murphy's comments, as well as a recent New York Times op-ed and similar pieces, perpetuate "fear-mongering around generative AI systems that are not very constructive and are preventing us from actually engaging with the real issues with AI systems that are not generative."

We should “focus on the harms that are already seen with AI, then worry about the potential takeover of the universe by generative AI,” he added.



Author: Sharon Goldman
Source: VentureBeat
