Survey says there’s a 50% chance AI beats humans at all tasks in 20 years

AI is moving at an astronomical pace: In a little more than a year, it has shifted the entire conversation not just in enterprise, but in everyday life. 

Things are accelerating so fast, in fact, that even those working in the field are taken aback. Many report being surprised — and increasingly concerned — by AI’s rapid progress, according to a new survey. 

In the 2023 Expert Survey on Progress in AI, the largest study of its kind, researchers at AI Impacts, the University of Bonn and the University of Oxford sought opinions from 2,778 authors whose work had appeared in top industry publications and forums.

Most notably, participants reported that if science continues undisrupted, the chance of unaided machines outperforming humans in every possible task could hit 10% in just three years (by 2027) and 50% by 2047. 

Also, respondents said the chance that all human occupations become fully automatable could reach 10% by 2037, and, alarmingly, that there’s at least a 10% chance advanced AI could cause “severe disempowerment” or even extinction of the human race. That echoes the concerns of those in the industry who subscribe to “existential risk,” or “x-risk,” beliefs about AI, a school of thought closely intertwined with the effective altruism, or “EA,” movement. (Critics of these beliefs paint them as unrealistic and say focusing on them minimizes AI’s real, short-term harms such as job loss and inequality.)

“While the optimistic scenarios reflect AI’s potential to revolutionize various aspects of work and life, the pessimistic predictions — particularly those involving extinction-level risks — serve as a stark reminder of the high stakes involved in AI development and deployment,” researchers reflected. 

AI will soon feasibly perform a range of tasks and occupations

The survey was the third in a series, following others conducted in 2016 and 2022 — and many opinions and projections have dramatically changed. 

Participants — four times as many as in the previous survey — were surveyed in fall 2023, after an “eventful year of broad progress,” including the launch of ChatGPT, Anthropic’s Claude 2, Google’s Bard and Gemini, and many more models; the dissemination of two AI safety letters; and government action in the U.S., UK and EU.

Respondents were first asked how soon 39 specific tasks would become “feasible” for AI, with “feasible” meaning that “one of the best resourced labs could implement it in less than a year.”

Some of these tasks included: 

  • Translating text in a newfound language
  • Recognizing an object seen only once
  • Writing simple Python code given specifications and examples (see the sketch after this list)
  • Penning fiction that makes The New York Times best-seller list
  • Autonomously constructing a payment processing site from scratch
  • Fine-tuning a large language model
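
To make the Python item above concrete, here is a minimal, hypothetical instance of the kind of task the survey describes: a short specification plus an example, and the simple function that satisfies them (the specification, function name and example are illustrative, not drawn from the survey itself).

```python
# Hypothetical illustration of "writing simple Python code given
# specifications and examples" -- not an actual survey item.
#
# Specification: return the sum of the even numbers in a list of integers.
# Example: [1, 2, 3, 4] -> 6

def sum_of_evens(numbers: list[int]) -> int:
    """Return the sum of the even integers in `numbers`."""
    return sum(n for n in numbers if n % 2 == 0)

# Check the provided example.
assert sum_of_evens([1, 2, 3, 4]) == 6
```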

All but four of the 39 tasks were predicted to have at least a 50% chance of being feasible within the next 10 years. In just one year between surveys, aggregate predictions for 21 out of 32 tasks moved earlier. 

Abilities expected to take longer than 10 years included: 

  • After spending time in a virtual world, outputting the differential equations governing that world in symbolic form (12 years)
  • Physically installing the electrical wiring in a new home (17 years)
  • Proving mathematical theorems that are publishable in top mathematics journals today (22 years)
  • Solving long-standing unsolved problems in mathematics such as a Millennium Prize problem (27 years)

When AI can work unaided or outperform humans

Researchers also polled respondents about how soon human-level performance might be feasible for “High-Level Machine Intelligence” (HLMI), which concerns tasks, and “Full Automation of Labor” (FAOL), which concerns occupations.

Per the survey’s definitions, HLMI would be achieved when unaided machines could accomplish every task better and more cheaply than humans. FAOL, meanwhile, would occur when an occupation becomes fully automatable because “unaided machines can accomplish it better and more cheaply than human workers.”

Those polled predicted a 50% chance of HLMI by 2047, 13 years earlier than the 2060 estimate in the 2022 survey. They put a 50% chance of FAOL at the year 2116, a significant 48 years earlier than in the prior survey.
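
For readers unfamiliar with how such group timelines are read, the sketch below shows one simple, illustrative way a figure like “a 50% chance of HLMI by 2047” can arise from individual forecasts: each respondent assigns a probability to the milestone arriving by several candidate years, and the group’s median probability is tracked until it crosses 50%. The numbers are hypothetical and the method is deliberately simplified; the survey’s actual aggregation is more involved.

```python
# Toy illustration of aggregating timeline forecasts -- hypothetical numbers,
# not the survey's actual data or methodology.
import statistics

# Probability each (hypothetical) respondent assigns to HLMI arriving by a given year.
forecasts = {
    2040: [0.20, 0.35, 0.10, 0.50],
    2047: [0.45, 0.60, 0.40, 0.65],
    2060: [0.70, 0.80, 0.55, 0.90],
}

# Track the group's median probability and report the first year it reaches 50%.
for year, probs in sorted(forecasts.items()):
    median_p = statistics.median(probs)
    print(f"{year}: median probability {median_p:.0%}")
    if median_p >= 0.5:
        print(f"Aggregate 50% threshold first crossed by {year}")
        break
```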

“While the range of views on how long it will take for milestones to be feasible can be broad, this year’s survey saw a general shift towards earlier expectations,” researchers write.

AI leading to outcomes good and bad

Of course, there are many concerns about risk in AI systems — often related to alignment, trustworthiness, predictability, self-directedness, capabilities and jailbreakability, researchers note. 

To gauge top worries in AI, respondents were asked how likely it was for state-of-the-art AI systems to have certain traits by 2043. 

Within the next 20 years, a large majority of participants thought that models would be able to:

  • Find unexpected ways to achieve goals (82%)
  • Talk like a human expert on most topics (81%)
  • Frequently behave in ways that are surprising to humans (69%)

Respondents also said that by as soon as 2028, AI would be able to puzzle humans: We will likely often be unable to know the true reasons for an AI system’s outputs.

Participants also expressed “substantial” or “extreme” concern that AI could be used for dissemination of false information via deepfakes; for manipulation of large-scale public opinion trends; by dangerous groups to make powerful tools (such as viruses); and by authoritarian rulers to control their populations. Furthermore, they reported that AI systems could worsen economic inequality. 

In light of all this, there was strong consensus that AI safety research should be prioritized as AI tools continue to evolve. 

Going further, participants held mixed views on AI’s overall impact: More than two-thirds (68%) said good outcomes of AI were more likely than bad, yet close to 58% also said extremely bad outcomes were a “nontrivial possibility.”

Depending on how questions were posed, roughly half of all respondents said there was a greater than 10% chance of human extinction or severe disempowerment. 

“On the very pessimistic end, one in 10 participants put at least a 25% chance on outcomes in the range of human extinction,” researchers report. 

AI experts aren’t fortune tellers (yet)

Still, the researchers were quick to emphasize that, while those in the field are familiar with the technology and “the dynamics of past progress,” forecasting remains difficult, even for experts.

As the paper drolly puts it: Participants “do not, to our knowledge, have any unusual skill at forecasting in general.” 

Because respondents gave widely varying answers to many questions, it’s difficult to identify a true consensus, and answers are also influenced to an extent by how questions are framed. The researchers therefore advise that these forecasts be one input into a broader discussion that also draws on trends in computer hardware, advancements in AI capabilities and economic analyses.

Still, “despite these limitations, AI researchers are well-positioned to contribute to the accuracy of our collective guesses about the future,” the report asserts. “While unreliable, educated guesses are what we must all rely on.” 



Author: Taryn Plumb
Source: VentureBeat
Reviewed By: Editorial Team
