AI doom, AI boom and the possible destruction of humanity

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

This statement, released this week by the Center for AI Safety (CAIS), reflects an overarching — and some might say overreaching — worry about doomsday scenarios due to a runaway superintelligence. The CAIS statement mirrors the dominant concerns expressed in AI industry conversations over the last two months: Namely, that existential threats may manifest over the next decade or two unless AI technology is strictly regulated on a global scale. 

The statement has been signed by a who’s who of academic experts and technology luminaries, ranging from Geoffrey Hinton (formerly at Google and a long-time proponent of deep learning) to Stuart Russell (a professor of computer science at Berkeley) and Lex Fridman (a research scientist and podcast host at MIT). In addition to extinction, the Center for AI Safety warns of other significant concerns ranging from enfeeblement of human thinking to threats from AI-generated misinformation undermining societal decision-making.

Doom gloom

In a New York Times article, CAIS executive director Dan Hendrycks said: “There’s a very common misconception, even in the AI community, that there only are a handful of doomers. But, in fact, many people privately would express concerns about these things.”


“Doomers” is the keyword in this statement. Clearly, there is a lot of doom talk going on now. For example, Hinton recently departed from Google so that he could embark on an AI-threatens-us-all doom tour.

Throughout the AI community, the term “P(doom)” has become fashionable to describe the probability of such doom. P(doom) is an attempt to quantify the risk of a doomsday scenario in which AI, especially superintelligent AI, causes severe harm to humanity or even leads to human extinction.

On a recent Hard Fork podcast, Kevin Roose of The New York Times set his P(doom) at 5%. Ajeya Cotra, an AI safety expert with Open Philanthropy and a guest on the show, set her P(doom) at 20 to 30%. It should be said, however, that P(doom) is purely speculative and subjective: a reflection of individual beliefs and attitudes toward AI risk rather than a definitive measure of that risk.

Not everyone buys into the AI doom narrative. In fact, some AI experts argue the opposite. These include Andrew Ng (who founded and led the Google Brain project) and Pedro Domingos (a professor of computer science and engineering at the University of Washington and author of The Master Algorithm). They argue, instead, that AI is part of the solution. As Ng puts it, there are indeed existential dangers, such as climate change and future pandemics, and AI can be part of how these are addressed and hopefully mitigated.

Source: https://twitter.com/pmddomingos/status/1663598551975473153

Overshadowing the positive impact of AI

Melanie Mitchell, a prominent AI researcher, is also skeptical of doomsday thinking. Mitchell is the Davis Professor of complexity at the Santa Fe Institute and author of Artificial Intelligence: A Guide for Thinking Humans. Among her arguments is that intelligence cannot be separated from socialization.

In Towards Data Science, Jeremie Harris, co-founder of AI safety company Gladstone AI, interprets Mitchell as arguing that a genuinely intelligent AI system is likely to become socialized, picking up common sense and ethics as a byproduct of its development, and would therefore likely be safe.

While the concept of P(doom) serves to highlight the potential risks of AI, it can inadvertently overshadow a crucial aspect of the debate: The positive impact AI could have on mitigating existential threats.

Hence, to balance the conversation, we should also consider another possibility that I call “P(solution)” or “P(sol),” the probability that AI can play a role in addressing these threats. To give you a sense of my perspective, I estimate my P(doom) to be around 5%, but my P(sol) stands closer to 80%. This reflects my belief that, while we shouldn’t discount the risks, the potential benefits of AI could be substantial enough to outweigh them.
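
To make that weighing of risk against benefit concrete, here is a minimal, purely illustrative Python sketch that plugs the subjective estimates above (5% and 80%) into a toy expected-value calculation. The harm and benefit magnitudes are hypothetical placeholders chosen for illustration, not figures from any study, and the output should not be read as a real risk assessment.

```python
# Toy expected-value comparison of P(doom) vs. P(sol).
# All inputs are subjective estimates or hypothetical placeholders, for illustration only.

p_doom = 0.05   # subjective probability that AI causes catastrophic harm (estimate from the text)
p_sol = 0.80    # subjective probability that AI helps address other existential threats

# Hypothetical utility scale: how bad doom would be, how good a solution would be.
# These magnitudes are arbitrary; changing them can flip the conclusion.
harm_if_doom = -100.0
benefit_if_sol = 10.0

expected_value = p_doom * harm_if_doom + p_sol * benefit_if_sol
print(f"Expected value under these assumptions: {expected_value:+.1f}")
# Prints +3.0 with these placeholders, but a larger assumed harm (or smaller assumed
# benefit) turns it negative -- which is exactly why P(doom) debates hinge on
# subjective inputs rather than settled measurements.
```

The point of the exercise is not the number itself but how sensitive it is to assumptions that no one can currently verify.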

This is not to say that there are no risks, or that we should not pursue best practices and regulations to avoid the worst imaginable possibilities. It is to say, however, that we should not focus solely on potential bad outcomes, or claim, as one post in the Effective Altruism Forum does, that doom is the default probability.

The alignment problem

The primary worry, according to many doomers, is the problem of alignment, where the objectives of a superintelligent AI are not aligned with human values or societal objectives. Although the subject may seem new with the emergence of ChatGPT, the concern dates back more than 60 years. As reported by The Economist, Norbert Wiener, an AI pioneer and the father of cybernetics, published an essay in 1960 describing his worries about a world in which “machines learn” and “develop unforeseen strategies at rates that baffle their programmers.”

The alignment problem was memorably dramatized in the 1968 film 2001: A Space Odyssey, for which Marvin Minsky, another AI pioneer, served as a technical consultant. In the movie, the HAL 9000 computer, the onboard AI for the spaceship Discovery One, begins to behave in ways that are at odds with the interests of the crew. The alignment problem surfaces when HAL’s objectives diverge from those of the humans it is meant to serve.

When HAL learns of the crew’s plans to disconnect it due to concerns about its behavior, HAL perceives this as a threat to the mission’s success and responds by trying to eliminate the crew members. The message is that if an AI’s objectives are not perfectly aligned with human values and goals, the AI might take actions that are harmful or even deadly to humans, even if it is not explicitly programmed to do so.

Fast forward 55 years, and it is this same alignment concern that animates much of the current doomsday conversation. The worry is that an AI system may take harmful actions even though nobody intended it to do so. Many leading AI organizations are diligently working on this problem. Google DeepMind recently published a paper on how best to assess new, general-purpose AI systems for dangerous capabilities and alignment, and how to develop an “early warning system” as a critical aspect of a responsible AI strategy.
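
The DeepMind paper does not ship code, but the general shape of such an “early warning” check can be sketched. The following Python is a hypothetical illustration of the idea only: the evaluation names, scores and alert threshold are assumptions made for this example, not DeepMind’s methodology or API.

```python
# Hypothetical sketch of a dangerous-capability "early warning" check.
# Evaluation names, scores and the threshold are illustrative assumptions,
# not an implementation of the DeepMind paper.

from typing import Callable, Dict

ALERT_THRESHOLD = 0.5  # arbitrary cut-off for raising a flag


def early_warning_report(
    model_name: str,
    evaluations: Dict[str, Callable[[str], float]],
) -> Dict[str, float]:
    """Run each evaluation (returning a 0-1 risk score) and flag any score over the threshold."""
    report = {}
    for eval_name, run_eval in evaluations.items():
        score = run_eval(model_name)
        report[eval_name] = score
        if score >= ALERT_THRESHOLD:
            print(f"[ALERT] {model_name}: '{eval_name}' scored {score:.2f} (>= {ALERT_THRESHOLD})")
    return report


if __name__ == "__main__":
    # Stand-in evaluations; real ones would probe the model's behavior directly.
    dummy_evals = {
        "deception": lambda m: 0.12,
        "self-proliferation": lambda m: 0.07,
        "cyber-offense": lambda m: 0.61,  # deliberately above the threshold to show the alert path
    }
    early_warning_report("example-model", dummy_evals)
```

The design point such a harness illustrates is the one the paper emphasizes: evaluate capabilities continuously and surface warnings before deployment decisions, rather than after.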

A classic paradox

Given these two sides of the debate, P(doom) versus P(sol), there is no consensus on the future of AI. The question remains: Are we heading toward a doom scenario or a promising future enhanced by AI? This is a classic paradox. On one side is the hope that AI represents the best of us and will solve complex problems and save humanity. On the other is the fear that AI will bring out the worst in us, obfuscating the truth, destroying trust and, ultimately, destroying humanity.

As with all paradoxes, the answer is not clear. What is certain is the need for ongoing vigilance and responsible development in AI. Thus, even if you do not buy into the doomsday scenario, it still makes sense to pursue common-sense regulations in the hope of preventing an unlikely but dangerous situation. The stakes, as the Center for AI Safety has reminded us, are nothing less than the future of humanity itself.

Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.

Author: Gary Grossman, Edelman
Source: VentureBeat
