
The future of AI is unknown. That’s the problem with tech ‘prophets’ influencing AI policy | The AI Beat



The skies above where I reside near New York City were noticeably apocalyptic last week. But to some in Silicon Valley, the fact that we wimpy East Coasters were dealing with a sepia hue and a scent profile that mixed cigar bar, campfire and old-school happy hour was nothing to worry about. After all, it is AI, not climate change, that appears to be top of mind for this cohort, who believe future superintelligence is either going to kill us all, save us all, or almost kill us all if we don’t save ourselves first.

Whether they predict the “existential risks” of runaway AGI that could lead to human “extinction” or foretell an AI-powered utopia, this group seems to have equally strong, fixed opinions (for now, anyway — perhaps they are “loosely held”) that easily tip into biblical prophet territory. 

For example, back in February OpenAI published a blog post called “Planning for AGI and Beyond” that some found fascinating but others found “gross.”

The manifesto-of-sorts seemed comically Old Testament-like to me, especially as OpenAI had just accepted an estimated $10 billion investment from Microsoft. The blog post offered revelations, foretold events, warned the world of what is coming, and presented OpenAI as the trustworthy savior. The grand message seemed oddly disconnected from its product-focused PR around how tools like ChatGPT or Microsoft’s Bing might help in use cases like search results or essay writing. In that context, considering how AGI could “empower humanity to maximally flourish in the universe” made me giggle.


New AI prophecies keep coming

But the prophecies keep coming: Last week, on the same day New Yorkers viewed the Empire State Building choked by smoke, venture capitalist Marc Andreessen published a new essay, “Why AI Will Save the World,” in which he casts himself as a soothsayer, predicting an AI utopia as ideal as the Garden of Eden. 

“Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it,” Andreessen wrote. He quickly launched into how that will happen, including the claim that every child will have an AI tutor that is “infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.” This AI tutor, obviously a far cry from any human teacher who is not infinitely anything, will loyally remain by each child’s side throughout their development, he explained, “helping them maximize their potential with the machine version of infinite love.” AI, he claimed, could turn Earth into a perfect, nurturing womb: “Rather than making the world harsher and more mechanistic, infinitely patient and sympathetic AI will make the world warmer and nicer,” he said.

While some immediately compared Andreessen’s essay to Neal Stephenson’s futuristic novel The Diamond Age, his vision still reminded me of a mystical Promised Land that offers happiness and abundance for all eternity — a far more appealing, although equally unlikely, scenario than the one where humanity is destroyed because a rogue AI leads the world into a paperclip apocalypse.

Confident AI forecasts are not facts

The problem with all of these confident forecasts is that no one knows the future of AI — let alone how, or when, artificial general intelligence will emerge. That is very different from issues like climate change, which is backed by “unequivocal evidence” and hard data on rates of change that go far beyond observing the orange skies over Manhattan.

That, in turn, is a problem for societies looking to develop appropriate regulations to address AI risks. If the tech prophets are the ones with the power to influence AI policymakers, will we end up with regulations that focus on an unlikely apocalypse or unicorn-laden utopia, rather than ones that tackle near-term risks related to bias, misinformation, labor shifts and societal disruption?

Are Big Tech CEOs who are open about their efforts to build AGI the right ones to talk with world leaders about their willingness to address AI risks? Are VCs like Marc Andreessen, who is known for leading the charge towards Web3 and crypto, the right influencers to corral the public towards whatever AI future awaits us?

Should preppers be leading the way?

In a New York Times article yesterday, reporter David Streitfeld pointed out that apocalyptic talk is not new to Silicon Valley, where stocked bunkers are a common possession among tech executives. In a 2016 article, he noted, OpenAI CEO Sam Altman said he was amassing “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.”

Now, Streitfeld wrote, this group is prepping for the Singularity. “They like to think they’re sensible people making sage comments, but they sound more like monks in the year 1000 talking about the Rapture,” said Baldur Bjarnason, author of “The Intelligence Illusion,” a critical examination of AI. “It’s a bit frightening,” he said.

Yet some of these figures are the very ones leading the charge on AI risk and safety. For example, two weeks ago the UK prime minister, Rishi Sunak, acknowledged the “existential” risk of artificial intelligence after meeting with the heads of OpenAI, DeepMind and Anthropic — three AI research labs with ongoing efforts to develop AGI.

The concern is that this could divert visibility and resources away from researchers working on the present-day risks of AI, Sara Hooker, formerly of Google Brain and now head of Cohere for AI, told me recently.

“While it is good for some people in the field to work on long-term risks, the amount of those people is currently disproportionate to the ability to accurately estimate that risk,” she said. “I wish more of the attention was placed on the current risk of our models that are deployed every day and used by millions of people. Because for me, that’s what a lot of researchers work on day in, day out.”



Author: Sharon Goldman
Source: VentureBeat
