Is an AI alien invasion headed for Earth? The VentureBeat editorial staff marveled at the possibility this week, thanks to the massive online traffic earned by one Data Decision Makers community article with its impossible-to-ignore title, “Prepare for arrival: Tech pioneer warns of alien invasion.”
The column, written by Louis Rosenberg, founder of Unanimous AI, was certainly buoyed not only by its SEO-friendly title, but also by its breathless opener: “An alien species is headed for planet Earth and we have no reason to believe it will be friendly. Some experts predict it will get here within 30 years, while others insist it will arrive far sooner. Nobody knows what it will look like, but it will share two key traits with us humans – it will be intelligent and self-aware.”
But a fuller read reveals Rosenberg’s focus on some of today’s hottest AI debates, including the potential for AGI in our lifetimes and why organizations need to prepare with AI ethics: “…while there’s an earnest effort in the AI community to push for safe technologies, there’s also a lack of urgency. That’s because too many of us wrongly believe that a sentient AI created by humanity will somehow be a branch of the human tree, like a digital descendant that shares a very human core. This is wishful thinking … the time to prepare is now.”
DeepMind researcher says ‘game over’ for AGI
Coincidentally, this past week was filled with claims, counterclaims and critiques of claims around the potential to realize AGI anytime soon.
Last Friday, Nando de Freitas, a lead researcher at DeepMind, Alphabet’s AI division, tweeted that “The Game is Over!” in the decades-long quest for AGI, after DeepMind unveiled its new Gato AI, which is capable of complex tasks ranging from stacking blocks to writing poetry.
According to De Freitas, Gato AI simply needs to be scaled up in order to create an AI that rivals human intelligence. Or, as he wrote on Twitter, “It’s all about scale now! It’s all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline… Solving these challenges is what will deliver AGI.”
Pushback on AGI and scaling
Plenty of experts are pushing back on de Freitas’ claims and those of others insisting that AGI or its equivalent is at hand.
Yann LeCun, the French computer scientist who is chief AI scientist at Meta, had this to say (on Facebook, of course):
“About the raging debate regarding the significance of recent progress in AI, it may be useful to (re)state a few obvious facts:
(0) there is no such thing as AGI. Reaching “Human Level AI” may be a useful goal, but even humans are specialized.
(1) the research community is making some progress towards HLAI
(2) scaling up helps. It’s necessary but not sufficient, because….
(3) we are still missing some fundamental concepts
(4) some of those new concepts are possibly “around the corner” (e.g. generalized self-supervised learning)
(5) but we don’t know how many such new concepts are needed. We just see the most obvious ones.
(6) hence, we can’t predict how long it’s going to take to reach HLAI.”
Current AGI efforts as “alt intelligence”
Meanwhile, Gary Marcus, founder of Robust.AI and author of Rebooting AI, added to the debate on his new Substack, whose first post is dedicated to current efforts to develop AGI (including Gato AI), which he calls “alt intelligence”:
“Right now, the predominant strand of work within Alt Intelligence is the idea of scaling. The notion that the bigger the system, the closer we come to true intelligence, maybe even consciousness.
There is nothing new, per se, about studying Alt Intelligence, but the hubris associated with it is. I’ve seen signs for a while, in the dismissiveness with which the current AI superstars, and indeed vast segments of the whole field of AI, treat human cognition, ignoring and even ridiculing scholars in such fields as linguistics, cognitive psychology, anthropology and philosophy.
But this morning I woke to a new reification, a Twitter thread that expresses, out loud, the Alt Intelligence creed, from Nando de Freitas, a brilliant high-level executive at DeepMind, Alphabet’s rightly-venerated AI wing, in a declaration that AI is “all about scale now.”
Marcus closes by saying:
“Let us all encourage a field that is open-minded enough to work in multiple directions, without prematurely dismissing ideas that happen to be not yet fully developed. It may just be that the best path to artificial (general) intelligence isn’t through Alt Intelligence, after all.
As I have written, I am fine with thinking of Gato as an “Alt Intelligence” — an interesting exploration in alternative ways to build intelligence — but we need to take it in context: it doesn’t work like the brain, it doesn’t learn like a child, it doesn’t understand language, it doesn’t align with human values and it can’t be trusted with mission-critical tasks.
It may well be better than anything else we currently have, but the fact that it still doesn’t really work, even after all the immense investments that have been made in it, should give us pause.”
AI alien invasion not arriving anytime soon (whew!)
It’s nice to know that most experts don’t believe the AGI alien invasion will arrive anytime soon.
But the fierce debate around AI and its ability to develop human-level intelligence will certainly continue – on social media and off.
Let me know your thoughts!
— Sharon Goldman, senior editor and writer
Twitter: @sharongoldman