
The possibility of general AI



One of the challenges in following the news about developments in the field of artificial intelligence is that the term “AI” is often used indiscriminately to mean two unrelated things.

The first use of the term AI is something more precisely called narrow AI. It is powerful technology, but it is also pretty simple and straightforward: You take a bunch of data about the past, use a computer to analyze it and find patterns, and then use that analysis to make predictions about the future. This type of AI touches all our lives many times a day, as it filters spam out of our email and routes us through traffic. But because it is trained with data about the past, it works only where the future resembles the past. That's why it can identify cats and play chess: neither changes on an elemental level from day to day.
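To make that recipe concrete, here is a minimal sketch in Python using scikit-learn. The toy emails, labels, and choice of model are all illustrative assumptions, not how any production spam filter actually works.

```python
# A minimal sketch of the narrow-AI recipe: learn patterns from data
# about the past, then use them to predict the future. The tiny toy
# dataset below is invented for illustration; a real spam filter
# trains on millions of labeled emails.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Step 1: data about the past -- emails someone has already labeled.
past_emails = [
    "win a free prize now",
    "claim your cash reward today",
    "meeting moved to 3pm",
    "are we still on for lunch tomorrow",
]
labels = ["spam", "spam", "ham", "ham"]

# Step 2: find patterns -- turn text into word counts, fit a model.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(past_emails)
model = MultinomialNB()
model.fit(features, labels)

# Step 3: predict the future -- this works only because tomorrow's
# spam resembles yesterday's spam.
new_email = ["claim your free prize"]
print(model.predict(vectorizer.transform(new_email)))  # -> ['spam']
```

Everything the model "knows" comes from those past examples; show it an email unlike anything in its training data and its prediction is a guess, which is exactly the limitation described above.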

The other use of the term AI is to describe something we call general AI, or often AGI. It doesn’t exist yet except in science fiction, and no one knows how to make it. A general AI is a computer program that is as intellectually versatile as a human. It can teach itself entirely new things that it has never been trained on before.

The difference between narrow and general AI

In the movies, AGI is Data from “Star Trek,” C-3PO from “Star Wars” and the replicants in “Blade Runner.” While it might intuitively seem that narrow AI is the same kind of thing as general AI, just a less mature and sophisticated implementation, this is not the case. General AI is something different. Identifying a spam email, for instance, isn’t computationally the same as being truly creative, which a general intelligence would be.


I used to host a podcast about AI called “Voices in AI.” It was a lot of fun to do because most of the great practitioners of the science are accessible and were willing to be on the podcast. Thus, I ended up with a gallery of over a hundred great AI thinkers talking in depth about the topic. There were two questions I would ask most guests. The first was, “Is general AI possible?” Virtually everyone — with just four exceptions — said yes, it is possible. Then I would ask them when we will build it. Those answers were all over the map, some as soon as five years and others as long as 500.

Why would this be?

Why would virtually all of my guests say general AI is possible, yet offer such a wide range of informed estimates as to when we will make it? The answer goes back to a statement I made earlier: We don’t know how to build general intelligence, so your guess is as good as anyone else’s.

“But wait!” you might be saying. “If we don’t know how to make it, why do the experts so overwhelmingly agree that it is possible?” I would ask them that question as well, and I usually got a variant of the same answer. Their confidence that we will build a truly intelligent machine is based on a single core belief: that people are intelligent machines. Because we are machines, the reasoning goes, and have general intelligence, building machines with general intelligence must be possible.

Human vs. machine

To be sure, if people are machines, then those experts are right: General intelligence isn't merely possible, but inevitable. However, if it turns out that people are not merely machines, then there is something about people that may not be reproducible in silicon.

The interesting thing about this is the disconnect between those hundred or so AI experts and everyone else. When I give talks on this topic to general audiences and ask who believes they are machines, roughly 15% raise their hands, a far cry from the 96% of the AI experts.

On my podcast, when I would push back on this assumption about the nature of human intelligence, my guests would usually accuse me — quite politely, of course — of indulging in some kind of magical thinking that is at its core antiscience. “What else could we possibly be if not biological machines?”

It is a fair question and an important one. We know of only one thing in the universe with general intelligence, and that is us. How do we happen to have such a powerful creative superpower? We don’t really know.

Intelligence as a superpower

Try to recall the color of your first bicycle or the name of your first-grade teacher. Maybe you haven't thought about either of those in years, yet your brain was probably able to retrieve them with little effort, which is all the more impressive when you consider that "data" isn't stored in your brain as it is on a hard drive. In fact, we don't know how it is stored. We may come to discover that each of the hundred billion neurons in your brain is as complicated as our most advanced supercomputer.

But that's just where the mystery of our intelligence starts. It gets trickier from there. It turns out we have something called a mind, which is different from the brain. The mind is everything the three pounds of goo in your head can do that it seems like it shouldn't be able to, like having a sense of humor or falling in love. Your heart doesn't do those things, nor does your liver. But somehow you do.

We don't even know for certain that the mind is solely a product of the brain. More than a few people are born missing up to 95% of their brains, yet still have normal intelligence, and often don't even know of their condition until a diagnostic exam later in life reveals it. Further, we seem to have a lot of intelligence that isn't stored in our brains but is distributed throughout our bodies.

General AI: The added complexity of consciousness

Even though we don’t understand the brain or the mind, it actually gets more difficult from there: General intelligence might well require consciousness. Consciousness is the experience you have of the world. A thermometer may accurately tell you the temperature, but it cannot feel warmth. That difference, between knowing something and experiencing something, is consciousness, and we have little reason to believe that computers can experience the world any more than a chair can.

So here we are with brains we don't understand, minds we cannot explain and, as for consciousness, not even a good theory of how it is possible for mere matter to have an experience at all. Yet, in spite of all of this, the AI folks who believe in general AI are confident that we can replicate all human abilities in computers. To my ear, that is the argument that seems to appeal to magical thinking.

I don't say that to be dismissive or to trivialize anyone's beliefs. They may well be correct. I just regard the idea of general AI as an unproven hypothesis, not an obvious scientific truth. The desire to build such a creature, and then to control it, is an ancient dream of humanity. In its modern form it is centuries old, beginning perhaps with Mary Shelley's "Frankenstein" and manifesting in a thousand later stories. But it is really much older than that. As far back as we have writing, we have such imaginings, such as the story of Talos, a robot created by the Greek god of technology, Hephaestus, to guard the island of Crete.

Somewhere deep inside us is a desire to create this creature and command its awesome power, but nothing so far should be taken as an indication that we actually can.

Byron Reese is a technologist and author.

Author: Byron Reese
Source: VentureBeat
