Today’s AI is ‘alchemy,’ not science — what that means and why that matters | The AI Beat

A New York Times article this morning, titled “How to Tell if Your AI Is Conscious,” says that in a new report, “scientists offer a list of measurable qualities” based on a “brand-new” science of consciousness. 

The article immediately jumped out at me, as it was published just a few days after I had a long chat with Thomas Krendl Gilbert, a machine ethicist who, among other things, has long studied the intersection of science and politics. Gilbert recently launched a new podcast, “The Retort,” with Hugging Face researcher Nathan Lambert; its inaugural episode pushes back on the idea of today’s AI as a truly scientific endeavor.

Gilbert maintains that much of today’s AI research cannot reasonably be called science at all. Instead, he views it as a new form of alchemy: the medieval forerunner of chemistry, which can also be defined as a “seemingly magical process of transformation.”

Like alchemy, AI is rooted in ‘magical’ metaphors

Many critics of deep learning and large language models, including some of the people who built them, refer to AI as a form of alchemy, Gilbert told me on a video call. What they mean, he explained, is that it isn’t scientific, in the sense that it isn’t rigorous or experimental. But he added that he means something more literal when he says that AI is alchemy.

“The people building it actually think that what they’re doing is magical,” he said. “And that’s rooted in a lot of metaphors, ideas that have now filtered into public discourse over the past several months, like AGI and superintelligence.” The prevailing idea, he explained, is that intelligence itself is scalar, depending only on the amount of data thrown at a model and the model’s computational limits.
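The quantitative version of that belief is the “scaling laws” literature. As a rough sketch (this is the commonly cited loss formula from DeepMind’s Chinchilla paper, Hoffmann et al. 2022, not something from Gilbert’s interview), expected test loss is modeled as a function of model size and data alone:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

Here $N$ is the number of parameters, $D$ is the number of training tokens, and $E$, $A$, $B$, $\alpha$ and $\beta$ are fitted constants. On this view, capability improves smoothly and predictably with scale, which is precisely the “scalar intelligence” assumption Gilbert is questioning.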

But, he emphasized, like alchemy, much of today’s AI research is not necessarily trying to be what we know as science, either. Alchemy historically had no peer review or public sharing of results, for example; much of today’s closed AI research has neither.

“It was very secretive, and frankly, that’s how AI works right now,” he said. “It’s largely a matter of assuming magical properties about the amount of intelligence that is implicit in the structure of the internet — and then building computation and structuring it such that you can distill that web of knowledge that we’ve all been building for decades now, and then seeing what comes out.” 

AI and cognitive dissonance

I was particularly interested in Gilbert’s thoughts on “alchemy” given the current AI discourse, which seems to me to include some doozies of cognitive dissonance: There was the Senate’s closed-door “AI Insight Forum,” where Elon Musk called for AI regulators to serve as a “referee” to keep AI “safe,” while he actively works on using AI to put microchips in human brains and to make humans a “multiplanetary species.” There was the EU parliament saying that AI extinction risk should be a global priority, while OpenAI CEO Sam Altman said hallucinations can be seen as a positive — part of the “magic” of generative AI — and that “superintelligence” is simply an “engineering problem.”

And there was DeepMind co-founder Mustafa Suleyman, who would not explain to MIT Technology Review how his company Inflection’s Pi manages to refrain from toxic output — “I’m not going to go into too many details because it’s sensitive,” he said — while calling on governments to regulate AI and appoint cabinet-level tech ministers.  

It’s enough to make my head spin — but Gilbert’s take on AI as alchemy put these seemingly opposing ideas into perspective. 

The ‘magic’ comes from the interface, not the model

Gilbert clarified that he isn’t saying the alchemy of AI is wrong in itself — only that its lack of scientific rigor needs to be called what it really is.

“They’re building systems that are arbitrarily intelligent, not intelligent in the way that humans are — whatever that means — but just arbitrarily intelligent,” he explained. “That’s not a well-framed problem, because it’s assuming something about intelligence that we have very little or no evidence of, that is an inherently mystical or supernatural claim.” 

AI builders, he continued, “don’t need to know what the mechanisms are” that make the technology work, but they are “interested enough and motivated enough and frankly, also have the resources enough to just play with it.”  

The magic of generative AI, he added, doesn’t come from the model. “The magic comes from the way the model is matched to the interface. The magic people like so much is that I feel like I’m talking to a machine when I play with ChatGPT. That’s not a property of the model, that’s a property of ChatGPT — of the interface.” 

In support of this idea, researchers at Alphabet’s AI division DeepMind recently published work showing that large language models can optimize their own prompts and perform better when told to “take a deep breath and work on this problem step-by-step,” though it remains unclear exactly why this incantation works as well as it does (especially given that an AI model does not actually breathe at all).
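To make the finding concrete, here is a minimal sketch in Python of how such a prompt prefix might be compared against a baseline. This is not DeepMind’s code: query_model is a hypothetical stand-in for whatever LLM API you use, and the evaluation assumes a small set of questions with known answers.

```python
# Minimal sketch: comparing a baseline prompt against the "deep breath"
# prefix. Not DeepMind's code; `query_model` is a hypothetical stub.

def query_model(prompt: str) -> str:
    """Send a prompt to an LLM and return its text reply (stub)."""
    raise NotImplementedError("Wire this up to your LLM provider of choice.")

BASELINE = "{question}"
INCANTATION = (
    "Take a deep breath and work on this problem step-by-step.\n{question}"
)

def accuracy(template: str, problems: list[tuple[str, str]]) -> float:
    """Fraction of problems whose known answer appears in the model's reply."""
    hits = 0
    for question, answer in problems:
        reply = query_model(template.format(question=question))
        hits += answer in reply
    return hits / len(problems)

# Hypothetical usage: score both templates on labeled math problems and
# keep whichever template does better.
# problems = [("What is 12 * 7?", "84"), ("What is 9 + 15?", "24")]
# print(accuracy(BASELINE, problems), accuracy(INCANTATION, problems))
```

The DeepMind result goes a step further, using one model to propose new candidate instructions and keeping the best scorers, but a comparison loop like the one above is the basic shape of the experiment.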

The consequences of AI as alchemy

The alchemy of AI has its most serious consequences when it intersects with politics — as it does now with discussions around AI regulation in the US and the EU, said Gilbert.

“In politics, what we’re trying to do is articulate a notion of what is good to do, to establish the grounds for consensus — that is fundamentally what’s at stake in the hearings right now,” he said. “We have a very rarefied world of AI builders and engineers, who are engaged in the stance of articulating what they’re doing and why it matters to the people that we have elected to represent our political interests.” 

The problem is that we can only guess at the work of Big Tech AI builders, he said. “We’re living in a weird moment,” he explained, where the metaphors that compare AI to human intelligence are still being used, but the mechanisms are “not remotely” well understood. 

“In AI, we don’t really know what the mechanisms are for these models, but we still talk about them like they’re intelligent. We still talk about them like…there’s some kind of anthropological ground that is being uncovered… and there’s truly no basis for that.” 

But while there is no rigorous scientific evidence backing many of the claims of existential risk from AI, that doesn’t mean they aren’t worthy of investigation, he cautioned. “In fact, I would argue that they’re highly worthy of investigation scientifically — [but] when those things start to be framed as a political project or a political priority, that’s a different realm of significance.”

Meanwhile, the open source generative AI movement — led by the likes of Meta Platforms with its Llama models, along with smaller startups such as Anyscale and Deci — is offering researchers, technologists, policymakers and prospective customers a clearer window onto the inner workings of the technology. But translating the research into non-technical terminology that laypeople, including lawmakers, can understand remains a significant challenge.

AI alchemy: Neither good politics nor good science

That, Gilbert explained, is the key problem when AI, as alchemy rather than science, becomes a political project.

“It’s a laxity of public rigor, combined with a certain kind of… willingness to keep your cards close to your chest, but then say whatever you want about your cards in public with no robust interface for interrelating the two,” he said. 

Ultimately, he said, the current alchemy of AI can be seen as “tragic.” 

“There is a kind of brilliance in the prognostication, but it’s not clearly matched to a regime of accountability,” he said. “And without accountability, you get neither good politics nor good science.” 

Author: Sharon Goldman
Source: VentureBeat