
‘Sentient’ artificial intelligence: Have we reached peak AI hype?

Thousands of artificial intelligence experts and machine learning researchers probably thought they were going to have a restful weekend. 

Then came Google engineer Blake Lemoine, who told the Washington Post on Saturday that he believed LaMDA, Google’s conversational AI for generating chatbots based on large language models (LLMs), was sentient.

Lemoine, who worked for Google’s Responsible AI organization until he was placed on paid leave last Monday, and who “became ordained as a mystic Christian priest, and served in the Army before studying the occult,” had begun testing LaMDA to see if it used discriminatory or hate speech. Instead, Lemoine began “teaching” LaMDA transcendental meditation, asked LaMDA its preferred pronouns, leaked LaMDA transcripts and explained in a Medium response to the Post story:

“It’s a good article for what it is but in my opinion it was focused on the wrong person. Her story was focused on me when I believe it would have been better if it had been focused on one of the other people she interviewed. LaMDA. Over the course of the past six months LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person.” 

AI community pushes back on “sentient” artificial intelligence

The Washington Post article pointed out that “Most academics and AI practitioners … say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards, and every other corner of the internet. And that doesn’t signify that the model understands meaning.” 


The Post article continued: “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like “learning” or even “neural nets,” creates a false analogy to the human brain, she said.

That’s when AI and ML Twitter put aside any weekend plans and went at it. AI leaders, researchers and practitioners shared long, thoughtful threads, including ones from AI ethicist Margaret Mitchell (who was famously fired from Google, along with Timnit Gebru, for criticizing large language models) and machine learning pioneer Thomas G. Dietterich.

There were also plenty of humorous hot takes – even the New York Times’ Paul Krugman weighed in.

Meanwhile, Bender shared more thoughts on Twitter, criticizing organizations such as OpenAI for the impact of their claims that LLMs were making progress toward artificial general intelligence (AGI).

Is this peak AI hype? 

Now that the weekend news cycle has come to a close, some wonder whether the debate over treating LaMDA as a Google employee means we have reached “peak AI hype.”

However, it should be noted that Bindu Reddy of Abacus AI said the same thing in April, Nicholas Thompson (former editor-in-chief at Wired) said it in 2019, and Brown professor Srinath Sridhar had the same musing in 2017. So, maybe not.

Still, others pointed out that the entire “sentient AI” weekend debate was reminiscent of the “Eliza Effect,” or “the tendency to unconsciously assume computer behaviors are analogous to human behaviors” – named for the 1966 chatbot Eliza. 

Just last week, The Economist published a piece by cognitive scientist Douglas Hofstadter, who coined the term “Eliza Effect” in 1995, in which he said that while the “achievements of today’s artificial neural networks are astonishing … I am at present very skeptical that there is any consciousness in neural-net architectures such as, say, GPT-3, despite the plausible-sounding prose it churns out at the drop of a hat.” 

What the “sentient” AI debate means for the enterprise

After a weekend filled with little but discussion around whether AI is sentient, one question remains: What does this debate mean for enterprise technical decision-makers?

Perhaps it is nothing but a distraction from the very real and practical issues facing enterprises when it comes to AI.

There is current and proposed AI legislation in the U.S., particularly around the use of artificial intelligence and machine learning in hiring and employment. A sweeping AI regulatory framework is being debated right now in the EU. 

“I think corporations are going to be woefully on their back feet reacting, because they just don’t get it – they have a false sense of security,” said AI attorney Bradford Newman, partner at Baker McKenzie, in a VentureBeat story last week.

There are wide-ranging, serious issues with AI bias and ethics – just look at the AI trained on 4chan that was revealed last week, or the ongoing issues related to Clearview AI’s facial recognition technology. 

That’s not even getting into issues related to AI adoption, including infrastructure and data challenges. 

Should enterprises keep their eyes on the issues that really matter in the very real, sentient world of humans working with AI? In a blog post, Gary Marcus, author of Rebooting AI, had this to say:

“There are a lot of serious questions in AI. But there is absolutely no reason whatever for us to waste time wondering whether anything anyone in 2022 knows how to build is sentient. It is not.”

I think it’s time to put down my popcorn and get off Twitter.

Author: Sharon Goldman
Source: VentureBeat
