
Expert calls generative AI a ‘stochastic parrot’ that won’t surpass humans (anytime soon)


There is no shortage of hype around generative AI, but there is also reality.

In a fireside chat session at today’s VentureBeat Transform 2023, Jeff Wong, global CIO at Ernst & Young, was joined by Usama Fayyad, executive director of the Institute for Experiential AI at Northeastern University, for an insightful conversation about the reality of generative AI today.

“I’ve studied technology for a long time and there’s always a difference between what I call the hype curve and the reality curve,” said Wong. “There is the hype and excitement of what’s possible with all these new things that come out, and then the reality of what’s really happening on the ground and what’s really possible with these technologies.”



While there is lots of real opportunity for generative AI, Fayyad emphasized that there is hype around what the technology actually delivers. Fayyad argued that while large language models (LLMs) and generative AI have made impressive advances, they still rely heavily on human oversight and intervention.

“They are stochastic parrots,” said Fayyad. “They don’t understand what they’re saying, they repeat stuff they heard before.”

Fayyad added that ‘parrot’ refers to the repetition of learned items, while ‘stochastic’ provides the randomization. It is that randomization that, in his view, gets models into trouble and leads to potential hallucination.

Why the generative AI hype cycle is grounded in reality

Hype cycles in technology are nothing new, but Fayyad sees generative AI as having a basis in reality that will drive future productivity and economic growth.

In the past, AI has been used to solve narrower problems, such as helping a computer beat a human at chess. Generative AI has a much broader set of practical use cases, and it’s easier to use, too.

“The type of skills that you get with generative models are very well aligned with what we do in the knowledge economy,” he said. “Most of what we do in the knowledge economy is repetitive, laborious and robotic and this stands a chance to kind of provide automation, cost saving and acceleration.”

Where government and regulations should fit in

In Fayyad’s view, the role of governance in general is to make clear who is liable when a problem happens and what the implications of that liability are.

Once the source of liability is determined, there is a person or a legal entity, not just a model, that is to blame. The potential liability is what will motivate organizations to help ensure accuracy and fairness.

Ultimately, though, Fayyad sees the current generation of generative AI as complementary to humans, who should remain the decision makers. So, for example, if a generative AI tool produces a legal brief, the lawyer still needs to read it and be responsible for it. The same is true for code, where a developer needs to take responsibility and be able to debug potential errors.

“People ask me the question, ‘Is AI going to take my job?’” Fayyad said. “My answer is no, AI will not take your job away, but a human using AI will replace you.”



Author: Sean Michael Kerner
Source: Venturebeat
