
With GPT-4, dangers of ‘Stochastic Parrots’ remain, say researchers. No wonder OpenAI CEO is a ‘bit scared’ | The AI Beat

It was another epic week in generative AI: Last Monday, there was Google’s laundry list-like lineup, including a PaLM API and new integrations in Google Workspace. Tuesday brought the surprise release of OpenAI’s GPT-4 model, as well as Anthropic’s Claude. On Thursday, Microsoft announced Microsoft 365 Copilot, which the company said would “change work as we know it.”

This all came before comments from OpenAI CEO Sam Altman over the weekend in which he admitted that, just a few days after releasing GPT-4, the company is, in fact, “a little bit scared” of it all.


By the time Friday came, I was more than ready for a dose of thoughtful reality amid the AI hype.


A look back at research that foreshadowed current AI debates

I got it from the authors of a March 2021 AI research paper, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”

Two years after its publication — which led to the firing of two of its authors, Google ethics researchers Timnit Gebru and Margaret Mitchell — the researchers decided it was time for a look back at an explosive paper that now seems to foreshadow the current debates around the risks of LLMs such as GPT-4.

According to the paper, a language model is a “system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.”
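To make the “stochastic parrot” framing concrete, here is a minimal, purely illustrative sketch (the tiny corpus and function names are invented for this example, and real LLMs use neural networks rather than bigram counts): it stitches words together based only on how often they co-occurred in its training data, with no reference to meaning.

```python
# A toy "stochastic parrot": a bigram model that stitches together word
# sequences purely from observed co-occurrence counts, with no notion of
# meaning. The corpus and names are illustrative only.
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count how often each word follows each other word in the training data.
bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation word by word from the observed probabilities."""
    words = [start]
    for _ in range(length):
        followers = bigram_counts[words[-1]]
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug the dog sat"
```

The output can look fluent, but the system has only “haphazardly stitched together” forms it has seen — the paper’s point, at vastly larger scale.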

In the paper’s abstract, the authors said they were addressing the possible risks associated with large language models and the available paths for mitigating those risks:

“We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.”

Among other criticisms, the paper argued that much of the text mined to build GPT-3 — which was initially released in June 2020 — comes from forums that do not include the voices of women, older people and marginalized groups, leading to inevitable biases that affect the decisions of systems built on top of them.

Fast forward to now: There was no research paper attached to the GPT-4 launch that shared details about its architecture (including model size), hardware, training compute, dataset construction or training method. But in an interview over the weekend with ABC News, Altman acknowledged its risks:

“The thing that I try to caution people the most is what we call the ‘hallucinations problem,’” Altman said. “The model will confidently state things as if they were facts that are entirely made up.”

‘Dangers of Stochastic Parrots’ more relevant than ever, say authors

Gebru and Mitchell, along with co-authors Emily Bender, professor of linguistics at the University of Washington, and Angelina McMillan-Major, a computational linguistics Ph.D. student at the University of Washington, led a series of virtual discussions on Friday, called “Stochastic Parrots Day,” celebrating the original paper.

“I see all of this effort going into ever-larger language models, with all the risks that are laid out in the paper, sort of ignoring those risks and saying, but see, we’re building something that really understands,” said Bender.

At the time the researchers wrote “On the Dangers of Stochastic Parrots,” Mitchell said, she realized that deep learning was at a point where language models were about to take off, but there was still nothing documenting their harms and risks to cite.

“I was like, we have to do this right now or that citation won’t be there. Or else the discussion will go in a totally different direction that really doesn’t address or even acknowledge some of the very obvious harms and risks that I know from my thesis work, for example, which was on the cognitive and psychological side of language perception,” Mitchell recalled.

Lessons for GPT-4 and beyond from ‘On the Dangers of Stochastic Parrots’

There are plenty of lessons from the original paper that the AI community should keep in mind today, said the researchers. “It turns out that we hit on a lot of the things that are happening now,” said Mitchell.

One issue they didn’t see coming, said Gebru, was the worker exploitation and content moderation involved in training ChatGPT and other LLMs, which became widely publicized over the past year.

“That’s one thing I didn’t see at all,” she said. “I didn’t think about that back then because I didn’t see the explosion of information which would then necessitate so many people to moderate the horrible toxic text that people output.”

McMillan-Major added that she thinks about how much the average person now needs to know about this technology, because it has become so ubiquitous.

“In the paper, we mentioned something about watermarking texts, that somehow we could make it clear,” she said. “That’s still something we need to work on — making these things more perceptible to the average person.”
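One watermarking approach that has since gained traction — not a method from the Stochastic Parrots paper, and sketched below only as a hypothetical illustration with invented function names — is to bias generation toward a pseudo-random “green list” of tokens and then check what fraction of a text lands on that list.

```python
# Toy illustration of a "green list" watermark check. Purely hypothetical:
# the hashing scheme and threshold are invented for this sketch, not taken
# from any specific system.
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign roughly half of all words to a 'green list',
    keyed on the previous word, so the split looks random but is verifiable."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of words on the green list. Text generated with a green-list
    bias should score well above the ~0.5 expected from unwatermarked text."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(p, w) for p, w in zip(words, words[1:]))
    return hits / (len(words) - 1)

print(green_fraction("the model will confidently state things as facts"))
```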

Bender pointed out that she also wanted the public to be more aware of the importance of transparency around the source data used to train LLMs, especially when OpenAI has said “it’s a matter of safety to not tell people what this data is.”

In the Stochastic Parrots paper, she recalled, the authors emphasized that it might be wrongly assumed that “because a dataset is big, it is therefore representative and sort of a ground truth about the world.”



Author: Sharon Goldman
Source: VentureBeat
