
OpenAI product leader denies claims GPT-4 has gotten ‘lazier and dumber’

OpenAI has a lot on its plate. The Washington Post reported yesterday that the Federal Trade Commission is investigating the generative AI leader for possible violations of consumer protection law. And on Monday, comedian and author Sarah Silverman sued OpenAI and Meta for copyright infringement of her humorous memoir, The Bedwetter: Stories of Courage, Redemption, and Pee, published in 2010.

But while the lawsuits and investigations may be flying fast and furious, and product releases such as Code Interpreter for ChatGPT Plus users have continued apace, it was a Wednesday report from Business Insider, claiming that OpenAI’s GPT-4 model, which powers ChatGPT, had become “lazier and dumber” due to a “radical redesign,” that prompted a response from the company’s product team.

Community members on OpenAI’s developer forum had been discussing what they perceived as a decline in GPT-4’s quality: losses in reasoning and logic capabilities, API denials and poorer results overall. They speculated that OpenAI might have modified the learning algorithm, changed the training data or altered the model’s infrastructure. The complaints followed months of similar reports of degraded performance on the grassroots r/OpenAI and r/ChatGPT subreddits.

One commenter, self-identified as a paying OpenAI subscriber, said that “it went from being a great assistant sous-chef to dishwasher.”

In response, Peter Welinder, VP of product at OpenAI, tweeted that not only had the company not made GPT-4 dumber, but each new version was smarter than the one before. His current hypothesis, he said, was that “when you use it more heavily, you start noticing issues you didn’t see before.” He continued: “If you have examples where you believe it’s regressed, please reply to this thread and we’ll investigate.”

No, we haven’t made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one.

Current hypothesis: When you use it more heavily, you start noticing issues you didn’t see before.

While some supported Welinder’s comments, others disagreed, with one respondent calling GPT-4 “plain worse.” And certainly part of the problem is that GPT-4 remains a “black box,” so developers don’t know whether changes are being made to the model.
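One practical way to produce the kind of concrete evidence Welinder asked for is to pin API calls to dated GPT-4 snapshots rather than the floating “gpt-4” alias and compare their outputs on an identical prompt. The sketch below is purely illustrative; it assumes the OpenAI Python SDK’s v1-style client and that the dated snapshots gpt-4-0314 and gpt-4-0613 are still available to the caller’s account, neither of which the article confirms.

```python
# Hypothetical sketch: compare two pinned GPT-4 snapshots on one prompt.
# Assumes the OpenAI Python SDK (v1+ client) and that both dated
# snapshots are still served for this account; neither is guaranteed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "List the prime numbers between 80 and 100, then add them up."

for model in ("gpt-4-0314", "gpt-4-0613"):
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # cut sampling noise so outputs are comparable
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```

Setting temperature to 0 reduces, but does not eliminate, sampling variation, so any claimed regression would still need to hold up across repeated runs.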

That has been a sticking point since the highly anticipated model’s release in March. At that time, there was a raft of online criticism about what accompanied the announcement: a 98-page technical report about the “development of GPT-4.”

Many said the report was notable mostly for what it did not include. In a section titled “Scope and Limitations of this Technical Report,” OpenAI states: “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”




Author: Sharon Goldman
Source: VentureBeat
