
OpenAI updates ChatGPT to new GPT-4o model based on user feedback


OpenAI continues to push the envelope on generative AI. Yesterday, without much fanfare or warning, the company’s official ChatGPT account on the social network X posted an update stating: “there’s a new GPT-4o model out in ChatGPT since last week. hope you all are enjoying it and check it out if you haven’t! we think you’ll like it” followed by a smiley face emoji.

While the ChatGPT app account on X did not provide further information immediately following that post, sources at OpenAI told VentureBeat that the new model was updated based on user feedback.

OpenAI later posted an update on its release notes blog about the model, stating:

we’ve introduced an update to GPT-4o that we’ve found, through experiment results and qualitative feedback, ChatGPT users tend to prefer. It’s not a new frontier-class model. Although we’d like to tell you exactly how the model responses are different, figuring out how to granularly benchmark and communicate model behavior improvements is an ongoing area of research in itself (which we’re working on!).

Not a new reasoning style, despite user speculation

Intrepid users speculated that the new GPT-4o model within ChatGPT was exhibiting step-by-step or multi-step reasoning, offering more detailed natural-language explanations of its processes to the user.

However, an OpenAI spokesperson told VentureBeat that there was not a new reasoning process in the model update, and that ChatGPT describing its reasoning could be triggered by a user’s specific prompt.

Prior to the announcement, users noted that the underlying model powering ChatGPT, OpenAI’s GPT-4o, seemed to be behaving differently and better than in the recent past.

Other users reported that GPT-4o’s native image generation capabilities through ChatGPT also appeared to be activated.

Why is this a big deal, especially when ChatGPT could already generate images? Because ChatGPT, when powered by the prior GPT-4 model, relied on tapping OpenAI’s separate DALL-E 3 diffusion-based image generation model to create images from user text prompts.

One of the big selling points of GPT-4o when it was announced back in May was that it was trained to be natively multimodal, turning not just text but pixels into tokens. That design allows it to generate imagery on its own at higher quality than DALL-E 3, faster and more efficiently, with greater comprehension of text prompts and more accurate, realistic rendering of illustrated text within images.
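For context, here is a minimal sketch of what tapping DALL-E 3 as a separate model looks like through OpenAI’s public Python SDK (the prompt is illustrative); native GPT-4o image generation, by contrast, would not require a call out to a second model:

```python
# Minimal sketch: how an app calls DALL-E 3 as a separate,
# diffusion-based image model via OpenAI's Python SDK.
# (Prompt and size are illustrative; requires OPENAI_API_KEY.)
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",                      # separate image model
    prompt="A robot reading a newspaper",  # hypothetical prompt
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # URL of the generated image
```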

Not everyone is pleased

Others have offered a more critical or even cynical take on the ChatGPT update, stating that OpenAI should do more to explain what’s changed from a model behavior and user experience standpoint.

Some even think the change is ultimately superficial or not hugely noticeable.

Different versions of GPT-4o for ChatGPT and the API?

Asked by VentureBeat about the update, an OpenAI spokesperson stated that: “We’re often making small improvements to our models in ChatGPT and the API. The ChatGPT variant may differ from what’s in the API as we always optimize for what’s best for developers in the variant we ship to the API.”

VentureBeat reported last week that OpenAI updated its GPT-4o model. GPT-4o powers ChatGPT as well as third-party developer apps through OpenAI’s application programming interface (API).

On X, the official OpenAI Developers account posted an update clarifying:

“This model is also now available in the API as `chatgpt-4o-latest`. We recommend `gpt-4o-2024-08-06` for most API usage, but are excited to give developers access to test our latest improvements for chat use cases.”

That wasn’t enough information for some developers, such as Aidan McLau, who asked the company to elaborate on why it is operating two different versions of GPT-4o.

But a member of OpenAI’s technical staff, Michelle Pokrass, quickly responded, stating: “chatgpt-4o-latest will track our 4o model in chatgpt, and is a chat-optimized model. our model from last week (gpt-4o-2024-08-06) is optimized for api usage (eg. function calling, instruction following) so we generally recommend that!”
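For developers choosing between the two variants, a minimal sketch of selecting either model through OpenAI’s Python SDK might look like the following (the prompt and variable names are illustrative):

```python
# Minimal sketch: picking between the chat-optimized model and the
# pinned snapshot OpenAI recommends for API usage.
# (Requires OPENAI_API_KEY; the prompt is illustrative.)
from openai import OpenAI

client = OpenAI()

# Tracks the GPT-4o variant running in ChatGPT; tuned for chat use cases.
chat_model = "chatgpt-4o-latest"

# Pinned snapshot recommended for most API usage
# (e.g. function calling, instruction following).
api_model = "gpt-4o-2024-08-06"

response = client.chat.completions.create(
    model=api_model,
    messages=[{"role": "user", "content": "Summarize this week's model updates."}],
)

print(response.choices[0].message.content)
```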

Correction: Tuesday, August 13 at 4:11 pm ET: This piece originally stated in the headline and article text that the new OpenAI model exhibited step-by-step reasoning, based on a user’s post on X. However, after speaking with OpenAI, the company denied this was the case. We have since updated the piece to reflect this, while retaining the original user tweet that inspired the conclusion.

 

Author: Carl Franzen
Source: Venturebeat
Reviewed By: Editorial Team

