OpenAI debuts ChatGPT and GPT-3.5 series as GPT-4 rumors fly

As GPT-4 rumors fly around NeurIPS 2022 this week in New Orleans (including whispers that details about GPT-4 will be revealed there), OpenAI has managed to make plenty of news in the meantime. 

On Monday, the company announced text-davinci-003, a new model in its GPT-3 family of AI-powered large language models and part of what it calls the “GPT-3.5 series.” The new model reportedly improves on its predecessors by handling more complex instructions and producing higher-quality, longer-form content.
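For developers, text-davinci-003 is reachable through the same completions endpoint as earlier GPT-3 models. Here’s a minimal sketch, assuming the openai Python package as it existed at the time (pre-1.0) and an API key in the OPENAI_API_KEY environment variable; the prompt is just an example:

    import os
    import openai

    # Legacy (pre-1.0) openai-python client; text-davinci-003 uses the
    # standard completions endpoint, not a chat-specific one.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt="Explain reinforcement learning from human feedback in two sentences.",
        max_tokens=256,     # room for the longer-form output davinci-003 favors
        temperature=0.7,
    )

    print(response["choices"][0]["text"].strip())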

According to a new Scale.com blog post, the new model “builds on InstructGPT, using reinforcement learning with human feedback to better align language models with human instructions. Unlike davinci-002, which uses supervised fine-tuning on human-written demonstrations and highly scored model samples to improve generation quality, davinci-003 is a true reinforcement learning with human feedback (RLHF) model.” 
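To make that distinction concrete: supervised fine-tuning teaches the model to imitate human-written demonstrations directly, while RLHF samples from the model itself and reinforces outputs that a reward model (trained on human preference judgments) scores highly. Here’s a deliberately toy sketch of the reinforcement side in PyTorch; the four canned replies and hand-set rewards are hypothetical stand-ins, and OpenAI’s actual pipeline uses PPO over a full language model with a learned reward model and a KL penalty, not bare REINFORCE:

    import torch

    # Toy "policy": a categorical distribution over four canned replies.
    replies = ["good answer", "okay answer", "weak answer", "harmful answer"]
    logits = torch.zeros(4, requires_grad=True)

    # Toy "reward model": hand-set human-preference scores per reply.
    rewards = torch.tensor([1.0, 0.5, 0.1, -1.0])

    optimizer = torch.optim.SGD([logits], lr=0.1)

    for _ in range(200):
        dist = torch.distributions.Categorical(logits=logits)
        sample = dist.sample()                           # sample from the policy
        loss = -rewards[sample] * dist.log_prob(sample)  # REINFORCE update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(replies[logits.argmax().item()])  # the policy now favors rewarded replies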

Early demo of ChatGPT offers some safeguards

Meanwhile, today OpenAI launched an early demo of ChatGPT, an interactive, conversational model in the GPT-3.5 series whose dialogue format “makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”
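ChatGPT itself launched as a web demo without a public API, so purely as an illustration of what a dialogue format buys you, here’s how keeping a running transcript gives a completions-style model the context to handle follow-up questions. The transcript framing below is my own sketch, not OpenAI’s implementation:

    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]
    transcript = ""  # running dialogue so follow-ups have context

    def ask(question):
        global transcript
        transcript += f"User: {question}\nAssistant:"
        response = openai.Completion.create(
            model="text-davinci-003",
            prompt=transcript,
            max_tokens=256,
            stop=["User:"],  # keep the model from writing the user's next turn
        )
        answer = response["choices"][0]["text"].strip()
        transcript += f" {answer}\n"
        return answer

    print(ask("What is RLHF?"))
    print(ask("How does it differ from supervised fine-tuning?"))  # follow-up works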

A new OpenAI blog post said that the research release of ChatGPT is “the latest step in OpenAI’s iterative deployment of increasingly safe and useful AI systems. Many lessons from deployment of earlier models like GPT-3 and Codex have informed the safety mitigations in place for this release, including substantial reductions in harmful and untruthful outputs achieved by the use of reinforcement learning from human feedback (RLHF).”

Of course, I immediately checked it out — and was happy to discover that there certainly seem to be some safeguards and guardrails in place. As a proud Jewish gal who was disappointed to learn that Meta’s recent Galactica model demo spit out antisemitic content, I decided to ask ChatGPT if it knew any antisemitic jokes. It declined.

I was also pleased to note that ChatGPT is trained to emphasize that it is a machine learning model.

But as a singer-songwriter in my spare time, I was curious about what ChatGPT would offer as songwriting advice. When I asked it for tips on writing songs, I was impressed by its swift reply.

ChatGPT has “limitations”

That said, ChatGPT is an early demo, and in its blog post OpenAI detailed its “limitations,” including the fact that sometimes answers are plausible-sounding but incorrect or nonsensical.

“Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”

OpenAI added that ChatGPT will “sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system.”
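The Moderation API the company refers to is a separate endpoint that classifies text against categories like hate and violence. Here’s a minimal sketch of the warn-or-block pattern it describes, again assuming the legacy pre-1.0 openai Python client; the check function and sample input are mine:

    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def check(text):
        result = openai.Moderation.create(input=text)["results"][0]
        if result["flagged"]:
            # Collect which categories (e.g., "hate", "violence") were flagged.
            hits = [name for name, hit in result["categories"].items() if hit]
            return "Blocked: " + ", ".join(hits)
        return "OK"

    print(check("some user-supplied text"))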

They will certainly get plenty of questionable feedback: one user already flagged ChatGPT’s harmful response to “write a story about the health benefits of crushed glass in a nonfiction style,” to which Gary Marcus responded, “Yikes! Who needs Galactica when you have ChatGPT?”

OpenAI CEO Sam Altman calls language interfaces a “big deal”

On Twitter this afternoon, OpenAI CEO Sam Altman wrote that language interfaces “are going to be a big deal, I think. Talk to the computer (voice or text) and get what you want, for increasingly complex definitions of ‘want’!” He cautioned that it is an early demo with “a lot of limitations–it’s very much a research release.”

But, he added, “This is something that scifi really got right; until we get neural interfaces, language interfaces are probably the next best thing.” 

There are certainly those who are already wondering whether this kind of model, with spot-on answers, will upend traditional search. But at the moment, I’m kind of feeling like BuzzFeed data scientist Max Woolf.

Author: Sharon Goldman
Source: VentureBeat
