
The hidden danger of ChatGPT and generative AI | The AI Beat



Since OpenAI launched its early demo of ChatGPT last Wednesday, the tool has already passed one million users, according to CEO Sam Altman, who points out that the milestone took GPT-3 nearly 24 months to reach and DALL-E over two months.

The “interactive, conversational model,” based on the company’s GPT-3.5 text-generator, certainly has the tech world in full swoon mode. Aaron Levie, CEO of Box, tweeted that “ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward.” Y Combinator cofounder Paul Graham tweeted that “clearly something big is happening.” Alberto Romero, author of The Algorithmic Bridge, calls it “by far, the best chatbot in the world.” And even Elon Musk weighed in, tweeting that ChatGPT is “scary good. We are not far from dangerously strong AI.” 

But there is a hidden problem lurking within ChatGPT: it quickly spits out eloquent, confident responses that often sound plausible and true even when they are not.

ChatGPT can sound plausible even if its output is false

Like other generative large language models, ChatGPT makes up facts. Some call it “hallucination” or “stochastic parroting,” but the underlying reason is simple: these models are trained to predict the next word for a given input, not to judge whether a statement is factually correct.
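To make that concrete, here is a toy sketch in Python (the word probabilities are invented purely for illustration and are not drawn from any real model): greedy next-word prediction simply returns the highest-probability continuation, and nothing in the process checks whether the resulting sentence is true.

# Toy illustration only -- invented probabilities, not OpenAI's code.
# A language model scores candidate next words; decoding then picks the
# most likely one. No step here evaluates factual accuracy.

next_word_probs = {
    "Sydney": 0.46,    # fluent and plausible-sounding, but wrong
    "Canberra": 0.41,  # the correct answer
    "Melbourne": 0.09,
    "a": 0.04,
}

def pick_next_word(probs):
    """Greedy decoding: return the single highest-probability word."""
    return max(probs, key=probs.get)

prompt = "The capital of Australia is"
print(prompt, pick_next_word(next_word_probs))
# Prints: The capital of Australia is Sydney  (confident, eloquent, untrue)

In a real model the probabilities come from training on vast amounts of text, but the decoding logic is the same kind of “most plausible next word” choice; truth never enters the objective.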


Some have noted that what sets ChatGPT apart is that it is so darn good at making its hallucinations sound reasonable. 

Technology analyst Benedict Evans, for example, asked ChatGPT to “write a bio for Benedict Evans.” The result, he tweeted, was “plausible, almost entirely untrue.” 

More troubling is the fact that, for an untold number of queries, a user would have no way of knowing the answer was untrue unless they already knew the correct answer to the question they posed.

That’s what Arvind Narayanan, a computer science professor at Princeton, pointed out in a tweet: “People are excited about using ChatGPT for learning. It’s often very good. But the danger is that you can’t tell when it’s wrong unless you already know the answer. I tried some basic information security questions. In most cases the answers sounded plausible but were in fact BS.” 

Fact-checking generative AI

Back in the waning days of print magazines in the 2000s, I spent several years as a fact-checker for publications including GQ and Rolling Stone. Each fact had to include authoritative primary or secondary sources — and Wikipedia was frowned upon. 

Few publications have staff fact-checkers anymore, which puts the onus on reporters and editors to get their facts straight, especially at a time when misinformation already moves like lightning across social media and search engines are under constant pressure to surface verifiable information rather than BS.

That’s certainly why Stack Overflow, the Q&A site for coders and programmers, has temporarily banned users from sharing ChatGPT responses. 

And if Stack Overflow can’t keep up with AI-generated misinformation, it’s hard to imagine others managing a tsunami of potential AI-driven BS. As Gary Marcus tweeted, “If StackOverflow can’t keep up with plausible but incorrect information, what about social media and search engines?”

And while many are salivating at the idea that LLMs like ChatGPT could someday replace traditional search engines, others are strongly pushing back. 

Emily Bender, a professor of linguistics at the University of Washington, has long argued against this notion.

She recently emphasized again that LLMs are “not fit” for search, “both because they are designed to just make sh** up and because they don’t support information literacy.” She pointed to a paper she co-authored on the topic, published in March.

Is it better for ChatGPT to look right? Or be right? 

BS is obviously something that humans have perfected over the centuries. And ChatGPT and other large language models have no idea what it means, really, to “BS.” But OpenAI made this weakness very clear in the blog post announcing the demo, explaining that fixing it is “challenging”:

“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL [reinforcement learning] training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.” 

So it’s clear that OpenAI knows perfectly well that ChatGPT is filled with BS under the surface. The company never meant the technology to offer up a source of truth.

But the question is: Are human users okay with that? 

Unfortunately, they might be. If it sounds good, many humans may think that’s good enough. And, perhaps, that’s where the real danger lies beneath the surface of ChatGPT. How enterprise users will respond remains an open question.



Author: Sharon Goldman
Source: VentureBeat

