If you wouldn’t take advice from a parrot, don’t listen to ChatGPT: Putting the tool to the test

ChatGPT has taken the world by storm since OpenAI revealed the beta version of its advanced chatbot. OpenAI also released a free ChatGPT app for iPhones and iPads, putting the tool directly in consumers’ hands. The chatbot and other generative AI tools flooding the tech scene have stunned and frightened many users because of their human-like responses and nearly instant replies to questions.  

Many people fail to realize that although these chatbots provide answers that sound “human,” they lack fundamental understanding. ChatGPT was trained on a plethora of internet data — billions of pages of text — and draws its responses from that information alone.

The data ChatGPT is trained on, known as the Common Crawl, is about as good as it gets when it comes to training data. Yet we never actually know why or how the bot arrives at a given answer. And when it generates inaccurate information, it presents it just as confidently; it doesn’t know it’s wrong. Even with deliberate and verbose prompts and premises, it can output both correct and incorrect information.

The costly consequences of blindly following ChatGPT’s advice

We can compare gen AI to a parrot that mimics human language. While this tool has no unique thoughts or understanding of its own, too many people mindlessly listen to and follow its advice. When a parrot speaks, you know it’s repeating words it overheard, so you take it with a grain of salt. Users must treat natural language models with the same dose of skepticism. The consequences of blindly following “advice” from any chatbot could be costly.


A recent study by researchers at Stanford University, “How Is ChatGPT’s Behavior Changing Over Time?”, found that the bot’s accuracy in solving a simple math problem was 98% in March 2023 but dropped drastically to just 2% in June 2023, underscoring its unreliability. Keep in mind, this research was on a basic math problem. Imagine if the math or topic were more complex and a user couldn’t easily validate whether the answer is wrong.

  • What if it was code and had critical bugs? 
  • What about predictions of whether a set of X-rays shows cancer?
  • What about a machine predicting your value to society?

If a person is asking ChatGPT a question, chances are they are not an expert in the topic, and therefore wouldn’t know the difference between correct and incorrect information. Users might not invest time in fact-checking the answer and might make decisions based on incorrect data.

Picking ChatGPT’s ‘brain’ about cybersecurity resilience

I asked ChatGPT for proposed solutions and tactical steps for building cybersecurity resilience against bad actors — a topic with which I’m deeply familiar. It provided some helpful advice and some bad advice. Based on my years of experience in cybersecurity, it was immediately obvious to me which tips were questionable, but someone who is not a subject matter expert likely wouldn’t know which responses were helpful and which were harmful. Each of the tips underscored the need for the human element when assessing advice from a bot.

ChatGPT: “Train your staff: Your staff can be your first line of defense against bad actors. It’s important to train them in best practices for data security and to educate them about potential threats.” 

  • My take: Considerations like level of experience and areas of expertise are critical to keep in mind, as knowing the audience informs the approach to education. Likewise, the training should be rooted in an organization’s specific cybersecurity needs and goals. The most valuable training is practical and grounded in things employees do every day, such as using strong and unique passwords to protect their accounts. As a bot, ChatGPT doesn’t have this context unless you, the asker, provide it. And even with overly verbose and specific prompts, it can still share bad advice.

The verdict: This is a good tip, but it lacks important details about how to train and educate employees. 

ChatGPT: “Collaborate with other companies and organizations: Collaboration is key to building resilience against bad actors. By working together with other companies and organizations, you can share best practices and information about potential threats.”

  • My take: This is good advice when taken in context, specifically when public and private sector organizations collaborate to learn from one another and adopt best practices. However, ChatGPT did not provide any such context. Companies coming together after one has been the victim of an attack and discussing attack details or ransomware payouts, for example, could be incredibly harmful. In the event of a breach, the primary focus should not be on collaboration but rather on triage, response, forensic analysis and work with law enforcement.

The verdict: You need the human element to effectively weigh information from natural language processing (NLP) models.

ChatGPT: “Implement strong security measures: One of the most important steps to building resilience against bad actors is to implement strong security measures for your AI systems. This includes things like robust authentication mechanisms, secure data storage, and encryption of sensitive data.” 

  • My take: While this is good high-level advice (if common sense), “strong security measures” differ depending on where an organization is in its security maturity journey. For example, a 15-person startup warrants different security measures than a global Fortune 100 bank. And while the AI might give better advice with better prompts, operators aren’t trained on what questions to ask or what caveats to provide. If you said the tips were for a small business with no security budget, for instance, you would undoubtedly get a very different response.
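To make “encryption of sensitive data” a little more concrete, here is a minimal sketch using the widely used Python cryptography package. The record contents and key handling are purely illustrative assumptions; in a real deployment the key would come from a secrets manager or KMS, and decisions like key rotation and what counts as “sensitive” still depend on the human context discussed above.

```python
# Minimal sketch: encrypting a sensitive record at rest with symmetric encryption.
# Requires the third-party "cryptography" package (pip install cryptography).
# Key handling here is illustrative only; production systems should load the key
# from a secrets manager or KMS, never hard-code or regenerate it per run.
from cryptography.fernet import Fernet

def encrypt_record(plaintext: bytes, key: bytes) -> bytes:
    """Return an authenticated, encrypted blob suitable for storage."""
    return Fernet(key).encrypt(plaintext)

def decrypt_record(token: bytes, key: bytes) -> bytes:
    """Recover the original plaintext from a stored blob."""
    return Fernet(key).decrypt(token)

if __name__ == "__main__":
    key = Fernet.generate_key()            # in practice: fetched from a secrets store
    blob = encrypt_record(b"customer record: 000-00-0000", key)
    print(decrypt_record(blob, key))       # b'customer record: 000-00-0000'
```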

ChatGPT: “Monitor and analyze data: By monitoring and analyzing data, you can identify patterns and trends that may indicate a potential threat. This can help you take action before the threat becomes serious.” 

  • My take: Tech and security teams use AI for behavioral baselining, which can provide a robust and helpful tool for defenders. AI finds atypical things to look at; however, it should not make determinations. For example, say an organization has had a server performing one function daily for the past six months, and suddenly, it’s downloading copious amounts of data. AI could flag that anomaly as a threat. However, the human element is still critical for the analysis — that is, to see if the issue was an anomaly or something routine like a flurry of software updates on ‘Patch Tuesday.’ The human element is needed to determine if anomalous behavior is actually malicious. 
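The baselining described above does not require a chatbot at all. Below is a minimal sketch, with made-up numbers and an arbitrary three-sigma threshold, of flagging a server whose daily outbound data volume departs sharply from its history; as noted, a flagged day is a prompt for a human analyst to investigate, not a verdict that something malicious happened.

```python
# Minimal sketch of behavioral baselining: flag a day whose outbound data volume
# sits far outside the server's historical mean. The figures and the 3-sigma
# threshold are hypothetical; a flag should trigger human review, not action.
from statistics import mean, stdev

baseline_gb = [1.9, 2.1, 2.0, 2.2, 1.8, 2.0, 2.1]   # recent "normal" daily totals (made up)
today_gb = 48.5                                      # sudden copious download

mu, sigma = mean(baseline_gb), stdev(baseline_gb)
z_score = (today_gb - mu) / sigma

if z_score > 3:
    print(f"Anomaly: {today_gb} GB is {z_score:.0f} sigma above baseline; escalate to an analyst")
else:
    print("Within normal range")
```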

Advice only as good (and fresh) as training data

Like any learning model, ChatGPT gets its “knowledge” from internet data. Skewed or incomplete training data impacts the information it shares, which can cause these tools to produce unexpected or distorted results. What’s more, the advice an AI gives is only as current as its training data; in the case of ChatGPT, anything that relies on information from after 2021 is not considered. This is a massive limitation for an industry like cybersecurity, which is continually evolving and incredibly dynamic.

For example, Google recently released the top-level domain .zip to the public, allowing users to register .zip domains. But cybercriminals are already using .zip domains in phishing campaigns. Now, users need new strategies to identify and avoid these types of phishing attempts.
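Independent of any language model, a deterministic check can catch the obvious cases today. The sketch below is a rough, illustrative heuristic that surfaces links whose hostname ends in .zip so a person can review them; the sample message is invented, and real mail filtering weighs far more signals than a single TLD.

```python
# Rough illustrative heuristic: surface URLs whose hostname ends in the new .zip
# TLD for human review. Real phishing detection uses many more signals; the
# sample message below is made up.
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://\S+")

def suspicious_zip_links(text: str) -> list[str]:
    """Return URLs in the text whose hostname ends in .zip."""
    flagged = []
    for url in URL_RE.findall(text):
        host = urlparse(url).hostname or ""
        if host.endswith(".zip"):
            flagged.append(url)
    return flagged

message = "Your files are ready: https://update-report.zip/download"
print(suspicious_zip_links(message))   # ['https://update-report.zip/download']
```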

But since this is so new, an AI tool would need to be trained on additional data beyond the Common Crawl to be effective at identifying these attempts. Building a new data set of that scale is nearly impossible now because of how much machine-generated text is already out there, and we know that using a machine to teach the machine is a recipe for disaster: it amplifies any biases in the data and reinforces incorrect information.

Not only should people be wary of following advice from ChatGPT, but the industry must also evolve to fight how cybercriminals use it. Bad actors are already creating more believable phishing emails and scams, and that’s just the tip of the iceberg. Tech behemoths must work together to ensure that ethical users stay cautious, responsible and in the lead in the AI arms race.

Zane Bond is a cybersecurity expert and the head of product at Keeper Security.

