AI & RoboticsNews

OpenAI announces bug bounty program to address AI security risks

OpenAI, a leading artificial intelligence (AI) research lab, announced today the launch of a bug bounty program to help address growing cybersecurity risks posed by powerful language models like its own ChatGPT.

The program — run in partnership with the crowdsourced cybersecurity company Bugcrowd — invites independent researchers to report vulnerabilities in OpenAI’s systems in exchange for financial rewards ranging from $200 to $20,000 depending on the severity. OpenAI said the program is part of its “commitment to developing safe and advanced AI.”

Concerns have mounted in recent months over vulnerabilities in AI systems that can generate synthetic text, images and other media. Researchers found a 135% increase in AI-enabled social engineering attacks from January to February, coinciding with the widespread adoption of ChatGPT, according to AI cybersecurity firm Darktrace.

While OpenAI’s announcement was welcomed by some experts, others said a bug bounty program is unlikely to fully address the wide range of cybersecurity risks posed by increasingly sophisticated AI technologies.

The program’s scope is limited to vulnerabilities that could directly impact OpenAI’s systems and partners. It does not appear to address broader concerns over malicious use of such technologies like impersonation, synthetic media or automated hacking tools. OpenAI did not immediately respond to a request for comment.

A bug bounty program with limited scope

The bug bounty program comes amid a spate of security concerns: GPT-4 jailbreaks have emerged that enable users to generate instructions for hacking computers, and researchers have discovered workarounds that let “non-technical” users create malware and phishing emails.

It also comes after a security researcher known as rez0 allegedly used an exploit to hack ChatGPT’s API and discovered more than 80 secret plugins.

Given these controversies, launching a bug bounty program gives OpenAI an opportunity to address vulnerabilities in its product ecosystem while positioning itself as an organization acting in good faith to address the security risks introduced by generative AI.

However, OpenAI’s bug bounty program is limited in the scope of threats it addresses. The program’s official page notes: “Issues related to the content of model prompts and responses are strictly out of scope, and will not be rewarded unless they have an additional directly verifiable security impact on an in-scope service.”

Examples of safety issues considered out of scope include jailbreaks and safety bypasses, getting the model to “say bad things,” getting the model to write malicious code, or getting the model to tell you how to do bad things.

In this sense, OpenAI’s bug bounty program may help the organization improve its own security posture, but it does little to address the security risks that generative AI and GPT-4 introduce for society at large.

Author: Tim Keary
Source: Venturebeat
