Last week, OpenAI unveiled its GPT Store for third-party creators of custom chatbots (GPTs) to list and ultimately monetize their creations. But the company isn’t done making news in the first month of 2024.
Late in the day on Monday, OpenAI published a blog post detailing how it plans to implement new safeguards around its AI tools (especially its image generation model DALL-E and citations for the information surfaced in ChatGPT) in an effort to combat disinformation in the run-up to the wave of elections taking place in countries around the world later this year.
“Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process,” the blog post begins.
It outlines a number of safeguards currently in place around its AI tools, including a “report” function allowing users to flag “potential violations” of custom GPTs and their behavior back to OpenAI. This includes custom GPTs that may impersonate real people or institutions, which OpenAI says is against its usage policies.
New countermeasures coming…soon?
OpenAI’s blog post noted that “users will start to get access to real-time news reporting globally, including attribution and links,” through ChatGPT, which makes sense, given the company’s headline-making partnerships with news outlets including the Associated Press and Axel Springer (publisher of Politico and Business Insider).
Most interestingly from my perspective, the company also committed to implementing image credentials from the Coalition for Content Provenance and Authenticity (C2PA), a non-profit effort by tech and AI companies and trade groups to label AI-generated imagery and content with cryptographic digital watermarking so that it can be reliably detected as AI-generated going forward.
OpenAI says it will implement C2PA credentials/digital watermarking on its DALL-E 3 imagery “early this year,” though no specific date has yet been given.
In addition, the company once again previewed its “provenance classifier, a new tool for detecting images generated by DALL-E.” First mentioned back in the fall of 2023, when DALL-E 3 launched for ChatGPT Plus and Enterprise users, the tool is specifically designed to let users upload imagery and see whether or not it was AI-generated.
“Our internal testing has shown promising early results, even where images have been subject to common types of modifications,” OpenAI’s blog post states. “We plan to soon make it available to our first group of testers—including journalists, platforms, and researchers—for feedback.”
AI-powered campaigning and impersonations are already here
With political activists and established organizations like the Republican National Committee (RNC) in the U.S. already using AI to craft messaging, including messages that impersonate rival candidates, will OpenAI’s tools be enough to make a dent in what is expected to be a record tide of digital disinformation?
It’s of course tough to tell, but clearly the company at least wants to get the word out that it is a promoter of truth and accuracy, even as its tools are used for malicious or duplicitous ends.
Author: Carl Franzen
Source: VentureBeat
Reviewed By: Editorial Team