Invisible AI watermarks won’t stop bad actors. But they are a ‘really big deal’ for good ones

In an era of deepfakes, bot-generated books and AI images created in the style of famous artists, the promise of digital watermarks to identify AI-generated images and text has been tantalizing for the future of AI transparency.

Back in July, seven companies promised President Biden they would take concrete steps to enhance AI safety, including watermarking. In August, Google DeepMind released a beta version of a new watermarking tool, SynthID, which embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye but detectable for identification.
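
SynthID's exact method is unpublished, so as a purely illustrative sketch of the concept of an invisible pixel-level watermark, here is a toy least-significant-bit (LSB) scheme in Python with NumPy. Everything here (the functions, the key, the random stand-in "image") is hypothetical, and production systems like SynthID are designed to be far more robust:

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int) -> np.ndarray:
    """Overwrite the red channel's least-significant bits with a key-derived pattern."""
    pattern = np.random.default_rng(key).integers(0, 2, image.shape[:2], dtype=np.uint8)
    marked = image.copy()
    marked[..., 0] = (marked[..., 0] & 0xFE) | pattern  # clear the LSB, then set it
    return marked

def detect_watermark(image: np.ndarray, key: int) -> float:
    """Return the fraction of red-channel LSBs that match the key's pattern."""
    pattern = np.random.default_rng(key).integers(0, 2, image.shape[:2], dtype=np.uint8)
    return float(np.mean((image[..., 0] & 1) == pattern))

# A random array stands in for an AI-generated image.
image = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed_watermark(image, key=42)

print(detect_watermark(marked, key=42))  # ~1.0: watermark detected
print(detect_watermark(image, key=42))   # ~0.5: chance level, no watermark
```

The change is invisible in practice: each affected pixel shifts by at most one out of 255 levels, yet the statistical pattern is unambiguous to a detector holding the key.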

Thus far, however, digital watermarks — whether visible or invisible — have not been sufficient to stop bad actors. In fact, Wired recently quoted a University of Maryland computer science professor, Soheil Feizi, who said “we don’t have any reliable watermarking at this point — we broke all of them.” Feizi and his fellow researchers examined how easily bad actors can evade watermarking attempts. In addition to demonstrating how attackers might remove watermarks, they showed how to add watermarks to human-created images, triggering false positives.
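
Continuing the hypothetical LSB sketch above, a few lines show both failure modes for that toy scheme: crude re-encoding erases the mark, and re-stamping it onto a human photo produces a false positive. (Feizi's group attacked far more sophisticated schemes; this only illustrates the shape of the problem.)

```python
# Attack 1: re-quantizing pixel values (a crude stand-in for lossy
# re-encoding) zeroes every LSB and erases the toy watermark.
requantized = (marked // 4) * 4
print(detect_watermark(requantized, key=42))  # ~0.5: back to chance level

# Attack 2: stamping the pattern onto a human-made photo makes the detector
# fire a false positive. This toy attacker knows the key; real attacks
# approximate the watermark from examples of marked outputs.
human_photo = np.random.default_rng(2).integers(0, 256, (64, 64, 3), dtype=np.uint8)
forged = embed_watermark(human_photo, key=42)
print(detect_watermark(forged, key=42))  # ~1.0: false positive
```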

Digital watermarking can enable and support good actors

But in a conversation with VentureBeat, Hugging Face computer scientist and AI ethics researcher Margaret Mitchell said that while digital watermarks may not stop bad actors, they are a “really big deal” for enabling and supporting good actors who want a sort of embedded ‘nutrition label’ for AI content.

When it comes to the ethics and values surrounding AI-generated images and text, she explained, one set of values is related to the concept of provenance. “You want to be able to have some sort of lineage of where things came from and how they evolved,” she said. “That’s useful in order to track content for consent, credit and compensation. It’s also important in order to understand what the potential inputs for models are.”

It’s this group of watermarking users that Mitchell said gets her “really excited.” “I think that has really been lost in some of the recent rhetoric,” she said, explaining that there will always be ways AI technology doesn’t work well. But that doesn’t mean the technology as a whole is bad.

“For a subset of the users or those affected it won’t be the right tool, but for the vast majority it will be right — bad actors are a subset of users, and then a subset of users within that will be those that have the technical know-how to actually perturb the watermark.”

New functions on Hugging Face allow anyone to provide provenance

Mitchell highlighted new functions that Truepic, which provides authenticity infrastructure for the internet, has brought to Hugging Face, an open-access AI platform for hosting machine learning (ML) models. The functions allow Hugging Face users to automatically add responsible provenance metadata to AI-generated images.

First, Truepic added content credentials from the Coalition for Content Provenance and Authenticity (C2PA) to open-source models on Hugging Face, allowing anyone to generate and use transparent synthetic data. In addition, it created an experimental space that combines the provenance credentials with invisible watermarking using technology from Steg.AI, a provider of “sophisticated forensic watermarking solutions” that uses Light Field Messaging (LFM), a process for embedding, transmitting and receiving hidden information in video that is displayed on a screen and captured by a handheld camera.
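
For a rough sense of what such a credential records, here is a simplified, illustrative sketch of a C2PA-style manifest as plain JSON. The assertion label and digital source type follow C2PA and IPTC conventions, but the generator and file names are made up, and real credentials are cryptographically signed and embedded with the official C2PA SDKs rather than assembled by hand:

```python
import json

# Illustrative only: a simplified C2PA-style manifest. Real manifests are
# signed binary structures produced by the C2PA SDKs, not hand-built dicts.
manifest = {
    "claim_generator": "example-image-app/1.0",  # hypothetical generator name
    "title": "sunset.png",                       # hypothetical asset name
    "assertions": [
        {
            # Action assertion recording that the asset was AI-generated
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

print(json.dumps(manifest, indent=2))
```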

Consensus on promise of watermarking

When asked if trying to tackle issues of provenance with watermarking tools feels like a drop in an ocean of AI-generated content, Mitchell laughed. “Welcome to ethics,” she said. “It’s always something good for one small use case and you build and iterate from there.”

But one thing that is particularly exciting about watermarking as a tool, she explained, is that it is “something that both people focused on human values broadly in AI, and then AI Safety with a capital S, have agreed is critical within their realms.”

Then, she added, interest in digital watermarking systems rose to the point of being included in the White House voluntary commitments.

“So in terms of all the various things that various people think are worth prioritizing, there is consensus on watermarking — people actually care about this,” she said. “Compared to some of the other work I’ve been involved in, it doesn’t seem like a drop in the bucket at all. It seems like you’re starting to fill up buckets.”


Author: Sharon Goldman
Source: VentureBeat
Reviewed By: Editorial Team
