
Sarah Silverman vs. AI: A new punchline in the battle for ethical digital frontiers

Generative AI is no laughing matter, as Sarah Silverman proved when she filed suit against OpenAI, creator of ChatGPT, and Meta for copyright infringement. She and novelists Christopher Golden and Richard Kadrey allege that the companies trained their large language models (LLMs) on the authors’ published works without consent, wading into new legal territory.

One week earlier, a class action lawsuit was filed against OpenAI. That case largely centers on the premise that generative AI models use unsuspecting people’s information in a manner that violates their guaranteed right to privacy. These filings come as nations all over the world question AI’s reach, its implications for consumers, and what kinds of regulations — and remedies — are necessary to keep its power in check.

Without a doubt, we are in a race against time to prevent future harm, yet we also need to figure out how to address our current precarious state without destroying existing models or depleting their value. If we are serious about protecting consumers’ right to privacy, companies must take it upon themselves to develop and execute a new breed of ethical use policies specific to gen AI.

What’s the problem?

The issue of data — who has access to it, for what purpose, and whether consent was given to use one’s data for that purpose — is at the crux of the gen AI conundrum. So much data is already a part of existing models, informing them in ways that were previously inconceivable. And mountains of information continue to be added every day.


This is problematic because consumers never realized that their information and queries, their intellectual property and artistic creations, could be used to fuel AI models. Seemingly innocuous interactions are now scraped and used for training. When models analyze this data, they unlock entirely new levels of insight into behavior patterns and interests, derived from data consumers never consented to having used for such purposes.

In a nutshell, it means chatbots like ChatGPT and Bard, as well as AI models created and used by companies of all sorts, are indefinitely leveraging information they technically have no right to use.

And despite consumer protections like the right to be forgotten under GDPR or the right to delete personal information under California’s CCPA, companies have no simple mechanism to remove an individual’s information on request. Once a gen AI model is deployed, it is extremely difficult to extricate that data from the model or algorithm; the repercussions of doing so reverberate through the model. Yet entities like the FTC aim to force companies to do just that.
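To make the difficulty concrete, here is a minimal sketch, in Python with scikit-learn, of why honoring a deletion request is trivial for a stored dataset but not for a model trained on it. The handle_deletion_request helper and the synthetic data are hypothetical illustrations, not any vendor’s actual API:

```python
# A minimal sketch of why the "right to delete" is hard for trained models.
# The data and helper below are hypothetical, not any vendor's actual API.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))          # stand-in for user-derived features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)  # record 42's influence is now baked in

def handle_deletion_request(X, y, index):
    """Deleting the row is easy; purging its influence from the model is not."""
    X_kept = np.delete(X, index, axis=0)  # the record vanishes from storage...
    y_kept = np.delete(y, index)
    # ...but the previously trained weights still encode it. The only reliable
    # remedy today is retraining from scratch on the remaining data, which is
    # exactly the cost that algorithmic disgorgement imposes at full scale.
    return LogisticRegression().fit(X_kept, y_kept), X_kept, y_kept

model, X, y = handle_deletion_request(X, y, index=42)
```

For a toy logistic regression, retraining takes milliseconds; for a foundation model trained on billions of documents, the same remedy can mean months of compute, which is why deletion requests are so hard to honor after the fact.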

A stern warning to AI companies

Last year the FTC ordered WW International (formerly Weight Watchers) to destroy the algorithms and AI models it had built using kids’ data collected without parental permission, in violation of the Children’s Online Privacy Protection Act (COPPA). More recently, Amazon was fined for a similar violation involving Alexa, with Commissioner Alvaro Bedoya writing that the settlement should serve as “a warning for every AI company sprinting to acquire more and more data.” Organizations are on notice: The FTC and others are coming, and the penalties associated with data deletion are far worse than any fine.

This is because the truly valuable intellectual and performative property in the current AI-driven world comes from the models themselves. They are the value store. If organizations mishandle data and trigger algorithmic disgorgement (a remedy that could be extended to cases beyond COPPA), the models essentially become worthless (or create value only on the black market). And invaluable insights — sometimes years in the making — will be lost.

Protecting the future

In addition to asking questions about the reasons they are collecting and keeping specific data points, companies must take an ethical and responsible corporate-wide position on the use of gen AI within their businesses. Doing so protects them and the customers they serve.

Take Adobe, for example. Amid a questionable track record of AI utilization, it was among the first to formalize an ethical use policy for gen AI, complete with an Ethics Review Board. Adobe’s approach, guidelines, and ideals regarding AI are easy to find: one click from the homepage, via an “AI at Adobe” tab on the main navigation bar. The company has placed AI ethics front and center, becoming an advocate for gen AI that respects human contributions. At face value, it is a position that evokes trust.

Contrast this approach with companies like Microsoft, Twitter, and Meta, which have reduced the size of their responsible AI teams. Such moves could make consumers wary that the companies holding the greatest amounts of data are putting profits ahead of protection.

To gain consumer trust and respect, earn and retain users, and slow down the potential harm gen AI could unleash, every company that touches consumer data needs to develop — and enforce — an ethical use policy for gen AI. It is imperative to safeguard customer information and protect the value and integrity of models both now and in the future.

This is the defining issue of our time. It’s bigger than lawsuits and government mandates. It is a matter of great societal importance, one of protecting foundational human rights.

Daniel Barber is the cofounder and CEO of DataGrail.



Author: Daniel Barber, DataGrail
Source: VentureBeat
