
As early as last fall, before ChatGPT had even launched, experts were predicting that the copyrighted data used to train generative AI models would unleash a wave of litigation that, like earlier technological upheavals that reshaped the commercial world, such as video recording and Web 2.0, could one day come before a certain group of nine justices.
"Ultimately, I believe this is going to go to the Supreme Court," Bradford Newman, who leads the machine learning and AI practice at global law firm Baker McKenzie, told VentureBeat last October. He recently confirmed that his opinion is unchanged.
Edward Klaris, managing partner at Klaris Law, a New York City-based firm dedicated to media, entertainment, tech and the arts, also maintains that a generative AI case could "absolutely" be taken up by the Supreme Court. "The interests are clearly important; we're going to get cases that come down on various sides of this argument," he recently told VentureBeat.
The question is: How did we get here? How did the trillions of data points at the core of generative AI become a toxin of sorts, one that, depending on your point of view and the decision of the highest judicial authority, could either hobble an industry poised for remarkable innovation or poison the well of human creativity and consent?
The "oh, shit!" moment for generative AI
The explosion of generative AI over the past year has become an "oh, shit!" moment when it comes to dealing with the data that trained large language and diffusion models, including mass amounts of copyrighted content gathered without consent, Dr. Alex Hanna, director of research at the Distributed AI Research Institute (DAIR), told VentureBeat in a recent interview.
The question of how AI technologies could affect copyright and intellectual property has been a known, but not terribly urgent, problem that legal scholars and some AI researchers have wrestled with over the past decade. But what had been "an open question," explained Hanna, who studies the data used to train AI and ML models, has suddenly become a far more pressing issue for generative AI, to put it mildly. Now that generative AI tools built on large language models (LLMs) are available to consumers and businesses, the fact that they are trained on massive corpora of text and images, mostly scraped from the internet, and can generate new, similar content has brought sudden scrutiny to their data sources.
Growing alarm among artists, authors and other creative professionals over the use of their copyrighted works in AI training datasets has already led to a spate of generative AI-focused lawsuits over the past six months. From the first class-action copyright infringement lawsuit over AI art, filed against Stability AI, Midjourney and DeviantArt in January, to comedian Sarah Silverman's lawsuit against OpenAI and Meta filed in July, copyright holders are increasingly pushing back against data scraping carried out in the name of training AI.
In response, Big Tech companies like OpenAI have been lawyering up for the long haul. Last week, in fact, OpenAI filed a motion to dismiss two class-action lawsuits from book authors, including Sarah Silverman, who earlier this summer alleged that ChatGPT was illegally trained on pirated copies of their books.
The company asked a US district court in California to throw out all but one claim alleging direct copyright infringement, which OpenAI hopes to defeat at "a later stage of the case." According to OpenAI, even if the authors' books were a "tiny part" of ChatGPT's massive dataset, "the use of copyrighted materials by innovators in transformative ways does not violate copyright."
"People don't get into AI to deal with copyright law"
The wave of lawsuits, along with pushback from enterprise companies that don't want legal blowback for using generative AI (especially in consumer-facing applications), has also been a wake-up call for AI researchers and entrepreneurs. This cohort has not faced such significant legal pushback before, at least not over copyright; earlier AI-related lawsuits have centered on privacy and bias.
Of course, data has always been the oil driving artificial intelligence to greater heights. There is no AI without data. But the typical AI researcher, Hanna explained, is far more interested in exploring the boundaries of science with data than in digging into the laws governing its use.
"People don't get into AI to deal with copyright law," she said. "Computer scientists aren't trained in data collection, and they surely are not trained on copyright issues. This is certainly not part of computer vision, or machine learning, or AI pedagogy."
Naveen Rao, VP of generative AI at Databricks and co-founder of MosaicML, pointed out that researchers are usually just thinking about making progress. "If you're a pure researcher, you're not really thinking about the business side of it," he said.
If anything, some AI researchers who create datasets for machine learning models have been motivated by a desire to democratize access to the kinds of closed, black-box datasets that companies like OpenAI were already using. For example, Wired reported that Books3, the dataset at the heart of the Sarah Silverman case, which has been used to train Meta's Llama and other AI models, started as a "passion project" of AI researcher Shawn Presser. He saw it as aligned with the open source movement, a way to let smaller companies and researchers compete with the big players.
Yet Presser was aware there would be backlash: "We almost didn't release the data sets at all because of copyright concerns," he told Wired.
Training data is generative AI's secret sauce
But whether or not the AI researchers creating and using these datasets thought about it, there is no doubt that the data underpinning generative AI, arguably its secret sauce, includes vast amounts of copyrighted material, from books and Reddit posts to YouTube videos, newspaper articles and photos. Copyright critics and some legal experts insist, however, that this falls under what is known in legal parlance as "fair use": U.S. copyright law "permits limited use of copyrighted material without having to first acquire permission from the copyright holder."
In testimony before the U.S. Senate at a July 12 hearing on AI and intellectual property, Matthew Sag, a professor of law in AI, machine learning and data science at Emory University School of Law, said that "if an LLM is trained properly and operated with appropriate safeguards, its outputs will not resemble its inputs in a way that would trigger copyright liability. Training such an LLM on copyrighted works would thus be justified under the fair use doctrine."
While some might see that as an unrealistic expectation, it would be good news for copyright critics like AI pioneer Andrew Ng, co-founder and former head of Google Brain, who make no bones about the fact that the latest advances in machine learning have depended on free access to large quantities of data, much of it scraped from the open internet.
In an issue of his DeepLearning.ai newsletter, The Batch, titled "It's Time to Update Copyright for Generative AI," Ng argued that a lack of access to massive popular datasets such as Common Crawl, The Pile and LAION would put the brakes on progress, or at least radically alter the economics of current research.
"This would degrade AI's current and future benefits in areas such as art, education, drug development, and manufacturing, to name a few," he said.
The "four-factor" test for "fair use" of copyrighted data
But other legal minds, and a rising chorus of creators, see an equally persuasive counterargument: that the copyright issues around generative AI are qualitatively different from those in previous high-profile cases involving digital technology and copyright, most notably Authors Guild, Inc. v. Google, Inc.
In that federal lawsuit, authors and publishers argued that Google's project to digitize books and display excerpts from them infringed their copyrights. Google prevailed in 2015, successfully arguing that its actions constituted "fair use" because they provided valuable resources for researchers, scholars and the public while enhancing the discoverability of books.
The concept of "fair use," however, rests on a four-factor test, the measures judges weigh when evaluating whether a use is "transformative" or simply a copy: the purpose and character of the use, the nature of the copyrighted work, the amount taken from the original work, and the effect of the use on the potential market for the original. That fourth factor, experts say, is the key to how generative AI differs, because it asks whether the use of copyrighted material could undercut the commercial value of the original work or impede the copyright holder's opportunities to exploit it in the market, which is exactly what artists, authors, journalists and other creative professionals claim is happening.
"The Handmaid's Tale" author Margaret Atwood, who discovered that 33 of her books were part of the Books3 dataset, explained this concern bluntly in a recent Atlantic essay:
"Once fully trained, the bot may be given a command ('Write a Margaret Atwood novel') and the thing will glurp forth 50,000 words, like soft ice cream spiraling out of its dispenser, that will be indistinguishable from something I might grind out. (But minus the typos.) I myself can then be dispensed with, murdered by my replica, as it were, because, to quote a vulgar saying of my youth, who needs the cow when the milk's free?"
AI datasets used to be smaller and more controlled
Two decades ago, no one in the AI community thought much about the copyright status of datasets, because they were far smaller and more controlled, said Hanna.
In computer vision, for example, images were typically not gathered from the web; photo-sharing sites like Flickr, which didn't launch until 2004, did not yet exist. "Collections of images tended to be smaller and were either taken in under certain controlled conditions, or by researchers themselves," she said.
The same was true of the text datasets used for natural language processing. The earliest language-generation models were typically trained on material that was either a matter of public record or explicitly licensed for research use.
All of that changed with ImageNet, which today contains more than 14 million hand-annotated images. Created by AI researcher Fei-Fei Li (now at Stanford) and first presented in 2009, ImageNet was one of the first instances of mass scraping of images for computer vision research. According to Hanna, this shift in scale became the standard mode of data collection, "setting the groundwork for a lot of the generative AI stuff that we're seeing."
Eventually, datasets grew so large that responsibly sourcing and hand-curating them in the old way became impossible.
According to "The Devil is in the Training Data," a July 2023 paper by Google DeepMind research scientists Katherine Lee and Daphne Ippolito, along with A. Feder Cooper, a Ph.D. candidate in computer science at Cornell, "given the sheer amount of training data required to produce high-quality generative models, it's impossible for a creator to thoroughly understand the nuances of every example in a training dataset."
Cooper, who along with Lee presented a workshop on generative AI and the law at the recent International Conference on Machine Learning, said that best practices for training and testing models are taught in high school and college courses. "But the ability to execute that on these new huge datasets, we don't have a good way to do that," they told VentureBeat.
A "Napster moment" for generative AI
By the end of 2022, OpenAI's ChatGPT, along with image generators like Stable Diffusion and Midjourney, had catapulted AI from academic research into the commercial stratosphere. But this quest for commercial success, built on a foundation of mass amounts of copyrighted data gathered without consent, didn't happen all at once, explained Yacine Jernite, who leads the ML and Society team at Hugging Face.
"It's been like a slow slip from something which was mostly academic for academics to something that's strongly commercial," he said. "There was no single moment where it was like, 'this means we need to rethink everything that we've been doing for the last 20 years.'"
But Databricks' Rao maintains that we are, in fact, having that kind of moment right now, what he calls the "Napster moment" for generative AI. In 2001, the landmark intellectual property case A&M Records, Inc. v. Napster, Inc. found that Napster could be held liable for copyright infringement on its peer-to-peer music file-sharing service.
Napster, he explained, clearly demonstrated demand for streaming music, just as generative AI is now demonstrating demand for text- and image-generating tools. "But then [Napster] did get shut down until someone figured out the incentives, how to go back and remunerate the creators the right way," he said.
One difference, however, is that in the Napster era, artists were nervous about speaking out, recalled Neil Turkewitz, a copyright activist who served as an EVP at the Recording Industry Association of America (RIAA) at the time. "The voices opposing Napster were record labels," he explained.
The current environment, he said, is completely different. "Artists have now seen the parallels to what happened with Napster; they know they're sitting there on death's doorstep and need to speak out, so you've had a huge outpouring from the artists community," he said.
Yet industries are also speaking out, particularly publishing and entertainment, said Marc Rotenberg, president and founder of the nonprofit Center for AI and Digital Policy and an adjunct professor at Georgetown Law School.
"Back when the Google Books ruling was handed down, Google did very well in the outcome as a legal matter, but publishers and the news industry did not," he said. The memory of that case, he added, weighs heavily.
Because today's AI models effectively require companies to hand over their data, he explained, a publisher like The New York Times recognizes that if its work can be replicated, it could go out of business. (The New York Times updated its terms of service last month to prohibit its content from being used to train AI models.)
"To me, one of the most interesting legal cases today involving AI is not yet a legal case," Rotenberg said. "It's the looming battle between one of the most well-regarded publishers, The New York Times, and one of the most impactful generative AI firms, OpenAI."
Will Big Tech prevail?
But lawyers defending Big Tech companies in today's generative AI copyright cases say they have legal precedent on their side.
One lawyer at a firm representing one of the top AI companies told VentureBeat that generative AI is an example of how, every couple of decades, a really significant new question comes along and reshapes how the commercial world works. These legal cases, he said, will "play a huge role in shaping the pace and contours of innovation, and really our understanding of this amazing body of law that dates back to 1791."
The lawyer, who asked to remain anonymous because he was not authorized to speak about ongoing litigation, said he is "quite confident that the position of the technology companies is the one that should and hopefully will prevail." He emphasized that those seeking to protect industries through these copyright lawsuits will face an uphill battle.
Copyright, he suggested, is the wrong tool for what are really labor or privacy concerns: "It's just really bad for using the regulated labor market, or privacy considerations, or whatever it is; there are other bodies of law that deal with this concern," he said. "And I think happily, courts have been sort of generally pretty faithful to that concept."
He also insisted that such an effort simply would not work. "The US isn't the only country on Earth, and these tools are going to continue to exist," he said. "There's going to be a tremendous amount of jurisdictional arbitrage in terms of where these companies are based, in terms of the location from which the tools are launched."
The bottom line, he said, is that "you couldn't put this cat back in the bag."
Generative AI: "asbestos" for the digital economy?
Others disagree with that assessment. Rotenberg says the Federal Trade Commission is the one US agency with the authority and ability to act on these AI and copyright disputes. In March, the Center for AI and Digital Policy asked the FTC to block OpenAI from releasing new commercial versions of ChatGPT, citing concerns about bias, disinformation and security. And in July, the FTC opened an investigation into OpenAI over whether the chatbot has harmed consumers through its collection of data.
"If the FTC sides with us, they can require the deletion of data, the deletion of algorithms, the deletion of models that were created from data that was improperly obtained," he said.
And Databricks' Rao insists that these generative AI models need to be, and can be, retrained. "I'll be really honest, that even applies to models that we put out there. We're using web-scraped data, just like everybody else; it has become sort of a standard," he said. "I'm not saying that standard is correct. But I think there are ways to build models on permissioned data."
Hanna, however, pointed out that a judicial ruling that generative AI cannot be trained on copyrighted works would be "earth-shaking": effectively, "all the models out there would have to be audited" to identify the training data at issue.
And doing that would be even harder than most people realize. In a new paper, "Talkin' 'Bout AI Generation: Copyright and the Generative AI Supply Chain," A. Feder Cooper, Katherine Lee and Cornell Law's James Grimmelmann explain that the process of training and using a generative AI model resembles a supply chain with six stages, running from the creation of the data and the curation of the dataset through model training, model fine-tuning, application deployment and, finally, generation by users.
Unfortunately, they explain, it is impossible to localize copyright concerns to a single link in that chain, so they "do not believe that it is currently possible to predict with certainty whether and when participants in the generative-AI supply chain will be held liable for copyright infringement."
The bottom line is that any effort to remove copyrighted works from training data would be extraordinarily difficult. Rotenberg compared it to asbestos, the once-ubiquitous insulating material built into many American homes in the 1950s and '60s: when asbestos was found to be carcinogenic and the US passed extensive laws regulating its use, homeowners were left with the burden of removing it, which was anything but easy.
"Is generative AI asbestos for the digital economy?" he mused. "I guess the courts will have to decide."
Hopes and predictions for the future of generative AI and copyright
While no one knows how US courts will rule on generative AI and copyright, the experts VentureBeat spoke to had varying hopes and predictions about what might be coming down the pike.
"What I do wish would happen now is a more collaborative stance on this, instead of like, I'm going to fight it tooth and nail and fight it to the end," said Rao. "If we say, 'I do want to start permissioning data, I want to start paying creators in some ways to use that data,' that's more of a legitimate path forward."
What causes particular angst, he added, is the growing emphasis on closed, black-box models, which leave people with no way to know whether their data was taken and no way to audit. "I think it is actually really dangerous," he said. "Let's be more transparent about it."
Hugging Face's Jernite agrees, noting that even companies that had traditionally been more open, like Meta, are now more guarded about what their models were trained on. Meta, for example, did not disclose the data used to train its recently announced Llama 2 model.
"I don't think anyone wins with that," he said.
The reality, said Klaris, is that using copyrighted works to train generative AI "doesn't feel fair, because you're taking everybody's work and you're producing works that potentially supplant it." As a result, he believes courts will lean in favor of copyright owners and against technological advancement.
"I think the courts will apply rules that did not apply in the Google Books case, more on the infringement side," he said.
Karla Ortiz, a San Francisco-based concept artist and illustrator who has worked on blockbuster films including Marvel's Guardians of the Galaxy Vol. 3, Loki, Eternals, Black Panther, Avengers: Infinity War and Doctor Strange, testified at the July 12 Senate hearing on AI and copyright. So far, Ortiz is the only creative professional to have done so.
In her testimony, Ortiz focused on fairness: "Ultimately, you as Congress are faced with a question about what is fundamentally fair in American society," she said. "Is it fair for technology companies to take work that is the product of a lifetime of devotion and labor, even utilize creators' full names, without any permission, credit or compensation to the creator, in order to create a software that mimics their work? Is it fair for technology companies to directly compete with those creators who supplied the raw material from which their AIs are built? Is it fair for these technology companies to reap billions of dollars from models that are powered by the work of these creators, while at the same time lessening or even destroying current and future economic and labor prospects of creators? I'd answer no to all of these questions."
It is impossible to know how the Supreme Court would rule
The data underpinning generative AI has become a legal quagmire that may take years, if not decades, to wind its way through the courts. Experts agree that it is impossible to predict how the Supreme Court would rule should a case involving generative AI and copyrighted training data come before the nine justices.
Either way, the outcome will have a significant impact. The unnamed Big Tech legal source VentureBeat spoke to said he thinks "what we're seeing right now is the next big wave of litigation over these tools that are going to, if you ask me, have a profound effect on society."
But perhaps the AI community needs to prepare for what it might consider a worst-case scenario. AI pioneer Andrew Ng, for one, already seems aware that the lack of transparency into AI datasets, and the easy access to datasets filled with copyrighted material, could both come to an end.
"The AI community is entering an era in which we are called upon to be more transparent in our collection and use of data," he admitted in the June 7 edition of his DeepLearning.ai newsletter, The Batch. "We shouldn't take resources like LAION for granted, because we may not always have permission to use them."