
Who owns DALL-E images? Legal AI experts weigh in



When OpenAI announced expanded beta access to DALL-E in July, the company offered paid subscription users full usage rights to reprint, sell and merchandise the images they create with the powerful text-to-image generator.

A week later, creative professionals across industries were already buzzing with questions. Topping the list: Who owns images produced by DALL-E, or, for that matter, by other AI-powered text-to-image generators such as Google’s Imagen? The company that owns and trains the AI model? Or the human who prompts the AI with words like “red panda wearing a black leather jacket and riding a motorcycle, in watercolor style”?

In a statement to VentureBeat, an OpenAI spokesperson said, “OpenAI retains ownership of the original image primarily so that we can better enforce our content policy.” 

However, several creative professionals told VentureBeat they were concerned about the lack of clarity around ownership of images from tools like DALL-E. Some who work for large agencies or brands said the ownership questions might be too unsettled to risk using the tools for high-profile client work.


Bradford Newman, who leads the machine learning and AI practice at global law firm Baker McKenzie from its Palo Alto office, said the answer to the question “Who owns DALL-E images?” is far from clear. And, he emphasized, legal fallout is inevitable.

“If DALL-E is adopted in the way I think [OpenAI] envisions it, there’s going to be a lot of revenue generated by the use of the tool,” he said. “And when you have a lot of players in the market and issues at stake, you have a high chance of litigation.”

Big stakes get litigated for case-specific answers

Mark Davies, partner at Orrick, agreed there are many open legal questions when it comes to AI. “What happens in reality is when there are big stakes, you litigate it,” he said. “And then you get the answers in a case-specific way.” 

In the context of text-to-image generators and the resulting creations, the question is mostly about what’s “fair use,” he explained. Under U.S. copyright law, fair use is a “legal doctrine that promotes freedom of expression by permitting the unlicensed use of copyright-protected works in certain circumstances.”

In a technology context, the most recent and biggest case example was 2021’s Google LLC v. Oracle America, Inc. In a 6-2 decision, the Supreme Court held that Google’s use of Oracle’s code amounted to fair use under United States copyright law. As a result, the Court did not address whether the copied material was itself protected by copyright.

One big lesson from that case was that these disputes will be decided by the courts, Davies emphasized. “The idea that we’re gonna get some magical solution from a different place is just not how the legal system really works,” he said. 

However, he added, for an issue like DALL-E image ownership to be resolved, it usually takes two parties with a lot at stake, because litigation is so expensive. “So it does take a core disagreement on something really important for these rules to develop,” he said. And it has happened in the past, he added, with Morse code, with railroads, with smartphones and with the internet.

“I think when you are living through technological change, it feels unique and special,” he said. “But the industrial revolution happened. It got sorted out.” 

Contradictory statements from OpenAI on DALL-E?

Still, some experts say OpenAI’s statements about the use of DALL-E – that the company owns the images but users can commercialize them – are confusing and contradictory.

Jim Flynn, a senior partner and managing director at Epstein Becker & Green, said it struck him as “a little give with one hand, take away with the other.”

The thing is, both sides have fairly good claims and arguments, he pointed out. “Ultimately, I think the people who own this AI process make a fairly good claim that they would have some ownership rights,” he said. “This image was created by the simple input of some basic commands from a third party.”

On the other hand, an argument could be made that it is similar to using a digital camera, he added — where images are created but the camera manufacturers do not own the rights to user photos.

In addition, he said, if those who own text-to-image generators also own the output, the result would be “viscerally unsatisfactory” to many users who believe that if they buy or license a tool like DALL-E, they should own what they create with it, particularly if they paid for the right to use it in exactly the manner the AI company promoted.

“If I were representing one of the advertising agencies, or the clients of the advertising agencies, I wouldn’t advise them to use this software to create a campaign, because I do think the AI provider would have some claims to the intellectual property,” he said. “I’d be looking to negotiate something more definitive.” 

The future of DALL-E image ownership

While there are arguments on both sides of the DALL-E ownership question, as well as many historical analogies, Flynn does not necessarily think the law needs to change to address them. 

“But will it change? Yes, I think it will, because there are a lot of people, especially in the AI community, who have some interest that isn’t really related to copyright or intellectual property,” he said. “I think the interest in it isn’t being driven because of complex legal issues but to push the issue of AI as having a separate consciousness. Because so much else in our society finds its way to court to get determined, that’s why these cases are out there.”

Flynn predicts a shakeout: a new consensus around who owns AI-generated creations, driven by economic forces that the law will follow. “That’s what happened with things like email correspondence and legal privilege, and frankly, that’s what happened with the digital camera.”

He said he would tell clients that if they want to use AI-generated creations, it will be best to use a purveyor like Shutterstock that offers a certain number of licenses for an annual fee. 

“But the reality is, you’re also going to get big advertising agencies that are probably going to either develop their own [text-to-image AI], or license AI at the institutional level from some API provider to create advertising,” he said. “And the ad agency will pay the AI creator some amount of money and use it for clients. There certainly are models out there that this fits with.” 



Author: Sharon Goldman
Source: VentureBeat
