
Could Big Tech be liable for generative AI output? Hypothetically ‘yes,’ says Supreme Court justice



In a surprise moment during today’s Supreme Court hearing on a Google case that could impact online free speech, Justice Neil M. Gorsuch touched on potential liability for generative AI output, according to Will Oremus at the Washington Post.

In the Gonzalez v. Google case in front of the Court, the family of an American killed in a 2015 ISIS terrorist attack in Paris argued that Google and its subsidiary YouTube did not do enough to remove or stop promoting ISIS terrorist videos seeking to recruit members. According to attorneys representing the family, this violated the Anti-Terrorism Act.

In lower court rulings, Google prevailed with the argument that Section 230 of the Communications Decency Act shields it from liability for what its users post on its platform.

Is generative AI protected by Section 230?

According to the Washington Post’s live coverage, search engines historically “have responded to users’ queries with links to third-party websites, making for a relatively clear-cut defense under Section 230 that they should not be held liable for the content of those sites. But as search engines begin answering some questions from users directly, using their own artificial intelligence software, it’s an open question whether they could be sued as the publisher or speaker of what their chatbots say.”


In the course of Tuesday’s questioning, Gorsuch used generative AI as a hypothetical example of when a tech platform would not be protected by Section 230.

“Artificial intelligence generates poetry,” he said. “It generates polemics today that would be content that goes beyond picking, choosing, analyzing or digesting content. And that is not protected. Let’s assume that’s right. Then the question becomes, what do we do about recommendations?”

As generative AI tools such as ChatGPT and DALL-E 2 exploded into the public consciousness over the past year, legal battles have been brewing all along the way.

For example, in November a proposed class action complaint was announced against GitHub, Microsoft and OpenAI for allegedly infringing protected software code via GitHub Copilot, a generative AI tool meant to assist software developers.

And in mid-January, the first class-action copyright infringement lawsuit around AI art was filed against two companies focused on open-source generative AI art — Stability AI (which developed Stable Diffusion) and Midjourney — as well as DeviantArt, an online art community.

But now, it looks like questions about liability might move front and center when it comes to legal issues around Big Tech and generative AI. Stay tuned.



Author: Sharon Goldman
Source: VentureBeat

