AI & Robotics News

OpenAI’s GPT-4 violates FTC rules, argues AI policy group

The Federal Trade Commission (FTC) received a new complaint today from the Center for AI and Digital Policy (CAIDP), which calls for an investigation of OpenAI and its product GPT-4. The complaint argues that the FTC has declared that the use of AI should be “transparent, explainable, fair, and empirically sound while fostering accountability,” but claims that OpenAI’s GPT-4 “satisfies none of these requirements” and is “biased, deceptive, and a risk to privacy and public safety.”

CAIDP is a Washington, D.C.-based independent, nonprofit research organization that “assesses national AI policies and practices, trains AI policy leaders, and promotes democratic values for AI.” It is headed by president and founder Marc Rotenberg and senior research director Merve Hickok.

“The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4,” said Rotenberg in a press release about the complaint. “We are specifically asking the FTC to determine whether the company has complied with the guidance the federal agency has issued.”

The complaint comes a day after an open letter calling for a six-month “pause” on developing large-scale AI models beyond GPT-4 highlighted the fierce debate around risks vs. hype as the speed of AI development accelerates.


FTC has made recent public statements about generative AI

The complaint also comes 10 days after the FTC published a business blog post called “Chatbots, deepfakes, and voice clones: AI deception for sale,” authored by Michael Atleson, an attorney in the FTC’s division of advertising practices. The blog post said that the FTC Act’s “prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive — even if that’s not its intended or sole purpose.” It advised companies to consider whether they should even be making or selling the AI tool at all, and whether they are effectively mitigating its risks.

“If you decide to make or offer a product like that, take all reasonable precautions before it hits the market,” says the blog post. “The FTC has sued businesses that disseminated potentially harmful technologies without taking reasonable measures to prevent consumer injury.”

In a separate post from February, “Keep your AI claims in check,” Atleson wrote that the FTC may be “wondering” if a company advertising an AI product is aware of the risks. “You need to know about the reasonably foreseeable risks and impact of your AI product before putting it on the market. If something goes wrong — maybe it fails or yields biased results — you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a ‘black box’ you can’t understand or didn’t know how to test.”

FTC attorney said agency will always apply ‘bedrock’ advertising law principles

In an interview with VentureBeat last week, unrelated to the CAIDP complaint and focused solely on advertising law, Atleson said that the basic message of both of his recent AI-focused blog posts is that no matter how new or different the product or service is, the FTC will always apply the “bedrock” advertising law principles in the FTC Act — that you can’t misrepresent or exaggerate what your product can do or what it is, and you can’t sell things that are going to cause consumers substantial harm.

“It doesn’t matter whether it’s AI or whether it turns out we’re all living in a multiverse,” he said. “Guess what? That prohibition of false advertising still applies to every single instance.”

He added that admittedly, AI technology development is happening quickly. “We’re certainly right in the middle of a corporate rush to get a certain type of AI product to market, different types of generative AI tools,” he said. The FTC has focused on AI for a while now, he added, but the difference is that AI is more in the public eye, “especially with these new generative AI tools to which consumers have direct access.”

Federal AI regulation may come from FTC

With the growth of AI and the speed of its development, legal experts say that FTC rulemaking about AI could be coming in 2023. According to a December 2022 article by the law firm Alston & Bird, federal AI regulation may emerge from the FTC even though AI-focused bills introduced in Congress have not yet gained significant support.

“In recent years, the FTC issued two publications foreshadowing increased focus on AI regulation,” the article said, noting that the FTC had developed AI expertise in enforcing a variety of statutes, such as the Fair Credit Reporting Act, the Equal Credit Opportunity Act and the FTC Act.



Author: Sharon Goldman
Source: VentureBeat
