FTC hosts challenge to stop harms of voice cloning AI

Voice cloning — the practice of mimicking someone’s voice so well it can pass for the real thing — has had a banner year, with a range of AI startups and techniques emerging to enable it, and a song going viral featuring voice clones of popular music artists Drake and The Weeknd.

But for the Federal Trade Commission (FTC), the U.S. federal agency charged with investigating and preventing consumer harm and promoting fair market competition, voice cloning poses a major risk of consumer fraud. Imagine someone impersonating your mother’s voice and asking you to quickly wire her $5,000, for example. Or someone stealing your voice and using it to access your bank accounts through a customer help hotline.

The FTC is seeking to move quickly (at least, for a government agency) to try to address such scenarios. According to a tentative agenda posted by the agency ahead of its upcoming meeting this Thursday, November 16, the FTC will “announce an exploratory Voice Cloning Challenge to encourage the development of multidisciplinary solutions—from products to procedures—aimed at protecting consumers from artificial intelligence-enabled voice cloning harms, such as fraud and the broader misuse of biometric data and creative content.”

In other words: the FTC wants technologists and members of the public to come up with ways to stop voice clones from tricking people. 

The tech is advancing rapidly and is worth big money

In one demonstration of voice cloning’s propaganda potential, a filmmaker shocked many by generating a realistic-looking deepfake video depicting First Lady Jill Biden criticizing U.S. policy towards Palestine. While intended as satire to bring attention to humanitarian concerns, it showed how AI could craft a seemingly plausible fake narrative using a synthesized clone of the First Lady’s voice.

The filmmaker was able to craft the deepfake in just one week using UK-based ElevenLabs, one of the top voice cloning startups at the forefront of this emerging sector, founded by former employees of controversial military and corporate intelligence AI startup Palantir. ElevenLabs has drawn increasing investor interest and is reportedly in talks for a third funding round this year that would value the company at $1 billion, according to sources who spoke to Business Insider.

This fast-tracked growth reflects voice cloning’s rising commercial prospects, and, as with AI more broadly, open-source options are also available.

However, faster advancement also means more opportunities for harmful misuse may arise before safeguards can catch up. Regulators aim to get ahead of issues through proactive efforts like the FTC’s new challenge program.

Voluntary standards may not be enough

At the core of the concerns is voice cloning’s ability to generate seemingly authentic speech from only a few minutes of sample audio. That capability opens the door to the creation and spread of fake audio and video meant to deliberately deceive or manipulate listeners. Experts warn of risks of fraud, deepfakes used to publicly embarrass or falsely implicate targets, and synthetic propaganda affecting political processes.

Mitigation has so far relied on voluntary practices by companies and advocacy for standards. But self-regulation may not be enough. Challenges like the FTC’s offer a coordinated, cross-disciplinary avenue to systematically address vulnerabilities. Through competitively awarded grants, the challenge seeks stakeholder collaboration to develop technical, legal and policy solutions supporting accountability and consumer protection.

Ideas could range from improving deepfake detection methods to establishing provenance and disclosure standards for synthetic media. The resulting mitigations would guide continued safe innovation rather than stifle progress. With Washington and private partners working in tandem, comprehensive solutions that balance rights and responsibilities can emerge.
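
To make the provenance-and-disclosure idea concrete, here is a minimal sketch, in Python using only the standard library, of how a voice generator might attach a signed “synthetic audio” manifest to a clip and how a platform could later verify it. The manifest fields, function names and hard-coded signing key are hypothetical illustrations, not any standard the FTC or industry has adopted.

import hashlib
import hmac
import json

# Hypothetical signing key for demonstration only; a real scheme would use
# proper key management (e.g., per-generator asymmetric keys), not a shared secret.
SIGNING_KEY = b"demo-key-not-for-production"

def make_disclosure(audio_bytes: bytes, generator: str) -> dict:
    """Build a manifest binding the clip's hash to a 'synthetic' label."""
    manifest = {
        "sha256": hashlib.sha256(audio_bytes).hexdigest(),
        "generator": generator,
        "synthetic": True,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_disclosure(audio_bytes: bytes, manifest: dict) -> bool:
    """Return True only if the clip is unmodified and the manifest is authentic."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    if hashlib.sha256(audio_bytes).hexdigest() != unsigned.get("sha256"):
        return False  # the audio was altered after the disclosure was attached
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(manifest.get("signature", ""), expected)

if __name__ == "__main__":
    clip = b"\x00\x01" * 1000  # stand-in for real audio bytes
    manifest = make_disclosure(clip, generator="example-voice-model")
    print(verify_disclosure(clip, manifest))            # True
    print(verify_disclosure(clip + b"\x02", manifest))  # False: the clip changed

Detection-focused entries would likely look very different, for example classifiers trained to spot acoustic artifacts of synthesized speech, but a provenance scheme like the one sketched above has the appeal of not depending on an arms race with ever-improving generators.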

FTC moves to address Gen AI harms head-on

In comments filed with the U.S. Copyright Office, the FTC cautioned about the potential risks of generative AI being used improperly or to deceive consumers.

By expressing wariness over AI systems being trained on “pirated content without consent,” the filing aligned with debates around whether voice cloning tools adequately obtain permission when using individuals’ speech samples. The Voice Cloning Challenge could support the development of best practices for responsibly collecting and handling personal data.

The FTC also warned of consumer deception risks if AI impersonates people. Through the challenge, the FTC aims to foster the creation of techniques to accurately attribute synthetic speech and avoid misleading deepfakes.

By launching the challenge, the FTC appears to be trying to proactively steer voice cloning and other generative technologies toward solutions that can mitigate the consumer and competition concerns raised in its copyright filing.

Author: Bryson Masse
Source: Venturebeat
Reviewed By: Editorial Team
