
AI will make 2024 US elections a ‘hot mess’

Generative AI will make the 2024 US elections a ‘hot mess’ — whether through chatbots or deepfakes — while election-year politics will slow AI regulation efforts, says Nathan Lambert, a machine learning researcher at the Allen Institute for AI who co-hosts The Retort AI podcast with researcher Thomas Krendl Gilbert.

“I don’t expect AI regulation to come in the US [in 2024] given that it’s an election year and it’s a pretty hot topic,” he told VentureBeat. “I think the US election will be the biggest determining factor in the narrative to see what positions different candidates take and how people misuse AI products, and how that attribution is given and how that’s handled by the media.”

As people use tools like ChatGPT and DALL-E to create content for the election machine, “it’s going to be a hot mess,” he added, “whether or not people attribute the use to campaigns, bad actors, or companies like OpenAI.”

Use of AI in election campaigns already causing concern

Even though the 2024 US presidential election is still 11 months away, the use of AI in US political campaigns is already raising red flags. A recent ABC News report, for example, highlighted Florida Governor Ron DeSantis’ campaign efforts over the summer, which included AI-generated images and audio of Donald Trump.

And a recent poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy found that nearly 6 in 10 adults (58%) think AI tools will increase the spread of false and misleading information during next year’s elections.

Some Big Tech companies are already attempting to respond to those concerns: This week, Google said it plans to restrict the kinds of election-related prompts its Bard chatbot and Search Generative Experience will respond to in the months before the US presidential election. The restrictions are set to be enforced by early 2024, the company said.

Meta, which owns Facebook, has also said it will bar political campaigns from using its new generative AI advertising products, and advertisers will have to disclose when AI tools are used to alter or create election ads on Facebook and Instagram. The Information, meanwhile, reported this week that OpenAI “has overhauled how it handles the task of rooting out disinformation and offensive content from ChatGPT and its other products, as worries about the spread of disinformation intensify ahead of next year’s elections.”

But Wired reported last week that Microsoft’s Copilot (formerly Bing Chat) is serving up conspiracy theories, misinformation, and out-of-date or incorrect information, citing new research that claims the Copilot issues are systemic.

Gen AI tools can ‘be really serious for the fabric of our democracy’

The bottom line, said Lambert, is that it may be “impossible to keep [gen AI] information as sanitized as it needs to be” when it comes to the election narrative.

The stakes could extend beyond the 2024 presidential race itself, said Alicia Solow-Niederman, an associate professor of law at George Washington University Law School and an expert in the intersection of law and technology. Generative AI tools, whether used for misinformation or for overt disinformation campaigns, she said, can “be really serious for the fabric of our democracy.”

She pointed to legal scholars Danielle Citron and Robert Chesney, who coined the concept of the ‘liar’s dividend’: “It’s the idea that in a world where we can’t tell what’s true and what’s not, we don’t know who to trust, and our whole electoral system, our ability to self-govern, starts to erode,” she told VentureBeat.



Author: Sharon Goldman
Source: VentureBeat
Reviewed By: Editorial Team
