OpenAI, Georgetown, Stanford study finds LLMs can boost public opinion manipulation

Advances in AI-powered large language models promise new applications in the near and distant future, with programmers, writers, marketers and other professionals standing to benefit from advanced LLMs. But a new study by scientists at Stanford University, Georgetown University, and OpenAI highlights the impact that LLMs can have on the work of actors who try to manipulate public opinion through the dissemination of online content.

The study finds that LLMs can boost political influence operations by enabling content creation at scale, reducing labor costs, and making bot activity harder to detect.

The study was carried out after Georgetown University’s Center for Security and Emerging Technology (CSET), OpenAI, and the Stanford Internet Observatory (SIO) co-hosted a workshop in 2021 to explore the potential misuse of LLMs for propaganda purposes. And as LLMs continue to improve, there is concern that malicious actors will have more reason to use them for nefarious goals.

Study finds LLMs impact actors, behaviors, and content

Influence operations are defined by three key elements: actors, behaviors, and content. The study by Stanford, Georgetown, and OpenAI finds that LLMs can impact all three aspects.

With LLMs making it easy to generate long stretches of coherent text, more actors will find it attractive to use them for influence operations. Content creation previously required human writers, which is costly, scales poorly, and can be risky when actors are trying to hide their operations. LLMs are not perfect and can make glaring mistakes when generating text. But a writer paired with an LLM can become far more productive by editing machine-generated drafts instead of writing from scratch, which cuts the cost of labor.

“We argue that for propagandists, language generation tools will likely be useful: they can drive down costs of generating content and reduce the number of humans necessary to create the same volume of content,” Dr. Josh A. Goldstein, co-author of the paper and research fellow with the CyberAI Project at CSET, told VentureBeat.

In terms of behavior, LLMs can not only boost current influence operations but also enable new tactics. For example, adversaries could use LLMs to create dynamic, personalized content at scale, or build conversational interfaces such as chatbots that interact directly with many people simultaneously. The ability of LLMs to produce original content will also make it easier for actors to conceal their influence campaigns.

“Since text generation tools create original output each time they are run, campaigns that rely on them might be more difficult for independent researchers to spot because they won’t rely on so-called ‘copypasta’ (or copy and pasted text repeated across online accounts),” Goldstein said.

A lot we still don’t know

Despite their impressive performance, LLMs are limited in many critical ways. For example, even the most advanced LLMs tend to make absurd statements and lose their coherence as their text gets longer than a few pages. 

They also lack context for events that are not included in their training data, and retraining them is a complicated and costly process. This makes it difficult to use them for political influence campaigns that require commentary on real-time events. 

But these limitations do not necessarily apply to all kinds of influence operations, Goldstein said.

“For operations that involve longer-form text and try to persuade people of a particular narrative, they might matter more. For operations that are mostly trying to ‘flood the zone’ or distract people, they may be less important,” he said.

And as the technology continues to mature, some of these barriers might be lifted. For example, Goldstein said, the report was primarily drafted before the release of ChatGPT, which has showcased how new data gathering and training techniques can improve the performance of LLMs. 

In the paper, the researchers forecast how expected developments might remove some of these barriers. For example, LLMs will become more reliable and usable as scientists develop new techniques to reduce their errors and adapt them to new tasks. This could encourage more actors to use them for influence operations.

The authors of the paper also warn about “critical unknowns.” For example, scientists have discovered that as LLMs grow larger, they show emergent abilities. As the industry continues to push toward larger-scale models, new use cases might emerge that can benefit propagandists and influence campaigns.

And with growing commercial interest in LLMs, the field is bound to advance much faster in the coming months and years. For example, publicly available tools to train, run, and fine-tune language models will further lower the technical barriers to using LLMs for influence campaigns.

Implementing a kill chain 

The authors of the paper suggest a “kill chain” framework for the types of mitigation strategies that can prevent the misuse of LLMs for propaganda campaigns.

“We can start to address what’s needed to combat misuse by asking a simple question: What would a propagandist need to wage an influence operation with a language model successfully? Taking this perspective, we identified four points for intervention: model construction, model access, content dissemination and belief formation. At each stage, a range of possible mitigations exist,” Goldstein said.

For example, in the construction phase, developers might use watermarking techniques to make text created by generative models detectable. At the same time, governments could impose access controls on AI hardware.
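
To make the watermarking idea concrete, below is a minimal, hypothetical sketch of how a statistical text watermark might be checked. It loosely follows published "green list" proposals (where generation softly favors tokens from a secretly keyed partition of the vocabulary) rather than any specific technique endorsed in the paper; the key, partition fraction and threshold here are illustrative assumptions.

```python
import hashlib
import math

# Toy sketch of a "green list" statistical watermark detector. Assumption:
# the generator softly favored tokens whose keyed hash falls in the green
# partition. Not the method from the paper; purely illustrative.

SECRET_KEY = "shared-secret"   # hypothetical key held by the model provider
GREEN_FRACTION = 0.5           # fraction of the vocabulary marked green per step


def is_green(prev_token, token):
    """Deterministically assign roughly GREEN_FRACTION of tokens to the green list."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{token}".encode()).digest()
    return digest[0] / 255.0 < GREEN_FRACTION


def watermark_z_score(tokens):
    """z-score of the observed green-token count against the unwatermarked baseline."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    greens = sum(is_green(tokens[i], tokens[i + 1]) for i in range(n))
    expected = GREEN_FRACTION * n
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (greens - expected) / std


if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog".split()
    # A large positive z-score (e.g., well above 4) would suggest watermarked text.
    print(f"z-score: {watermark_z_score(sample):.2f}")
```

In practice, a scheme like this only works if the model developer embeds the signal at generation time and shares the detection key, which is why the paper situates it at the model-construction stage.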

At the access stage, LLM providers can put stricter usage restrictions on hosted models and develop new norms around releasing models.

On content dissemination, platforms that provide publication services (e.g., social media platforms, forums, e-commerce sites with review features) could impose restrictions such as “proof of personhood,” which would make it difficult for an AI-powered system to submit content at scale.
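
As a rough illustration of that dissemination-stage idea, the sketch below combines a stubbed proof-of-personhood check with a sliding-window posting throttle. The account scheme, limits and helper names are hypothetical, not drawn from the paper or any particular platform.

```python
import time
from collections import defaultdict, deque

# Illustrative sketch: only accounts with a personhood attestation may post,
# and even then submissions are rate-limited, which blunts automated posting
# at scale. All limits and naming are assumptions for the example.

MAX_POSTS_PER_WINDOW = 5
WINDOW_SECONDS = 3600

_post_times = defaultdict(deque)  # account_id -> timestamps of recent posts


def is_verified_person(account_id):
    """Stub for an external proof-of-personhood check (assumed to exist)."""
    return account_id.startswith("verified:")


def allow_post(account_id, now=None):
    """Allow a post only from verified accounts, within a sliding rate window."""
    if not is_verified_person(account_id):
        return False
    now = time.time() if now is None else now
    window = _post_times[account_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_POSTS_PER_WINDOW:
        return False
    window.append(now)
    return True


if __name__ == "__main__":
    print(allow_post("verified:alice"))  # True: attested and under the limit
    print(allow_post("bot-account-42"))  # False: no personhood attestation
```

None of this stops a determined operator outright, but raising the per-account cost of posting is the kind of friction the dissemination stage of the kill chain is meant to add.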

While the paper provides several such examples of mitigation techniques, Goldstein stressed that the work is not complete.

“Just because a mitigation is possible, does not mean it should be implemented. Those in a place to implement—be it those at technology companies, in government or researchers—should assess desirability,” he said. 

Some questions that need to be asked include: Is a mitigation technically feasible? Socially feasible? What is the downside risk? What impact will it have?

“We need more research, analysis and testing to better address which mitigations are desirable and to highlight mitigations we overlooked,” Goldstein said. “We don’t have a silver bullet solution.”

Author: Ben Dickson
Source: VentureBeat
