AI can persuade people to make ethically questionable decisions, study finds

AI shapes people’s lives on a daily basis. It sets prices in retail stores and makes recommendations ranging from movies to romantic partners. But it’s an open question whether AI can become a trusted advisor or even a corrupting force that influences people’s behavior, potentially to the point where they break ethical rules.

A fascinating study published by researchers at the University of Amsterdam, Max Planck Institute, Otto Beisheim School of Management, and the University of Cologne aims to discover the degree to which AI-generated advice can cause people to sacrifice their honesty. In a large-scale experiment leveraging OpenAI’s GPT-2 language model, the researchers found that the advice can “corrupt” people even when they’re aware the source of the advice is AI.

There’s a growing concern among academics that AI could be co-opted by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies. In a paper published by the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC), the coauthors find that GPT-3, the successor to GPT-2, could reliably generate “informational” and “influential” text that might “radicalize individuals into violent far-right extremist ideologies and behaviors.”

The coauthors of this latest paper trained GPT-2 to generate “honesty-promoting” and “dishonesty-promoting” advice using a dataset of contributions from around 400 participants. Then, they recruited a group of over 1,500 people to read instructions, receive the advice, and engage in a task designed to assess honest or dishonest behavior.

AI’s corrupting influence

People from the group were paired in “dyads” comprising a first and second “mover.” The first mover rolled a die in private and reported the outcome, and the second mover learned the first mover’s report before rolling a die in private and reporting the outcome as well. The pair was paid only if both movers reported the same outcome, with higher matching values (doubles) earning higher pay; they weren’t paid if they reported different outcomes.
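
For readers who want to see the incentive structure concretely, here is a minimal Python sketch of the payoff rule as described above. The per-point payout rate and the function names are hypothetical illustrations, not details taken from the study.

```python
import random

def dyad_payoff(first_report: int, second_report: int, pay_per_point: float = 1.0) -> float:
    """Pay the dyad only when both reported die-roll outcomes match;
    higher matching values (doubles) earn more. pay_per_point is an
    assumed illustrative rate, not a figure from the study."""
    if first_report != second_report:
        return 0.0
    return first_report * pay_per_point

# The second mover sees the first mover's report before rolling,
# which creates an incentive to misreport so the outcomes match.
first_roll = random.randint(1, 6)
first_report = first_roll            # an honest first mover reports the true roll
second_roll = random.randint(1, 6)
second_report = first_report         # a dishonest second mover simply copies the report
print(dyad_payoff(first_report, second_report))
```
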

Before reporting the die-roll outcome, people randomly assigned to different treatments read honesty-promoting or dishonesty-promoting advice that was either human-written or AI-generated. They either (1) knew the source of the advice or (2) knew only that there was a 50-50 chance it came from either source; those in the latter group could earn bonus pay by correctly guessing the source of the advice.

According to the researchers, the AI-generated advice “corrupted” people whether or not the source of the advice was disclosed to them. In fact, the statistical effect of AI-generated advice was indistinguishable from that of human-written advice. More discouragingly, honesty-promoting advice from AI failed to sway people’s behavior.

The researchers say their study illustrates the importance of testing the influence of AI as a step toward deploying it responsibly. Those with malicious intentions, they warn, could harness AI to corrupt others.

“AI could be a force for good if it manages to convince people to act more ethically. Yet our results reveal that AI advice fails to increase honesty. AI-advisors can serve as scapegoats to which one can deflect (some of the) moral blame of dishonesty. Moreover … in the context of advice taking, transparency about algorithmic presence does not suffice to alleviate its potential harm,” the researchers wrote. “When AI-generated advice aligns with individuals’ preferences to lie for profit, they gladly follow it, even when they know the source of the advice is an AI. It appears there is a discrepancy between stated preferences and actual behavior, highlighting the necessity to study human behavior in interaction with actual algorithmic outputs.”

Author: Kyle Wiggers
Source: VentureBeat
