
AI Weekly: Algorithms, accountability, and regulating Big Tech



This week, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey went back to Congress, the first hearing with Big Tech executives since the January 6 insurrection led by white supremacists that directly threatened the lives of lawmakers. The main topic of discussion was the role social media plays in the spread of extremism and disinformation.

Lawmakers discussed ending the liability protections granted by Section 230 of the Communications Decency Act (CDA), disinformation, and how tech can harm the mental health of children, but artificial intelligence took center stage. The word “algorithm” alone was used more than 50 times.

Whereas previous hearings involved more exploratory questions and took on a feeling of Geek Squad tech repair meets policy, in this hearing lawmakers asked questions based on evidence and seemed to treat tech CEOs like hostile witnesses.

Representatives repeatedly cited a May 2020 article about an internal Facebook study that found the majority of people who join extremist groups do so because Facebook’s recommendation algorithm suggested those groups to them. A recent article about Facebook focusing its bias detection efforts on appeasing conservative lawmakers rather than on reducing disinformation also came up, as lawmakers repeatedly asserted that self-regulation is no longer an option. Throughout the more than five-hour hearing, there was a tone of unvarnished repulsion and disdain for exploitative business models and for a willingness to sell addictive algorithms to children.

“Big Tech is essentially handing our children a lit cigarette and hoping they stay addicted for life,” Rep. Bill Johnson (R-OH) said.

In comparing Big Tech companies to Big Tobacco — a parallel drawn both within Facebook and in a recent AI research paper — Johnson quoted then-Rep. Henry Waxman (D-CA), who stated in 1994 that Big Tobacco had been “exempt from standards of responsibility and accountability that apply to all other American corporations.”

Some congresspeople suggested laws requiring tech companies to publicly report diversity data at all levels of a company and preventing targeted ads that push misinformation to marginalized communities, including veterans.

Rep. Debbie Dingell (D-MI) suggested a law that would establish an independent organization of researchers and computer scientists to identify misinformation before it goes viral.

Pointing to YouTube’s recommendation algorithm and its known propensity to radicalize people, Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) introduced the Protecting Americans from Dangerous Algorithms Act back in October to amend Section 230 and allow courts to examine the role of algorithmic amplification that leads to violence.

Next to Section 230 reform, one of the most popular solutions lawmakers proposed was a law requiring tech companies to perform civil rights audits or audits of algorithmic performance.
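For a concrete sense of what an algorithm audit can check, consider a simple disparity test over a model’s outputs. The sketch below is a hypothetical Python illustration, not any company’s actual audit methodology: the classifier predictions, group labels, and the parity metric shown are all assumptions chosen for clarity.

```python
# A minimal sketch of one check an algorithm audit might run, assuming a
# hypothetical binary classifier's outputs and demographic group labels.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Compute the rate of positive outcomes per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest group selection rates;
    larger gaps suggest the model favors some groups over others."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical example: an audit might flag a gap above a chosen threshold.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(f"demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# -> 0.50 (group a selected at 0.75, group b at 0.25)
```

A real audit would go well beyond a single metric like this, examining error rates, training data, and downstream outcomes, which is part of why what such audits should be required to measure remains contested.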

It might be cathartic to watch tech CEOs whose attitudes lawmakers describe as smug and arrogant get their comeuppance for inaction on systemic issues that threaten human lives and democracy because they’d rather make more money. But after the bombast and the bipartisan recognition of how AI can harm people on display Thursday, the pressure is on Washington, not Silicon Valley.

I mean, of course Zuckerberg or Pichai will still need to answer for it when the next white supremacist terrorist attack happens and is again traced directly back to a Facebook group or YouTube indoctrination. But to date, lawmakers have no record of passing sweeping legislation to regulate the use of algorithms.

Bipartisan agreement for regulation of facial recognition and data privacy has also not yet paid off with comprehensive legislation.

Mentions of artificial intelligence and machine learning in Congress are at an all-time high. And in recent weeks, a national panel of industry experts has urged AI policy action to protect the national security interests of the United States, and Google employees have implored Congress to pass stronger laws to protect people who come forward to reveal ways AI is being used to harm people.

The details of any proposed legislation will reveal just how serious lawmakers are about bringing accountability to those who make the algorithms. For example, diversity reporting requirements should include breakdowns of specific teams working with AI at Big Tech companies. Facebook and Google release diversity reports today, but those reports do not break down AI team diversity.

Testing and agreed-upon standards are table stakes in industries where products and services can harm people. You can’t break ground on a construction project without an environmental impact report, and you can’t sell people medicine without going through the Food and Drug Administration, so you probably shouldn’t be able to freely deploy AI that reaches billions of people when it’s discriminatory or peddles extremism for profit.

Of course, accountability mechanisms meant to increase public trust can fail. Remember Bell, the California city that regularly underwent financial audits but still turned out to be corrupt? And algorithm audits don’t always assess performance. Even when researchers document a propensity to do harm, as analyses of Amazon’s Rekognition and of YouTube radicalization did in 2019, that doesn’t mean the AI won’t be used in production today.

Regulation of some kind is coming, but the unanswered question is whether that legislation will go beyond the solutions tech CEOs endorse. Zuckerberg voiced support for federal privacy legislation, just as Microsoft has done in fights with state legislatures attempting to pass data privacy laws. Zuckerberg also expressed some backing for algorithm auditing as an “important area of study”; however, Facebook does not perform systematic audits of its algorithms today, even though a civil rights audit of Facebook completed last summer recommended it.

Last week, the Carr Center at Harvard University published an analysis of the human rights impact assessments (HRIAs) Facebook performed regarding its product and presence in Myanmar following a genocide in that country. That analysis found that a third-party HRIA largely omits mention of the Rohingya and fails to assess if algorithms played a role.

“What is the link between the algorithm and genocide? That’s the crux of it. The U.N. report claims there is a relationship,” coauthor Mark Latonero told VentureBeat. “They said essentially Facebook contributed to the environment where hateful speech was normalized and amplified in society.”

The Carr report states that any policy demanding human rights impact assessments should be wary of such reports from the companies, since they tend to engage in ethics washing and to “hide behind a veneer of human rights due diligence and accountability.”

To prevent this, the researchers suggest performing analysis throughout the lifecycle of AI products and services, and argue that centering the impact of AI requires viewing algorithms as sociotechnical systems deserving of evaluation by both social scientists and computer scientists. This is in line with previous research that insists AI be treated like a bureaucracy, as well as with the work of AI researchers applying critical race theory.

“Determining whether or not an AI system contributed to a human rights harm is not obvious to those without the appropriate expertise and methodologies,” the Carr report reads. “Furthermore, without additional technical expertise, those conducting HRIAs would not be able to recommend potential changes to AI products and algorithmic processes themselves in order to mitigate existing and future harms.”

As evidenced by the fact that multiple members of Congress spoke this week about evil persisting in Big Tech, policymakers seem aware that AI can harm people, from spreading disinformation and hate for profit to endangering children, democracy, and economic competition. If we all agree that Big Tech is in fact a threat to children, competitive business practices, and democracy, and if Democrats and Republicans fail to take sufficient action, in time it could be lawmakers who are labeled untrustworthy.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark The Machine.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

VentureBeat



