
AI Weekly: Facebook, Google, and the tension between profits and fairness

This week, we learned a lot more about the inner workings of AI fairness and ethics operations at Facebook and Google and how things have gone wrong. On Monday, a Google employee group wrote a letter asking Congress and state lawmakers to pass legislation to protect AI ethics whistleblowers. That letter cites VentureBeat reporting about the potential policy outcomes of Google firing former Ethical AI team co-lead Timnit Gebru. It also cites research by UC Berkeley law professor Sonia Katyal, who told VentureBeat, “What we should be concerned about is a world where all of the most talented researchers like [Gebru] get hired at these places and then effectively muzzled from speaking. And when that happens, whistleblower protections become essential.” The 2021 AI Index report found that AI ethics stories — including Google firing Gebru — were among the most popular AI-related news articles of 2020, an indication of rising public interest. In the letter published Monday, Google employees spoke of harassment and intimidation, and a person familiar with policy and ethics matters at Google described a “deep sense of fear” since the firing of ethics leaders Gebru and former co-lead Margaret Mitchell.

On Thursday, MIT Tech Review’s Karen Hao published a story that unpacked a lot of previously unknown information about ties between Facebook’s AI ethics operations and the company’s failure to address misinformation peddled through its platforms, misinformation tied directly to a number of real-world atrocities. A major takeaway from the lengthy piece is that Facebook’s responsible AI team focused on addressing algorithmic bias instead of issues like disinformation and political polarization, a shift that followed 2018 complaints from conservative politicians alleging anti-conservative bias, claims a recent study refutes. The events described in Hao’s report appear to document political winds shifting the definition of fairness at Facebook, and the extremes to which a company will go to escape regulation.

Facebook CEO Mark Zuckerberg’s public defense of President Trump last summer and years of extensive reporting by journalists have already highlighted the company’s willingness to profit from hate and misinformation. An article last year, for example, found that the majority of people who joined Facebook groups labeled as extremist did so as a result of a recommendation made by a Facebook algorithm.

What this week’s MIT Tech Review story details is a tech giant deciding how to define fairness to advance its underlying business goals. Just as with Google’s Ethical AI team meltdown, Hao’s story describes forces within Facebook that sought to co-opt or suppress ethics operations after just a year or two of operation. One former Facebook researcher, whom Hao quoted on background, described their work as helping the company maintain the status quo in a way that often contradicted Zuckerberg’s public position on what’s fair and equitable. Another researcher speaking on background described being told to block a medical-misinformation detection algorithm that had noticeably reduced the reach of anti-vaccine campaigns.

In what a Facebook spokesperson pointed to as the company’s official response, Facebook CTO Mike Schroepfer called the core narrative of Hao’s article incorrect but made no effort to dispute facts reported in the story.

Facebook chief AI scientist Yann LeCun, who got into a public spat with Gebru over the summer about AI bias that led to accusations of gaslighting and racism, claimed the story had factual errors. Hao and her editor reviewed the claims of inaccuracy and found no factual error.

Facebook’s business practices have played a role in digital redlining, genocide in Myanmar, and the insurrection at the U.S. Capitol. At an internal meeting Thursday, according to BuzzFeed reporter Ryan Mac, an employee asked how Facebook funding AI research differs from Big Tobacco’s history of funding health studies. Mac said the response was that Facebook was not funding its own research in this specific instance, but AI researchers spoke extensively about that concern last year.

Last summer, VentureBeat covered stories involving Schroepfer and LeCun after events drew questions about diversity, hiring, and AI bias at the company. As that reporting and Hao’s nine-month investigation highlight, Facebook has no system in place to audit and test its algorithms for bias. A civil rights audit commissioned by Facebook and released last summer calls for regular, mandatory testing of algorithms for bias and discrimination.

Following allegations of toxic, anti-Black work environments, both Facebook and Google have been accused in the past week of treating Black job candidates in a separate and unequal fashion. Reuters reported last week that the Equal Employment Opportunity Commission (EEOC) is investigating “systemic” racial bias at Facebook in hiring and promotions. And additional details about an EEOC complaint filed by a Black woman emerged Thursday. At Google, multiple sources told NBC News last year that diversity investments in 2018 were cut back in order to avoid criticism from conservative politicians.

On Wednesday, Facebook also made its first attempt to dismiss an antitrust suit brought against the company by the Federal Trade Commission (FTC) and attorneys general from 46 U.S. states.

All of this happened in the same week that U.S. President Joe Biden nominated Lina Khan to the FTC, leading to the claim that the new administration is building a “Big Tech antitrust all-star team.” Last week, Biden appointed Tim Wu to the White House National Economic Council. A supporter of breaking up Big Tech companies, Wu wrote an op-ed last fall in which he called one of the multiple antitrust cases against Google bigger than any single company. He later referred to it as the end of a decades-long antitrust winter. VentureBeat featured Wu’s book about the history of antitrust reform in a list of essential books to read. Other signals that more regulation could be on the way include the appointments of FTC chair Rebecca Slaughter and OSTP deputy director Alondra Nelson, who have both expressed a need to address algorithmic bias.

The Google employees’ letter calling for whistleblower protections for people researching the ethical deployment of AI marks the second time in as many weeks that Congress has received a recommendation to act to protect people from AI.

The National Security Commission on Artificial Intelligence (NSCAI) was formed in 2018 to advise Congress and the federal government. The group is chaired by former Google CEO Eric Schmidt, and Google Cloud AI chief Andrew Moore is among the group’s 15 commissioners. Last week, the body published a report that recommends the government spend $40 billion in the coming years on research and development and the democratization of AI. The report also says individuals within government agencies essential to national security should be given a way to report concerns about “irresponsible AI development.” The report states that “Congress and the public need to see that the government is equipped to catch and fix critical flaws in systems in time to prevent inadvertent disasters and hold humans accountable, including for misuse.” It also encourages ongoing implementation of audits and reporting requirements. However, as audits at businesses like HireVue have shown, there are a lot of different ways to audit an algorithm.
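As the HireVue example suggests, “auditing an algorithm” can mean very different things in practice. Purely as an illustration of one narrow kind of check (a minimal sketch, not anything prescribed by the NSCAI report or performed in the HireVue audit), the snippet below compares selection rates across demographic groups, a common disparate-impact-style measurement; the data, group labels, and helper names are invented for the example.

```python
import numpy as np

def selection_rates(predictions: np.ndarray, groups: np.ndarray) -> dict:
    """Fraction of positive decisions for each demographic group."""
    return {str(g): float(predictions[groups == g].mean()) for g in np.unique(groups)}

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups).values()
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = selected) and group membership labels.
preds = np.array([1, 1, 1, 1, 0, 0, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(selection_rates(preds, groups))        # {'A': 0.8, 'B': 0.2}
print(demographic_parity_gap(preds, groups))  # roughly 0.6; a gap this large would warrant review
```

A real audit would of course go well beyond a single summary statistic, looking at data provenance, error rates across groups, and how the system is actually used in context.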

This week’s alignment between organized Google employees and NSCAI commissioners who represent business executives from companies like Google Cloud, Microsoft, and Oracle suggests agreement among broad swaths of people intimately familiar with deploying AI at scale.

In casting the final vote to approve the NSCAI report, Moore said, “We are the human race. We are tool users. It’s kind of what we’re known for. And we’ve now hit the point where our tools are, in some limited sense, more intelligent than ourselves. And it’s a very exciting future, which we have to take seriously for the benefit of the United States and the world.”

While deep learning and other forms of AI may be capable of feats people describe as superhuman, this week we got a reminder of how untrustworthy AI systems can be when OpenAI demonstrated that its state-of-the-art model can be fooled into classifying an apple with “iPod” written on it as an iPod, a distinction any person with a pulse could make.
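The model in question was OpenAI’s CLIP, and the trick is what researchers call a typographic attack. For readers curious what “fooling” the model looks like in practice, here is a minimal sketch of zero-shot classification using the openly released CLIP checkpoint via the Hugging Face transformers library; the image file name and label prompts are made up for illustration.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load the publicly released CLIP checkpoint (requires the transformers and Pillow packages).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical photo: an apple with a handwritten "iPod" label stuck to it.
image = Image.open("apple_with_ipod_label.jpg")
labels = ["a photo of an apple", "a photo of an iPod"]

# Score the image against each candidate caption (zero-shot classification).
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.1%}")
# In OpenAI's demonstration, the written word alone was enough to tip the
# prediction toward "iPod" even though the image plainly shows an apple.
```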

Hao described the subjects of her Facebook story as well-intentioned people trying to make changes in a rotten system that acts to protect itself. Ethics researchers in a corporation of that size are effectively charged with treating society as a stakeholder, but everyone else they work with is expected to think first and foremost about the bottom line, or personal bonuses. Hao said that reporting the story has convinced her that self-regulation cannot work.

“Facebook has only ever moved on issues because of or in anticipation of external regulation,” she said in a tweet.

After Google fired Gebru, VentureBeat spoke with ethics, legal, and policy experts who have also reached the conclusion that “self-regulation can’t be trusted.”

Whether at Facebook or Google, each of these incidents — often told with the help of sources speaking on condition of anonymity — shines a light on the need for guardrails and regulation and, as a recent Google research paper found, for journalists who ask tough questions. In that paper, titled “Re-imagining Algorithmic Fairness in India and Beyond,” researchers state that “Technology journalism is a keystone of equitable automation and needs to be fostered for AI.”

Companies like Facebook and Google sit at the center of AI industry consolidation, and the ramifications of their actions extend beyond even their great reach, touching virtually every aspect of the tech ecosystem. A source familiar with ethics and policy matters at Google who supports whistleblower protection laws told VentureBeat the equation is pretty simple: “[If] you want to be a company that touches billions of people, then you should be responsible and held accountable for how you touch those billions of people.”

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark The Machine.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer

VentureBeat



