
Are AI ethics teams doomed to be a facade? Women who pioneered them weigh in



The concept of “ethical AI” hardly existed just a few years ago, but times have changed. After countless discoveries of AI systems causing real-world harm and a chorus of professionals sounding the alarm, tech companies now know that all eyes — from customers to regulators — are on their AI. They also know they need an answer for it. That answer, in many cases, has been to establish in-house AI ethics teams.

Now present at companies including Google, Microsoft, IBM, Facebook, Salesforce, Sony, and more, such groups and boards were largely positioned as places to do important research and even act as safeguards against the companies’ own AI technologies. But this past winter, Google fired Timnit Gebru and Margaret Mitchell, leading voices in the space and the former co-leads of the company’s ethical AI lab, after Gebru refused to rescind a research paper on the risks of large language models, and it felt as if the rug had been pulled out from under the whole concept. It doesn’t help that Facebook has also been criticized for steering its AI ethics team away from research into topics like misinformation, for fear it could hurt user growth and engagement. Now, many in the industry are questioning whether these in-house teams are just a facade.

“I do think that skepticism is very much warranted for any ‘ethics’ thing that comes out of corporations,” Gebru told VentureBeat, adding that it “serves as PR [to] make them look good.”

So is it even possible to do real AI ethics work inside a corporate tech giant? And how can these teams succeed? To explore these increasingly important questions, VentureBeat spoke with the women who pioneered such initiatives — including Gebru and Mitchell, among others — about their own experiences and thoughts on how to build AI ethics teams. Several themes emerged throughout the conversations, including the pull between independence and integration, the importance of diversity and inclusion, and the fact that buy-in from executive leadership is paramount.

“[If] we can’t have open dialogue with leaders of companies about what AI ethics is, we’re not going to make progress in those companies,” Mitchell said.

It starts with power

Without genuine executive support, AI ethics teams are non-starters. Across the board, the AI ethics leaders we spoke to maintained that such support is crucial — step one, even — for launching any sort of corporate AI ethics team or initiative. Gebru emphasized the need for these teams to have some amount of power in the company, and Kathy Baxter, another AI ethics pioneer who launched and currently leads such a team at Salesforce, said she can’t “stress enough the importance of the culture and the DNA.”

“If you don’t have that idea of stakeholder capitalism and that we’re part of the community that we are selling our products and services to, then I think it’s a much more tenuous place to come from,” Baxter said.

This is also the feeling of Alice Xiang, who heads up Sony’s recently launched AI ethics initiatives and said leadership buy-in is “incredibly critical.” She specified that executives from both the technical and legal sides, as well as from the business units actually building the AI products, all need to be on board and aligned for the effort to have an impact.

And Mitchell took it a step beyond leadership buy-in itself, emphasizing that inclusion at the top is absolutely necessary.

“If you can have diversity and inclusion all the way at the top, then it’s going to be a lot easier to actually do something real with AI ethics,” she said. “People have to feel included.”

Who’s at the table

In a recent report, The Markup detailed how AI-driven lending systems deny home loans to people of color more often than to white applicants with similar financial characteristics, with disparities as high as 250% in some areas. It was a bombshell finding, but the unfortunate truth is that such discoveries are routine.

Another recent investigation uncovered that the enrollment algorithms sweeping higher education are perpetuating racial inequalities, among other issues. And we know that racially biased facial recognition technology routinely misidentifies innocent Black people and has even sent them to jail for crimes they didn’t commit. Transgender people have also reported frequent issues with AI-based tools like Google Photos. And there are countless examples of AI discriminating against women and other frequently disenfranchised people — for example, the algorithm behind Apple’s credit card offering women significantly smaller credit lines than men. When pressed, the company couldn’t even explain why it was happening. And all this is just the tip of the iceberg.

In short, the ethical issues many AI researchers are interrogating are not hypothetical, but real, pervasive, and causing widespread harm today. And it’s no coincidence that the groups of people experiencing the direct harms of AI technologies are the same ones who have historically been and continue to be underrepresented in the tech industry. Overall, only 26% of computing-related jobs are held by women; just 3% are held by African American women, 6% by Asian women, and 2% by Hispanic women. And studies show these women, especially women of color, feel invisible at work. More specifically, only 16% of women and ethnic minority tech employees in a recent survey said they believe they’re well represented in tech teams. And of tech workers overall, 84% said their products aren’t inclusive.

“Basically anyone who has worked on ethical AI that I know of has come to the same conclusion: that one of the fundamental issues in developing AI ethically is that you have to have a diverse set of people at the table from the start who feel included enough to share their thoughts openly,” Mitchell said. And Xiang agreed, citing D&I as a top consideration for building AI ethics teams.

Baxter explained that “figuring out what is a safe threshold to launch” is one of the biggest challenges when it comes to these types of AI systems. And when people from affected groups don’t feel included or aren’t present at all, their perspectives and lived experiences with discrimination and racism aren’t accounted for in these vital decisions. This shows in the final products, and it connects to a point Gebru raised about how a lot of people “just want to sit in the corner and do the math.” Mitchell echoed this, saying, “[Big tech companies] like things that are very technical, and diversity and inclusion in the workplace seems like it’s a separate issue, when it’s very much not.”

You’d think stakeholders would want their technologies to work accurately and in everyone’s best interest. Yet, raising questions around how a technology will impact people of different races, genders, religions, sexualities, or other identities that have historically been subject to harm is often perceived as activism rather than due diligence. Mitchell said this common reflex is “an example of how ingrained discrimination is.” She’s found that talking about ethics, morality, and values stimulates people in a way that’s different from other kinds of business work, comparing it to the fight-or-flight response. And though she considers herself “a reformer,” she said she’s often grouped with people who proudly self-identify as activists.

“And I think that’s simply because I don’t agree with discrimination,” she said. “If being against discrimination makes you an activist in someone’s mind, then chances are they have a very discriminatory view.”

Independence vs. integration

The consensus around executive buy-in, diversity, and inclusion is strong, but there’s one aspect of corporate AI ethics teams where people are less certain: structure. Specifically, there’s debate around whether those teams should be independent and siloed, or closely integrated with other parts of the organization.

One could make an argument for both approaches. Independent AI ethics teams would, theoretically, have the freedom and power to do the work without heavy oversight or interference. This could, for example, allow them to push back more publicly against corporate decisions or freely publish important research — even when the findings may be unwelcome to the company. On the other hand, AI ethics teams that are close to the pipeline and to daily decisions would be better positioned to spot ethical problems before they’re built into products and shipped. Overall, Mitchell said this is “one of the fundamental tensions in operationalizing ethical AI right now.”

Post-Google, Gebru feels strongly about independence. She believes researchers should have a voice and be able to openly criticize the company, naming Microsoft Research as a good example where the group is seen as separate from the rest of the organization. But ultimately, she said there needs to be a balance, because companies can too easily point to the independent teams to show they care about ethics without actually acting on the efforts internally. She told VentureBeat she’s done work where it was very useful to be integrated, as well as work where it helped to be removed.

“I do think it needs to be all of it,” she said. “There needs to be independent researchers and people who are embedded in organizations, but the problem is that there’s no real independence anywhere.”

Also influenced by her experience at Google, Mitchell agreed that both approaches have value. The challenge, she said, is how to slice it up.

A two-pronged approach

Salesforce and Sony are two companies that have put a hybrid model of sorts into practice. Both split their AI ethics initiatives into segments with varying responsibilities and levels of integration.

Salesforce’s Office of Ethical and Humane Use, launched in 2018, is tasked with ensuring the company’s technology is developed and used in a responsible manner. Baxter explained the three buckets that make up the team’s mission: policy (determining the company’s red lines); product (deliberating use cases and builds with product teams and customers); and evangelism and education (publishing whitepapers and blog posts, and discussing regulation with members of government).

But within that group, there’s also the Ethical AI Practice Team, which more specifically focuses on ethics research, debiasing, and analysis. Baxter says there are also AI ethics team members who partner closely across the company’s different clouds, as well as non-ethics team members who routinely work with the group. Overall, Salesforce appears to take a mostly integrated approach to AI ethics. Baxter described “working very closely with [Salesforce’s] product teams, engineers, data scientists, product managers, UX designers, and researchers to think about, first and foremost, is this something that should exist in the first place?”

“Where are we going to get the training data from?” she continued, listing the types of questions the ethics researchers discuss with product teams. “Are there known biases or risks that we should be taking into account? What are potential unintended consequences and how do we mitigate them?”
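To make that kind of analysis concrete, here is a minimal, hypothetical sketch of the sort of pre-launch disparity check an ethics practice team might run. The data, column names, and groups are illustrative assumptions for this article, not Salesforce’s actual tooling.

# A minimal, hypothetical sketch of a pre-launch bias check.
# The data, column names, and groups are illustrative, not any company's real tooling.
import pandas as pd

def approval_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    # Share of positive outcomes (e.g., approvals) for each demographic group.
    return df.groupby(group_col)[outcome_col].mean()

def disparity_ratio(rates: pd.Series) -> float:
    # Ratio of the highest to the lowest group rate; 1.0 means parity.
    return rates.max() / rates.min()

# Toy predictions from a hypothetical model, broken out by a group label.
predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = approval_rate_by_group(predictions, "group", "approved")
print(rates)                   # per-group approval rates: A = 0.67, B = 0.25
print(disparity_ratio(rates))  # roughly a 2.7x gap, a flag to investigate before launch

A check like this is only a starting point; as Baxter’s questions suggest, the harder work is deciding which disparities are acceptable to ship and which mean the product needs to be rethought.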

And earlier this year, Salesforce made an organizational move that would, theoretically, give ethics an even bigger role in product design. The company moved the Office of Ethical and Humane Use, which previously was part of the Office of Equality, to sit directly within Salesforce’s product organization.

Sony, on the other hand, is new to the realm of AI ethics teams. The company launched its ethics initiative earlier this year, after announcing at the end of 2020 that it would start screening all of its AI products for ethical risks. Xiang said Sony views AI ethics as an important part of the company’s long-term competitive advantage in the AI space, and is eager to bring a global perspective to a field she said is currently dominated by U.S.-based tech companies and European regulatory standards.

While in its very early stages, Sony’s approach is interesting and worth paying attention to. The company launched two teams that “work synergistically together” to tackle the subject from multiple angles. One is a research team within Sony AI focused on fairness, transparency, accountability, and translating the “abstract concepts around AI ethics that practitioners are grappling with” into actual solutions. The other is an Ethics Office, which is a cross-Sony group based within the company’s corporate headquarters. Partially embedded within some of Sony’s existing compliance processes, this team conducts AI ethics assessments across business units. When teams submit the now-mandatory information about the AI products they’re building, this group assesses them along various dimensions.

Xiang told VentureBeat she felt strongly that these two teams should be closely integrated, and she believes AI ethics review should come “as early as possible” in the product roadmap.

“We start our process in the initial planning stages, even before people have written a single line of code,” she said.

Keeping AI ethics real

After their experiences at Google, Gebru and Mitchell now have differing levels of faith in the idea of corporate AI ethics teams. Gebru said it’s important for people to do the work so companies can confront the issues, but told VentureBeat she doesn’t think it’s possible without strong labor and whistleblower protection laws. “There’s no way I could go to another large tech company and do that again,” she told Bloomberg in a recent interview, where she first spoke about her plans to launch an independent AI research group.

Mitchell, however, said she still “very much think[s] it’s possible to do ethical AI work in industry.” Part of her reasoning requires debunking a common misconception about AI ethics: that it’s about halting AI technology altogether, and that it will always be at odds with a company’s bottom line. Thinking through and prioritizing values is a big part of ethics work, she said, and in a corporate setting, profit is just another value to consider. Baxter made a similar point, saying she’s “not trying to have a gotcha” and that it’s all about tradeoffs.

In fact, AI ethics is also smart business. Though not wanting to harm people should be reason enough to take the work seriously, plowing ahead with a product without understanding and mitigating its issues can damage the brand, invite legal trouble, and deter customers.

“People often have the perception that AI ethics is exclusively about stopping or slowing down the development of AI technology,” Xiang said. “You probably hear this a lot from practitioners in the ethics space, but we don’t necessarily view our role as that. Actually, our goal is to ensure the long-term sustainable development of these AI businesses.”



Author: Sage Lazzaro
Source: VentureBeat

