Diverse AI teams are key to reducing bias

An Amazon-built resume-rating algorithm, when trained on men’s resumes, taught itself to prefer male candidates and penalize resumes that included the word “women.”

A major hospital’s algorithm, asked to assign risk scores to patients, gave white patients scores similar to those of Black patients who were significantly sicker.

“If a movie recommendation is flawed, that’s not the end of the world. But if you are on the receiving end of a decision [that] is being used by AI, that can be disastrous,” Huma Abidi, senior director of AI software products and engineering at Intel, said during a session on bias and diversity in AI at VentureBeat’s Transform 2021 virtual conference. Abidi was joined by Yakaira Nuñez, senior director of research and insights at Salesforce, and Fahmida Y Rashid, executive editor of VentureBeat.

Changing the human variable

To produce fair algorithms, the data used to train AI must be free of bias. For every dataset, ask where the data came from, whether it is inclusive, whether it has been updated, and so on. And you need to use model cards, checklists, and risk management strategies at every step of the development process.
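To make that concrete, here is a minimal sketch (our illustration, not anything presented in the session) of the kind of pre-training check the panelists describe: tabulate how each group is represented in a dataset and how labeled outcomes skew across groups. The field names, toy data, and 10% threshold are hypothetical.

    from collections import Counter

    def audit_representation(records, group_field, label_field, min_share=0.10):
        # Flag groups that are underrepresented or have skewed outcome rates.
        total = len(records)
        counts = Counter(r[group_field] for r in records)
        findings = []
        for group, count in counts.items():
            share = count / total
            positives = sum(1 for r in records
                            if r[group_field] == group and r[label_field] == 1)
            findings.append(f"{group}: {share:.0%} of data, "
                            f"{positives / count:.0%} positive outcomes")
            if share < min_share:
                findings.append(f"{group}: underrepresented (below {min_share:.0%})")
        return findings

    # Hypothetical toy data: hiring outcomes by gender.
    data = [
        {"gender": "male", "hired": 1}, {"gender": "male", "hired": 1},
        {"gender": "male", "hired": 0}, {"gender": "female", "hired": 0},
    ]

    for line in audit_representation(data, "gender", "hired"):
        print(line)

An audit like this runs before any training; a gap is a prompt to collect more representative data, not just a statistic to log.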

“The best possible framework is that we were actually able to manage that risk from the outset — we had all of the actors in place to be able to ensure that the process was inclusive, bringing the right people in the room at the right time that were representative of the level of diversity that we wanted to see and the content. So risk management strategies are my favorite. I do believe … in order for us to really mitigate bias that it’s going to be about risk mitigation and risk management,” Nuñez said.

Make sure that diversity is more than just a buzzword and that your leadership teams and speaker panels are reflective of the people you want to attract to your company, Nuñez said.

Bias causes harm

When thinking about diversity, equity, and inclusion work, or bias and racism, the most impact tends to be in areas in which individuals are most at risk, Nuñez said. Health care, finance, and legal situations — anything involving police — and child welfare are all sectors where bias causes “the most amount of harm” when it shows up. So when people are working on AI initiatives in these spaces to increase productivity or efficiencies, it is even more critical that they are thinking deliberately about bias and potential for harm. Each person is accountable and responsible for managing that bias.

Nuñez discussed how the responsibility of a research and insights leader is to curate data so executives can make informed decisions about product direction. That means thinking not just about the people pulling the data together, but also about people outside the target market, who can offer insight into customers Salesforce would not otherwise know anything about.

Nuñez regularly asks the team to think about bias and whether it is present in the data, like asking whether the panel of individuals for a project is diverse. If the feedback is not from an environment that is representative of the target ecosystem, then that feedback is less useful.
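In code, that question might look something like the following sketch (our illustration, not a Salesforce tool): compare a panel’s makeup against the target ecosystem’s and flag the gaps. The segments, target shares, and tolerance are made up.

    from collections import Counter

    def panel_gaps(panel, target_shares, tolerance=0.05):
        # Return groups whose panel share strays from the target share
        # by more than the tolerance, including groups missing entirely.
        counts = Counter(panel)
        n = len(panel)
        gaps = {}
        for group, target in target_shares.items():
            share = counts.get(group, 0) / n
            if abs(share - target) > tolerance:
                gaps[group] = (share, target)
        return gaps

    # Hypothetical panel and target-market mix.
    panel = ["enterprise"] * 7 + ["smb"] * 3
    target = {"enterprise": 0.5, "smb": 0.4, "nonprofit": 0.1}

    for group, (actual, want) in panel_gaps(panel, target).items():
        print(f"{group}: panel {actual:.0%} vs target {want:.0%}")

Here the missing nonprofit segment would be flagged immediately, which is exactly the kind of feedback gap Nuñez describes.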

Those questions “are the small little things that I can do at the day-to-day level to try to move the needle a bit at Salesforce,” Nuñez said.

Company-level changes

Research has shown that minorities often have to whiten their resumes in order to get callbacks and interviews. Companies and organizations can weave diversity and inclusion into their stated values to address this issue.

“If it’s already not part of your core mission statement, it’s really important to add those things … diversity, inclusion, equity. Just doing that, by itself, will help a lot,” Abidi said.

It’s important to integrate these values into corporate culture because of the interdisciplinary nature of AI: “It’s not just engineers; we work with ethicists, we have lawyers, we have policymakers. And all of us come together in order to fix this problem,” Abidi said.

Commitments by companies to help fix gender and minority imbalances also provide an end goal for recruitment teams: Intel wants women in 40% of technical roles by 2030. Salesforce is aiming to have 50% of its U.S. workforce made up of underrepresented groups, including women, people of color, LGBTQ+ employees, people with disabilities, and veterans.



Author: Zachariah Chou
Source: VentureBeat
