
Why AI can’t move forward without diversity, equity, and inclusion

The need to pursue racial justice is more urgent than ever, especially in the technology industry. The far-reaching scope and power of machine learning (ML) and artificial intelligence (AI) mean that any gender or racial bias at the source is multiplied many times over in businesses and out in the world. The impact those technology biases have on society as a whole can't be overestimated.

When decision-makers in tech companies simply don’t reflect the diversity of the general population, it profoundly affects how AI/ML products are conceived, developed, and implemented. Evolve, presented by VentureBeat on December 8th, is a 90-minute event exploring bias, racism, and the lack of diversity across AI product development and management, and why these issues can’t be ignored.

“A lot has been happening in 2020, from working remotely to the Black Lives Matter movement, and that has made everybody realize that diversity, equity, and inclusion is much more important than ever,” says Huma Abidi, senior director of AI software products and engineering at Intel – and one of the speakers at Evolve. “Organizations are engaging in discussions around flexible working, social justice, equity, privilege, and the importance of DEI.”

Abidi, who has been in the workforce for over two decades, has long grappled with the issue of gender diversity and was often the only woman in the room at meetings. Even though the lack of women in tech remains an issue, companies have made an effort to address gender parity and have made some progress.

In 2015, Intel allocated $300 million toward an initiative to increase diversity and inclusion in its ranks, from hiring to onboarding to retention. The goals the company set in 2020 include increasing the number of women in technical roles to 40% by 2030 and doubling the number of women and underrepresented minorities in senior leadership.

“Diversity is not only the right thing to do, but it’s also better for business,” Abidi says. “Studies from researchers, including McKinsey, have shown data that makes it increasingly clear that companies with more diverse workforces perform better financially.”

The proliferation of cases in which alarming bias is showing up in AI products and solutions has also made it clear that DEI is a broader and more immediate issue than had previously been assumed.

“AI is pervasive in our daily lives, being used for everything from recruiting decisions to credit decisions, health care risk predictions to policing, and even judicial sentencing,” says Abidi. “If the data or the algorithms used in these cases have underlying biases, then the results could be disastrous, especially for those who are at the receiving end of the decision.”

We're hearing about such cases more and more often, beyond the well-known Apple Card credit fiasco and the fact that facial recognition still struggles with darker skin. There's Amazon's scrapped recruiting tool, which avoided hiring qualified women because of the data set used to train the model: it learned that men were more qualified, since historically that had been the case at the company.

An algorithm used by hospitals was shown to prioritize the care of healthier white patients over sicker Black patients who needed more attention. In Oakland, AI-powered software piloted to predict areas of high crime turned out to be tracking areas with high minority populations, regardless of the actual crime rate.

“Despite great intentions to build technology that works for all and serves all, if the group that’s responsible for creating the technology itself is homogenous, then it will likely only work for that particular group,” Abidi says. “Companies need to understand that if your AI solution is not implemented in a responsible, ethical manner, then the results can cause, at best, embarrassment, but could also potentially lead to legal consequences, if you’re not doing it the right way.”

This can be addressed, she says, with regulation and by building AI ethics principles into research and development: responsible AI, fairness, accountability, transparency, and explainability.

“DEI is well established — it makes business sense and it’s the right thing to do,” she says. “But if you don’t have it as a core value in your organization, that’s a huge problem. That needs to be addressed.”

Beyond that, especially when it comes to AI, companies have to think about who their target population is and whether their data is representative of that population. The people who first notice biases are often users from the very minority community an algorithm is ignoring or targeting, which is why maintaining a diverse AI team can help mitigate unwanted AI biases.
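
As a concrete illustration of that representativeness check, here is a minimal sketch in Python. It assumes a pandas DataFrame with a hypothetical gender column and illustrative target-population shares; a real audit would use the demographic attributes and reference data relevant to the product's actual users.

```python
# Minimal sketch of a representativeness check, assuming a pandas DataFrame
# with a hypothetical "gender" column and illustrative reference shares for
# the target population (the numbers below are placeholders, not real data).
import pandas as pd

def representation_gap(df: pd.DataFrame, column: str, reference: dict) -> pd.DataFrame:
    """Compare each group's share of the data against its target-population share."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, expected in reference.items():
        actual = observed.get(group, 0.0)
        rows.append({
            "group": group,
            "data_share": actual,
            "population_share": expected,
            "gap": actual - expected,
        })
    return pd.DataFrame(rows)

# Toy example: a training set that is 80% male for a product aimed at everyone.
training_data = pd.DataFrame({"gender": ["male"] * 80 + ["female"] * 20})
print(representation_gap(training_data, "gender", {"male": 0.5, "female": 0.5}))
```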

Next, she says, companies need to ask whether they have the right interdisciplinary team, including AI ethicists and experts in ethics and compliance, law, policy, and corporate responsibility. Finally, they need a measurable, actionable de-biasing strategy that contains a portfolio of technical, operational, and organizational actions, and a workplace where those metrics and processes are transparent.
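
One way to make such a strategy measurable is to track a simple fairness metric over time. The sketch below computes the demographic parity difference, the gap in positive-outcome rates between groups, on hypothetical predictions; it is illustrative only, and a real team would choose metrics suited to its specific use case.

```python
# Minimal sketch of one measurable fairness metric: demographic parity
# difference, the gap in positive-prediction rates between groups.
# The predictions and group labels below are hypothetical placeholders.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Largest gap in positive-prediction rate across groups (0.0 means parity)."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Toy example: a model that approves 70% of group "a" but only 40% of group "b".
y_pred = np.array([1, 1, 1, 1, 1, 1, 1, 0, 0, 0,   # group "a"
                   1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # group "b"
groups = np.array(["a"] * 10 + ["b"] * 10)
print(demographic_parity_difference(y_pred, groups))  # 0.3
```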

“Add DEI to your core mission statement, and make it measurable and actionable — is your solution in line with the mission of ethics and DEI?” she says. “Because AI has the power to change the world, the potential to bring enormous benefit, to uplift humanity if done correctly. Having DEI is one of the key components to make it happen.”


The 90-minute Evolve event is divided into two distinct sessions on December 8th:

  1. The Why, How & What of DE&I in AI
  2. From ‘Say’ to ‘Do’: Unpacking real-world case studies & how to overcome real-world issues of achieving DE&I in AI

Register for free right here.


Author: VB Staff
Source: VentureBeat

