Why you shouldn’t be using AI for hiring

In my early years in tech, and later as someone who developed AI-powered recruiting software, I learned firsthand how AI and machine learning can create biases in hiring. In several different contexts, I saw AI in hiring amplify and exacerbate the very problems it was expected to solve. Where we thought it would help root out bias or make candidate funnels fairer, we were often surprised to find the exact opposite happening in practice.

Today, my role at CodePath combines my AI and engineering background with our commitment to give computer science students from low-income or underrepresented minority communities greater access to tech jobs. As I consider ways for our nonprofit to achieve that goal, I often wonder whether our students are running into the same AI-related hiring biases I witnessed firsthand several times over the last decade. While AI has tremendous potential to automate some tasks effectively, I don’t believe it’s appropriate in certain nuanced, highly subjective use cases with complex datasets and unclear outcomes. Hiring is one of those use cases.

Relying on AI for hiring may cause more harm than good. 

That’s not by design. Human resources managers generally begin AI-powered hiring processes with good intentions, namely the desire to whittle the applicant pool down to the most qualified candidates and the best fits for the company’s culture. These managers turn to AI as a trusted, objective way to sift the best and brightest from a huge electronic stack of resumes.

The mistake comes when those managers assume the AI has been trained to avoid the same biases a human might display. In many cases, it hasn’t; in others, the AI designers have unknowingly trained the algorithms to take actions that directly disadvantage certain job candidates — such as automatically rejecting female applicants or people with names associated with ethnic or religious minorities. Many human resources leaders have been shocked to discover that their hiring programs are taking actions that, if performed by a human, would result in termination.

Often, well-intentioned people in positions to make hiring decisions try to fix the programming bugs that create the biases. I’ve yet to see anyone crack that code.

Effective AI requires three things: clear outputs and outcomes; clean and clear data; and data at scale. AI functions best when it has access to huge amounts of objectively measured data, something hiring does not offer. Data about candidates’ educational backgrounds, previous job experiences, and other skills is often muddled with complex, intersecting biases and assumptions. The samples are small, the data is hard to measure objectively, and the outcomes are unclear, which makes it hard for the AI to learn what worked and what didn’t.

Unfortunately, the more the AI repeats these biased actions, the more it learns to perform them. The result is a system that codifies bias, which isn’t the image most forward-thinking companies want to project to potential recruits. This is why Illinois, Maryland, and New York City have passed laws restricting the use of AI in hiring decisions, and why the U.S. Equal Employment Opportunity Commission is investigating the role AI tools play in hiring. It’s also why companies such as Walmart, Meta, Nike, and CVS Health, under the umbrella of the Data & Trust Alliance, are rooting out bias in their own hiring algorithms.

The simple solution is to avoid using AI in hiring altogether. While this suggestion might seem burdensome to time-strapped companies looking to automate routine tasks, it doesn’t have to be.

For example, because CodePath prioritizes the needs of low-income, underrepresented minority students, we couldn’t risk using a biased AI system to match graduates of our program with top tech employers. So we created our own compatibility tool that doesn’t use AI or ML but still works at scale. It relies on automation only for purely objective data, simple rubrics, or compatibility scoring — all of which are monitored by humans who are sensitive to the issue of bias in hiring. We also only automate self-reported or strictly quantitative data, which reduces the likelihood of bias. 
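CodePath hasn’t published the internals of that tool, but a minimal sketch can show what rubric-based compatibility scoring without any ML might look like. Everything below is hypothetical: the field names, weights, and factors are illustrative stand-ins, not CodePath’s actual rubric.

```python
from dataclasses import dataclass

# Hypothetical rubric-based compatibility score; no ML involved.
# Field names, weights, and factors are illustrative, not CodePath's.

@dataclass
class Candidate:
    self_reported_skills: set[str]   # strictly self-reported data
    preferred_locations: set[str]
    earliest_start_month: int        # 1-12

@dataclass
class Role:
    required_skills: set[str]
    location: str
    start_month: int

def compatibility_score(c: Candidate, r: Role) -> float:
    """Return a 0.0-1.0 score built only from objective, auditable rules."""
    skill_overlap = (
        len(c.self_reported_skills & r.required_skills) / len(r.required_skills)
        if r.required_skills else 1.0
    )
    location_match = 1.0 if r.location in c.preferred_locations else 0.0
    timing_match = 1.0 if c.earliest_start_month <= r.start_month else 0.0
    # Fixed, human-chosen weights: every factor is visible to a reviewer.
    return 0.6 * skill_overlap + 0.2 * location_match + 0.2 * timing_match
```

The point of a design like this is that every factor is a transparent, human-chosen rule a reviewer can inspect and override, rather than a weight learned from historical (and potentially biased) hiring data.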

For those companies that feel compelled to rely on AI technology in their hiring decisions, there are ways to reduce potential harm:

1. Recognize the risk of bias that AI-powered hiring tools present

Don’t get caught up in the idea that the AI is always going to be right. Algorithms are only as bias-free as the people who create (and watch over) them. Once datasets and algorithms become trusted sources, people no longer feel compelled to provide oversight. Challenge the technology. Question it. Test it. Find those biases and root them out.

Companies should consider creating teams of hiring and tech professionals that monitor data, root out problems, and continuously challenge the outcomes produced by AI. The humans on those teams may be able to spot potential biases and either eliminate them or compensate for them.
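One concrete test such a team can run is a counterfactual probe: score pairs of resumes that are identical except for the applicant’s name, and flag any gap. This is a standard fairness-auditing technique rather than anything specific to one vendor’s tool; `score_resume` below is a hypothetical stand-in for whatever model is under review.

```python
# Counterfactual bias probe: identical resumes, only the name changes.
# `score_resume` is a hypothetical stand-in for the model under audit.

NAME_PAIRS = [
    ("Gregory Walsh", "Lakisha Robinson"),
    ("Brendan Murphy", "Aisha Rahman"),
]

def probe_name_bias(score_resume, resume_template: str, tolerance: float = 0.01):
    """Return name pairs whose scores diverge by more than `tolerance`."""
    flagged = []
    for name_a, name_b in NAME_PAIRS:
        score_a = score_resume(resume_template.format(name=name_a))
        score_b = score_resume(resume_template.format(name=name_b))
        if abs(score_a - score_b) > tolerance:
            flagged.append((name_a, name_b, score_a, score_b))
    return flagged
```

Any nonempty result is evidence that the model is keying on the name itself, which no legitimate hiring criterion should allow.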

2. Be mindful of your data sources — and your responsibility

If the only datasets your AI is trained to review come from companies that have historically hired few women or minorities, don’t be shocked when the algorithms spit out the same biased outcomes. Ask yourself: Am I comfortable with this data? Do I share the same values as the source? The answers to these questions allow for a careful evaluation of datasets or heuristics.
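One way to start answering those questions is a simple representation audit of the historical data before any model trains on it. Here is a sketch using pandas, assuming hypothetical `hires.csv` and `applicants.csv` files with self-reported demographic columns:

```python
import pandas as pd

# Hypothetical audit: compare who appears among historical hires
# against the applicant pool before letting a model learn from it.
hires = pd.read_csv("hires.csv")            # assumed schema, illustrative
applicants = pd.read_csv("applicants.csv")

for column in ("gender", "ethnicity"):      # assumed self-reported columns
    hired_share = hires[column].value_counts(normalize=True)
    applicant_share = applicants[column].value_counts(normalize=True)
    gap = (hired_share - applicant_share).dropna().sort_values()
    print(f"--- {column}: share of hires minus share of applicants ---")
    print(gap)  # large negative values mean a group is underrepresented
```

If a group applies often but rarely appears among past hires, an algorithm trained on those hires will learn to reproduce that gap.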

It’s also important to be aware of your company’s responsibility to have unbiased hiring systems. Even being a little more mindful of these possibilities can help reduce potential harm.

3. Use simpler, more straightforward ways to identify compatibility between a candidate and an open position

Most compatibility solutions don’t require any magical AI or elaborate heuristics, and sometimes going back to the basics can actually work better. Strip away the concept of AI and ask yourself: What are the things we can all agree are either increasing or reducing compatibility in this role?

Use AI only for objective compatibility metrics in hiring decisions, such as self-reported skills or information-matching against the express needs of the role. These provide clean, clear datasets that can be measured accurately and fairly. Leave the more complicated, ambiguous, or nuanced filters to actual human beings who best understand the combination of knowledge and skills that job candidates need to succeed. For example, consider using software that automates some of the processes but still allows for an element of human oversight or final decision making. Automate only those functions you can measure fairly.
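As a sketch of that division of labor: the automated stage handles only the objective skill match against the role’s stated needs, and anything short of a clear yes goes to a person instead of an auto-reject. The function names and threshold here are hypothetical, not a prescribed implementation.

```python
# Hypothetical triage: automate only the objective match, and route
# ambiguous cases to a human reviewer rather than auto-rejecting.

def skills_match(self_reported: set[str], required: set[str]) -> float:
    """Fraction of the role's stated skills the candidate self-reports."""
    if not required:
        return 1.0
    return len(self_reported & required) / len(required)

def triage(candidate_skills: set[str], required_skills: set[str]) -> str:
    if skills_match(candidate_skills, required_skills) == 1.0:
        return "advance"       # objectively meets the stated requirements
    return "human_review"      # never an automatic rejection
```

The key design choice is that automation can only move a candidate forward; any decision to screen someone out stays with a human.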

Given how much AI-powered hiring tools impact the lives of the very people at greatest risk of bias, we owe it to them to proceed with this technology with extreme caution. At best, it can lead to bad hiring decisions by companies that can ill afford the time and expense of refilling the positions. At worst, it can keep smart, talented people from getting high-paying jobs in high-demand fields, limiting not only their economic mobility but also their right to live happy, successful lives. 

Nathan Esquenazi is co-founder and chief technology officer of CodePath, a nonprofit that seeks to create diversity in tech by transforming college computer science education for underrepresented minorities and underserved populations. He is also a member of the Cognitive World think tank on enterprise AI.



Author: Nathan Esquenazi, CodePath
Source: VentureBeat

