5 ways to address regulations around AI-enabled hiring and employment

In November, the New York City Council passed the first bill in the U.S. to broadly address the use of AI in hiring and employment. It would require hiring vendors to conduct annual bias audits of the artificial intelligence (AI) used in the city’s hiring processes and tools.

But that was just the beginning for proposed regulations on the use of employment AI tools. The European Commission recently drafted proposals that would protect gig workers from AI-enabled monitoring. And this past April, California introduced the Workplace Technology Accountability Act (Assembly Bill 1651), which proposes that employees be notified before their data is collected, monitoring tools are used, or algorithms are deployed, and that they be given the right to review and correct the collected data. The bill would also limit monitoring technologies to job-related use cases and valid business practices, and would require employers to conduct impact assessments on their use of algorithms and data collection.

This kind of legislation around the use of AI in hiring and employment is becoming more common, Beena Ammanath, executive director of the Global Deloitte AI Institute, told VentureBeat. The question is, what should HR departments and technical decision-makers be thinking about as AI regulation evolves? 

AI has reinvented HR processes

“The introduction of AI, virtual reality, machine learning and social collaboration can make it possible to truly reinvent HR processes and activities rather than only automate,” she said. “As HR departments continue to invest in AI and other emerging technologies, it’s important to think through data privacy and ethical impacts and understand compliance requirements with local regulations.” 

Most of those HR investments in AI are around recruiting and hiring: In a new survey of 1,300 employers by law firm Littler Mendelson, 69% of respondents whose organizations deploy AI and data analytics in HR said they use them in the recruiting and hiring process. Most use AI to screen resumes or applications (67%) and identify candidates (49%), while AI use drops off further into the recruiting pipeline, where the human component is more critical.

But AI bias is a legitimate concern in this area, said Fran Maxwell, managing director and global leader in the organizational transformation practice at consultancy Protiviti. “Often the AI model does not intend bias, but the data used by the model could unintentionally introduce bias,” he explained. “For example, if a model considers zip code as a factor for decisioning, but that zip code is associated more prominently with a particular ethnicity, it could unknowingly introduce bias.” Detecting that bias can be difficult, he added: “This highlights the need to have a governance function over AI and ML activities and to have humans reviewing models and results for unintended bias.”
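
Maxwell’s zip code example suggests a check that audit teams can run themselves: if a model’s ostensibly neutral input features can predict a protected attribute, at least one of them is acting as a proxy. Below is a minimal sketch of that probe in Python; the file name, column names, and the idea that ethnicity is retained only in an audit extract are illustrative assumptions, not a prescribed implementation.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical audit extract; file and column names are illustrative.
applicants = pd.read_csv("applicants.csv")
applicants["zip_code"] = applicants["zip_code"].astype(str)

# Only the features the hiring model actually consumes -- no protected attributes.
features = pd.get_dummies(applicants[["zip_code", "years_experience", "degree"]])

# Protected attribute retained solely for auditing, never for scoring.
protected = applicants["ethnicity"]

# Probe: if these features predict ethnicity well above the majority-class
# baseline, at least one of them is leaking demographic information.
probe = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(probe, features, protected, cv=5).mean()
baseline = protected.value_counts(normalize=True).max()

print(f"proxy-probe accuracy: {accuracy:.2f} (chance baseline: {baseline:.2f})")
```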

But some experts criticize the current bills and proposed legislation. For example, companies using AI in hiring want to increase the speed of decision-making and remove the subjective human element, said Gerald Hathaway, partner at law firm Faegre Drinker, who is part of the firm’s AI and algorithmic decision-making team. But he maintains that the New York City law is “onerous.” 

“I believe the real purpose of the law is to stop employers from using AI,” he said. “The New York City requirement for annual bias testing is expensive, while the 10-day advance notice requirement defeats the need for speed.” 

In addition, concerns about bias are “really missing the point as to one of the main reasons AI is attractive to employers – to remove subjective bias from the hiring decisions,” he said. 

There are also business concerns around security, according to Maxwell. “In Europe, strict regulations around employee monitoring have made it difficult to implement many advanced security tools and processes in those environments, leaving them a bit more exposed to sophisticated attacks,” he said. “While these regulations protect employee privacy, they do put the organization at risk.” 

But Ammanath points out that as AI use increases across enterprises, it will be important for governments, the private sector and consumer groups to develop regulations for AI and other emerging technologies. “AI regulatory efforts, like those in New York and California, seek transparency for black box algorithms and the ability to protect consumers from bias and discrimination,” she said. “Generally, transparency into workplace surveillance and data collection practices is a positive.” 

Should AI practitioners in HR welcome legislation? 

Matthew Spencer at AI-powered recruiting network Suited agrees, saying that AI practitioners who work in hiring and recruitment should welcome legislation. 

“When AI is deployed to help us make hiring decisions, the risk of creating or perpetuating bias against candidates is present, just as it is with standard hiring practices,” he said. “Increased oversight will help ensure that candidates are not discriminated against so that AI can deliver on creating objective and accurate hiring outcomes.”

What is surprising, he added, is how vague the regulations are. “For example, the NYC law states that all automated employment decision tools must undergo a bias audit in which an independent auditor determines the tool’s impact on individuals based on a number of demographic factors,” he said. “However, there is no indication as to what that audit should actually include. Should we meet or exceed the EEOC guidelines? What do legislators mean when they say adverse impact testing? At what stage in the process do they want to see said testing? All of these things are still largely unknown.” 
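
Spencer’s questions remain open, but a common reading of “adverse impact testing” is the EEOC’s four-fifths rule: a group whose selection rate falls below 80% of the highest group’s rate is flagged as potentially adversely impacted. Since the NYC law does not specify a method, the sketch below, with hypothetical counts, shows only what that minimal check looks like.

```python
# Hypothetical per-group counts: candidates the automated tool evaluated
# versus candidates it advanced to the next stage.
screened = {"group_a": 400, "group_b": 250}
advanced = {"group_a": 120, "group_b": 45}

selection_rates = {g: advanced[g] / screened[g] for g in screened}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    # Four-fifths rule: flag any group below 80% of the highest selection rate.
    impact_ratio = rate / highest_rate
    flag = "OK" if impact_ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.2%}, impact ratio {impact_ratio:.2f} -> {flag}")
```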

It is important to keep in mind that AI risks and ethical implications vary widely by industry, Ammanath said. “For example, the use of AI and facial recognition technology in government and law enforcement may be perceived as negative, while use of the same technologies in manufacturing to ensure security and worker safety is positive,” she said. “I don’t think it’s possible to have a universal law or set of regulations around AI – laws will need to vary based on geography, location, industry, and culture.”

5 takeaways for HR and technical decision-makers

1. Consider benefits to the job seeker

According to Lindsey Zuloaga, chief data scientist at talent experience platform HireVue, enterprise companies deciding on AI-driven technology for hiring should use a framework that includes benefits to the job seeker. “Technology and AI should be used to raise a candidate’s awareness about their talents, development gaps, and career opportunities,” she said. “As recruitment becomes more data-driven, organizations have a responsibility to share these insights with applicants.”

2. Share data/AI use with candidates

If AI is to be used in hiring processes, companies should be encouraged to share with candidates how their data is being used and what is being evaluated, and to give candidates the opportunity to delete their data. “HR departments must also level up their data and ethics governance to ensure data is not being misused,” said Zuloaga. “This includes data safety, anonymization, and safeguards in the case of privacy breaches.”
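
To make the anonymization piece concrete, one common governance step is pseudonymizing direct identifiers before candidate records reach analytics or AI pipelines. The sketch below uses a salted hash; the field names and the environment-variable secret are illustrative assumptions.

```python
import hashlib
import os

# Hypothetical secret; in practice, load from a secrets manager, never source code.
SALT = os.environ["CANDIDATE_ID_SALT"]

def pseudonymize(identifier: str) -> str:
    """Return a stable, non-reversible token for a candidate identifier."""
    return hashlib.sha256((SALT + identifier).encode("utf-8")).hexdigest()

record = {"email": "candidate@example.com", "resume_score": 0.82}
# Replace the raw identifier with a token before the record leaves HR systems.
record["candidate_token"] = pseudonymize(record.pop("email"))
print(record)
```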

3. Make sure AI is explainable

“The wielders of AI algorithms must be able to explain how a user received a particular score or evaluation,” said Zuloaga. “They should be aware of how the algorithm was developed, and factor in its strengths and limitations when forming a hiring decision.” Typically, AI models are nonlinear, which makes them difficult to explain, Spencer added. “However, practitioners can and should focus on techniques that deliver explainable AI, which allows algorithmic decision-making to be human-interpretable.”
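
What “explainable” means in practice varies by model, but as one simple illustration: when the scoring model is linear, each feature’s contribution to a candidate’s score can be itemized exactly as coefficient times feature value. The features and toy data below are hypothetical, a sketch rather than any vendor’s actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical job-relevant features and toy training data.
feature_names = ["years_experience", "skills_match", "assessment_score"]
X_train = np.array([[2, 0.4, 55], [7, 0.9, 80], [4, 0.7, 70], [1, 0.2, 40]])
y_train = np.array([0, 1, 1, 0])  # 1 = advanced to interview

model = LogisticRegression().fit(X_train, y_train)

# Itemize one candidate's score: each feature's exact log-odds contribution.
candidate = np.array([5, 0.8, 75])
contributions = model.coef_[0] * candidate

for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.2f} log-odds")
print(f"intercept: {model.intercept_[0]:+.2f}")
```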

4. Train AI only on data relevant to the job at hand

For example, Zuloaga explained, facial recognition or video interview analysis may claim to predict who is the most “employable,” when in reality it is predicting things like who is a native English speaker, who is extraverted, who could afford job training, or who is the better actor. “Ensuring your AI is only looking for traits or skills that are relevant to the job you are hiring for is absolutely essential,” she said.
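
One way to probe for that failure mode, sketched below with hypothetical scores, is to hold out a job-irrelevant attribute (here, a native-speaker flag kept only for auditing) and test whether the model’s outputs split along it; a large, statistically significant gap suggests the model has learned the irrelevant trait.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical model scores, grouped by an attribute that should be
# irrelevant to the job (retained only for auditing purposes).
scores_native = np.array([0.72, 0.81, 0.65, 0.77, 0.84, 0.69])
scores_non_native = np.array([0.51, 0.48, 0.62, 0.55, 0.44, 0.58])

# Nonparametric test for a systematic score difference between the groups.
stat, p_value = mannwhitneyu(scores_native, scores_non_native, alternative="two-sided")
gap = scores_native.mean() - scores_non_native.mean()
print(f"mean score gap: {gap:.2f}, Mann-Whitney U p-value: {p_value:.4f}")
```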

5. Use AI results to question assumptions

“Some recruiters or hiring decision-makers will use AI to validate their own decisions instead of using it to question their assumptions,” said Spencer. “The results presented should be used as a supplemental data point, allowing us to challenge our own potential biases and ask important questions before final decisions are made.”

Regulation of AI in hiring will continue to gain steam

Faegre Drinker’s Hathaway points out that overall, organizations tend to have different views on how to tackle issues relating to AI regulations. “Some want to be out front on this, and others will take a ‘wait and see’ posture and look at how this all plays out,” he said. “The latter companies may feel that there is no rush.”

Still, according to Zuloaga, regulation isn’t going to slow down in the next year. Vendors trying to keep up with the changes “must be proactively engaged with a cross-section of experts to meet new requirements before they’re codified,” she said.

And they should not wait for regulation to take action, Spencer said, but should take appropriate steps on their own to ensure candidates are protected. “In the short term, regulation will continue to evolve,” he said. “In the uncertainty of that environment, vendors and companies should hold themselves to high ethical standards that exceed any bar set by future regulation.”

Author: Sharon Goldman
Source: VentureBeat
