
For AI bias law coming January 1, unanswered questions remain

On January 1, 2023, New York City’s Automated Employment Decision Tool (AEDT) law will go into effect. It’s one of the first in the U.S. aimed at reducing bias in AI-driven recruitment and employment decisions.  

Under the AEDT law, it will be unlawful for an employer or employment agency to use artificial intelligence and algorithm-based technologies to evaluate NYC candidates and employees unless it first conducts an independent bias audit of those tools. The bottom line: New York City employers, rather than the software vendors who create the tools, will be the ones taking on compliance obligations.

But with only a few weeks to go, plenty of unanswered questions remain about the regulations, according to Avi Gesser, partner at Debevoise & Plimpton and co-chair of the firm’s Cybersecurity, Privacy and Artificial Intelligence Practice Group. 

That’s because while New York City’s Department of Consumer and Worker Protection released proposed rules for implementing the law back in September and solicited comment, the final rules spelling out what the audits will look like have yet to be published. That leaves companies unsure how to ensure they are in compliance with the law.


“I think some companies are waiting to see what the rules are, while some are assuming that the rules will be implemented as they were in draft and are behaving accordingly,” Gesser told VentureBeat. “There are a lot of companies who are not even sure if the rule applies to them.” 

Growing number of employers turning to AI tools

The city developed the AEDT law in response to the growing number of employers turning to AI tools to assist in recruiting and other employment decisions. Nearly one in four organizations already uses automation or artificial intelligence (AI) to support hiring, according to a February 2022 survey from the Society for Human Resource Management. The share is even higher (42%) among large employers with 5,000 or more employees. These companies use AI tools to screen resumes, match applicants to jobs, answer applicants’ questions and administer assessments.

But the widespread adoption of these tools has led to concerns from regulators and legislators about possible discrimination and bias. Stories about bias in AI employment tools have circulated for years, including the Amazon recruiting engine that was scrapped in 2018 because it “did not like women,” or the 2021 study that found AI-enabled anti-Black bias in recruiting. 

That led the New York City Council to vote 38-4 in November 2021 to pass the bill that ultimately became the Automated Employment Decision Tool law. The law targets “any computational process derived from machine learning, statistical modeling, data analytics or artificial intelligence; that issues simplified output, including a score, classification or recommendation; and that substantially assists employment decisions being made by humans.”

The proposed rules released in September clarified some ambiguities, said Gesser. “They narrowed the scope of what constitutes AI,” he explained. “[The AI] has to substantially assist or replace the discretionary decision-making. If it’s one thing out of many that get consulted, that’s probably not enough. It has to drive the decision.” 

The rules also limited the law’s application to complex models. “So to the extent that it’s just a simple algorithm that considers some factors, unless it turns it into like a score or does like some complicated analysis, it doesn’t count,” he said.
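
To make that scope distinction concrete, here is a purely illustrative sketch in Python; the function names, criteria and weights are hypothetical, not drawn from the law or the proposed rules. A fixed eligibility rule issues no score, while a model that collapses a candidate into a single number looks much more like the “simplified output” the law describes.

```python
# Hypothetical illustration of the AEDT scope distinction.

# A transparent, fixed rule like this likely falls outside the law:
# it applies stated criteria and produces no score or recommendation.
def meets_minimum_requirements(years_experience: float, has_degree: bool) -> bool:
    return years_experience >= 3 and has_degree

# A model that reduces a candidate to a "simplified output" (a score,
# classification or recommendation) that drives the hiring decision
# is much closer to what the law covers.
def candidate_score(features: dict[str, float], weights: dict[str, float]) -> float:
    # In practice the weights would come from a machine-learning model
    # trained on past hiring data; these are invented for illustration.
    return sum(weights[name] * features.get(name, 0.0) for name in weights)
```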

Bias audits are complex

The new law requires employers to conduct independent “bias audits” of automated employment decision tools, which include assessing their impact on gender, ethnicity and race. But auditing AI tools for bias is no easy task, requiring complex analysis and access to a great deal of data, Gesser explained.
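
The proposed rules frame this analysis around selection rates and “impact ratios,” comparing each demographic category’s selection rate to that of the most-selected category. As a minimal sketch, assuming a simple selection-type tool and entirely invented counts, the core arithmetic looks roughly like this:

```python
# Minimal impact-ratio sketch for a selection-type tool, along the lines
# of the proposed rules: each category's selection rate divided by the
# selection rate of the most-selected category. All counts are invented.
applicants = {"category_a": 200, "category_b": 150, "category_c": 120}
selected = {"category_a": 60, "category_b": 30, "category_c": 18}

selection_rates = {cat: selected[cat] / applicants[cat] for cat in applicants}
highest_rate = max(selection_rates.values())
impact_ratios = {cat: rate / highest_rate for cat, rate in selection_rates.items()}

for cat in sorted(impact_ratios):
    print(f"{cat}: selection rate {selection_rates[cat]:.2f}, "
          f"impact ratio {impact_ratios[cat]:.2f}")
```

Under the EEOC’s long-standing four-fifths rule of thumb, an impact ratio below 0.8 is often treated as a preliminary signal of adverse impact. A real audit involves far more data, intersectional categories and statistical care than this toy calculation suggests.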

In addition, employers may not have access to the tool that would allow them to run the audit, he pointed out, and it’s unclear whether an employer can rely on a developer’s third-party audit. A separate problem is that a lot of companies don’t have a complete set of this kind of data, which is often provided by candidates on a voluntary basis.

This data may also paint a misleading picture of the company’s racial, ethnic and gender diversity, he explained. For example, if gender options are restricted to male and female, candidates who identify as transgender or gender non-conforming have no option that represents them.

More guidance to come

“I anticipate there’s going to be more guidance,” said Gesser. “It’s possible there may be an extension of the implementation period or a delay in the enforcement period.”

Some companies will do the audit themselves, to the extent that they can, or rely on the audit the vendors did. “But it’s not clear to me what compliance is supposed to look like and what is sufficient,” Gesser explained.

This is not unusual for AI regulation, he pointed out. “It’s so new, there’s not a lot of precedent to go off of,” he said. In addition, AI regulation in hiring is “very tricky,” unlike AI in lending, for example, which has a finite number of acceptable criteria and a long history of using models.

“With hiring, every job is different. Every candidate is different,” he said. “It’s just a much more complicated exercise to sort out what is biased.”

Gesser added that “you don’t want the perfect to be the enemy of the good.” That is, some AI employment tools are actually meant to reduce bias, and they can reach a larger pool of applicants than human review alone could.

“But at the same time, regulators say there is a risk that these tools could be used improperly, either intentionally or unintentionally,” he said. “So we want to make sure that people are being responsible.”

What this means for larger AI regulation

The New York City law arrives at a moment when larger AI regulation is being developed in the European Union, while a variety of state-based AI-related bills have been passed in the U.S.

The development of AI regulation is often a debate between a “risk-based regulatory regime” and a “rights-based regulatory regime,” said Gesser. The New York law is “essentially a rights-based regime — everybody who uses the tool is subject to the exact same audit requirement,” he explained. The EU AI Act, on the other hand, is attempting to put together a risk-based regime to address the highest-risk uses of artificial intelligence.

In that case, “it’s about recognizing that there are going to be some low-risk use cases that don’t require a heavy burden of regulation,” he said.

Overall, AI regulation will probably follow the route of privacy regulation, Gesser predicted: a comprehensive European law comes into effect and slowly trickles down into various state and sector-specific laws. “U.S. companies will complain that there’s this patchwork of laws and that it’s too bifurcated,” he said. “There will be a lot of pressure on Congress to make a comprehensive AI law.”

No matter what AI regulation is coming down the pike, Gesser recommends beginning with an internal governance and compliance program.

“Whether it’s the New York law or EU law or some other, AI regulation is coming and it’s going to be really messy,” he said. “Every company has to go through its own journey towards what works for them — to balance the upside of the value of AI against the regulatory and reputational risks that come with it.”



Author: Sharon Goldman
Source: VentureBeat
