AI Weekly: AI tools for hiring under scrutiny; Clearview AI settlement reaction

It was a week filled with AI news from Google’s annual I/O developer conference and IBM’s annual THINK conference. But there were also big announcements from the Biden administration around the use of AI tools in hiring and employment, and it was hard to turn away from coverage of Clearview AI’s settlement of a lawsuit brought by the ACLU in 2020.

Let’s dive in.

Could businesses that use AI tools for hiring violate the ADA? 

Last week, I published a feature story, “5 ways to address regulations around AI-enabled hiring and employment,” which jumped off the news that last November the New York City Council passed the first bill in the U.S. to broadly address the use of AI in hiring and employment.

In addition, last month California introduced the Workplace Technology Accountability Act, or Assembly Bill 1651. The bill proposes that employees be notified before data is collected, monitoring tools are used or algorithms are deployed, and that they be given the right to review and correct the collected data.

This week, that story got a big follow-up: On Thursday, the Biden administration announced that “employers who use algorithms and artificial intelligence to make hiring decisions risk violating the Americans with Disabilities Act if applicants with disabilities are disadvantaged in the process.”

As reported by NBC News, Kristen Clarke, the assistant attorney general for civil rights at the Department of Justice, which made the announcement jointly with the Equal Employment Opportunity Commission, said there is “no doubt” that the increased use of these technologies is “fueling some of the persistent discrimination.”

What does Clearview AI’s settlement with the ACLU mean for enterprises? 

On Monday, facial recognition company Clearview AI, which made headlines for selling access to billions of facial photos, settled a lawsuit filed in Illinois two years ago by the American Civil Liberties Union (ACLU) and several other nonprofits. The company was accused of violating an Illinois state law, the Biometric Information Privacy Act (BIPA). Under the terms of the settlement, Clearview AI agreed to permanently ban most private companies from using its service.

But many experts pointed out that Clearview has little to worry about from the settlement, since Illinois is one of just a few states with such a biometric privacy law.

“It’s largely symbolic,” said Slater Victoroff, founder and CTO of Indico Data. “Clearview is very strongly connected from a political perspective and thus their business will, unfortunately, do better than ever since this decision is limited.”

Still, he added, his reaction to the Clearview AI news was “relief.” The U.S. has been, and continues to be, in a “tenuous and unsustainable place” on consumer privacy, he said. “Our laws are a messy patchwork that will not stand up to modern AI applications, and I am happy to see some progress toward certainty, even if it’s a small step. I would really like to see the U.S. enshrine effective privacy into law following the recent lessons from GDPR in the EU, rather than continuing to pass the buck.”

AI regulation in the U.S. is the ‘Wild West’

When it comes to AI regulation, the U.S. is actually the “Wild West,” Seth Siegel, global head of AI and cybersecurity at Infosys Consulting, told VentureBeat. The bigger question now, he said, is how the U.S. will handle companies that gather information in violation of the terms of service of sites where that data is plainly visible. “Then you have the question of the definition of publicly available – what does that mean?” he added.

But for enterprise businesses, the biggest current issue is reputational risk, he explained: “If their customers found out about the data they’re using, would they still be a trusted brand?”

AI vendors should tread carefully

Paresh Chiney, partner at global advisory firm StoneTurn, said the settlement is also a warning sign for enterprise AI vendors, who need to “tread carefully” – especially if their products and solutions are at risk of violating laws and regulations governing data privacy.

And Anat Kahana Hurwitz, head of legal data at justice intelligence platform Darrow.ai, pointed out that any AI vendor that uses biometric data could be affected by the Clearview AI settlement and should comply with BIPA, which passed in 2008, “when the AI landscape was completely different.” The act, she explained, defines biometric identifiers as “retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.”

“This is legislative language, not scientific language – the scientific community does not use the term ‘face geometry,’ and it is therefore subject to the court’s interpretation,” she said.

Author: Sharon Goldman
Source: VentureBeat
