The uses of ethical AI in hiring: Opaque vs. transparent AI

There hasn’t been a revolution quite like this before, one that’s shaken the talent industry so dramatically over the past few years. The pandemic, the Great Resignation, inflation and now talk of looming recessions are changing talent strategies as we know them. 

Such significant changes, and the challenge of staying ahead of them, have brought artificial intelligence (AI) to the forefront of the minds of HR leaders and recruitment teams as they endeavor to streamline workflows and identify suitable talent to fill vacant positions faster. Yet many organizations are still implementing AI tools without proper evaluation of the technology or indeed understanding how it works — so they can’t be confident they are using it responsibly. 

What does it mean for AI to be “ethical”?

Much like any technology, there is an ongoing debate over the right and wrong uses of AI. While AI is not new to the ethics conversation, its increasing use in HR and talent management has unlocked a new level of discussion about what it actually means for AI to be ethical. At the core is the need for companies to understand the associated compliance and regulatory frameworks and to ensure they are working to support the business in meeting those standards.

Instilling governance and a flexible compliance framework around AI is becoming critically important for meeting regulatory requirements, especially across different geographies. With new laws being introduced, it has never been more important for companies to prioritize AI ethics alongside evolving compliance guidelines. When organizations understand how the technology’s algorithms work, they reduce the risk that models which are not correctly reviewed, audited and trained become discriminatory.


What is opaque AI?

Opaque, or black box, AI separates the technology’s algorithms from its users: there is no clear understanding of how the models are working or which data points they prioritize, so monitoring and auditing the AI becomes impossible and a company is exposed to the risk of running models with unconscious bias. There is a way to avoid this pattern and implement a system where AI remains subject to human oversight and evaluation: transparent, or white box, AI.

Ethical AI: Opening the white box

The answer to using AI ethically is “explainable AI,” or the white box model. Explainable AI effectively turns the black box model inside out — encouraging transparency around the use of AI so everyone can see how it works and, importantly, understand how conclusions were made. This approach enables organizations to report confidently on the data, as users have an understanding of the technology’s processes and can also audit them to make sure the AI remains unbiased.
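To make the idea concrete, here is a minimal sketch of what a white-box candidate scorer might look like. The feature names and weights are purely hypothetical, chosen only to illustrate the point that every contribution to the final score can be inspected and audited; no vendor’s actual model is implied.

```python
# A minimal sketch of a "white box" candidate scorer. Every feature's
# contribution to the final score is exposed, so a reviewer can see exactly
# why a candidate ranked where they did. Feature names and weights are
# illustrative assumptions, not any real vendor's model.

FEATURE_WEIGHTS = {
    "years_relevant_experience": 0.4,
    "matched_skills": 0.5,
    "certifications": 0.1,
}

def score_candidate(features):
    """Return the overall score and a per-feature breakdown for auditing."""
    contributions = {
        name: weight * features.get(name, 0.0)
        for name, weight in FEATURE_WEIGHTS.items()
    }
    return sum(contributions.values()), contributions

total, breakdown = score_candidate(
    {"years_relevant_experience": 0.6, "matched_skills": 0.8, "certifications": 1.0}
)
print(f"score = {total:.2f}")
for feature, value in breakdown.items():
    print(f"  {feature}: {value:+.2f}")
```

Because the breakdown is returned alongside the score, a recruiter or auditor can see exactly which factors drove a recommendation rather than taking the number on faith.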

For example, recruiters who use an explainable AI approach will not only have a greater understanding of how the AI made a recommendation, but they also remain active in the process of reviewing and assessing the recommendation that was returned — known as “human in the loop.” Through this approach, a human operator is the one to oversee the decision, understand how and why it came to that conclusion, and audit the operation as a whole. 
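The human-in-the-loop pattern can be sketched in a few lines. The data structures and field names below are illustrative assumptions; the point is that the AI only produces a recommendation, while a named human reviewer records the final decision and every step lands in an audit log.

```python
# A sketch of "human in the loop": the AI only recommends, a named reviewer
# records the final decision, and every decision is appended to an audit log
# that can be examined later for bias. All structures are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    candidate_id: str
    score: float
    explanation: dict          # per-feature contributions from the white-box model

@dataclass
class ReviewDecision:
    recommendation: Recommendation
    reviewer: str
    accepted: bool
    rationale: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log = []

def review(rec, reviewer, accepted, rationale):
    """A human records the final call; the AI output is never auto-applied."""
    decision = ReviewDecision(rec, reviewer, accepted, rationale)
    audit_log.append(decision)      # retained so the whole operation can be audited
    return decision

rec = Recommendation("cand-042", 0.74, {"matched_skills": 0.40, "experience": 0.24})
review(rec, reviewer="j.doe", accepted=True, rationale="Skills confirmed in screening call")
print(len(audit_log), "decision(s) on record")
```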

This way of working with AI also affects how a potential employee profile is identified. With opaque AI, recruiters might simply search for a particular level of experience or a specific job title, and the AI could return a suggestion it treats as the only accurate, or available, option. In reality, such candidate searches benefit when the AI can also identify parallel skill sets and other relevant, complementary experience or roles. Without that flexibility, recruiters are only scratching the surface of the available talent pool and may well be inadvertently discriminating against other candidates.
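For illustration, here is a hypothetical sketch of widening a search with parallel roles rather than an exact job title. The mapping is invented for the example; in practice such relationships would come from an organization’s own skills taxonomy or model.

```python
# A sketch of widening a candidate search with parallel roles instead of
# matching only an exact job title. The mapping below is invented for the
# example; a real system would draw it from a skills taxonomy or model.

PARALLEL_ROLES = {
    "data analyst": {"business analyst", "bi developer", "data scientist"},
    "recruiter": {"talent sourcer", "talent acquisition partner"},
}

def expand_search_terms(job_title):
    """Return the original title plus adjacent roles to widen the talent pool."""
    title = job_title.lower()
    return {title} | PARALLEL_ROLES.get(title, set())

candidates = [
    {"name": "A", "title": "business analyst"},
    {"name": "B", "title": "data analyst"},
    {"name": "C", "title": "graphic designer"},
]

terms = expand_search_terms("Data Analyst")
matches = [c["name"] for c in candidates if c["title"] in terms]
print(matches)  # ['A', 'B'] rather than only the exact-title match
```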

Conclusion

All AI comes with a level of responsibility: users must be aware of its ethical implications, promote transparency and ultimately understand every level of its use. Explainable AI is a powerful tool for streamlining talent management processes and making recruitment and retention strategies more effective, but encouraging open conversations around AI is the most critical step in truly unlocking an ethical approach to its use.

Abakar Saidov is CEO and cofounder of Beamery.

Author: Abakar Saidov, Beamery
Source: VentureBeat
