
Podcast: Solving AI’s black box problem

What happens when you don't know why a smart system made a specific decision? AI's infamous black box problem affects everyone, whether you're an engineer debugging a system or a person wondering why a facial-recognition unlock system doesn't perform as accurately on you as it does on others.

In this episode of The AI Show, we talk about engineering knowability into smart systems. Our guest, Nell Watson, chairs the Ethics Certification Program for AI systems for the IEEE Standards Association. She's also the vice-chair of the Transparency of Autonomous Systems working group. She's on the AI faculty at Singularity University, she's an X-Prize judge, and she's the founder of AI startup QuantaCorp.

Listen to the podcast here:

And subscribe on your favorite podcasting platform:


Author: John Koetsier.
Source: Venturebeat

