AI & Robotics News

Podcast: Solving AI’s black box problem

What happens when you don’t know why a smart system made a specific decision? AI’s infamous black box problem is a big one, whether you’re an engineer debugging a system or a person wondering why a facial-recognition unlock system doesn’t perform as accurately on you as it does on others.

In this episode of The AI Show, we talk about engineering knowability into smart systems. Our guest, Nell Watson, chairs the Ethics Certification Program for AI Systems for the IEEE Standards Association. She’s also the vice-chair of the Transparency of Autonomous Systems working group. She’s on the AI faculty at Singularity University, she’s an XPRIZE judge, and she’s the founder of AI startup QuantaCorp.

Listen to the podcast here:

And subscribe on your favorite podcasting platform:


Author: John Koetsier
Source: VentureBeat
