
Arize introduces AI bias tracing to pinpoint and address causes



Arize, a maker of artificial intelligence (AI) observability tools, has introduced Bias Tracing, a new tool for identifying the root cause of bias in machine learning (ML) pipelines. This can help teams prioritize adjustments and address issues in either the data or the algorithm itself.

Enterprises have long used observability and distributed tracing to improve application performance, troubleshoot bugs and identify security vulnerabilities. Arize is part of a small cadre of companies adapting these techniques to enhance AI monitoring.

Observability analyzes data logs to monitor complex infrastructure at scale. Tracing reassembles a digital twin representing an application's logic and data flow. The new Bias Tracing applies similar techniques to create a map of AI processing flows spanning data sources, feature engineering, training and deployment. When bias is detected, this map can help data managers, scientists and engineers identify and rectify the root cause of the problem.

“This type of analysis is incredibly powerful in areas like healthcare or finance given the real-world implications in terms of health outcomes or lending decisions,” said Aparna Dhinakaran, Arize cofounder and chief product officer.

Getting to the root of AI bias

Arize’s AI observability platform already includes tools for monitoring AI performance and characterizing model drift. The new Bias Tracing capabilities can automatically surface which model inputs and slices contribute the most to bias encountered in production and identify their root cause.

Dhinakaran said the Bias Tracing launch is related to Judea Pearl’s groundbreaking work on causal AI, which is at the cutting edge of both explainable AI and AI fairness. Pearl’s causal AI work focuses on teaching machines to learn cause and effect, not just statistical correlations. For example, instead of merely correlating a protected attribute with outcomes, the machine needs the ability to reason about whether a protected attribute is the cause of an unfavorable outcome.

Going deeper

One example of a fairness metric Arize uses is recall parity. Recall parity compares the model’s sensitivity, meaning its ability to correctly identify true positives, for a specific group against that of a base group.

For instance, a regional healthcare provider might be interested in ensuring that their models predict healthcare needs equally between Latinx (the ‘sensitive’ group) and Caucasian (the base group) patients. If recall parity falls outside the 0.8 to 1.25 threshold (known as the four-fifths rule), it may indicate that Latinx patients are not receiving the same level of needed follow-up care as Caucasian patients, leading to different levels of future hospitalization and health outcomes.
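To make the metric concrete, here is a minimal Python sketch of a recall parity check against the four-fifths rule. The data is toy data and the function names are illustrative assumptions, not part of Arize's product or API:

```python
import numpy as np

def recall(y_true, y_pred):
    """True positive rate: the fraction of actual positives the model catches."""
    actual_pos = y_true == 1
    return (y_pred[actual_pos] == 1).mean()

def recall_parity(y_true, y_pred, group, sensitive, base):
    """Ratio of the sensitive group's recall to the base group's recall."""
    sens, bas = group == sensitive, group == base
    return recall(y_true[sens], y_pred[sens]) / recall(y_true[bas], y_pred[bas])

# Toy production data: one label, prediction, and group per patient.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1])
group  = np.array(["latinx", "latinx", "latinx", "latinx",
                   "caucasian", "caucasian", "caucasian", "caucasian"])

parity = recall_parity(y_true, y_pred, group, "latinx", "caucasian")
# Four-fifths rule: flag the model if parity falls outside [0.8, 1.25].
print(f"recall parity = {parity:.2f}, within threshold = {0.8 <= parity <= 1.25}")
```

In this toy run the sensitive group's recall (2/3) divided by the base group's recall (3/3) gives a parity of about 0.67, which falls below the 0.8 floor and would be flagged.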

“Distributing healthcare in a representative way is especially important when an algorithm determines an assistive treatment intervention that is only available to a small fraction of patients,” Dhinakaran said.

Arize helps the company identify that there is a problem overall, then click a level deeper to see where disparate impact is most pronounced. For example, this might be Latinx women, Latinx patients aged 50 or older, or Latinx patients in particular states. By surfacing the cohorts where model unfairness is potentially highest, ML teams know how to resolve the issue by adjusting or retraining the model accordingly.
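As a rough illustration of that cohort-level drill-down, the hypothetical Python sketch below slices the sensitive group by a second attribute and ranks the slices by recall parity so the worst cohorts surface first. Again, this only sketches the general idea under assumed column names; it is not Arize's implementation:

```python
import pandas as pd

# Toy production log: one row per prediction, with label and attributes.
df = pd.DataFrame({
    "y_true": [1, 1, 1, 0, 1, 1, 1, 0, 1, 1],
    "y_pred": [0, 1, 0, 0, 1, 1, 1, 0, 1, 1],
    "group":  ["latinx"] * 5 + ["caucasian"] * 5,
    "sex":    ["f", "f", "f", "m", "m", "f", "m", "f", "m", "m"],
})

def recall(frame):
    """Recall over a slice of the log; NaN if the slice has no positives."""
    pos = frame[frame["y_true"] == 1]
    return (pos["y_pred"] == 1).mean() if len(pos) else float("nan")

base_recall = recall(df[df["group"] == "caucasian"])

# Slice the sensitive group by a second attribute and rank slices by parity,
# so the worst-performing cohorts (e.g., Latinx women) surface first.
sensitive = df[df["group"] == "latinx"]
parities = {val: recall(slice_) / base_recall
            for val, slice_ in sensitive.groupby("sex")}
for cohort, parity in sorted(parities.items(), key=lambda kv: kv[1]):
    print(f"latinx / sex={cohort}: recall parity = {parity:.2f}")
```

Ranking slices this way is the manual analog of what automated root-cause tooling does at scale: rather than one aggregate fairness number, it points teams at the specific cohorts driving the disparity.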

Arize Bias Tracing is currently built to work with classification models and will expand to other use cases over time.

Other companies working on AI observability include WhyLabs, Censius and DataRobot. Companies working on tools for improving AI explainability include Fiddler and SAS.



Author: George Lawton
Source: VentureBeat
