
Uber open-sources Manifold, a visual tool for debugging AI models

Debugging machine learning (ML) models is no walk in the park. Just ask the data scientists and engineers at Uber, some of whom have the unenviable task of digging into models to diagnose the causes of their performance issues.

To lighten the workload, Uber internally developed Manifold, a model-agnostic visual debugging tool that surfaces differences in the distributions of features (i.e., the measurable properties of the phenomena being observed) across data segments. It’s part of the ride-hailing company’s Michelangelo ML platform, where it has helped various product teams analyze countless AI models. And as of today, it’s available in open source on GitHub.

“Since highlighting [Manifold] … earlier this year, we have received a lot of feedback from the community regarding its potential in general purpose ML model debugging scenarios,” wrote Uber machine learning software engineer Lezhi Li in a blog post. “In open-sourcing the standalone version of Manifold, we believe the tool will likewise benefit the ML community by providing interpretability and debuggability for ML workflows.”

[Image: Uber Manifold]

Manifold uses a clustering algorithm (k-means) to break prediction data into segments based on their performance similarity. It then ranks features by their KL divergence, a measure of the difference between two contrasting distributions. Generally speaking, in Manifold, higher divergence indicates that a given feature correlates with a factor that differentiates two segment groups.
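Uber’s announcement doesn’t include this pipeline as code, but the idea is straightforward to sketch. The following illustration (not Manifold’s actual source) clusters instances on per-instance loss with scikit-learn’s k-means, splits the segments into a better- and a worse-performing group, and ranks features by the KL divergence between the groups’ feature histograms; all data and names are hypothetical.

```python
# Illustrative sketch of Manifold's segmentation idea (not its source code).
import numpy as np
from sklearn.cluster import KMeans
from scipy.stats import entropy  # entropy(p, q) computes KL divergence

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 5))    # hypothetical feature matrix
loss = rng.gamma(2.0, size=1000)         # hypothetical per-instance loss

# 1. Cluster instances on loss so each segment has similar performance.
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    loss.reshape(-1, 1))

# 2. Split segments into a low-loss group and a high-loss group.
order = np.argsort([loss[segments == s].mean() for s in range(4)])
good = np.isin(segments, order[:2])
bad = np.isin(segments, order[2:])

# 3. Rank features by KL divergence between the two groups' distributions.
def kl_between_groups(col):
    bins = np.histogram_bin_edges(col, bins=20)   # shared bins for both groups
    p, _ = np.histogram(col[bad], bins=bins, density=True)
    q, _ = np.histogram(col[good], bins=bins, density=True)
    eps = 1e-9                                    # avoid zero-probability bins
    return entropy(p + eps, q + eps)

scores = [kl_between_groups(features[:, j]) for j in range(features.shape[1])]
print(sorted(enumerate(scores), key=lambda t: -t[1]))  # most divergent first
```

Features at the top of this ranking are the most promising starting points when investigating why one segment group performs worse than the other.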

Manifold supports a range of algorithm types, including general binary classification and regression models. On the visualization side, it can ingest numerical, categorical, and geospatial feature types. It integrates with Jupyter Notebook, one of the tools most widely adopted by data scientists and ML engineers, and it offers interactive data slicing and performance comparisons based on per-instance prediction loss and feature values.
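At release, the project published its notebook bindings as the mlvis Python package. The sketch below follows the payload shape shown in the project’s README at the time (features in x, one prediction DataFrame per model in yPred, ground truth in yTrue); the toy data is invented, so treat the example as indicative rather than authoritative.

```python
# Minimal notebook sketch using Manifold's mlvis bindings; toy data only.
import pandas as pd
from mlvis import Manifold

x = pd.DataFrame({'trip_distance': [1.2, 5.4, 0.8],     # feature columns
                  'hour_of_day': [9, 17, 23]})
# One DataFrame of class probabilities per model being compared.
yPred = [pd.DataFrame({'0': [0.8, 0.3, 0.6], '1': [0.2, 0.7, 0.4]}),  # model A
         pd.DataFrame({'0': [0.6, 0.4, 0.9], '1': [0.4, 0.6, 0.1]})]  # model B
yTrue = pd.Series([0, 1, 0])                             # ground-truth labels

# Renders the interactive widget in the notebook cell output.
Manifold(props={'data': {'x': x, 'yPred': yPred, 'yTrue': yTrue}})
```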

Manifold’s handy Performance Comparison View compares prediction performance across models and data subsets, while its Feature Attribution View visualizes the feature distributions of data subsets with different performance levels, aggregated by user-defined segments. Alongside a map view for geospatial features, the two views provide an overview of model performance and help identify underperforming data subsets for further inspection.

Manifold comes in both standalone and packaged installations. Once it’s installed, there are two ways to feed it data: a comma-separated values (CSV) file or programmatic conversion.
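The CSV route expects a flat table of per-instance records; the exact column conventions live in the repo’s documentation, so the layout below is only an illustration with hypothetical column names.

```python
# Hypothetical sketch of preparing a CSV for Manifold's file-upload path:
# one row per instance, with feature columns, each model's prediction, and
# the ground truth. Column names are illustrative, not Manifold's schema.
import pandas as pd

records = pd.DataFrame({
    'trip_distance': [1.2, 5.4, 0.8],   # feature columns
    'hour_of_day':   [9, 17, 23],
    'model_a_pred':  [0.2, 0.7, 0.4],   # per-model predicted probability
    'model_b_pred':  [0.4, 0.6, 0.1],
    'ground_truth':  [0, 1, 0],
})
records.to_csv('manifold_input.csv', index=False)  # then upload via the UI
```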

Uber has been on something of an open source tear. The release of Manifold comes after the unveiling of Plato, a platform for building, training, and deploying conversational AI agents. Early last year, the company debuted Ludwig, an open source toolbox built on top of Google’s TensorFlow framework that lets users train and test AI models without having to write code. And in February 2019, it launched the Autonomous Visualization System (AVS), a standalone web-based technology for understanding and sharing autonomous systems data.


Author: Kyle Wiggers
Source: VentureBeat

