
Salesforce’s Simulation Cards spell out uses, risks, and bias to make AI models more transparent

Salesforce recently open-sourced Foundation (formerly AI Economist), an AI research simulation for exploring tax policies. To accompany its release, the company this week published what it’s calling a Simulation Card, a file to document the use, risks, and sources of bias in published versions of the simulation.

Simulation Cards join ongoing efforts to bring transparency to historically black-box systems. Over the past year, Google launched Model Cards, which sprang from a Google AI whitepaper published in October 2018. Model Cards specify model architectures and provide insight into factors that help ensure optimal performance for given use cases. The idea emerged following Microsoft's work on "datasheets for datasets," documents intended to foster trust and accountability by recording datasets' creation, composition, intended uses, maintenance, and other properties. Two years ago, IBM proposed its own form of model documentation: voluntary factsheets called "Supplier's Declaration of Conformity" (DoC), to be completed and published by companies that develop and provide AI.

The objective of Simulation Cards is similar to that of Model Cards and datasheets. However, Simulation Cards reflect the fact that simulations differ from trained models and datasets because they're designed to create scenarios of interest, according to Salesforce. These scenarios can contain bias, whether built in deliberately or introduced as an unexpected side effect of design choices made during creation. And because a simulation can generate many datasets of various shapes and sizes, the potential for misuse is greater than with a single fixed dataset that might contain bias.

Above: The Simulation Card for Foundation. Image Credit: Salesforce

The Simulation Card for Foundation is divided into several sections: Simulation Details, Basic Information, Intended Use, Factors, Metrics, Quantitative Analyses, Ethical Considerations, and Caveats and Recommendations. Simulation Details lists the date of the simulation and the name of the publishing organization, as well as any keywords, licenses, contact information, and relevant version numbers. The Basic Information and Intended Use sections cover top-level information about the simulation and the applications its coauthors had in mind. Factors covers the modeling assumptions the simulation makes, while Metrics and Quantitative Analyses outline how the results are measured. Finally, Ethical Considerations and Caveats and Recommendations provide guidelines for (or warnings against) applying the outputs to real-world systems.
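To make that structure concrete, here is a minimal, hypothetical sketch in Python of how a Simulation Card's sections could be captured and rendered as plain text. The field names, the SimulationCard class, and the example values are illustrative assumptions for this article, not Salesforce's published schema or tooling.

```python
# Hypothetical sketch: field names, class, and example values are illustrative
# only, not Salesforce's actual Simulation Card schema.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SimulationCard:
    """One field per Simulation Card section."""
    simulation_details: str      # date, publishing org, keywords, license, contacts, versions
    basic_information: str       # top-level description of the simulation
    intended_use: str            # applications the authors had in mind
    factors: str                 # modeling assumptions the simulation makes
    metrics: str                 # how results are measured
    quantitative_analyses: str   # summary of measured outcomes
    ethical_considerations: str  # guidelines or warnings for real-world use
    caveats_and_recommendations: List[str] = field(default_factory=list)

    def to_text(self) -> str:
        """Render the card as a plain-text document, one heading per section."""
        lines = []
        for name, value in self.__dict__.items():
            heading = name.replace("_", " ").title()
            body = "\n".join(f"- {item}" for item in value) if isinstance(value, list) else value
            lines.append(f"{heading}\n{body}\n")
        return "\n".join(lines)


if __name__ == "__main__":
    card = SimulationCard(
        simulation_details="Published 2020 by Example Org; MIT license; v1.0",
        basic_information="Economic simulation for studying tax policies",
        intended_use="Research on policy design in simulated economies",
        factors="Agents are utility-maximizing; markets clear each step",
        metrics="Social welfare, equality, productivity",
        quantitative_analyses="Baseline vs. learned policy comparisons",
        ethical_considerations="Outputs should not be applied directly to real policy",
        caveats_and_recommendations=["Simulated agents do not capture real human behavior"],
    )
    print(card.to_text())
```

In practice, the card would live as a document alongside the simulation's code release; the sketch above simply shows how its section-by-section structure could be kept machine-readable and rendered consistently.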

It remains to be seen what sort of third-party adoption Simulation Cards might gain, if any, but Salesforce itself says it’s committed to releasing cards alongside future simulations. “We encourage researchers and developers to publish similar Simulation Cards for software releases, to broadly promote transparency and the ethical use of simulation frameworks. AI simulations offer researchers the power to generate data and evaluate outcomes of virtual economies that capture a part of the real world,” Salesforce wrote in a blog post. “An unethical simulation poses an order-of-magnitude larger ethical risk. As a result, our commitment to transparency is all that much more critical.”


Author: Kyle Wiggers
Source: VentureBeat
