AI researchers propose ‘bias bounties’ to put ethics principles into practice

Researchers from Google Brain, Intel, OpenAI, and top research labs in the U.S. and Europe joined forces this week to release what the group calls a toolbox for turning AI ethics principles into practice. The kit for organizations creating AI models includes the idea of paying developers for finding bias in AI, akin to the bug bounties security companies pay hackers for uncovering vulnerabilities in their software.

This recommendation and other ideas for ensuring AI is made with public trust and societal well-being in mind were detailed in a preprint paper published this week. The bug bounty hunting community might be too small to create strong assurances, but developers could still unearth more bias than is revealed by measures in place today, the authors say.

“Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties,” the paper reads. “We focus here on bounties for discovering bias and safety issues in AI systems as a starting point for analysis and experimentation but note that bounties for other properties (such as security, privacy protection, or interpretability) could also be explored.”
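
The paper doesn’t spell out what a bounty submission would look like, but the simplest finding a bias hunter might report is a performance gap across demographic groups. Below is a minimal sketch in Python, with a hypothetical model and evaluation records standing in for a real system under audit:

```python
# A disparity check of the kind a bias bounty hunter might submit.
# The model, records, and "group" field are hypothetical stand-ins.
from collections import defaultdict

def predict(features):
    # Placeholder for the model under audit.
    return int(sum(features) > 1.0)

records = [
    {"features": [0.9, 0.4], "label": 1, "group": "A"},
    {"features": [0.2, 0.1], "label": 0, "group": "A"},
    {"features": [0.8, 0.5], "label": 1, "group": "B"},
    {"features": [0.7, 0.1], "label": 1, "group": "B"},
]

correct = defaultdict(int)
total = defaultdict(int)
for r in records:
    total[r["group"]] += 1
    correct[r["group"]] += int(predict(r["features"]) == r["label"])

accuracy = {g: correct[g] / total[g] for g in total}
gap = max(accuracy.values()) - min(accuracy.values())
print(accuracy)
print(f"accuracy gap across groups: {gap:.2f}")  # the gap is the reported "bug"
```

In a real program, the protected groups, the metric, and the payout criteria would all be defined by the organization running the bounty.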

The authors of the paper published Wednesday, titled “Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims,” also recommend “red-teaming” exercises to find flaws or vulnerabilities, as well as connecting independent third-party auditing with government policy to create a regulatory market, among other techniques.

The idea of bias bounties for AI was first suggested in 2018 by coauthor JB Rubinovitz. Bug bounties themselves are a mature practice: Google alone says it has paid $21 million to security bug finders, and bug bounty platforms like HackerOne and Bugcrowd have raised funding rounds in recent months.

Former DARPA director Regina Dugan has also advocated red-teaming exercises to address ethical challenges in AI systems, and a team led primarily by prominent Google AI ethics researchers has released a framework that organizations can use internally to close what they deem an ethics accountability gap.

The paper shared this week includes 10 recommendations for how to turn AI ethics principles into practice. In recent years, more than 80 organizations — including OpenAI, Google, and even the U.S. military — have drafted AI ethics principles, but the authors assert that such principles are “only a first step to [ensuring] beneficial societal outcomes from AI” and that “existing regulations and norms in industry and academia are insufficient to ensure responsible AI development.”

They also make a number of recommendations:

  • Share AI incidents as a community and perhaps create centralized incident databases
  • Establish audit trails that capture information during the development and deployment of safety-critical AI systems (a minimal sketch of such a trail follows this list)
  • Provide open source alternatives to commercial AI systems and increase scrutiny of commercial models
  • Increase government funding for researchers in academia to verify hardware performance claims
  • Support privacy-centric machine learning techniques developed in recent years, such as federated learning, differential privacy, and encrypted computation (a differential privacy sketch also follows below)
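
The audit trail recommendation is easiest to picture as an append-only log. Here is a minimal sketch, assuming hypothetical event fields and using hash chaining so that tampering with an earlier record invalidates every later one; the paper calls for audit trails without prescribing any particular format:

```python
# An append-only audit trail with hash chaining: altering any earlier
# record breaks every later hash. Event fields are hypothetical.
import hashlib
import json
import time

def append_event(trail, event):
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    record = {"time": time.time(), "event": event, "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(record)

trail = []
append_event(trail, {"step": "dataset_frozen", "sha256": "d41d8cd9"})
append_event(trail, {"step": "training_run", "config": "model_v2.yaml"})
append_event(trail, {"step": "deployment", "endpoint": "production"})
print(json.dumps(trail, indent=2))
```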

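Of the privacy techniques named in the final item, differential privacy is the simplest to illustrate: calibrated noise added to an aggregate query limits how much any single person’s record can influence the answer. A minimal sketch, with illustrative data and an epsilon value not drawn from the paper:

```python
# The Laplace mechanism on a count query; data and epsilon are illustrative.
import numpy as np

def private_count(values, predicate, epsilon):
    # A count query has sensitivity 1: adding or removing one record changes
    # it by at most 1, so Laplace noise with scale 1/epsilon yields
    # epsilon-differential privacy.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 29, 41, 52, 38, 27, 45]
# Smaller epsilon means stronger privacy and a noisier answer.
print(private_count(ages, lambda a: a > 40, epsilon=0.5))
```
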
The paper is the culmination of ideas proposed in a workshop held in April 2019 in San Francisco that included about 35 representatives from academia, industry labs, and civil society organizations. The recommendations were made to address what the authors call a gap in effective assessment of claims made by AI practitioners and provide paths to “verifying AI developers’ commitments to responsible AI development.”

As AI continues to proliferate throughout business, government, and society, the authors say there’s also been a rise in concern, research, and activism around AI, particularly related to issues like bias amplification, ethics washing, loss of privacy, digital addictions, facial recognition misuse, disinformation, and job loss.

AI systems have been found to reinforce existing race and gender bias, resulting in issues like facial recognition bias in police work and inferior health care for millions of African Americans. In one recent example, the U.S. Department of Justice drew criticism for using the PATTERN risk assessment tool, which is known for racial bias, to decide which prisoners to send home early in order to reduce prison populations amid COVID-19 concerns.

The authors argue there’s a need to move beyond nonbinding principles that fail to hold developers accountable. Google Brain cofounder Andrew Ng described this very problem at NeurIPS last year: speaking on a panel in December, he said that when he read an OECD ethics principle to engineers he works with, they responded that the language wouldn’t change how they do their jobs.

“With rapid technical progress in artificial intelligence (AI) and the spread of AI-based applications over the past several years, there is growing concern about how to ensure that the development and deployment of AI is beneficial — and not detrimental — to humanity,” the paper reads. “Artificial intelligence has the potential to transform society in ways both beneficial and harmful. Beneficial applications are more likely to be realized, and risks more likely to be avoided, if AI developers earn rather than assume the trust of society and of one another. This report has fleshed out one way of earning such trust, namely the making and assessment of verifiable claims about AI development through a variety of mechanisms.”

In other recent AI ethics news, in February the IEEE Standards Association, an arm of one of the world’s largest professional organizations for engineers, released a white paper calling for a shift toward “Earth-friendly AI,” the protection of children online, and the exploration of new metrics for the measurement of societal well-being.


Author: Khari Johnson
Source: VentureBeat
