
White House touts new AI safety consortium: Over 200 leading firms to test and evaluate models


One day after appointing a top White House aide as director of the new US AI Safety Institute (USAISI) at the National Institute of Standards and Technology (NIST), the Biden Administration announced the creation of the US AI Safety Institute Consortium (AISIC), which it called “the first-ever consortium dedicated to AI safety.”

The coalition includes more than 200 member companies and organizations, ranging from Big Tech firms such as Google, Microsoft and Amazon and top LLM companies like OpenAI, Cohere and Anthropic to research labs, civil society groups, academic teams, state and local governments and nonprofits.

The AISIC, functioning under the USAISI and supported by the White House, aims to establish a new measurement science in AI safety, contributing to the priorities of President Biden's Executive Order.

The Consortium was announced as part of the AI Executive Order

The consortium’s development was announced on October 31, 2023, as part of President Biden’s AI Executive Order. The NIST website explained that “participation in the consortium is open to all interested organizations that can contribute their expertise, products, data, and/or models to the activities of the Consortium.”

Participants that were selected (and are required to pay a $1,000 annual fee) entered into a Consortium Cooperative Research and Development Agreement (CRADA) with NIST.

According to NIST, consortium members will contribute to one or more of the following efforts:

  1. Develop new guidelines, tools, methods, protocols and best practices to facilitate the evolution of industry standards for developing or deploying AI in safe, secure, and trustworthy ways
  2. Develop guidance and benchmarks for identifying and evaluating AI capabilities, with a focus on capabilities that could potentially cause harm
  3. Develop approaches to incorporate secure-development practices for generative AI, including special considerations for dual-use foundation models, including:
    • Guidance related to assessing and managing the safety, security, and trustworthiness of models and related to privacy-preserving machine learning;
    • Guidance to ensure the availability of testing environments
  4. Develop and ensure the availability of testing environments
  5. Develop guidance, methods, skills and practices for successful red-teaming and privacy-preserving machine learning
  6. Develop guidance and tools for authenticating digital content
  7. Develop guidance and criteria for AI workforce skills, including risk identification and management, test, evaluation, validation, and verification (TEVV), and domain-specific expertise
  8. Explore the complexities at the intersection of society and technology, including the science of how humans make sense of and engage with AI in different contexts
  9. Develop guidance for understanding and managing the interdependencies between and among AI actors along the lifecycle

Source of NIST funding for AI safety is unclear

As VentureBeat reported yesterday, since the White House announced the development of the AI Safety Institute and accompanying consortium in November, few details have been disclosed about how the institute would work and where its funding would come from, especially since NIST itself, which reportedly has a staff of about 3,400 and an annual budget of just over $1.6 billion, is known to be underfunded.

A bipartisan group of senators asked the Senate Appropriations Committee in January for $10 million in funding to help establish the USAISI within NIST as part of the fiscal 2024 funding legislation, but it is not clear where that request stands.

In addition, in mid-December House Science Committee lawmakers from both parties sent a letter to NIST that Politico reported “chastised the agency for a lack of transparency and for failing to announce a competitive process for planned research grants related to the new U.S. AI Safety Institute.”

In an interview with VentureBeat about the USAISI leadership appointments, Rumman Chowdhury, who formerly led responsible AI efforts at Accenture and served as head of Twitter (now X)'s META team (Machine Learning Ethics, Transparency and Accountability) from 2021 to 2022, said that funding is an issue for the USAISI.

“One of the frankly under-discussed things is this is an unfunded mandate via the executive order,” she said. “I understand the politics of why, given the current US polarization, it’s really hard to get any sort of bill through…I understand why it came through an executive order. The problem is there’s no funding for it.”

Author: Sharon Goldman
Source: VentureBeat
Reviewed By: Editorial Team
