Biden administration takes action to promote responsible AI innovation

Yesterday, at long last, the Biden administration began to outline its actions on responsible AI, aiming to help protect Americans from the present and potential future risks the technology poses.

Back in October 2022, the White House released its Blueprint for an AI Bill of Rights. But with the rise of generative AI and the popularity of ChatGPT, the administration has faced increasing pressure to come up with more specific plans to promote responsible AI and limit potential risks. At the beginning of April, after the release of the open letter signed by Elon Musk and other tech industry luminaries calling for a pause in AI development, Biden said that the U.S. must address the “potential risks of AI.”

Public-private partnership for responsible AI

The new actions were announced on the same day the president and vice president met with a group of the most influential leaders in AI, including Sam Altman, CEO of OpenAI; Dario Amodei, CEO of Anthropic; Satya Nadella, chairman and CEO of Microsoft; and Sundar Pichai, CEO of Google and Alphabet.

The actions outlined by the administration include:

  • New investments to power responsible AI research and development (R&D) in the U.S.
  • Public assessments of existing generative AI systems by leading developers
  • Policies to ensure the U.S. government is leading by example on mitigating AI risks and harnessing AI opportunities 

Among the new investments announced by the administration is $140 million in funding for the National Science Foundation to launch seven new National AI Research Institutes, which will conduct research and development into responsible use of AI.

Regarding the public assessments, the administration announced a public evaluation of existing generative AI systems, to be conducted at the DEFCON 31 security conference this summer.

On the policy front, the Office of Management and Budget (OMB) is set to release draft guidance on how AI systems can and should be used by the U.S. government.

The readout of the White House meeting with AI industry executives emphasized the importance the administration places on responsible use of AI.

“The President and Vice President were clear that in order to realize the benefits that might come from advances in AI, it is imperative to mitigate both the current and potential risks AI poses to individuals, society, and national security,” the readout states. “These include risks to safety, security, human and civil rights, privacy, jobs, and democratic values.”

Industry chimes in on the administration’s actions

Industry watchers hold a range of views on what the administration’s actions mean and what remains to be done to support responsible AI.

“The Biden administration’s new actions to promote responsible AI reflect the urgent need for a transformative shift in the industry,” said Vishal Sikka, CEO and founder of Vianai Systems. “There is great responsibility and care needed in developing and using AI.”

George Davis, founder and CEO at Frame AI, commented that the administration’s announcements put the regulatory focus in the right place: shaping AI as a public benefit by enabling responsible, community-oriented research. In his view, there are real strategic, economic and social justice concerns associated with AI, but they are best addressed by contributing to public innovation rather than by pushing research into private hands through restrictions.

“The administration checks all the most urgent boxes: funding public research for public good, involving industry in public assessments, and considering the government’s own responsible adoption of AI,” Davis stated.

That said, Davis noted that the biggest concern the administration has left unaddressed is the risk of concentrated economic power. He believes monopolistic behavior could emerge in the AI space.

“Policies that partner with industry to conduct oversight should include a focus on enabling continued competitive innovation,” Davis said.

Author: Sean Michael Kerner
Source: VentureBeat
