
As AI risk grows, Anthropic calls for NIST funding boost: ‘This is the year to be ambitious’



As the speed and scale of AI innovation and its related risks grow, AI research company Anthropic is calling for $15 million in funding for the National Institute of Standards and Technology (NIST) to support the agency’s AI measurement and standards efforts.

Anthropic published a call-to-action memo yesterday, two days after a budget hearing on 2024 funding for the U.S. Department of Commerce in which there was bipartisan support for maintaining American leadership in the development of critical technologies. NIST, an agency of the department, has worked for years on measuring AI systems and developing technical standards, including the Face Recognition Vendor Test and the recent AI Risk Management Framework.

The memo said that an increase in federal funding for NIST is “one of the best ways to channel that support … so that it is well placed to carry out its work promoting safe technological innovation.”

A ‘shovel-ready’ AI risk approach

While there have been other recent ambitious proposals — calls for an “international agency” for artificial intelligence, legislative proposals for an AI “regulatory regime,” and, of course, an open letter to temporarily “pause” AI development — Anthropic’s memo said the call for NIST funding is a simpler, “shovel-ready” idea available to policymakers.


“Here’s a thing we could do today that doesn’t require anything too wild,” said Anthropic cofounder Jack Clark in an interview with VentureBeat. Clark, who has been active in AI policy work for years (including a stint at OpenAI), added that “this is the year to be ambitious about this funding, because this is the year in which most policymakers have started waking up to AI and proposing ideas.”

The clock is ticking on dealing with AI risk

Clark admitted that it’s “a little weird” for a company like the Google-funded Anthropic, one of the top companies building large language models (LLMs), to be proposing these sorts of measures.

“It’s not that typical, so I think that this implicitly demonstrates that the clock’s ticking” when it comes to tackling AI risk, he explained. But it’s also an experiment, he added: “We’re publishing the memo because I want to see what the reaction is both in DC and more broadly, because I’m hoping that will persuade other companies and academics and others to spend more time publishing this kind of stuff.”

If NIST is better funded, he pointed out, “we’ll get more solid work on measurement and evaluation in a place which naturally brings government, academia and industry together.” On the other hand, if it’s not funded, more evaluation and measurement would be “solely driven by industry actors, because they’re the ones spending the money. The AI conversation is better with more people at the table, and this is just a logical way to get more people at the table.”

The downsides of ‘industrial capture’ in AI

It’s notable that Clark talks about the downsides of “industrial capture” even as Anthropic seeks billions to take on OpenAI, and despite the company’s well-known ties to the collapse of Sam Bankman-Fried’s crypto empire.

“In the last decade, AI research moved from being predominantly an academic exercise to an industry exercise, if you look at where money is being spent,” he said. “This means that lots of systems that cost a lot of money are driven by this minority of actors, who are mostly in the private sector.”

One important way to improve that is to create a government infrastructure that gives government and academia a way to train systems at the frontier and build and understand them themselves, Clark explained. “Additionally, you can have more people developing the measurements and evaluation systems to try and look closely at what is happening at the frontier and test out the models.”

A society-wide conversation that policymakers need to prioritize

As chatter increases about the risks of the massive datasets used to train popular large language models like those behind ChatGPT, Clark said that research into the output behavior of AI systems, interpretability and what level of transparency is appropriate is important. “One hope I have is that a place like NIST can help us create some kind of gold-standard public datasets, which everyone ends up using as part of the system or as an input into the system,” he said.

Overall, Clark said he got into AI policy work because he saw its growing importance as a “giant society-wide conversation.”

When it comes to working with policymakers, he added that most of it is about understanding the questions they have and trying to be useful.

“The questions are things like ‘Where does the U.S. rank with China on AI systems?’ or ‘What is fairness in the context of generative AI text systems?’” he said. “You just try and meet them where they are and answer [those] question[s], and then use it to talk about broader issues — I genuinely think people are becoming a lot more knowledgeable about this area very quickly.”



Author: Sharon Goldman
Source: VentureBeat

