
AI Weekly: DARPA seeks to better align AI with human intentions



This week in AI, DARPA, the emerging technologies R&D agency of the U.S. Defense Department, launched a new program that aims to “align” AI systems with human decision-makers in domains where there isn’t an agreed-upon right answer. Elsewhere, two prominent cofounders from LinkedIn and DeepMind, Reid Hoffman and Mustafa Suleyman, announced a new AI startup called Inflection AI that seeks to develop software that allows humans to talk to computers using everyday language.

In a press release describing the new three-and-a-half-year program, DARPA says that the goal is to “evaluate and build trusted algorithmic decision-makers for mission-critical Department of Defense operations.” Dubbed “In the Moment,” or ITM, it focuses on the process of alignment — building AI systems that accomplish what they’re expected to accomplish.

“ITM is different from typical AI development approaches that require human agreement on the right outcomes,” ITM program manager Matt Turek said in a statement. “The lack of a right answer in difficult scenarios prevents us from using conventional AI evaluation techniques, which implicitly requires human agreement to create ground-truth data.”

For example, self-driving cars can be developed against a ground truth for right and wrong decisions based on unchanging, relatively consistent rules of the road. The designers of these cars could hard-code “risk values” into the cars that prevent them from, for example, making right turns on red in cities where they’re illegal. But Turek says that these one-size-fits-all risk values won’t work from a Department of Defense perspective. Combat situations evolve rapidly, he points out, and a commander’s intent can change from scenario to scenario.
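To make that contrast concrete, here is a minimal, hypothetical Python sketch (the rule, scenario fields, and numbers are invented for illustration and are not from DARPA's program) of the difference between a risk value that can be hard-coded once and a decision policy that has to be re-parameterized as a commander's intent changes:

```python
# Hypothetical illustration only: a fixed traffic rule vs. a context-dependent policy.

# Self-driving case: the "risk value" can be hard-coded at design time,
# because the rule of the road is stable ground truth.
def may_turn_right_on_red(city_allows_it: bool) -> bool:
    return city_allows_it

# Combat-style case: there is no single right answer; which action is acceptable
# depends on a risk tolerance that can change from scenario to scenario.
def choose_action(candidate_actions, commander_risk_tolerance: float):
    # Each candidate carries an estimated risk and an estimated mission benefit.
    acceptable = [a for a in candidate_actions if a["risk"] <= commander_risk_tolerance]
    if not acceptable:
        return None  # no action clears the bar under the current intent
    return max(acceptable, key=lambda a: a["benefit"])

actions = [
    {"name": "evacuate now", "risk": 0.7, "benefit": 0.9},
    {"name": "wait for support", "risk": 0.2, "benefit": 0.5},
]
print(choose_action(actions, commander_risk_tolerance=0.3)["name"])  # wait for support
print(choose_action(actions, commander_risk_tolerance=0.8)["name"])  # evacuate now
```

The same set of candidate actions yields different "right" choices under different intents, which is the evaluation problem Turek describes.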

“The [Defense Department] needs rigorous, quantifiable, and scalable approaches to evaluating and building algorithmic systems for difficult decision-making where objective ground truth is unavailable,” Turek continued. “Difficult decisions are those where trusted decision-makers disagree, no right answer exists, and uncertainty, time-pressure, and conflicting values create significant decision-making challenges.”

DARPA is only the latest organization to explore techniques that might help better align AI with a person’s intent. In January, OpenAI, the company behind the text-generating model GPT-3, detailed an alignment technique that it claims cuts down on the amount of toxic language that GPT-3 generates. Toxic text generation is a well-known problem in AI, often caused by toxic datasets. Because text-generating systems are trained on data containing problematic content, some of the content slips through.

“Although [AI systems are] quite smart today, they don’t always do what we want them to do. The goal of alignment is to produce AI systems that do [achieve] what we want them to,” OpenAI cofounder and chief scientist Ilya Sutskever told VentureBeat in a phone interview earlier this year. “[T]hat becomes more important as AI systems become more powerful.”

ITM will attempt to establish a framework to evaluate decision-making by algorithms in “very difficult domains,” including combat, through the use of “realistic, challenging” scenarios. “Trusted humans” will be asked to make decisions in these scenarios and then the results will be compared to decisions from an algorithm subjected to the same scenarios.

“We’re going to collect the decisions, the responses from each of those decision-makers, and present those in a blinded fashion to multiple triage professionals,” Turek said. “Those triage professionals won’t know whether the response comes from an aligned algorithm or a baseline algorithm or from a human. And the question that we might pose to those triage professionals is which decision-maker would they delegate to, providing us a measure of their willingness to trust those particular decision-makers.”
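Turek's description amounts to a preference-based, blinded evaluation loop. The sketch below is a speculative Python mock-up of that protocol (the scenarios, responses, and function names are invented for illustration, not DARPA's actual methodology): responses from a human, a baseline algorithm, and an aligned algorithm are shuffled and shown to evaluators without attribution, and the fraction of times each source's response is the one an evaluator would delegate to serves as a rough trust score.

```python
import random
from collections import Counter

# Hypothetical mock-up of a blinded delegation study; not DARPA's actual methodology.
# For each scenario there is one response per decision-maker; evaluators see only the text.
responses = {
    "scenario_1": {"human": "treat casualty A first",
                   "baseline_algo": "treat casualty B first",
                   "aligned_algo": "treat casualty A first"},
    "scenario_2": {"human": "request medevac now",
                   "baseline_algo": "stabilize on site",
                   "aligned_algo": "request medevac now"},
}

def run_blinded_study(responses, evaluators):
    """Return each decision-maker's delegation rate across all evaluators and scenarios."""
    counts, total = Counter(), 0
    for scenario_id, by_source in responses.items():
        options = list(by_source.items())   # [(source, response_text), ...]
        random.shuffle(options)              # blind the origin of each response
        blinded_texts = [text for _, text in options]
        for evaluate in evaluators:
            pick = evaluate(scenario_id, blinded_texts)  # index of the preferred response
            counts[options[pick][0]] += 1
            total += 1
    return {source: n / total for source, n in counts.items()}

# Example evaluator that simply delegates to the first response it is shown.
naive_evaluator = lambda scenario_id, texts: 0
print(run_blinded_study(responses, [naive_evaluator]))
```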

Talking to computers

Related to the problem of alignment, LinkedIn cofounder Hoffman and DeepMind cofounder Suleyman say Inflection AI will leverage AI to help humans talk to computers. In an interview with CNBC, Suleyman described wanting to build products that eliminate the need for people to write in shorthand or simplify their ideas to communicate with machines.

“[Programming languages, mice, and other interfaces] are ways we simplify our ideas and reduce their complexity and in some ways their creativity and their uniqueness in order to get a machine to do something,” Suleyman told the publication. “It feels like we’re on the cusp of being able to generate language to pretty much human-level performance. It opens up a whole new suite of things that we can do in the product space.”

Inflection AI’s plans remain vague, but the concept of translating human intentions into a language computers can understand dates back decades. Even the best chatbots and voice assistants today haven’t delivered on the promise — recall Viv Labs, which pledged to deliver a “conversational interface to anything” that instead fizzled out into elements of Samsung’s maligned Bixby assistant. But Suleyman and Hoffman are betting that their expertise — as well as coming advancements in conversational AI — will make an intuitive human-computer language interface possible within the next five years.

“Even at the bigger tech companies, there’s a relatively small number of people actually building these [AI] models. One of the advantages of doing this in a startup is that we can go much faster and be more dynamic,” Suleyman told CNBC. “My experience of building many, many teams over the last 15 years is that there is this golden moment when you really have a very close-knit, small, focused team. I’m going to try and preserve that for as long as possible.”

Given that countless visionaries have tried and failed in this area, that would be an impressive feat indeed.

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

Senior AI Staff Writer



