
Defense Innovation Board unveils AI ethics principles for the Pentagon

The Defense Innovation Board, a panel of 16 prominent technologists advising the Pentagon, today voted to approve AI ethics principles for the Department of Defense. The report includes 12 recommendations for how the U.S. military can apply ethics to both combat and non-combat AI systems going forward. The guidance is organized into five main principles: responsible, equitable, traceable, reliable, and governable.

The principles state that humans should remain responsible for “developments, deployments, use and outcomes,” and AI systems used by the military should be free of bias that can lead to unintended human harm. AI deployed by the DoD should also be reliable, governable, and use “transparent and auditable methodologies, data sources, and design procedure and documentation.”

“You may see resonances of the word fairness in here [AI ethics principle document]. I will caution you that in many cases the Department of Defense should not be fair,” DIB board member and Carnegie Mellon University VP of research Michael McQuade said today. “It should be a firm principle that ours is to not have unintended bias in our systems.”

Applied Invention cofounder and computer theorist Danny Hillis and board members agreed to amend the draft document to say the governable principle should include “avoid unintended harm and disruption and for human disengagement of deployed systems.” The report, Hillis said, should be explicit and unambiguous that AI systems used by the military should come with an off switch for a human to press in case things go wrong.

“I think this was the most problematical aspect about them because they’re capable of exhibiting and evolving forms of behavior that are very difficult for the designer to predict, and sometimes those forms of behavior are actually kind of self preserving forms of behavior that can get a bit out of sync with the intent and goals of the designer, and so I think that’s one of the most dangerous potential aspects about them,” he said.

The Defense Innovation Board is chaired by former Google CEO Eric Schmidt, and members include MIT CSAIL director Daniela Rus, Hayden Planetarium director Neil deGrasse Tyson, LinkedIn cofounder Reid Hoffman, Code for America director Jennifer Pahlka, and Aspen Institute director Walter Isaacson.

The document, titled “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense,” and an accompanying white paper will be posted on the Defense Innovation Board website. Both now go to DoD leadership for review, which will decide which of the principles, if any, to adopt.

Going forward, the board wants the Joint AI Center, which is charged with leading AI initiatives within the DoD, to work with Secretary of Defense Mark Esper to put communication and policy orders in place to ensure the principles can succeed, make research investments in areas like reproducibility, strengthen AI testing and evaluation techniques, and create an annual AI safety and security conference.

The report recognizes potential unintended consequences of AI and acknowledges existing documents that guide DoD ethics, such as the U.S. Constitution, the 2015 DoD Law of War Manual, and the 1949 Geneva Conventions.

Investments by China and Russia in AI are mentioned in the ethics principle document.

“Within the high-stakes domain of national security, it is important to note that the U.S. finds itself in a technological competition with authoritarian powers that are pursuing AI applications in ways inconsistent with the legal, ethical, and moral norms expected by democratic countries. Our aim is to ground the principles offered here in DoD’s longstanding ethics framework — one that has withstood the advent and deployment of emerging military-specific or dual-use technologies over decades and reflects our democratic norms and values,” the report reads.

The principles released today are the product of a 15-month process to gather comments and insights from public forums and AI community leaders like Facebook chief AI scientist Yann LeCun, former MIT Media Lab director Joi Ito, OpenAI research director Dr. Dario Amodei, and Stanford University professor and former Google Cloud chief AI scientist Dr. Fei-Fei Li. Public comments were also welcomed.

In a public meeting attended by VentureBeat this spring in Silicon Valley, AI experts and people opposed to lethal autonomous robots shared their opinions about potential ethical challenges the Pentagon and U.S. servicemembers might face, such as improvements to object detection or weapon targeting systems. At that time, Microsoft director of ethics and society Mira Lane and others acknowledged that U.S. adversaries may have fewer moral hangups, but argued that the U.S. military can play a big role in defining how AI should and should not be used in the future. That idea came up again today.

“It’s an opportunity to lead a global dialogue founded in the basics of who we are, how we operate as a country and as a department, and where we go from here,” McQuade said.

Speaking at a conference held by Li’s Stanford University Institute for Human-Centered AI earlier this year, Schmidt said the AI ethics report and a National Security Commission on AI report scheduled to be delivered to Congress next week are intended to help the United States establish a national AI policy akin to those already created by more than 30 other countries around the world.

Also discussed at the DIB’s meeting today at Georgetown University: how the Department of Defense and Joint AI Center can recruit and retain AI and machine learning talent, how the military lags in digital innovation, and recommendations to increase adoption of lean design, technical skills, AI education for servicemembers, and a campaign for an AI-ready military.


Author: Khari Johnson
Source: VentureBeat

