AI & RoboticsNews

White House urges federal agencies and European allies to avoid overregulation of AI

The White House is calling on U.S. lawmakers and businesses, as well as European nations and allies, to avoid overregulation of artificial intelligence. The announcement comes as part of AI regulatory principles introduced today by the Trump administration.

“Europe and our allies should avoid heavy handed innovation-killing models, and instead consider a similar regulatory approach. The best way to counter authoritarian uses of AI is to make sure America and our international partners remain the global hubs of innovation, shaping the evolution of technology in a manner consistent with our common values,” an OSTP statement reads.

The European Commission is expected to introduce similar guidance on AI policy in the coming months, following receipt of AI ethics guidelines from an AI expert group. In the United Kingdom, members of Parliament are considering facial recognition regulation, while European Union commissioners are expected to release more stringent protections around the public use of facial recognition software in early 2020.

In addition to warning against regulatory overreach around private businesses’ use of AI, the OSTP’s proposed principles seek to influence decision makers in the U.S. and abroad. The agency is calling for steps to be taken to ensure public participation in the creation of policy and the promotion of trustworthy AI.

The principles are only meant to define rules for AI developed and deployed in the private sector, not the kind of AI used by the federal government or law enforcement agencies like the FBI. Any regulation proposed by a federal agency will be required to demonstrate adherence to the principles, U.S. CTO Michael Kratsios and deputy CTO Lynne Parker told VentureBeat and other reporters in a call ahead of the news.

“Pre-emptive and burdensome regulation does not only stifle economic innovation and growth, but also global competitiveness amid the rise of authoritarian governments that have no qualms with AI being used to track, surveil, and imprison their own people,” Kratsios said.

“As countries around the world grapple with similar questions about the appropriate regulation of AI, the U.S. AI regulatory principles demonstrate that America is leading the way to shape the evolution in a way that reflects our values of freedom, human rights, and civil liberties. The new European Commission has said they intend to release an AI regulatory document in the coming months. After a productive meeting with Commissioner Vestager in November, we encourage Europe [EU] to use the U.S. AI principles as a framework. The best way to counter authoritarian uses of AI is to make America and our national partners remain the global hub of innovation, advancing our common values.”

When asked what prompted fears of overregulation of AI at a time when many regulators seem to be moving in the opposite direction, an administration official went on to criticize regulatory efforts by state and local governments.

“I think the examples in the U.S. today at state and local level are examples of overregulation which you want to avoid on the national level. So when particular states and localities make decisions like banning facial recognition across the board, you end up in tricky situations where civil servants may be breaking the law when they try to unlock their government-issued phone,” the official said in response to recent events in San Francisco.

Regulation of facial recognition software, in the form of moratoriums or bans, is perhaps the most high-profile example of AI regulation by governments since the re-emergence of the technology in recent years.

In May 2019, the San Francisco Board of Supervisors passed the first facial recognition ban in the United States. Cities like nearby Berkeley and Oakland followed, as did Somerville, Massachusetts, and San Diego, California. City officials in Portland, Oregon, are currently considering a ban on facial recognition use by both city government and private businesses.

A number of state legislatures have also passed laws to limit the use of artificial intelligence. In Illinois, for example, legislators passed a law requiring businesses to disclose their use of AI in online job interviews.

Lawmakers beyond Washington, D.C. have passed such policies in the absence of direction from Congress. A bipartisan bill to regulate the use of facial recognition technology was reportedly in the works last year, but its future seems uncertain. Experts testifying before Congress have repeatedly cited the need for regulation, pointing to a general lack of oversight today.

AI regulatory principles from the White House come days after the Trump administration and Department of Commerce placed limits on the export of AI software for geospatial imagery analysis.

The AI regulatory principles introduced today were ordered as part of the Trump administration’s American AI Initiative, which made its debut roughly a year ago, in February 2019. To meet demands laid out in the executive order, the administration released an updated federal AI research roadmap that mostly reasserts Obama-era policy, and the National Institute of Standards and Technology (NIST) released a plan detailing how the federal government should engage with industry and academic researchers to create AI standards.

Federal agencies considering regulation will be bound to follow the principles and to submit proposed rules for consideration to the White House Office of Information and Regulatory Affairs, an administration official said.

AI regulators will be asked to follow 10 principles, including the need for public trust in AI, public participation in regulatory rulemaking processes, and non-discrimination. Officials offered NIST’s work on federal engagement in the development of technical AI standards as an example of how to measure the fairness or safety of AI models.

The principles are intentionally high level, administration officials said, so that federal agencies can craft regulation on a case-by-case basis, accounting for the contrast between the rules needed for AI-powered drones and those needed for medical devices.

Public comment from the AI community and federal agencies will be requested for the next 60 days, and then a final set of principles will be sent to federal agencies.

Analysis released last fall by the Stanford Institute for Human-Centered Artificial Intelligence asserts that the U.S. federal government must ramp up investment, spending $120 billion over the next decade on research, education, and growth of the national AI ecosystem. The Trump administration’s proposed 2020 budget called for about $1 billion in non-defense AI research and development.

In other federal government AI news, a study released in late December by NIST found that many facial recognition systems in use today falsely identify the faces of Asian and African American people 10 to 100 times more often than those of white people.


Author: Khari Johnson
Source: Venturebeat
