
The White House moves to hold artificial intelligence accountable with AI Bill of Rights



Responsible artificial intelligence (AI), ethical AI, trustworthy AI. Call it what you want — it’s a concept that’s impossible to ignore if you pay attention to the tech industry.

As AI has rapidly advanced, more voices have joined in the cry to ensure that it remains safe. The near-unanimous consensus is that AI can easily become biased, unethical and even dangerous. 

To address this ever-growing issue, the White House today released a Blueprint for an AI Bill of Rights, which outlines five principles to guide the design, use and deployment of automated systems and protect Americans in the age of AI.

The issues with AI are well-documented, the Blueprint points out — from unsafe systems in patient care to discriminatory algorithms used for hiring and credit decisions.


“The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats — and uses technologies in ways that reinforce our highest values,” it reads.

Blueprint for AI Bill of Rights — not far enough

The EU has led the way toward an ethical AI future with its proposed EU AI Act. Numerous organizations have also broached the concept of developing an overarching framework.

The U.S. has notably lagged in the discussion. Prior to today, the federal government had not provided any concrete guidance on protecting citizens from AI dangers, even as President Joe Biden has called for protections around privacy and data collection. 

Many still say that the Blueprint, while a good start, doesn’t go far enough and lacks what it takes to gain real traction.

“It is exciting to see the U.S. joining an international movement to help understand and control the impact of new computing technologies, and especially artificial intelligence, to make sure the technologies enhance human society in positive ways,” said James Hendler, chair of the Association for Computing Machinery (ACM) technology policy council. 

Hendler, a professor at Rensselaer Polytechnic Institute and one of the originators of the Semantic Web, pointed to recent statements including the Rome Call for AI Ethics, the proposed EU regulations on AI and statements from the UN committee. 

They are “all calling for more understanding of the impacts of increasingly autonomous systems on human rights and human values,” he said. “The global technology council of the ACM has been working with our member committees to update earlier statements on algorithmic accountability, as we believe regulation of this technology needs to be a global, not just national, effort.” 

Similarly, the Algorithmic Justice League posted on its Twitter page that the Blueprint is a “step in the right direction in the fight toward algorithmic justice.”

The League combines art and research to raise public awareness of the racism, sexism, ableism and other harmful forms of discrimination that can be perpetuated by AI. 

Others point out that the Blueprint doesn’t include any recommendations for restrictions on controversial uses of AI — such as systems that can identify people in real time via biometric data or facial images. Some also note that it does not address the critical issues of lethal autonomous weapons or smart cities.

Five critical tenets

The White House’s Office of Science and Technology Policy (OSTP), which advises the president on science and technology, first outlined its vision for the Blueprint last year.

The Blueprint identifies five principles:

  1. Safe and Effective Systems: People should be protected from unsafe or ineffective systems. 
  2. Algorithmic Discrimination Protections: People should not face discrimination by algorithms, and systems should be used and designed equitably. 
  3. Data Privacy: People should be protected from abusive data practices via built-in protections and have agency over how data about them is used.
  4. Notice and Explanation: People should know that an automated system is being used and understand how and why it contributes to outcomes that impact them.
  5. Human Alternatives, Consideration and Fallback: People should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems they encounter.

The Blueprint is accompanied by a handbook, “From Principles to Practice,” with “detailed steps toward actualizing these principles in the technological design process.” 

It was framed based on insights from researchers, technologists, advocates, journalists and policymakers, and notes that, while automated systems have “brought about extraordinary benefits,” they have also caused significant harm. 

It concludes that “these principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.”



Author: Taryn Plumb
Source: VentureBeat
