3 things the AI Bill of Rights does (and 3 things it doesn’t)

Expectations were high when the White House released its Blueprint for an AI Bill of Rights on Tuesday. Developed by the White House Office of Science and Technology Policy (OSTP), the blueprint is a non-binding document that outlines five principles that should guide the design, use and deployment of automated systems, along with technical guidance for implementing those principles, including recommended actions for a variety of federal agencies.

For many, high expectations for dramatic change led to disappointment, including criticism that the AI Bill of Rights is “toothless” against artificial intelligence (AI) harms caused by big tech companies and is just a “white paper.” 

It is not surprising that there were some mismatched expectations about what the AI Bill of Rights would include, Alex Engler, a research fellow at the Brookings Institution, told VentureBeat.

“You could argue that the OSTP set themselves up a little bit with this large flashy announcement, not really also communicating that they are a scientific advisory office,” Engler said.

Efforts to curb AI risks

The Biden Administration’s efforts to curb AI risks certainly differ from those currently being debated in the EU, he added.

“The EU is trying to draw rules which largely apply to all the circumstances you can conceive of using an algorithm for which there is some societal risk,” Engler said. “We’re seeing nearly the opposite approach from the Biden Administration, which is a very sector and even application-specific approach – so there is a very clear contrast.” 

Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, pointed out that while there are shortcomings, they are mostly a function of a constantly evolving field in which no one has all the answers yet.

“I think it does a very, very good job of moving the ball forward, in terms of what we need to do, what we should do and how we should do it,” said Gupta. 

Gupta and Engler detailed three key things they say the AI Bill of Rights actually does — and three things it does not: 

The AI Bill of Rights does:

1. Highlight meaningful and thorough principles. 

The Blueprint includes five principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration and fallback.

“I think the principles on their own are meaningful,” said Engler. “They are not only well-chosen and well-founded, but they give sort of an intellectual foundation to the idea that there are systemic, algorithmic harms to civil rights.” 

Engler said he feels this broad conceptualization of harms is valuable and thorough.

“I think you could argue that if anything, it’s too thorough and they should have spent a little bit more time on other issues, but it is certainly good,” he said. 

2. Offer an agency-led, sector-focused approach.

It’s one thing to have principles, but Engler pointed out that the obvious next question is: What can the government do about it?

“The obvious subtext for those of us who are paying attention is that federal agencies are going to lead in the practical application of current laws to algorithms,” he said. “This is especially going to be meaningful in quite a few of [the] big systemic concerns around AI. For instance, the Equal Employment Opportunity Commission is working on hiring discrimination. And one I didn’t know about that is very new is that Health and Human Services is looking to combat racial bias in health care provisioning, which is a really systemic problem.” 

One of the advantages of this sort of sector-specific and application-specific approach is that if the agencies are really choosing what problems they’re tackling, as they’re being encouraged by the White House to do, they will be more motivated. “They’re going to choose the problems their stakeholders care about,” he said. “And [there can be] really meaningful and specific policy that considers the algorithms in this broader context.”

3. Acknowledge organizational elements.

Gupta said that he particularly liked the Blueprint’s acknowledgment of organizational elements when it comes to how AI systems are procured, designed, developed and deployed.

“I think we tend to overlook how critical the organizational context is – the structure, the incentives and how people who design and develop these systems interact with them,” he said.  

The AI Bill of Rights, he explained, becomes particularly comprehensive by touching on this key element, which is typically not included or acknowledged.

“It harmonizes both technical design interventions and organizational structure and governance as a joint objective which we’re seeking to achieve, rather than two separate streams to address responsible AI issues,” Gupta added.

The AI Bill of Rights does not:

1. Carry the force of law.

The term “Bill of Rights,” not surprisingly, leads most people to think of the binding, legal nature of the first 10 amendments to the U.S. Constitution.

“It is hard to think of a more spectacular legal term than AI Bill of Rights,” said Engler. “So I can imagine how disappointing it is when really what you’re getting is the incremental adaptation of existing agencies’ regulatory guidance.” 

That said, he explained, “This is in many ways, the best and first thing that we want – we want specific sectoral experts who understand the policy that they’re supposed to be in charge of, whether it’s housing or hiring or workplace safety or health care, and we want them to enforce good rules in that space, with an understanding of algorithms.”

He went on, “I think that’s the conscious choice we’re seeing, as opposed to trying to write central rules that somehow govern all of these different things, which is one of the reasons that the EU law is so confusing and so hard to move forward.” 

2. Cover every important sector. 

The AI Bill of Rights, Engler said, does reveal the limitations of a voluntary, agency-led approach: several sectors are notably missing, including educational access, worker surveillance and, most concerning, almost anything from law enforcement.

“One is left to doubt that federal law enforcement has taken steps to curtail inappropriate use of algorithmic tools like undocumented use of facial recognition, or to really affirmatively say that there are limits to what computer surveillance and computer vision can do, or that weapon detection might not be very reliable,” Engler said. “It’s not clear that they’re going to voluntarily self-curtail their own use of these systems, and that is a really significant drawback.”

3. Take the next step to test in the real world.

Gupta said that what he would like to see is organizations and businesses trying out the AI Bill of Rights recommendations in real-world pilots and documenting the lessons learned.

“There seems to be a lack of case studies for applications, not of these particular sets of guidelines which were just released, but for other sets of guidelines and proposed practices and patterns,” he said. “Unless we really test them out in the real world with case studies and pilots, unless we try this stuff out in practice, we don’t know to what extent the proposed practices, patterns and recommendations work or don’t work.”

Enterprises need to pay attention

Although the AI Bill of Rights is non-binding and mostly focused on federal agencies, enterprise businesses still need to take notice, said Engler. 

“If you are already in a regulated space, and there are already regulations on the books that affect your financial system or your property evaluation process or your hiring, and you have started doing any of that with an algorithmic system or software, there’s a pretty good chance that one of your regulating agencies is going to write some guidance saying it applies to you,” he said.

And while non-regulated industries may not need to be concerned in the short term, Engler added that any industry that involves human services and uses complicated black-box algorithms may come under scrutiny down the line.

“I don’t think that’s going to happen overnight, and it would have to happen through legislation,” he said. “But there are some requirements in the American Data Privacy and Protection Act, which could feasibly pass this year, that do have some algorithm protection, so I’d also be worried about that.” 

Overall, Gupta said that he believes the AI Bill of Rights has continued to raise the importance of responsible AI for organizations. 

“What it does concretely now for businesses is give them some direction in terms of what they should be investing in,” he said, pointing to an MIT Sloan Management Review/Boston Consulting Group study that found that companies that prioritize scaling their responsible AI (RAI) program over scaling their AI capabilities experience nearly 30% fewer AI failures.

“I think [the AI Bill of Rights] sets the right direction for what we need in this field of responsible AI going forward,” he said. 

Author: Sharon Goldman
Source: VentureBeat
