
The White House’s new AI principles won’t solve regulatory problems

This week, the White House released 10 AI Principles, intended as guidance for federal agencies as they consider how to regulate AI in the private sector. It’s an effort to reduce AI’s potential harms, which have come under scrutiny around the world, while preserving the technology’s benefits to society. The industry has been awaiting this moment amid lingering uncertainty about the U.S. government’s plans to rein in this powerful technology and ensure it doesn’t hurt people more than it helps.

But while it may be a good thing that the White House is taking an active role in the fight to regulate AI, the Trump administration’s emphasis on light-touch regulation means the new rules fail to go far enough.

The principles themselves address some of the concerns raised by the AI ethics community and academics who study the effects of technology on society. One such principle calls for lawmakers to consider whether the technology will “introduce real-world bias that produces discriminatory outcomes,” echoing the rallying cries of academics who have warned for years that AI will codify existing societal biases into automated decision systems. These systems have been shown to adversely impact the most vulnerable people in society, including those marginalized by discrimination on the basis of race, gender, sexuality, and disability, and perhaps most alarmingly, the country’s poorest citizens. Unregulated algorithms can automate decisions that bear on the human right to life in areas like health care, where flaws in algorithms have led to black patients receiving inadequate care compared to their white counterparts. In other cases, lawmakers suspect that algorithmic bias may perpetuate gender disparities in access to financial credit and employment.

The guidance also acknowledges that “current technical challenges in creating interpretable AI can make it difficult for agencies to ensure a level of transparency necessary” to foster public trust. It advises agencies to pursue transparency in two forms: disclosing when and where the technology is in use and making the outcomes transparent enough to ensure that the algorithms at least comply with existing laws.

But the true extent of the harm AI does globally is often obscured by trade secret designations and a governmental tendency to resort to the Glomar response, the classic “I can neither confirm nor deny” line. Using these protective measures, entities can hide the scope of the AI-related programs and products they’re using. It’s entirely likely that many algorithms already in use violate existing anti-discrimination laws (among others). In some cases, companies may even choose “black box” model types that obscure the rationale behind decisions at scale in order to claim ignorance and a lack of control over the resulting actions. This legal loophole is possible because some types of AI are so complex that no human could ever truly understand the logic behind a particular decision, making it difficult to assign blame when something goes wrong.

This lack of transparency has resulted in a massive loss of public trust in the technology industry, and it’s further evidence that AI-specific regulation is desperately needed to protect the public good. We’ve seen time and again that, even with the best intentions, AI has the potential to hurt people en masse, which puts unique responsibilities on the field. This incredible power to do harm at scale means those of us in the AI industry have a responsibility to put societal interest above profit. Unfortunately, too few companies currently embody this degree of accountability.

Bias mitigation, public disclosure, and a solution to the problematic “black box” are table stakes for any sufficiently effective regulatory framework for AI. But the government’s AI Principles fall woefully short of balancing societal good against the dangers the technology may pose, now or down the line.

Instead, the AI Principles focus mainly on the risks of losing out in great-power rivalry, market competition, and economic growth. In doing so, the administration dramatically underestimates the ongoing harm facing Americans today, once again sacrificing public well-being for unchecked industry growth.

Importantly, although this is the first guidance to emerge from the federal government, many cities and states have already had success governing AI, even as similar, comprehensive federal bills have stalled or failed amid congressional deadlock. Several cities have banned intrusive facial recognition use by law enforcement, with many more algorithm-focused proposals under consideration at the state and city levels.

It’s telling that the new AI Principles warn of regulatory “overreach” in one breath while undermining local legislative authority in another. The guidance advises that agencies may use “their authority to address inconsistent, burdensome, and duplicative State laws.” This language subtly indicates to lawmakers that a practice known as federal preemption could be used to undo some of the strong, grassroots, and broadly celebrated local regulations that have been championed by AI experts and civil liberties advocates like the ACLU.

Even more concerning, these strong local laws express the democratic will of the public in pockets of the country where technical work is most common: San Francisco; Somerville, Massachusetts (near MIT); and a likely proposal in Seattle, Washington. These local laws were enacted in response to the inherent risks of using predictive technology to gate access to sensitive services like public housing, proactive health care, financial credit, and employment, and in response to the lack of action from Washington. The people who build these technologies know that any algorithm threatening to perpetuate human bias or provide a “math-washed” license to discriminate must be closely monitored for misbehavior, or never implemented at all.

These AI Principles may be a small step in the right direction, and broadly speaking, they can introduce a degree of enhanced responsibility if correctly implemented by lawmakers who are earnestly seeking to reduce risk. But they are only a starting point, and they threaten further harm by raising the issue of federal preemption, which could undo the incredible work already being done by local legislators. Industry workers with direct knowledge of the benefits and risks of AI have often been the strongest voices in the call for strict regulation, and the White House should take steps to better align its policies with the advice of those working hardest to bring AI to market.

Liz O’Sullivan is the cofounder of ArthurAI and technology director of STOP (Surveillance Technology Oversight Project).


Author: Liz O’Sullivan, ArthurAI and STOP
Source: VentureBeat
