
When AI hurts people, who is held responsible?

Following a Maricopa County grand jury decision last month, the woman behind the wheel of a semi-autonomous Uber vehicle was charged with negligent homicide in the 2018 death of Elaine Herzberg. Charging a backup driver over the first autonomous vehicle fatality appears to be a legal first, and the case promises to be a landmark one with the power to shape the future of artificial intelligence.

How to determine whether a human was at fault when AI plays a role in a person’s injury or death is no easy task. If AI is in control and something goes wrong, when is it the human’s fault and when can you blame the AI? That’s the focus of a recent paper published in the Boston University Law Review. In it, UCLA assistant professor Andrew Selbst finds that AI creates a tension with existing negligence law that will require intervention by regulators. The paper was initially published in early 2020 but was recently updated with analysis of the Arizona negligent homicide case.

Selbst says the Uber case could be an instance where responsibility for an action is misattributed to a human actor with limited control over the behavior of an automated or autonomous system, what cultural anthropologist Madeleine Elish calls “moral crumple zones.” When machines and humans are considered in tandem but the law fails to take machine intelligence into account, humans can absorb responsibility and become “liability sponges.”

“If negligence law requires a higher standard of care than humans can manage, it will place liability on human operators, even where the average person cannot prevent the danger,” Selbst writes. “While the Uber case seems to point in the direction of moral crumple zones, it is also easy to imagine the reverse — finding that because the average person cannot react in time or stay perpetually alert, failing to do so is reasonable. Ultimately, what AI creates is uncertainty.”

Selbst said legal scholars tend to draw a distinction between fully autonomous vehicles and semi-autonomous machines that work with humans, like the vehicle involved in the Uber crash.

While fully autonomous vehicles or artificial general intelligence (AGI) may shift responsibility to the hardware maker or AI system, the answer is far less clear when a human uses AI to make a decision based on a prediction, classification, or assessment, a fact that Selbst expects will present challenges for businesses, governments, and society.

The vast majority of AI on the market today augments human decision-making. Common examples include the recidivism risk algorithms judges use and the AI-powered tools medical professionals employ to form a diagnosis or treatment plan, such as systems that detect patterns in medical imagery to help diagnose diseases like breast cancer, lung cancer, brain cancer, and, in early work, COVID-19.
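To make that decision-support pattern concrete, here is a minimal sketch of how such a tool typically sits around a human decision: the model only produces a score and a flag, and the final call stays with the professional. The `score_scan` function, its threshold, and the toy input are hypothetical placeholders, not any real clinical system.

```python
# Minimal sketch of AI-assisted decision-making (hypothetical, not a real
# clinical tool): the model scores an input, the human makes the final call.

def score_scan(pixels: list[float]) -> float:
    """Stand-in for a trained model: returns a probability-like score."""
    return min(1.0, sum(pixels) / len(pixels))  # toy heuristic, not a real model

def flag_for_review(pixels: list[float], threshold: float = 0.7) -> dict:
    """Flag a scan for a clinician; the system never issues a diagnosis."""
    score = score_scan(pixels)
    return {
        "score": round(score, 2),
        "flagged": score >= threshold,          # the tool only flags...
        "final_call": "pending human review",   # ...a clinician decides
    }

print(flag_for_review([0.9, 0.8, 0.6]))
# {'score': 0.77, 'flagged': True, 'final_call': 'pending human review'}
```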

Selbst argues that technology has long been a key driver of change in negligence law, but the way humans and AI combine to make decisions sets AI apart. Complicating matters further, humans may accept automated decisions without scrutiny, ignore AI when too many notifications bring on alert fatigue, and rely on AI to recognize patterns in data too complex for them to follow.

To resolve matters when things go wrong in a world full of humans and AI systems making decisions alongside each other, Selbst says governments need to consider reforms that give negligence law the chance to catch up with emerging technology.

“Where society decides that AI is too beneficial to set aside, we will likely need a new regulatory paradigm to compensate the victims of AI’s use, and it should be one divorced from the need to find fault. This could be strict liability, it could be broad insurance, or it could be ex ante regulation,” the paper reads.

There’s also a range of existing proposals, like Andrew Tutt’s “FDA for algorithms,” a federal agency that would assess algorithms the way the FDA investigates pharmaceutical drugs, which, like AI, can deliver results without their inner workings being fully explained. Another is algorithmic impact assessments akin to environmental impact assessments, a way to increase oversight and publicly available disclosures.

“Ultimately, because AI inserts a layer of inscrutable, unintuitive, statistically derived, and often proprietary code between the decision and outcome, the nexus between human choices, actions, and outcomes from which negligence law draws its force is tested,” the paper reads. “While there may be a way to tie some decisions back to their outcomes using explanation and transparency requirements, negligence will need a set of outside interventions to have a real chance at providing redress for harms that result from the use of AI.”

Doing so might give negligence law standards time to catch up with advances in artificial intelligence before future paradigm shifts occur and standards fall even further behind.

The paper also explores what happens when algorithmic bias plays a role in an injury. Returning to the autonomous vehicle question, research has shown that the computer vision systems used do a better job of detecting white pedestrians than Black pedestrians. Accepting the use of such a system could reduce vehicle fatalities overall while sanctioning worse outcomes for Black pedestrians.
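As a rough illustration of how such a disparity is measured, the sketch below computes per-group detection rates from labeled results. The numbers are invented for illustration and are not drawn from the cited research.

```python
# Minimal sketch of a disparity audit: compare detection rates across groups.
# The data below is made up purely for illustration.

from collections import defaultdict

# (group, detected) pairs for a batch of labeled pedestrian images
results = [
    ("lighter_skin", True), ("lighter_skin", True), ("lighter_skin", False),
    ("darker_skin", True), ("darker_skin", False), ("darker_skin", False),
]

totals, hits = defaultdict(int), defaultdict(int)
for group, detected in results:
    totals[group] += 1
    hits[group] += detected  # bool counts as 0 or 1

for group in totals:
    rate = hits[group] / totals[group]
    print(f"{group}: detection rate {rate:.0%}")
# A gap between the printed rates is exactly the kind of harm negligence law
# struggles to trace back to any single human choice.
```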

If no regulatory intervention takes place, Selbst said, there’s a danger that AI could normalize adverse outcomes for some people while giving them no recourse. That could deepen the helplessness consumers already feel when they encounter algorithmic bias or experience harm online with no avenue for redress.

“The concern is that while AI may successfully reduce the overall number of injuries, it will not eliminate them, but it will eliminate the ability of the people injured in the new regime to recover in negligence,” the paper reads. “By using a tool based in statistical reasoning, the hospital prevents many injuries, but from the individual standpoint it also creates an entirely new set of victims that will have no recourse.”

Secrecy in the AI industry is another major potential hurdle. Negligence law evolves over time to reflect common definitions of what constitutes reasonable or unreasonable behavior by, for example, a doctor or driver accused of negligence, but secrecy is likely to keep common occurrences that result in injury hidden from the public. As with Big Tobacco in the past, some of that information may eventually come to light through whistleblowers, but secrecy can stall that evolution of standards. The speed at which AI develops, compared with the pace of change in negligence and tort law, could make things worse.

“As a result of the secrecy, we know little of what individual companies have learned about the errors and vulnerabilities in their products. Under these circumstances, it is impossible for the public to come to any conclusions about what kinds of failures are reasonable or not,” the paper states.

Alongside data portability and the freedom to refuse the use of AI in a decision-making process, providing people with avenues to recourse is an essential part of the algorithmic bill of rights AI experts proposed last year. In another recent initiative encouraging society to adapt to AI, last month Amsterdam and Helsinki launched beta versions of algorithm registries that let residents inspect risk and bias assessments, identify the datasets used to train an AI system, and quickly find the city official and department responsible for its deployment.
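For a sense of what such a registry exposes, here is a minimal sketch of one entry as a plain data structure, loosely modeled on the kinds of fields the Amsterdam and Helsinki registries publish (responsible department and official, training data, risk and bias assessments). The field names and example values are illustrative, not the cities’ actual schema.

```python
# Minimal sketch of a public algorithm registry entry (illustrative fields,
# not the actual Amsterdam/Helsinki schema).

from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    system_name: str
    department: str
    responsible_official: str
    training_datasets: list[str] = field(default_factory=list)
    risk_assessment_url: str = ""
    bias_assessment_url: str = ""

# Hypothetical example entry
entry = RegistryEntry(
    system_name="Parking permit fraud detection",
    department="City Transport Office",
    responsible_official="Jane Doe (data protection officer)",
    training_datasets=["permit_applications_2015_2019"],
)
print(entry)
```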


Author: Khari Johnson
Source: VentureBeat

