
The pitfalls of a ‘retrofit human’ in AI systems

Stanislav Petrov is not a famous name in computer science, like Ada Lovelace or Grace Hopper, but his story serves as a critical lesson for developers of AI systems. Petrov, who passed away on May 19, 2017, was a lieutenant colonel in the Soviet Union’s Air Defense Forces. On September 26, 1983, an alarm announced that the U.S. had launched five nuclear-armed intercontinental ballistic missiles (ICBMs). His job, as the human in the loop of this detection system, was to escalate to leadership so that Soviet missiles could be launched in retaliation, ensuring mutually assured destruction.

As the sirens blared, he took a moment to pause and think critically. Why would the U.S. send only five missiles? What purpose would sending them serve? There had been no indication in global political events that such a drastic measure was imminent. He chose not to follow protocol, and after some agonizing minutes realized he had made the right decision. There had been no missile attack; it was a false alarm. He was officially reprimanded by leadership for his decision to save the world.

The ability to take action based on context-specific human deduction is not accounted for in our sociotechnical algorithmic systems. Our language around AI systems anthropomorphizes technology, eliminating the human from the narrative. Linguistically, we structure our description of the technology as follows: “AI can diagnose heart disease in four seconds, as study shows machines now ‘as good’ as doctors.” This way of thinking reduces the actions of human doctors to rote tasks and presents the idea of an “AI doctor” as if it has a physical form and is capable of willful action. When we imagine such AI, it is not as code or algorithms, but through personifications like those drawn from The Terminator or, more benignly, Björk’s music video “All Is Full of Love.” In these scenarios, the human is no longer an empowered actor, but rather a passive recipient of outcomes.

The reality is that these systems are not all-knowing, not perfectly generalizable, and, in practice, often quite flawed. There are two ways in which designers and practitioners of algorithmic systems fail. First, they’re overconfident in the ability of AI to deliver a solution that is context-specific to the human subject. Second, they don’t incorporate a way for human actors to meaningfully challenge or correct the system’s recommendations. An extension of technochauvinism, a term coined by Meredith Broussard, the “retrofit human” is the phenomenon of adjusting humans to the limitations of the AI system rather than adjusting the technology to serve humanity. The consequences of this are becoming increasingly evident as algorithms begin to impact our daily lives.

The use of risk-assessment algorithms in the criminal justice system has become a high-stakes topic of discussion, first catching the public eye with ProPublica’s analysis of Northpointe’s COMPAS recidivism risk algorithm. Northpointe claims its software can predict recidivism rates. But Jeff Larson and Julia Angwin’s team of data scientists at ProPublica analyzed the scores and determined that COMPAS rates white and black defendants unequally. Their work not only exposed issues of bias and discrimination in the development and construction of algorithms deployed in impactful situations; their statistical debate with Northpointe also illustrated the probabilistic, and therefore uncertain, nature of algorithmic output.

By design, algorithms can’t make the final decision in many situations, but we have to enable the human in the loop to actually effect change. As any critical design scholar will tell you, simply inserting a human as an afterthought is woefully insufficient, especially when faced with the narrative of “all-knowing” AI systems. Understanding user interaction and power dynamics is critical to creating well-designed human-in-the-loop systems.

Some research indicates that people trust algorithms over other people because of the perceived objectivity of data and AI. In their research, Poornima Madhavan and Douglas Wiegmann tested perceptions of the reliability and trustworthiness of human versus automated decision-making. They found that in a luggage-screening exercise, automated “novices” were considered more reliable and trustworthy than human “novices,” but human “experts” were thought to be more trustworthy than automated “experts.” On reliability overall, people rated algorithmic solutions as more reliable than people.

In other words, even in a low-skill task, we see deference to the algorithm when the human is perceived to be in a less-empowered role. Even given the potential flaws in algorithms, the humans had to prove their superior ability; the default position of power was granted to the AI system.

Given the mystery around algorithms, people have a hard time understanding how to integrate this input into their decision-making. Ben Green and Yiling Chen found that in a traditional human-in-the-loop algorithmic system — in this case, a pretrial risk assessment scenario — participants could not determine how accurate the assessments (or the model) were, did not adjust their reliance on the system based on how well the model performed, and still showed racial bias in their decisions.

Outside the lab, how might the human-machine power dynamic change when we examine high-skilled actors informed by AI? Megan T. Stevenson found that judges who were given pretrial risk assessment results to inform bail decisions showed little change in their decision-making, and any changes regressed to their own biases over time. As in Green and Chen’s experiment above, if the system design neither gives judges the information to critically interrogate or contest an algorithm nor holds them accountable for rejecting an algorithmic output, they may simply choose to ignore it.

But Stevenson’s findings illustrate how flawed design can lead to outcomes with embedded biases that adversely impact the less-empowered — in this case, the individual for whom the bail assessment is being made. The use of an algorithm makes the final decision appear to be more objective to an untrained observer, even if it did not influence the judge making the decision.

This makes the governance of these human-algorithmic systems and the contestability of the judges’ bail decisions more difficult. It also leads to a puzzling question: If we institute algorithmic advisory systems because we consider humans to be biased and then posit that algorithmic bias requires human oversight, where does this cycle end? Similarly, how do we combat technochauvinism and create systems that give humans the ability to contest results instead of being punished for non-adherence, like Petrov was?

Our conversation about algorithmic bias needs to consider humans as both recipients and actors in the ecosystem. While Petrov’s case was not about the use of AI, it warns of the dangers of designing technical systems that assume the user will not exercise independent thought. The pitfalls of a retrofit human system — one in which the human is subject to the limitations of technology and not empowered to influence outcomes — appear when we fail to design truly meaningful interaction between algorithmic output and human beings.


Author: Dr. Rumman Chowdhury, Accenture
Source: VentureBeat
