
Bias in AI is spreading and it’s time to fix the problem



This article was contributed by Loren Goodman, cofounder and CTO at InRule Technology.

Traditional machine learning (ML) does only one thing: it makes a prediction based on historical data.

Machine learning starts with analyzing a table of historical data and producing what is called a model; this is known as training. After the model is created, a new row of data can be fed into the model and a prediction is returned. For example, you could train a model from a list of housing transactions and then use the model to predict the sale price of a house that has not sold yet.
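To make that train-then-predict loop concrete, here is a minimal sketch in Python with scikit-learn. The tiny table, the feature columns, and the gradient-boosted model are illustrative assumptions, not a prescription; any regression learner over any table of past sales follows the same shape.

```python
# A minimal train-then-predict sketch; the data is synthetic and the
# feature set is an illustrative assumption, not a recommendation.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Historical data: one row per sold house (stand-in for a real table).
sales = pd.DataFrame({
    "square_feet": [1400, 2100, 1750, 3000, 1200],
    "bedrooms":    [3, 4, 3, 5, 2],
    "year_built":  [1978, 2005, 1992, 2015, 1960],
    "sale_price":  [210_000, 395_000, 287_000, 540_000, 165_000],
})
features = ["square_feet", "bedrooms", "year_built"]

# "Training" analyzes the table and produces a model.
model = GradientBoostingRegressor().fit(sales[features], sales["sale_price"])

# A new row goes in, a prediction comes out: the likely sale price
# of a house that has not sold yet.
unsold = pd.DataFrame([{"square_feet": 1850, "bedrooms": 3, "year_built": 1998}])
print(model.predict(unsold))
```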

There are two primary problems with machine learning today. First is the “black box” problem. Machine learning models make highly accurate predictions, but they lack the ability to explain the reasoning behind a prediction in terms that are comprehensible to humans. Machine learning models just give you a prediction and a score indicating confidence in that prediction.
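The experience looks like the sketch below, built on synthetic data from scikit-learn's make_classification (the framing as a yes/no decision is an assumption for illustration): a label comes back, a confidence score comes back, and nothing else does.

```python
# A minimal sketch of the "black box" experience: the model returns a
# prediction and a confidence score, and nothing else. The data is
# synthetic, standing in for a table of historical decisions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

new_row = X[:1]                                  # one new row of data
prediction = model.predict(new_row)[0]           # the predicted class
confidence = model.predict_proba(new_row).max()  # score for that prediction

# No human-readable reasoning accompanies these two numbers.
print(f"prediction={prediction}, confidence={confidence:.2f}")
```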

Second, machine learning cannot think beyond the data that was used to train it. If historical bias exists in the training data, then, if left unchecked, that bias will be present in the predictions. While machine learning offers exciting opportunities for both consumers and businesses, the historical data on which these algorithms are built can be laden with inherent biases.

The cause for alarm is that business decision-makers have no effective way to see the biased practices encoded into their models. For this reason, there is an urgent need to understand what biases lurk within source data. In concert with that, human-managed governors need to be installed as a safeguard against actions taken on machine learning predictions.
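Both safeguards can start small. The sketch below pairs a hypothetical pre-training audit of approval rates by group with a simple human-managed governor; the column names, the tiny synthetic table, and the 0.90 threshold are all illustrative assumptions.

```python
# Hedged sketch: (1) audit source data for disparities before training,
# (2) gate low-confidence actions behind human review. All names and the
# threshold are illustrative assumptions; the table is synthetic.
import pandas as pd

# Synthetic stand-in for a table of historical decisions.
applications = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   0,   1],
})

# (1) Audit: a large gap in historical approval rates will be learned
# and repeated by any model trained on this table.
print(applications.groupby("group")["approved"].mean())

# (2) Governor: never act automatically on low-confidence predictions.
def governed_action(prediction: str, confidence: float,
                    threshold: float = 0.90) -> str:
    """Route the case to a human reviewer unless confidence is high."""
    return prediction if confidence >= threshold else "human_review"

print(governed_action("deny", 0.72))  # -> "human_review"
```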

Biased predictions lead to biased behaviors, and as a result, we “breathe our own exhaust”: we continually build on biased actions that flow from biased decisions, a cycle that compounds with every prediction. The earlier you detect and eliminate bias, the faster you mitigate risk and expand your market to previously rejected opportunities. Those who do not address bias now are exposing themselves to a myriad of future unknowns around risk, penalties, and lost revenue.

Demographic patterns in financial services

Demographic patterns and trends can also feed further biases in the financial services industry. In a famous example from 2019, web programmer and author David Heinemeier Hansson took to Twitter to share his outrage that Apple’s credit card offered him 20 times the credit limit it offered his wife, even though the couple files taxes jointly.

Two things to keep in mind about this example:

  • The underwriting process was found to be compliant with the law. Why? Because there are currently no U.S. laws governing bias in AI; the topic is seen as highly subjective.
  • To correct these models, historical biases need to be visible to the algorithms; otherwise, the AI cannot know why it is biased or fix its mistakes (the sketch below shows the measurement). Making bias measurable addresses the “breathing our own exhaust” problem and yields better predictions tomorrow.
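That second point is subtle: bias can only be measured, and therefore corrected, when the sensitive attribute is available at audit time. Here is a minimal, hedged sketch; the data and the "sex" column are synthetic illustrations, not a real dataset.

```python
# Synthetic sketch: measuring a demographic parity gap requires the
# sensitive attribute. Column names and values are illustrative.
import pandas as pd

results = pd.DataFrame({
    "sex":        ["F", "M", "F", "M", "M", "F", "M", "F"],
    "prediction": [0,    1,   0,   1,   1,   1,   0,   0],
})

# Positive-prediction rate per group, and the gap between them.
rates = results.groupby("sex")["prediction"].mean()
print(rates)
print(f"parity gap = {rates.max() - rates.min():.2f}")

# Drop the "sex" column and this gap becomes invisible, even though a
# model trained on the same history would still reproduce it via proxies.
```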

Real-world cost of AI bias

Machine learning is used across a variety of applications that affect the public. In particular, there is growing scrutiny of social service programs such as Medicaid, housing assistance, and Supplemental Security Income. The historical data these programs rely on may be riddled with bias, and reliance on biased data in machine learning models perpetuates that bias. Awareness of potential bias, however, is the first step toward correcting it.

A popular algorithm used by many large U.S. health care systems to screen patients for high-risk care management intervention programs was shown to discriminate against Black patients because it was trained on data about the cost of treating patients. The model did not account for racial disparities in access to healthcare, which result in lower spending on Black patients than on similarly diagnosed white patients. According to Ziad Obermeyer, an acting associate professor at the University of California, Berkeley, who worked on the study, “Cost is a reasonable proxy for health, but it’s a biased one, and that choice is actually what introduces bias into the algorithm.”
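The mechanism is easy to reproduce on paper. In the fully synthetic sketch below, two patients have identical health needs, but unequal access to care depresses one patient's historical cost, so ranking by the cost proxy ranks them differently; every number is invented for illustration.

```python
# Hedged, synthetic illustration of the cost-as-proxy problem: equal
# need, unequal access, therefore unequal historical cost. All values
# are invented for illustration only.
import pandas as pd

patients = pd.DataFrame({
    "group":       ["A", "B"],
    "health_need": [0.80, 0.80],   # identical true need
    "access":      [1.00, 0.60],   # unequal access to care
})

# Historical cost reflects access as much as need.
patients["historical_cost"] = patients["health_need"] * patients["access"]

# Ranking by the cost proxy puts patient B below patient A despite
# equal need: the label choice itself introduced the bias.
print(patients.sort_values("historical_cost", ascending=False))
```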

Additionally, a widely cited case showed that judges in Florida and several other states were relying on a machine learning-powered tool called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) to estimate recidivism risk for defendants. Numerous studies challenged the accuracy of the algorithm and uncovered racial bias, even though race was not an input to the model.

Overcoming bias

The solution to AI bias in models? Put people at the helm of deciding when to take, or not take, real-world actions based on a machine learning prediction. Explainability and transparency are critical for allowing people to understand why the technology makes certain decisions and predictions. By surfacing the reasoning and the factors driving ML predictions, algorithmic biases can be brought into the open, and decisioning can be adjusted to avoid costly penalties or public backlash on social media.
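One concrete way to surface such a bias is to inspect which inputs drive the model's predictions. The sketch below uses permutation importance from scikit-learn on synthetic data, where zip_code stands in as a hypothetical proxy for a protected class; a heavy weight on that feature is exactly the kind of red flag a human reviewer can act on.

```python
# Hedged sketch: surfacing a proxy feature via permutation importance.
# Data is synthetic; "zip_code" is a hypothetical proxy for a
# protected class, deliberately correlated with the outcome here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 2000
zip_code = rng.integers(0, 2, n)                 # the proxy feature
income = rng.normal(50 + 10 * zip_code, 5, n)    # correlated with it
X = np.column_stack([zip_code, income])
y = (income + 20 * zip_code + rng.normal(0, 5, n) > 65).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["zip_code", "income"], result.importances_mean):
    print(f"{name}: {score:.3f}")
# A heavy weight on zip_code is the red flag a reviewer can act on.
```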

Businesses and technologists need to focus on explainability and transparency within AI.

There is limited but growing regulation and guidance from lawmakers for mitigating biased AI practices. Recently, the UK government issued an Ethics, Transparency, and Accountability Framework for Automated Decision-Making to produce more precise guidance on using artificial intelligence ethically in the public sector. This seven-point framework will help government departments create safe, sustainable, and ethical algorithmic decision-making systems.

To unlock the full power of automation and create equitable change, humans need to understand how and why AI bias leads to certain outcomes and what that means for us all.

Loren Goodman is cofounder and CTO at InRule Technology.



