
AI bias is prevalent but preventable — here’s how to root it out

The success of any AI application is intrinsically tied to its training data. You don’t just need the right data quality and the right data volume; you also have to proactively ensure your AI engineers aren’t passing their own latent biases on to their creations. If engineers allow their own worldviews and assumptions to shape data sets (perhaps supplying data limited to certain demographics or focal points), applications dependent on AI problem-solving will be similarly biased, inaccurate, and, well, not all that useful.

Simply put, we must continuously detect and eliminate human biases from AI applications for the technology to reach its potential. I expect bias scrutiny will only increase as AI continues its rapid transition from a relatively nascent technology into an utterly ubiquitous one, and human bias must be rooted out for that transition to succeed. A 2018 Gartner report predicted that through 2022, 85% of AI projects would deliver erroneous results caused by bias built into the data or the algorithms, or present (latently or otherwise) in the teams managing those deployments. The stakes are high: faulty AI leads to serious reputational damage and costly failures for businesses that make decisions based on erroneous, AI-supplied conclusions.

Identifying AI bias

AI bias takes several forms. Cognitive biases originating from human developers influence machine learning models and training data sets; in effect, those biases get hardcoded into algorithms. Incomplete data also produces bias, especially when information is omitted because of a cognitive bias. Even an AI developed and trained free of bias can have its results tainted by deployment bias once it is put into action. Aggregation bias is another risk: it occurs when small choices made across an AI project have a large collective impact on the integrity of the results. In short, any AI recipe has a lot of steps where bias can get baked in.
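
Bias from incomplete or skewed data is often easiest to see by measuring a model’s error rate per group rather than in aggregate. Below is a minimal, self-contained sketch (the synthetic data and group definitions are illustrative assumptions, not drawn from any real deployment) showing how a training set that under-represents one group can yield sharply different accuracy across groups:

```python
# Illustrative sketch: an under-represented group in the training data
# can suffer much worse accuracy, even when aggregate accuracy looks fine.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic group whose true decision boundary depends on `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is badly under-sampled.
Xa, ya = make_group(1000, shift=0.0)
Xb, yb = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluating per group exposes the bias an aggregate score would hide.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print("group A accuracy:", model.score(Xa_test, ya_test))  # high
print("group B accuracy:", model.score(Xb_test, yb_test))  # much lower
```

The same per-group habit applies to deployment and aggregation bias: measure results at every step of the pipeline, not just at the end.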

Detecting and removing AI bias

To achieve trustworthy AI-dependent applications that will consistently yield accurate outputs across myriad use cases (and users), organizations need effective frameworks, toolkits, processes, and policies for recognizing and actively mitigating AI bias. Available open source tooling can assist in testing AI applications for specific biases, issues, and blind spots in data.

Frameworks. Frameworks designed to protect organizations from the risks of AI bias can introduce checks and balances that minimize undue influences throughout application development and deployment. Using these frameworks, benchmarks for trusted, bias-free practices can be automated and ingrained into products.

Here are some examples:

  • The Aletheia Framework from Rolls-Royce provides a 32-step process for designing accurate and carefully managed AI applications.
  • Deloitte’s AI framework highlights six essential dimensions for implementing AI safeguards and ethical practices.
  • A framework from Naveen Joshi details cornerstone practices for developing trustworthy AI. It focuses on the need for explainability, machine learning integrity, conscious development, reproducibility, and smart regulations.

Toolkits. Organizations should also leverage available toolkits to recognize and eliminate bias present within machine learning models and identify bias patterns in machine learning pipelines. Here are some particularly useful toolkits:

  • AI Fairness 360 from IBM is an extensible (and open source) toolkit that enables examination, reporting, and mitigation of discrimination and bias in machine learning models (see the sketch after this list).
  • IBM Watson OpenScale provides real-time bias detection and mitigation and enables detailed explainability to make AI predictions trusted and transparent.
  • Google’s What-If Tool offers visualization of machine learning model behavior, making it simple to test trained models against machine learning fairness metrics to root out bias.
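
As an example of what these toolkits look like in practice, here is a minimal sketch using AI Fairness 360’s Python package; the toy DataFrame, column names, and group definitions are illustrative assumptions, not from the toolkit’s documentation:

```python
# Sketch: audit a labeled dataset for group bias with AI Fairness 360
# (pip install aif360), then mitigate it with the Reweighing pre-processor.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the favorable outcome we want to audit.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.9, 0.3, 0.6, 0.5],
    "hired": [1, 1, 0, 1, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["hired"], protected_attribute_names=["sex"]
)
priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# Statistical parity difference: P(hired | unprivileged) - P(hired | privileged).
# Values far from 0 signal bias baked into the labels themselves.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unpriv, privileged_groups=priv
)
print("before reweighing:", metric.mean_difference())

# Reweighing assigns instance weights that balance the favorable outcome
# across groups before any model is trained on the data.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
reweighted = rw.fit_transform(dataset)
metric_rw = BinaryLabelDatasetMetric(
    reweighted, unprivileged_groups=unpriv, privileged_groups=priv
)
print("after reweighing:", metric_rw.mean_difference())  # approximately 0
```

Reweighing is just one of the toolkit’s pre-processing mitigations; it adjusts instance weights rather than the data values themselves, so the original records stay intact.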

Processes and policies. Organizations will likely need to introduce new processes purposely designed to remove bias from AI and increase trust in AI systems. These processes should define bias metrics and check data against those criteria regularly and thoroughly. Policies should play a similar role, establishing governance that requires strict practices and vigilant action to minimize bias and address blind spots.
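
To make such a policy enforceable rather than aspirational, the bias check can be automated, for example as a gate in a CI pipeline or a scheduled data-quality job. The sketch below is hypothetical: the function name and the 0.8 threshold (borrowed from the common “four-fifths” rule of thumb) are assumptions, not a prescribed standard:

```python
# Hypothetical sketch of a policy encoded as an automated check.
def check_disparate_impact(outcomes_unpriv, outcomes_priv, threshold=0.8):
    """Fail loudly if the selection-rate ratio between groups drops below threshold."""
    rate_unpriv = sum(outcomes_unpriv) / len(outcomes_unpriv)
    rate_priv = sum(outcomes_priv) / len(outcomes_priv)
    ratio = rate_unpriv / rate_priv
    if ratio < threshold:
        raise ValueError(
            f"disparate impact {ratio:.2f} below policy threshold {threshold}"
        )
    return ratio

# Example: 5 of 10 unprivileged applicants selected vs. 6 of 10 privileged
# gives a ratio of 0.83, which passes; 3 of 10 vs. 6 of 10 would raise.
print(check_disparate_impact([1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
                             [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]))
```

Run as part of every data refresh or model retrain, a check like this turns a written policy into something that actually blocks biased data from reaching production.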

Remember: AI trust is a business opportunity

Those taking steps to reduce bias in their AI systems can turn this potential crisis into an opportunity for competitive differentiation. Promoting anti-bias measures can set a business apart by establishing greater customer confidence and trust in its AI applications. This is true today and will be even more so as AI proliferates. Transparency in the pursuit of unbiased AI is good for business.

Advanced new algorithms, from synthetic data generation to transfer learning, reinforcement learning, generative networks, and neural networks, are bringing AI into new fields. Each of these exciting new applications will have its own susceptibility to bias, which must be addressed for these technologies to flourish.

With AI bias, the fault is not in the AI but in ourselves. Taking all available steps to remove human bias from AI enables organizations to produce applications that are more accurate, more effective, and more compelling to customers.



Author: Shomron Jacob, Iterate.ai
Source: VentureBeat
