Bias and discrimination in AI: whose responsibility is it to tackle them? 

We all have our individual biases hardwired into our perceptions and actions. One might think artificial intelligence (AI) would eliminate our biases and create a level playing field. This is not the case. Since humans create the algorithms that enable AI to learn and make inferences, their biases are inherently incorporated into the code.

The following cases illustrate how detrimental the misuse of AI can be:

These examples show how AI can foster discrimination, lack of equal opportunity and social exclusion. Once these cases became public, they also caused considerable damage to the companies and organizations that utilized these AI tools.

So, whose responsibility is it to stop the perpetual cycle of bias in AI? There are four key players:

1. Developers

They create the models that enable widespread usage of AI technologies. As such, they bear the largest share of responsibility for identifying potential biases in how AI processes data. Since this is a crucial piece of the puzzle, this responsibility should be an explicit requirement of the developer’s role.

Throughout development, a developer should maintain a risk plan covering these key stages:

  • Model design (a balance-check sketch follows this list): verify that:
    • The data used in training the AI model is representative and balanced, not skewed toward a specific demographic.
    • The model doesn’t include any discriminating parameters (such as gender, age, ethnicity or socioeconomic status), even at the expense of reduced model performance.
    • The model doesn’t perpetuate an existing skew in society in which certain populations are discriminated against.
  • Development: Once the model is created, it’s important to test “edge cases.” For example, a developer working on image recognition software must ensure that a rich diversity of ages, ethnicities and genders is represented in the test data, and factor in variables that may skew results (a per-group evaluation sketch follows this list).
  • Production: Once AI is in production, a continuous monitoring process should be employed to detect any deviations in the data that could derail the algorithm (a drift-check sketch also follows below).
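
To make the model-design check concrete, here is a minimal sketch of a training-data balance check in Python, assuming the data sits in a pandas DataFrame; the "gender" column and the 10% floor are hypothetical placeholders, not values prescribed by the author:

```python
# A hypothetical balance check on one demographic column of a training set.
import pandas as pd

def underrepresented_groups(df: pd.DataFrame, column: str, min_share: float = 0.10):
    """Return the groups in `column` whose share of the rows falls below `min_share`."""
    shares = df[column].value_counts(normalize=True)
    return [group for group, share in shares.items() if share < min_share]

# Toy data: "female" is only 5% of rows, so it gets flagged for review.
train = pd.DataFrame({"gender": ["male"] * 95 + ["female"] * 5})
print(underrepresented_groups(train, "gender"))  # ['female']
```

In practice, the same check would run over every sensitive attribute (age band, ethnicity, and so on), with thresholds set per project.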
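
For the edge-case testing described above, one straightforward tactic is to break test accuracy out by demographic group and flag large gaps; the groups, labels and 20-point threshold below are illustrative only:

```python
# Per-group accuracy on a labeled test set; all data here is made up.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

y_true = [1, 0, 1, 1, 0, 1]   # toy ground-truth labels
y_pred = [1, 0, 0, 1, 0, 0]   # toy model predictions
groups = ["18-30", "18-30", "60+", "60+", "60+", "60+"]  # toy age bands

scores = accuracy_by_group(y_true, y_pred, groups)
print(scores)  # {'18-30': 1.0, '60+': 0.5}

# A wide gap between the best- and worst-served groups is a red flag.
if max(scores.values()) - min(scores.values()) > 0.2:
    print("Warning: large accuracy gap between demographic groups")
```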
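
Finally, for the production stage, one common monitoring technique (an assumption here, not something the author specifies) is to compare the live distribution of each input feature against its training-time baseline, for example with a population stability index (PSI):

```python
# Flag drift in one input feature by comparing production data to the
# training baseline with a population stability index (PSI).
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two samples of a single feature."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0) on empty bins
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
training_ages = rng.normal(40, 10, 10_000)    # feature distribution at training time
production_ages = rng.normal(55, 10, 10_000)  # incoming production data has shifted
if psi(training_ages, production_ages) > 0.2:  # 0.2 is a common rule of thumb
    print("Drift detected: review or retrain the model")
```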

There are many pitfalls in implementing AI on large-scale populations, and the developers behind this process are pivotal in identifying these initial biases, preventing them from entering the AI model, and informing their client (the organization that hired them for the project) about any potential risks the system carries.

2. Companies creating AI solutions

While developers bear considerable responsibility, the organization that employs them is responsible for setting checkpoints so that the ethical outcomes of AI are considered early. It all starts with hiring a diverse development team: diverse AI teams tend to outperform like-minded ones, and bringing together developers from different backgrounds decreases the risk of creating bias-prone algorithms. Companies creating AI should also review developers’ work to ensure biases are caught and fixed early, raise awareness of these potential biases across the company, and provide teams with methodologies and guidelines for mitigating biases before they slip into the code.

One way to do this is to hire a data ethicist or establish an ethics committee composed of both developers and non-technology staff. Pull in members from legal, HR, product management and other departments; above all, make sure it’s a group with diverse backgrounds and opinions.

Most tech companies are still focused on getting AI to work in production and have not yet matured to the point of ensuring its ethical standards.

3. Enterprises using AI

Organizations that use AI solutions are also accountable for any ethical violations. These companies are responsible for understanding the potential risks flagged by the developers and for taking action to mitigate those risks before the system is fully deployed.

While enterprises might not be thrilled to spend extra to fix an AI model they ordered (or, in some cases, to shut the solution off entirely), the responsibility for preventing a biased model lies on their shoulders.

Enterprises also have the most to lose. End users will view the AI as part of their brand, and when something goes wrong, they may lose customers and brand equity as a result. In some cases, enterprises may even find themselves facing a lawsuit from a customer harmed by a biased algorithm.

4. Regulators

Government institutions typically lag behind tech companies in putting rules and regulations in place to ensure market fairness. Regulators in the U.S. and the EU still haven’t set official guidelines that clearly define the ethical red lines companies must not cross when using AI. Lawmakers will have to move fast to keep up with the rapidly changing ecosystem. Until regulation exists that balances business needs with fairness to society as a whole, more incidents of inadvertent bias will keep occurring.

AI is gaining traction as a game-changing technology that offers great potential in streamlining operations, cutting costs, personalizing products and improving customer experience. At the same time, using this technology at scale can create new ethical dilemmas regarding unintentional discrimination against segments of the population. Setting ground rules for identifying, managing and regulating these risks is of urgent importance to society, not just to the people directly involved in bringing these algorithms to life.

Nurit Cohen Inger, vice-president of products at BeyondMinds.ai, leads the company in defining and driving the product strategy and lifecycle, along with developing and managing a strong team of product managers and designers.



Author: Contributor
Source: VentureBeat
