We need to build better bias in AI



At their best, AI systems extend and augment the work we do, helping us to realize our goals. At their worst, they undermine them. We’ve all heard of high-profile instances of AI bias, like Amazon’s machine learning (ML) recruitment engine that discriminated against women or the racist results from Google Vision. These cases don’t just harm individuals; they work against their creators’ original intentions. Quite rightly, these examples attracted public outcry and, as a result, shaped the perception of AI bias as something categorically bad that must be eliminated.

While most people agree on the need to build high-trust, fair AI systems, taking all bias out of AI is unrealistic. In fact, as the new wave of ML models goes beyond the deterministic, these models are actively being designed with some level of subjectivity built in. Today’s most sophisticated systems are synthesizing inputs, contextualizing content and interpreting results. Rather than trying to eliminate bias entirely, organizations should seek to understand and measure subjectivity better.

In support of subjectivity

As ML systems get more sophisticated — and our goals for them become more ambitious — organizations overtly require them to be subjective, albeit in a manner that aligns with the project’s intent and overall objectives.

We see this clearly in the field of conversational AI, for instance. Speech-to-text systems capable of transcribing a video or call are now mainstream. By comparison, the emerging wave of solutions does not merely report speech; it also interprets and summarizes it. So, rather than producing a straightforward transcript, these systems work alongside humans to extend how they already work, for example by summarizing a meeting and then creating a list of actions arising from it.

 


In these examples, as in many more AI use cases, the system is required to understand context and interpret what is important and what can be ignored. In other words, we’re building AI systems to act like humans, and subjectivity is an integral part of the package.

The business of bias

Even the technological leap that has taken us from speech-to-text to conversational intelligence in just a few years is small compared to the future potential for this branch of AI.

Consider this: Meaning in conversation is, for the most part, conveyed through non-verbal cues and tone, according to Professor Albert Mehrabian in his seminal work, Silent Messages. Less than ten percent is down to the words themselves. Yet, the vast majority of conversation intelligence solutions rely heavily on interpreting text, largely ignoring (for now) the contextual cues.

As these intelligence systems begin to interpret what we might call the metadata of human conversation (tone, pauses, context, facial expressions and so on), bias, or intentional, guided subjectivity, is not only a requirement; it is the value proposition.

Conversation intelligence is just one of many such machine learning fields. Some of the most interesting and potentially profitable applications of AI center not on faithfully reproducing what already exists, but on interpreting it.

With the first wave of AI systems some 30 years ago, bias was understandably seen as bad because they were deterministic models intended to be fast, accurate — and neutral. However, we are at a point with AI where we require subjectivity because the systems can match and indeed mimic what humans do. In short, we need to update our expectations of AI in line with how it has changed over the course of one generation.

Rooting out bad bias

As AI adoption increases and these models influence decision-making and processes in everyday life, the issue of accountability becomes key.

When an ML flaw becomes apparent, it is easy to blame the algorithm or the dataset. Even a casual glance at the output from the ML research community highlights how dependent projects are on easily accessible ‘plug and play’ upstream libraries, protocols and datasets.

However, problematic data sources are not the only potential vulnerability. Undesirable bias can just as easily creep into the way we test and measure models. ML models are, after all, built by humans. We choose the data we feed them, how we validate the initial findings and how we go on to use the results. Skewed results that reflect unwanted and unintentional biases can be mitigated to some extent by having diverse teams and a collaborative work culture in which team members freely share their ideas and inputs.
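To make that measurement step concrete, here is a minimal sketch of the kind of check a team might run on a validation set before shipping a model. It is an illustration only: the column names ("group", "prediction"), the pandas-based approach and the 0.8 threshold (the common "four-fifths" rule of thumb) are assumptions, not a description of any specific system discussed here.

```python
# Hypothetical sketch: compare a model's positive-prediction (selection) rates
# across demographic groups on held-out validation data.
# Column names and the 0.8 threshold are illustrative assumptions.
import pandas as pd


def selection_rates(df: pd.DataFrame,
                    group_col: str = "group",
                    pred_col: str = "prediction") -> pd.Series:
    """Share of positive predictions per demographic group."""
    return df.groupby(group_col)[pred_col].mean()


def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group selection rate divided by the highest."""
    return float(rates.min() / rates.max())


if __name__ == "__main__":
    # Toy validation output: 1 means the model recommends the candidate.
    val = pd.DataFrame({
        "group":      ["a", "a", "a", "b", "b", "b", "b", "b"],
        "prediction": [1,   1,   0,   1,   0,   0,   0,   1],
    })

    rates = selection_rates(val)
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")

    if ratio < 0.8:  # four-fifths rule of thumb, not a legal standard
        print("Warning: selection rates diverge across groups; review the model.")
```

A check like this does not decide whether a disparity is acceptable; it simply surfaces it so that a diverse team can judge whether the subjectivity in the model is the intended kind.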

Accountability in AI

Building better bias starts with building more diverse AI/ML teams. Research consistently demonstrates that more diverse teams lead to increased performance and profitability, yet change has been maddeningly slow. This is particularly true in AI.

While we should continue to push for culture change, this is just one aspect of the bias debate. Regulations governing bias in AI systems are another important route to creating trustworthy models.

Companies should expect much closer scrutiny of their AI algorithms. In the U.S., the Algorithmic Fairness Act was introduced in 2020 with the aim of protecting the interests of citizens from harm that unfair AI systems can cause. Similarly, the EU’s proposed AI regulation will ban the use of AI in certain circumstances and heavily regulate its use in “high risk” situations. And beginning in New York City in January 2023, companies will be required to perform AI audits that evaluate race and gender biases. 

Building AI systems we can trust

When organizations look at re-evaluating an AI system, rooting out undesirable biases or building a new model, they, of course, need to think carefully about the algorithm itself and the data sets it is being fed. But they must go further to ensure that unintended consequences do not creep in at later stages, such as test and measurement, results interpretation, or, just as importantly, at the point where employees are trained in using it.

As the field of AI gets increasingly regulated, companies will need to be far more transparent about how they apply algorithms to their business operations. They will need a robust framework that acknowledges, understands and governs both implicit and explicit biases.

However, they are unlikely to achieve their bias-related objectives without culture change. Not only do AI teams urgently need to become more diverse; the conversation around bias also needs to expand to keep up with the emerging generation of AI systems. As AI machines are increasingly built to augment what we are capable of by contextualizing content and inferring meaning, governments, organizations and citizens alike will need to be able to measure all the biases to which our systems are subject.

Surbhi Rathore is the CEO and cofounder of Symbl.ai



Author: Surbhi Rathore, Symbl.ai
Source: Venturebeat

 
