4 considerations when taking responsibility for responsible AI

This article was written by Micaela Kaplan, Ethics in AI Lead, CallMiner

Artificial intelligence (AI) and machine learning (ML) have become ubiquitous in our everyday lives. From self-driving cars to our social media feeds, AI has helped our world operate faster than it ever has, and that’s a good thing — for the most part.

As these technologies have become part of our everyday lives, so too have questions about the ethics of building and using them. AI tools are models and algorithms built on real-world data, so they reflect real-world injustices like racism, misogyny, and homophobia, along with many others. That data leads to models that perpetuate existing stereotypes, reinforce the subordination of certain groups of people to the majority population, or unfairly allocate resources and access to services. All of these outcomes have major repercussions for consumers and businesses alike.

While many companies have begun to recognize these potential problems in their AI solutions, only a few have started building the structures and policies to address them. The fact is that AI and social justice can no longer operate as two separate worlds; each needs the other's influence to create tools that will help us build the world we want to see. Addressing the ethical questions surrounding AI and understanding our social responsibilities is a complicated process that demands the hard work and dedication of many people. Below are a few actionable things to keep in mind as you begin the journey toward responsible AI.

Create a space that allows people to voice their questions and concerns

When studying ethics in any capacity, facing uncomfortable truths comes with the territory. The strongest teams in the fight for responsible AI are those that are honest with themselves. These teams acknowledge the biases that appear in their data, their models, and themselves. They also consider how these biases affect the world around them. Noticing and acting on these biases and their impacts requires honest group discussion.

Dedicating the time and space to have these conversations is critical to ensuring they can be just that: conversations. As teams, we need to create spaces where people can speak freely on potentially controversial topics without fear of consequences. This fundamentally requires the support of executives. Sometimes it is easier for a team to meet and discuss without executives present and then bring the group's ideas to them afterward. That layer of anonymity can provide a sense of security, because ideas presented on behalf of the team cannot be traced back to any one person. Open communication and honest feedback are what allow us to confront these questions productively. In the fight for ethical AI, it's not team members against each other; it's the team against the potential problems in the model.

Know what to look for, or at least where to start

Finding the problems in AI solutions can be tricky. A model that performs well on its training set but poorly in the real world may indicate that the training population doesn't represent the people the product actually serves. Low minority representation could result in, for example, a speech tool that misinterprets accents or a photo filter that recognizes only white faces. Many other cases can arise, and knowing where to look can feel difficult.
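
To make that gap concrete, here is a minimal, hypothetical sketch: a toy classifier trained on data drawn from one distribution and evaluated on a "live" sample drawn from another. Every name, number, and threshold below is an illustrative assumption, not something from this article.

```python
# Illustrative only: a toy model trained on one population and evaluated
# on a differently distributed "live" population.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# "Training" population: features centered at 0, labels follow one rule.
X_train = rng.normal(0.0, 1.0, size=(500, 3))
y_train = (X_train.sum(axis=1) > 0).astype(int)

# "Real-world" population: shifted features, labels follow a different rule.
X_live = rng.normal(1.5, 1.0, size=(500, 3))
y_live = (X_live[:, 0] > 1.5).astype(int)

model = LogisticRegression().fit(X_train, y_train)
train_acc = accuracy_score(y_train, model.predict(X_train))
live_acc = accuracy_score(y_live, model.predict(X_live))
print(f"train accuracy: {train_acc:.2f}, live accuracy: {live_acc:.2f}")

# A large gap suggests the training data doesn't represent real users.
if train_acc - live_acc > 0.10:  # 10-point gap: an arbitrary example threshold
    print("Training data may not represent the real-world population.")
```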

The best way to spot bias or other concerns in your model is to pay attention and be intentional in your testing. In recent years, there has been a push in the academic community to create Datasheets for Datasets, a practice proposed by Timnit Gebru and colleagues. These datasheets document what is and is not included in a dataset so that teams can confirm the data suits their purpose and represents their user base. Creating datasheets for your own datasets is a great way to stay aware of your data populations. Similarly, it is important to test model performance on minority populations: a model that performs significantly better on a majority population than on a minority one is very likely to raise ethical questions down the line.
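
As a sketch of what that disaggregated testing might look like, the snippet below summarizes how each population is represented in a dataset (one small slice of what a full datasheet records) and compares accuracy per group. The column names, example data, and 5-point threshold are hypothetical placeholders, not part of the datasheet proposal itself.

```python
# Illustrative only: summarize group representation (datasheet-style)
# and compare model accuracy per group. Column names ("group", "label",
# "pred") and the example data are hypothetical.
import pandas as pd

def representation_summary(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """One line of a datasheet: how large is each population in the data?"""
    counts = df[group_col].value_counts()
    return pd.DataFrame({"count": counts, "share": counts / len(df)})

def per_group_accuracy(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Accuracy computed separately for each population."""
    correct = df["label"].eq(df["pred"])
    return correct.groupby(df[group_col]).mean()

df = pd.DataFrame({
    "group": ["a", "a", "a", "a", "b", "b"],
    "label": [1, 0, 1, 0, 1, 0],
    "pred":  [1, 0, 1, 0, 0, 1],
})

print(representation_summary(df, "group"))
acc = per_group_accuracy(df, "group")
print(acc)

# Flag gaps between the best- and worst-served groups before shipping.
if acc.max() - acc.min() > 0.05:  # 5-point gap: an arbitrary example threshold
    print("Model serves some populations much better than others.")
```

In a real project the same comparison would cover precision, recall, and error types rather than accuracy alone, and the acceptable gap is a judgment call the team should make explicitly, not a fixed number.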

Meet people where they are, not where you want them to be

Successful teams consist of people who are diverse in all facets of their lives, including age, experience, and background. With that diversity comes a range of understandings of what the ethical questions around AI even are. The ever-growing body of research and discourse around responsible AI is full of terms and concepts that may not be familiar to everyone. Some people may feel passionate about the social justice issues at hand, while others may never have heard of them. Every voice on the team deserves to be heard, and creating a common language and framework for discussion is crucial to building ethical AI.

Take the time, both individually and as a team, to research the issues and questions you want to discuss. Use the spaces you've created to help each other unpack and understand them, free from judgment. Going over key terms and ideas ensures that everyone is using the same language to talk about the same concepts, and dispelling potential miscommunications early allows for more constructive conversations down the line. When we learn to listen to people who are different from us as they point out concerns, we can address problems as we see them.

Have the courage to adapt as you learn

While it’s critical to stay up to date on current topics in social justice and AI, it’s equally essential to be willing to embrace the unknown. The path toward responsible AI involves anticipating change, being open to continuous learning, and knowing that problems may arise that don’t have clear-cut answers.

AI is a fast-paced industry, and being agile and ready to pivot is often part of the game. But being willing to change an approach for ethical reasons, or to halt progress to de-bias a tool that is already in users’ hands, takes courage. These choices are often harder to justify than changes made for productivity or the bottom line. The goal should not simply be to bring a tool or model through the production pipeline successfully; it should be to stay on the cutting edge of AI innovation while ensuring that the end product is fair and representative of the world we live in.

Responsible AI is everyone’s responsibility

Ensuring that models are built to fight injustice instead of perpetuating it is our collective responsibility. That job needs to begin at ideation, be a fundamental part of the research and development lifecycle, and continue through release and the rest of the product’s life. Data science and research teams, along with the other teams committed to responsible AI, will never succeed without executive-level support. Companies and institutions that treat responsible AI as a long-term commitment, and that measure success by more than revenue, empower their teams to voice questions and concerns without fear of consequences. That enables a cycle of reflection and revision that helps answer the ethical questions we ask about building and using AI. There will be mistakes along the way, and our job is not to avoid innovation in order to protect against every potential harm; it is to look at our advancements with a critical eye so that we can make the world a more just place.

Micaela Kaplan received her MS in Computational Linguistics at Brandeis University after graduating with BAs in Linguistics and Computer Science. She hopes to work towards a more ethical world, one project at a time.



