
Inclusive design will help create AI that works for everyone

A few years ago, a New Jersey man was arrested for shoplifting and spent ten days in jail. He was actually 30 miles away at the time of the incident; police facial recognition software had wrongly identified him.

Facial recognition’s race and gender failings are well known. Often trained on datasets composed primarily of white men, the technology recognizes other demographics less accurately. This is only one example of design that excludes certain demographics. Consider virtual assistants that don’t understand local dialects, humanoid robots that reinforce gender stereotypes or medical tools that don’t work as well on darker skin tones.

Londa Schiebinger, the John L. Hinds Professor of History of Science at Stanford University, is the founding director of the Gendered Innovations in Science, Health & Medicine, Engineering, and Environment Project and is part of the teaching team for Innovations in Inclusive Design.

In this interview, Schiebinger discusses the importance of inclusive design in artificial intelligence (AI), the tools she developed to help achieve inclusive design and her recommendations for making inclusive design a part of the product development process.


Your course explores a variety of concepts and principles in inclusive design. What does the term inclusive design mean?

Londa Schiebinger: It’s design that works for everyone across all of society. If inclusive design is the goal, then intersectional tools are what get you there. We developed intersectional design cards that cover a variety of social factors like sexuality, geographic location, race and ethnicity, and socioeconomic status (the cards won notable distinction at the 2022 Core77 Design Awards). These are factors where we see social inequalities show up, especially in the U.S. and Western Europe. These cards help design teams see which populations they might not have considered, so they don’t design for an abstract, nonexistent person. The social factors in our cards are by no means an exhaustive list, so we also include blank cards and invite people to create their own factors. The goal in inclusive design is to get away from designing for the default, mid-sized male, and to consider the full range of users.

Why is inclusive design important to product development in AI? What are the risks of developing AI technologies that are not inclusive? 

Schiebinger: If you don’t have inclusive design, you’re going to reaffirm, amplify and harden unconscious biases. Take nursing robots as an example. The nursing robot’s goal is to get patients to comply with healthcare instructions, whether that’s doing exercises or taking medication. Human-robot interaction research shows that people interact more with humanoid robots, and we also know that, in real life, about 90% of nurses are women. Does this mean we get better patient compliance if we feminize nursing robots? Perhaps, but if you do that, you also harden the stereotype that nursing is a woman’s profession, and you shut out the men who are interested in nursing. Feminizing nursing robots exacerbates those stereotypes. One interesting idea promotes robot neutrality: you don’t anthropomorphize the robot, and you keep it out of human space. But does this reduce patient compliance?

Essentially, we want designers to think about the social norms that are involved in human relations and to question those norms. Doing so will help them create products that embody a new configuration of social norms, engendering what I like to call a virtuous circle – a process of cultural change that is more equitable, sustainable and inclusive.

What technology product does a poor job of being inclusive?

Schiebinger: The pulse oximeter, which was developed in 1972, was so important during the early days of COVID as the first line of defense in emergency rooms. But we learned in 1989 that it doesn’t give accurate oxygen-saturation readings for people with darker skin. Because the device can overestimate saturation in darker skin, a patient whose reading never drops to 88% may not get the life-saving oxygen they need. And even if they do get supplemental oxygen, insurance companies won’t pay unless the reading reaches a certain threshold. We’ve known about this product failure for decades, but somehow it never became a priority to fix. I’m hoping the experience of the pandemic will make this important fix a priority, because the lack of inclusivity in the technology is causing failures in healthcare.

We’ve also used virtual assistants as a key example in our class for several years now, because we know that voice assistants that default to a female persona are subjected to harassment, and because they again reinforce the stereotype that assistants are female. There’s also a huge challenge with voice assistants misunderstanding African American Vernacular English or English spoken with an accent. To be more inclusive, voice assistants need to work for people with different educational backgrounds, from different parts of the country and from different cultures.

What’s an example of an AI product with great, inclusive design?

Schiebinger: The positive example I like to give is facial recognition. Computer scientists Joy Buolamwini and Timnit Gebru wrote a paper called “Gender Shades,” in which they found that women’s faces were not recognized as well as men’s faces, and darker-skinned people were not recognized as easily as those with lighter skin.

But then they did the intersectional analysis and found that Black women were not recognized 35% of the time. Using what I call “intersectional innovation,” they created a new dataset using parliamentary members from Africa and Europe and built an excellent, more inclusive database for Blacks, whites, men and women. But there is still room for improvement: the database could be expanded to include Asians, Indigenous people of the Americas and Australia, and possibly nonbinary or transgender people.
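
The kind of disaggregated evaluation behind “Gender Shades” is simple to operationalize. Below is a minimal sketch in Python, using hypothetical records rather than any real benchmark, of how a team might compute error rates per intersectional subgroup instead of relying on a single aggregate number:

```python
from collections import defaultdict

# Hypothetical evaluation records: (skin_tone, gender, recognized_correctly).
# In practice these would come from a labeled benchmark such as the
# parliamentary dataset described above.
records = [
    ("darker", "female", False),
    ("darker", "female", True),
    ("darker", "male", True),
    ("lighter", "female", True),
    ("lighter", "male", True),
    ("lighter", "male", False),
]

# Tally outcomes per intersectional subgroup, not per single attribute.
totals = defaultdict(int)
errors = defaultdict(int)
for skin_tone, gender, correct in records:
    group = (skin_tone, gender)
    totals[group] += 1
    if not correct:
        errors[group] += 1

# A single aggregate error rate can hide a subgroup that fails far more often.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} (n={totals[group]})")
```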

For inclusive design, we have to be able to manipulate the database. If you’re doing natural language processing and using the corpus of the English language found online, then you’re going to get the biases that humans have put into that data. There are databases we can control and make work for everybody, but for databases we can’t control, we need other tools, so the algorithm does not return biased results.
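
Schiebinger’s point about uncontrolled corpora can be made concrete with a simple audit. The following is a minimal sketch on a toy corpus (the occupation list, window size and sentences are all invented for illustration) that counts how often occupation words co-occur with gendered pronouns; at web scale, skews like these are exactly what a model trained on the corpus absorbs:

```python
import re
from collections import Counter

# Toy stand-in for a web corpus; a real audit would stream billions of words.
corpus = (
    "The nurse said she would check the chart. "
    "The engineer said he fixed the build. "
    "The nurse noted she was on call. "
    "The engineer said he reviewed the design."
)

PRONOUNS = {"he": "male", "she": "female"}
OCCUPATIONS = {"nurse", "engineer"}
WINDOW = 4  # how many following tokens to scan for a pronoun

tokens = re.findall(r"[a-z]+", corpus.lower())
counts = Counter()

# Count occupation/pronoun pairs that appear within the window.
for i, tok in enumerate(tokens):
    if tok in OCCUPATIONS:
        for near in tokens[i + 1 : i + 1 + WINDOW]:
            if near in PRONOUNS:
                counts[(tok, PRONOUNS[near])] += 1

for (occupation, gender), n in sorted(counts.items()):
    print(f"{occupation} ~ {gender}: {n}")
```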

In your course, students are first introduced to inclusive design principles before being tasked with designing and prototyping their own inclusive technologies. What are some of the interesting prototypes in the area of AI that you’ve seen come out of your class? 

Schiebinger: During our social robots unit, a group of students created a robot called ReCyclops that addresses two problems: 1) people not knowing which plastics go into which recycling bin, and 2) the unpleasant labor of workers sorting through the recycling to determine what is acceptable.

ReCyclops can read the label on an item or listen to a user’s voice input to determine which bin the item goes into. The robots are placed in geographically logical and accessible locations – attaching to existing waste containers – in order to serve all users within a community.
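
The routing step ReCyclops performs is easy to picture once a label has been read or a voice description transcribed. Here is a minimal sketch of such a lookup, with hypothetical resin codes and bin names rather than the students’ actual rules:

```python
# Hypothetical mapping from plastic resin codes (read off an item's label)
# to the bins one municipality accepts; not the students' actual rules.
BIN_FOR_RESIN = {
    "1": "recycle",   # PET bottles
    "2": "recycle",   # HDPE jugs
    "3": "landfill",  # PVC, rarely accepted curbside
    "6": "landfill",  # polystyrene
}

def route_item(resin_code: str) -> str:
    """Return the bin for a scanned resin code, defaulting to landfill
    for unknown codes so contaminants stay out of the recycling stream."""
    return BIN_FOR_RESIN.get(resin_code, "landfill")

print(route_item("1"))  # recycle
print(route_item("7"))  # landfill (unknown code)
```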

How would you recommend that professional AI designers and developers consider inclusive design factors throughout the product development process?

Schiebinger: I think we should first do a sustainability lifecycle assessment to ensure that the computing power required isn’t contributing to climate change. Next, we need to do a social lifecycle assessment that scrutinizes working conditions for people in the supply chain. And finally, we need an inclusive lifecycle assessment to make sure the product works for everyone. If we slow down and don’t break things, we can accomplish this.

With these assessments, we can use intersectional design to create inclusive technologies that enhance social equity and environmental sustainability.

Prabha Kannan is a contributing writer for the Stanford Institute for Human-Centered AI.

Author: Prabha Kannan, Stanford Institute for Human-Centered AI
Source: VentureBeat
