Google Cloud AI removes gender labels from Cloud Vision API to avoid bias

Google Cloud AI is removing the ability to label people in images as “man” or “woman” with its Cloud Vision API, the company told VentureBeat today. Such labels are used to classify images and train machine learning models, but Google is removing gendered labels because applying them violates Google’s AI principle of avoiding the creation or reinforcement of biased systems.

“Given that a person’s gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the artificial intelligence principles at Google, specifically Principle #2: avoid creating or reinforcing unfair bias. After today, a non-gendered label such as ‘person’ will be returned by Cloud Vision API,” a Google spokesperson told VentureBeat in an email.

VentureBeat reached out to Microsoft’s Azure and Amazon’s AWS to ask if they intend to remove gender labels from their cloud AI services. We will update this story with more information when we hear back.

The Google Cloud Vision API provides computer vision services that let customers detect objects and faces in images. Google previously blocked Gmail’s Smart Compose AI tool from suggesting gender-based pronouns in 2018.
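To illustrate the change described above, here is a minimal sketch of requesting labels from the Cloud Vision API via Google’s Python client library. It assumes the google-cloud-vision package is installed and application default credentials are configured; the detect_labels helper and the photo.jpg path are hypothetical illustrations, not from the article.

```python
# Minimal sketch: label detection with the Cloud Vision API Python client.
# Assumes `pip install google-cloud-vision` and configured credentials.
from google.cloud import vision


def detect_labels(path: str) -> list[str]:
    """Return the label descriptions Cloud Vision assigns to a local image."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)
    return [label.description for label in response.label_annotations]


# Per the policy change, a photo of a person should now yield a
# non-gendered label such as "Person" rather than "Man" or "Woman".
# (The image path here is a hypothetical example.)
print(detect_labels("photo.jpg"))
```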

Many facial analysis and facial recognition systems on the market today predict gender, but they struggle to correctly identify people who do not conform to gender norms, people who are transgender, and women of color.

A study last fall by University of Colorado Boulder researchers found that AI from Amazon, Clarifai, Microsoft, and others maintained accuracy rates above 95% for cisgender men and women but misidentified trans men as women 38% of the time. People who do not identify with a binary gender were misidentified 100% of the time.

Lead author Morgan Klaus Scheuerman told VentureBeat he believes Google is attempting to set itself apart from competitors. Systems from companies like Microsoft can apply labels to people such as ‘waitress,’ ‘airwoman,’ or ‘military woman.’

“We basically discussed [in work last fall] how the decisions that are being made in all systems are inherently political. And in the cases where you’re kind of classifying things about human beings, it becomes more [political]. I think we should assess more what the political notions of that are. And so I’m very excited that Google is kind of taking that seriously,” he told VentureBeat in a phone interview.

In recent years, researchers like Joy Buolamwini performing system audits have found that major facial recognition providers’ systems tend to work best on white men and worst on women of color.

This lack of consistently high performance across all people is a primary reason lawmakers in state legislatures, cities like San Francisco, and the U.S. Senate have proposed bans or moratoriums on the use of facial recognition systems.


Author: Khari Johnson.
Source: VentureBeat
