DeepMind researchers say AI poses a threat to people who identify as queer

The impact of AI on people who identify as queer is an underexplored area that ethicists and researchers need to consider, along with including more queer voices in their work. That’s according to a recent study from Google’s DeepMind that looked at the positive and negative effects of AI on people who identify as lesbian, gay, bisexual, transgender, or asexual. Coauthors of the paper include DeepMind senior staff scientist Shakir Mohamed, whose work last year encouraged reforming the AI industry with anticolonialism in mind and “queering” machine learning as a way to bring about more equitable forms of AI.

The DeepMind paper published earlier this month strikes a similar tone. “Given the historical oppression and contemporary challenges faced by queer communities, there is a substantial risk that artificial intelligence (AI) systems will be designed and deployed unfairly for queer individuals,” the paper reads.

Data on queer identity is collected less routinely than data on other characteristics, and because of this gap, the coauthors refer to unfairness for these individuals as “unmeasurable.” In health care settings, for example, people may be unwilling to share their sexual orientation for fear of stigmatization or discrimination. That missing data, the coauthors said, presents unique challenges and could increase risks for people undertaking medical gender transitions.

The researchers note that failure to collect relevant data from people who identify as queer may have “important downstream consequences” for AI system development in health care. “It can become impossible to assess fairness and model performance across the omitted dimensions,” the paper reads. “The coupled risk of a decrease in performance and an inability to measure it could drastically limit the benefits from AI in health care for the queer community, relative to cisgendered heterosexual patients. To prevent the amplification of existing inequities, there is a critical need for targeted fairness research examining the impacts of AI systems in health care for queer people.”
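
To make the measurement problem concrete, here is a minimal sketch of a standard subgroup audit; the data and group names are hypothetical illustrations, not anything from the paper. The audit only works when the demographic attribute was recorded; with no column to slice on, the same check is impossible.

```python
import numpy as np

def per_group_accuracy(y_true, y_pred, group_labels):
    """Compute model accuracy separately for each recorded subgroup."""
    results = {}
    for group in np.unique(group_labels):
        mask = group_labels == group
        results[group] = float(np.mean(y_true[mask] == y_pred[mask]))
    return results

# Hypothetical labels and predictions for six patients.
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 0])

# When a demographic attribute was collected, performance gaps are visible:
recorded = np.array(["group_a", "group_a", "group_b",
                     "group_b", "group_a", "group_b"])
print(per_group_accuracy(y_true, y_pred, recorded))
# {'group_a': 1.0, 'group_b': 0.333...}

# When the attribute was never collected (e.g., sexual orientation omitted
# from intake forms), there is no array to pass, and the disparity above
# simply cannot be computed.
```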

The paper considers a number of ways AI can be used to target queer people or impact them negatively in areas like free speech, privacy, and online abuse. Another recent study found shortcomings in AI for fitness tech, like the Withings smart scale, for people who identify as nonbinary.

On social media platforms, automated content moderation systems can be used to censor content classified as queer, while automated online abuse detection systems are often not trained to protect transgender people from intentional instances of misgendering or “deadnaming.”

On the privacy front, the paper states that AI fairness for queer people is also an issue of data management practices, particularly in countries where revealing a person’s sexual orientation or gender identity can be dangerous. AI cannot recognize a person’s sexual orientation from their face, as a 2017 Stanford University study claimed it could, but the coauthors of that study cautioned that AI could be developed to try to classify sexual orientation or gender identity from online behavioral data. AI that claims to detect people who identify as queer can be used to carry out technology-driven malicious outing campaigns, a particular threat in certain parts of the world.

“The ethical implications of developing such systems for queer communities are far-reaching, with the potential of causing serious harms to affected individuals. Prediction algorithms could be deployed at scale by malicious actors, particularly in nations where homosexuality and gender non-conformity are punishable offenses,” the DeepMind paper reads. “In order to ensure queer algorithmic fairness, it will be important to develop methods that can improve fairness for marginalized groups without having direct access to group membership information.”
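
One family of approaches to fairness without group membership information upweights whatever examples a model currently serves worst, treating high loss as a proxy for membership in an unobserved disadvantaged subgroup (in the spirit of distributionally robust optimization). The sketch below is an illustration on assumed toy data, not the paper’s method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (all values here are synthetic assumptions).
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true + 0.5 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(5)
lr = 0.1
alpha = 0.2  # fraction of worst-off examples each update focuses on

for step in range(500):
    p = sigmoid(X @ w)
    losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    # Take the gradient only over the hardest alpha-fraction of examples,
    # without ever observing a sensitive attribute.
    k = int(alpha * len(losses))
    worst = np.argsort(losses)[-k:]
    grad = X[worst].T @ (p[worst] - y[worst]) / k
    w -= lr * grad

print("worst-case mean loss:", float(np.sort(losses)[-k:].mean()))
```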

The paper recommends applying machine learning that uses differential privacy or other privacy-preserving techniques to protect people who identify as queer in online environments. The coauthors also suggest exploring technical approaches and frameworks that take an intersectional view of fairness when evaluating AI models. The researchers examine the challenge of mitigating the harms AI inflicts not only on people who identify as queer, but also on other groups whose identities or characteristics cannot be simply observed. Solving algorithmic fairness issues for people who identify as queer, the paper argues, can produce insights that are transferable to other unobservable characteristics, like class, disability, race, or religion.
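
As a concrete example of the kind of privacy-preserving technique the paper points to, here is a minimal sketch of the Laplace mechanism applied to a counting query; the function name, data, and epsilon value are illustrative assumptions, not the paper’s implementation.

```python
import numpy as np

def dp_count(values, predicate, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical survey: count respondents reporting a given attribute
# without the released number revealing any individual's answer.
responses = [True, False, True, True, False]
print(dp_count(responses, lambda v: v, epsilon=0.5))
```

Smaller epsilon values add more noise and give stronger privacy; the same primitive underlies more elaborate differentially private model training.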

The paper also cites studies published in the past few years on how AI performs for queer communities.

The DeepMind paper is Google’s most recent work on the importance of ensuring algorithmic fairness for specific groups of people. Last month, Google researchers concluded in a paper that algorithmic fairness approaches developed in the U.S. or other parts of the Western world don’t always transfer to India or other non-Western nations.

But these papers examine how to ethically deploy AI at a time when Google’s own AI ethics operations are associated with some pretty unethical behavior. Last month, it was reported that DeepMind cofounder and ethics lead Mustafa Suleyman had been stripped of most of his management duties before he left the company in 2019, following complaints of abuse and harassment from coworkers; an investigation was subsequently carried out by a private law firm. Months later, Suleyman took a job at Google advising the company on AI policy and regulation, and according to a company spokesperson, he no longer manages teams.

Google AI ethics lead Margaret Mitchell still appears to be under internal investigation, which her employer took the unusual step of sharing in a public statement. Mitchell recently shared an email she said she sent to Google before the investigation started. In that email, she characterized Google’s choice to fire Ethical AI team colead Timnit Gebru weeks earlier as “forever after a really, really, really terrible decision.”

Gebru was fired while she was working on a research paper about the dangers of large language models. Weeks later, Google released a trillion-parameter model, the largest known language model of its kind. A recently published analysis of GPT-3, a 175-billion-parameter language model, concluded that companies like Google and OpenAI have only a matter of months to set standards for addressing the societal consequences of large language models, including bias, disinformation, and the potential to replace human jobs. Following the Gebru incident and meetings with leaders of Historically Black Colleges and Universities (HBCUs), Google pledged earlier this week to fund digital skills training for 100,000 Black women. Prior to accusations of retaliation from former Black female employees like Gebru and diversity recruiter April Curley, Google was accused of mistreatment and retaliation by multiple employees who identify as queer.

Bloomberg reported Wednesday that Google is restructuring its AI ethics research efforts under Google VP of engineering Marian Croak, who is a Black woman. According to Bloomberg, Croak will oversee the Ethical AI team and report directly to Google AI chief Jeff Dean.



Author: Khari Johnson
Source: VentureBeat

