
DeepMind researchers propose rebuilding the AI industry on a base of anticolonialism


Researchers from Google’s DeepMind and the University of Oxford recommend that AI practitioners draw on decolonial theory to reform the industry, put ethical principles into practice, and avoid further algorithmic exploitation or oppression.

In a preprint paper released Thursday, the researchers detailed how to build AI systems while critically examining colonialism and colonial forms of AI already in use. The paper was coauthored by DeepMind research scientists William Isaac and Shakir Mohamed and Marie-Therese Png, an Oxford doctoral student and DeepMind Ethics and Society intern who previously provided tech advice to the United Nations.

The researchers posit that power is at the heart of ethics debates and that conversations about power are incomplete if they do not include historical context and recognize the structural legacy of colonialism that continues to inform power dynamics today. They further argue that inequities like racial capitalism, class inequality, and heteronormative patriarchy have roots in colonialism and that we need to recognize these power dynamics when designing AI systems to avoid perpetuating such harms.

“Any commitment to building the responsible and beneficial AI of the future ties us to the hierarchies, philosophy, and technology inherited from the past, and a renewed responsibility to the technology of the present,” the paper reads. “This is needed in order to better align our research and technology development with established and emerging ethical principles and regulation, and to empower vulnerable peoples who, so often, bear the brunt of negative impacts of innovation and scientific progress.”

The paper incorporates a range of suggestions, such as analyzing data colonialism and the decolonization of data relationships, and employing the critical technical practice Philip Agre proposed for AI development in 1997.

The notion of anticolonial AI builds on a growing body of AI research that stresses the importance of including feedback from the people most impacted by AI systems. An article released earlier this week argues that the AI community must ask how systems shift power and asserts that “an indifferent field serves the powerful.” VentureBeat explored how power shapes AI ethics in a special issue last fall. Power dynamics were also a main topic of discussion at the ACM FAccT conference held in early 2020, as more businesses and national governments consider how to put AI ethics principles into practice.

The DeepMind paper interrogates how colonial features appear in algorithmic decision-making systems and what the authors call “sites of coloniality,” or practices that can perpetuate colonial AI. These include beta testing on disadvantaged communities, such as Cambridge Analytica conducting tests in Kenya and Nigeria, or Palantir using predictive policing to target Black residents of New Orleans. There’s also “ghost work,” the practice of relying on low-wage workers for data labeling and AI system development; some argue ghost work can lead to the creation of a new global underclass.

The authors define “algorithmic exploitation” as the ways institutions or businesses use algorithms to take advantage of already marginalized people and “algorithmic oppression” as the subordination of a group of people and privileging of another through the use of automation or data-driven predictive systems.

Ethics principles from groups like the G20 and OECD feature in the paper, as do issues like AI nationalism and the rise of the U.S. and China as AI superpowers.

“Power imbalances within the global AI governance discourse encompasses issues of data inequality and data infrastructure sovereignty, but also extends beyond this. We must contend with questions of who any AI regulatory norms and standards are protecting, who is empowered to project these norms, and the risks posed by a minority continuing to benefit from the centralization of power and capital through mechanisms of dispossession,” the paper reads. Tactics the authors recommend include political community action, critical technical practice, and drawing on past examples of resistance and recovery from colonialist systems.

A number of members of the AI ethics community, from relational ethics researcher Abeba Birhane to the Partnership on AI, have called on machine learning practitioners to place the people most impacted by algorithmic systems at the center of development processes. The paper explores concepts similar to those in a recent paper about how to combat anti-Blackness in the AI community, Ruha Benjamin’s concept of abolitionist tools, and ideas of emancipatory AI.

The authors also incorporate a sentiment expressed in an open letter Black members of the AI and computing community released last month during Black Lives Matter protests, which asks AI practitioners to recognize the ways their creations may support racism and systemic oppression in areas like housing, education, health care, and employment.


Author: Khari Johnson
Source: VentureBeat
