
AI researchers urge tech to go beyond scale to address systemic social issues

The definition of success for startups and Big Tech companies alike has long been summed up by three words: hockey stick growth. Speedy gains in both users and revenue are the dream for any company looking to scale. But according to a paper recently published by Google senior research scientist Alex Hanna and independent researcher Tina Park, a growing number of AI researchers say companies interested in purpose beyond profit need to consider approaches beyond rapid growth.

The paper argues that scale thinking is not just a way to grow a business, but a method that impacts all parts of that business, actively inhibits participation in tech and society, and “forces particular types of participation to operate as extractive or exploitative labor.”

“Whether people are aware of it or not, scale thinking is all-encompassing. It is not just an attribute of one’s product, service, or company, but frames how one thinks about the world (what constitutes it and how it can be observed and measured), its problems (what is a problem worth solving versus not), and the possible technological fixes for those problems,” the paper reads.

The authors go on to say that companies rooted in scale thinking are unlikely to be as “effective at deep, systemic change as their purveyors imagine. Rather, solutions which resist scale thinking are necessary to undo the social structures which lie at the heart of social inequality.”

This kind of thinking runs counter not only to dogma at the heart of Big Tech companies like Facebook and Google, but also to the way media and analysts typically assess the value of emerging startups.

Earlier this month, Congress released the report of an antitrust investigation that found Big Tech companies rely on scale to maintain and strengthen monopolies across the digital economy. A Department of Justice (DOJ) lawsuit filed Tuesday against Google, the government’s first antitrust case against a major tech company in two decades, also points to the scale achieved through algorithms and the collection of personal user data as a key factor in the decision to sue the Alphabet subsidiary.

That stance puts the paper at odds with scale evangelists like Y Combinator cofounder Paul Graham, AWS CTO Werner Vogels, and former Google CEO Eric Schmidt, who is quoted in the DOJ lawsuit as saying “scale is the key” to Google’s strength in search.

Embedded in scale thinking, Hanna and Park argue, is the idea that scalability is morally good and solutions that cannot scale are morally impoverished. The authors say that’s part of why Big Tech companies place such a high value on artificial intelligence.

“Large tech firms spend much of their time hiring developers who can envision solutions which can be implemented algorithmically. Code and algorithms which scale poorly are seen as undesirable and inefficient. Many of the most groundbreaking infrastructural developments in Big Tech have been those which increase scalability, such as Google File System (and subsequently the MapReduce computing schema) and distributed and federated machine learning models,” the paper reads.
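For readers unfamiliar with the MapReduce schema the paper cites, here is a minimal single-process sketch of the pattern, written as a Python word count. The function names and toy corpus are illustrative only; Google’s actual systems shard the map and reduce stages across thousands of machines, which is exactly the scalability payoff the authors describe.

    from collections import defaultdict
    from itertools import chain

    # Toy sketch of the MapReduce programming model. The real framework
    # distributes these stages across a cluster; this single-process
    # version only shows the shape of the computation.

    def map_phase(document):
        # Emit (key, value) pairs -- here, (word, 1) for every word.
        return [(word, 1) for word in document.split()]

    def shuffle(pairs):
        # Group values by key, as the framework does between stages.
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups):
        # Combine each key's values -- here, by summing the counts.
        return {key: sum(values) for key, values in groups.items()}

    documents = ["scale is the key", "the key is scale"]
    pairs = chain.from_iterable(map_phase(doc) for doc in documents)
    print(reduce_phase(shuffle(pairs)))
    # {'scale': 2, 'is': 2, 'the': 2, 'key': 2}

Even in this toy form, the appeal to scale thinking is visible: every document and every word is treated as an interchangeable unit that can be processed in parallel.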

Hanna and Park also characterize scale thinking as shortsighted because it requires companies to treat resources and people as interchangeable units and encourages the datafication of users in order to “find ways to rationalize the individual into legible data points.” This approach can lead to systems that are not made to serve everyone equally and that negatively impact the lives of those who fall outside their scaled solutions.

The paper also notes that scale thinking is an inefficient way to increase the hiring or retention of employees from diverse backgrounds. Since the deaths of Black Americans like Breonna Taylor and George Floyd led to calls for racial justice earlier this year, a number of major tech companies have recommitted to diversity goals, but for years now progress has been virtually undetectable. Examples cited in the paper include a tendency to focus on bias workshops or inclusion metrics rather than on the experiences of marginalized people within the company.

Rather than making scale a company’s North Star, the authors suggest approaches like “mutual aid,” in which businesses adopt an interdependent model and take responsibility for meeting the direct material needs of individuals. The idea of mutual aid gained renewed attention in part from the kinds of support networks that sprang up in the wake of the COVID-19 pandemic.

“While scale thinking emphasizes abstraction and modularity, mutual aid networks encourage concretization and connection,” the paper reads. “While mutual aid is not the only framework through which we can consider a move away from scale thinking-based collaborative work arrangements, we find it to be a fruitful one to theorize and pursue.”

In addition to exploring mutual aid, the paper encourages developers to ask questions about any system they create, such as whether it legitimizes or expands social systems people are trying to dismantle, whether it encourages broad participation, and whether it centralizes power or distributes power among developers and users.

The recommendations are in line with a range of ethically centered technology models proposed by members of the AI community in recent months. Other approaches include anticolonial AI, which rejects algorithmic oppression and data colonization; queering machine learning; data feminism; and building AI based on the African philosophy of Ubuntu, which focuses on the interconnectedness of people and the natural world.

There’s also “Good Intentions, Bad Inventions,” a Data & Society primer published earlier this month that attempts to dispel common myths about the best ways to build technology and improve user well-being.

Titled “Against Scale: Provocations and Resistances to Scale Thinking,” the paper was highlighted this week at a workshop of the Computer-Supported Cooperative Work and Social Computing (CSCW) conference. Before writing critically about scale, Hanna and colleagues at Google published a paper in late 2019 arguing that the algorithmic fairness community should look to critical race theory as a way to interrogate AI systems and their impact on human lives.





Author: Khari Johnson
Source: VentureBeat

