
AI ethics is all about power

At The Common Good in the Digital Age, a tech conference held in Vatican City recently, Pope Francis urged Facebook executives, venture capitalists, and government regulators to be wary of the impact of AI and other technologies. “If mankind’s so-called technological progress were to become an enemy of the common good, this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest,” he said.

In a different context this summer, Joy Buolamwini testified before Congress, telling Rep. Alexandria Ocasio-Cortez (D-NY) that multiple audits have found facial recognition technology generally works best on white men and worst on women of color.

What these two events have in common is their relationship to power dynamics in the AI ethics debate.

Arguments about AI ethics can rage on without mention of the word “power,” but it’s often there just under the surface. It’s rarely the direct focus, but it needs to be; power in AI is like gravity, an invisible force that influences every consideration of ethics in artificial intelligence.

Power provides the means to influence which use cases are relevant; which problems are priorities; and who the tools, products, and services are made to serve.

It underlies debates about how corporations and countries create policy governing use of the technology.

It’s there in AI conversations about democratization, fairness, and responsible AI. It’s there when Google CEO Sundar Pichai moves AI researchers into his office and top machine learning practitioners are treated like modern day philosopher kings.

It’s there when people like Elon Musk expound on the horrors that future AI technologies may wreak on humanity in decades to come, even though facial recognition technology is already being used today to track and detain China’s Uighur Muslim population on a massive scale.

And it’s there when a consumer feels data protection is hopeless or an engineer knows something is ethically questionable but sees no avenue for recourse.

Broadly speaking, startups may regard ethics as a nice addition but not a must-have. Engineers working to be first to market and to meet product release deadlines can scoff at the notion that precious time be put aside to consider ethics. CEOs and politicians may pay ethics lip service but end up only sending sympathetic signals or engaging in ethics washing.

But AI ethics isn’t just a feel-good add-on — a want but not a need. In fact, AI has been called one of the great human rights challenges of the 21st century. And it’s not just about doing the right thing or making the best AI systems possible, it’s about who wields power and how AI affects the balance of power in everything it touches.

These power dynamics are set to define business, society, government, the lives of individuals around the world, the future of privacy, and even the right to a future. As virtually every AI product manager likes to say, things are just getting started, but failure to address uneven power dynamics in the age of AI can lead to a perilous future.

The labor market and the new Gilded Age

A confluence of trends has led to the present-day reemergence of AI at a precarious time in history. Deep learning, cloud computing, and processors like GPUs that supply the compute power needed to train neural networks quickly have become cornerstones of major tech companies, and they fuel today’s revival.

The fourth industrial revolution is happening alongside historic income inequality and the new Gilded Age. Like the railroad barons who took advantage of farmers anxious to get their crop to market in the 1800s, tech companies with proprietary data sets use AI to further entrench their market position and monopolies.

When data is more valuable than oil, the companies with valuable data have the advantage and are most likely to consolidate wealth or the position of industry leaders. This applies of course to big-name companies like Apple, Facebook, Google, IBM, and Microsoft, but it’s also true of large legacy businesses.

At the same time, mergers and acquisitions by large businesses continue to accelerate, further consolidating power and reinforcing the concentration of research and development in large businesses. A 2018 SSTI analysis found that companies with 250 or more employees account for 88.5% of R&D spending, and companies with 5,000 or more employees account for nearly two-thirds of it.

The growing proliferation of AI could lead to great imbalance in society, according to a recent report from the Stanford Institute for Human-Centered AI (HAI).

“The potential financial advantages of AI are so great, and the chasm between AI haves and have-nots so deep, that the global economic balance as we know it could be rocked by a series of catastrophic tectonic shifts,” reads a proposal from HAI that calls for the U.S. government to invest $120 billion in education, research, and entrepreneurship investments over the next decade.

The proposal’s co-author is former Google Cloud chief AI scientist Dr. Fei-Fei Li. “If guided properly, the age of AI could usher in an era of productivity and prosperity for all,” she said. “PwC estimates AI will deliver $15.7 trillion to the global economy by 2030. However, if we don’t harness it responsibly and share the gains equitably, it will lead to greater concentrations of wealth and power for the elite few who usher in this new age — and poverty, powerlessness, and a lost sense of purpose for the global majority.”

Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy, studies the impact of AI on the future of work and spoke recently at a Stanford AI ethics symposium. Regarding the number of jobs suitable for machine learning that are likely to be replaced in the years ahead, Brynjolfsson said, “If you look at the economy overall, there’s a tidal wave coming. It’s barely hit yet.”

Machine intelligence can be used to redesign and augment workplace tasks, but it is most often used to replace jobs, Brynjolfsson said.

Automation’s impact on job loss is predicted to differ from city to city and state to state, according to both a Brookings Institution analysis and research by Brynjolfsson and Tom Mitchell of Carnegie Mellon University. Fresno, California is expected to be hit harder than Minneapolis, for example, and job instability or loss is expected to disproportionately affect low-income households and people of color. A recent McKinsey report says African-American men are expected to see the greatest job loss as a result of automation.

This follows a trend of median income in the United States remaining stagnant since 2000. Brynjolfsson calls this break between rising productivity and stagnant median income “the great decoupling.”

“For most of the 20th century, those roles in tandem — more production, more wealth, more productivity — went hand in hand with the typical person being better off, but recently those lines have diverged,” he said. “Well, the pie is getting bigger, we’re creating more wealth, but it’s going to a smaller and smaller subset of people.”

Brynjolfsson believes community challenges such as the DARPA autonomous vehicle challenge and ImageNet for computer vision have driven great leaps forward in state-of-the-art AI, but he argues that businesses and the AI community should now turn their attention toward shared prosperity.

“It’s possible for many people to be left behind and indeed, many people have. And that’s why I think the challenge that is most urgent now is not simply more better technology, though I’m all for that, but creating shared prosperity,” he said.

Tech giants and access to power

Another major trend underway as AI spreads is that for the first time in U.S. history, the majority of the workforce are people of color. Most cities in the U.S. will no longer have a racial majority by 2030, and in time neither will the nation as a whole, according to U.S. Census projections.

These demographic shifts make lack of diversity within AI companies all the more glaring. Critically, there’s a lack of race and gender diversity in the creation of decision-making systems — what AI Now Institute director Kate Crawford calls AI’s “white guy problem.”

[Figure: Google 2019 gender and race statistics for technical workforce representation. Image credit: Google 2019 Diversity Report]


Author: Khari Johnson
Source: Venturebeat
