Every so often, VentureBeat writes a story about something that needs to go away. A few years back, my colleague Blair Hanley Frank argued that AI systems like Einstein, Sensei, and Watson must go because corporations tend to overpromise results for their products and services. I’ve taken runs at charlatan AI and white supremacy.
This week, a series of events at the intersection of the workplace and AI lent support to the argument that techno-utopianism has no place in the modern world. Among the warning signs in headlines was a widely circulated piece by a journalist who said she was wrong to be optimistic about robots.
The reporter describes how she used to be a techno-optimist but found through her reporting that robots can pull people into their systems and force them to work at a robot’s pace. In the article, she cites the Center for Investigative Reporting’s analysis of internal Amazon records, which found that worker injury rates were higher in Amazon facilities with robots than in facilities without them.
“Dehumanization and intensification of work is not inevitable,” wrote the journalist, who’s quite literally named Sarah O’Connor. Fill in your choice of Terminator joke here.
Also this week: The BBC quoted HireVue CEO Kevin Parker as saying AI is more impartial than a human interviewer. After facing opposition on multiple fronts, HireVue announced last month that it would no longer use facial analysis in its AI-powered video interview analysis of job candidates. Microsoft Teams, meanwhile, gained similar technology this week for recognizing and highlighting attendees who are enjoying a video call.
External auditors have examined the AI used by HireVue and by hiring software company Pymetrics, which refers to its AI as “entirely bias free,” but those audits seem to have raised more questions than they’ve answered.
And VentureBeat published an article about a research paper with a warning: Companies like Google and OpenAI have a matter of months to confront the negative societal consequences of the large language models they release before those models perpetuate stereotypes, displace jobs, or are used to spread disinformation.
What’s important to understand about that paper, written by researchers at OpenAI and Stanford, is the history behind it: before criticism of large language models became widespread, research and dataset audits found major flaws in large computer vision datasets that were over a decade old, like ImageNet and 80 Million Tiny Images. An analysis of face datasets dating back four decades also found ethically questionable practices. In other words, harms baked into widely used AI resources can go unexamined for years.
A day after that article was published, OpenAI cofounder Greg Brockman tweeted what looked like an endorsement of a 90-hour work week. Run the math on that (a quick sketch follows the tweet below): if you slept seven hours a night, you would have about four hours a day left for everything that is not work, like exercise, eating, resting, or spending time with your family.
Agreed: https://t.co/1rZc1Dhgma pic.twitter.com/vESf1DrBn4
— Greg Brockman (@gdb) February 10, 2021
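For anyone who wants to verify that back-of-the-envelope claim, here is a minimal sketch in Python. It assumes nothing beyond the figures in the paragraph above: a 90-hour work week and seven hours of sleep a night.

```python
# Back-of-the-envelope check on the free time left by a 90-hour work week.
HOURS_IN_WEEK = 24 * 7   # 168 hours in a week
WORK_HOURS = 90          # the tweeted work week
SLEEP_HOURS = 7 * 7      # seven hours of sleep a night, 49 hours a week

free_per_week = HOURS_IN_WEEK - WORK_HOURS - SLEEP_HOURS  # 29 hours
free_per_day = free_per_week / 7                          # roughly 4.1 hours

print(f"{free_per_week} free hours a week, about {free_per_day:.1f} a day")
```

That works out to 29 non-work, non-sleep hours a week, or just over four a day, which is where the figure above comes from.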
An end to techno-utopianism doesn’t have to mean the death of optimistic views about ways technology can improve human lives. There are still plenty of people who believe that indoor farming can change lives for the better or that machine learning can accelerate efforts to address climate change.
Google AI ethics co-lead Margaret Mitchell recently made a case for AI design that keeps the bigger picture in mind. In an email sent to company leaders before she was placed under investigation, she said that consideration of ethics and inclusion is part of long-term thinking aimed at long-term beneficial outcomes.
“The idea is that, to define AI research, we must look to where we want to be in the future, working backwards from ideal futures to this moment, right now, in order to figure out what to work on today,” Mitchell said. “When you can ground your research thinking in both foresight and an understanding of society, then the research questions to currently focus on fall out from there.”
With that kind of long-term thinking in mind, Google’s Ethical AI team and Google DeepMind researchers have produced a framework for carrying out internal algorithm audits, questioned the wisdom of scale when addressing societal issues, and called for a culture change in the machine learning community. Google researchers have also advocated rebuilding the AI industry according to principles of anticolonialism and queer AI and evaluating fairness using sociology and critical race theory. And ethical AI researchers recently asserted that algorithmic fairness cannot simply be transferred from Western nations to non-Western nations or those in the Global South, like India.
The death of techno-utopia could entail creators of AI systems recognizing that they may need to work with the communities their technology impacts and do more than simply abide by the scant regulations currently in place. This could benefit tech companies as well as the general public. As Parity CEO Rumman Chowdhury told VentureBeat in a recent story about what algorithmic auditing startups need to succeed, unethical behavior can carry reputational and financial costs that stretch beyond any legal ramifications.
Computer scientists trying to find a path to ‘Ethical AI’ but refusing to learn anything about white supremacy, heteropatriarchy, capitalism, ablism, or settler colonialism pic.twitter.com/vPf0aa4gLa
— Sasha Costanza-Chock (@schock) February 8, 2021
The lack of comprehensive regulation may be why some national governments and groups like Data & Society and the OECD are building algorithmic assessment tools to diagnose risk levels for AI systems.
Numerous reports and surveys have found automation on the rise during the pandemic, and events of the past week remind me of the work of MIT professor and economist Daron Acemoglu, whose research has found that one robot can replace 3.3 human jobs.
In testimony before Congress last fall about the role AI will play in the United States’ economic recovery, Acemoglu warned the committee about the dangers of excessive automation. A 2018 National Bureau of Economic Research (NBER) paper coauthored by Acemoglu acknowledges that automation can create new jobs and tasks, as it has in the past, but warns that excessive automation can constrain labor market growth and may have acted as a drag on productivity growth for decades.
“AI is a broad technological platform with great promise. It can be used for helping human productivity and creating new human tasks, but it could exacerbate the same trends if we use it just for automation,” he told the House Budget Committee. “Excessive automation is not an inexorable development. It is a result of choices, and we can make different choices.”
To avoid excessive automation, in that 2018 NBER paper Acemoglu and his coauthor, Boston University research fellow Pascual Restrepo, call for reforms of the U.S. tax code, because it currently favors capital over human labor. They also call for new or strengthened institutions or policy to ensure shared prosperity, writing, “If we do not find a way of creating shared prosperity from the productivity gains generated by AI, there is a danger that the political reaction to these new technologies may slow down or even completely stop their adoption and development.”
This week’s events involve complexities like robots and humans working together and language models with billions of parameters, but they all seem to raise a simple question: “What is intelligence?” To me, working 90 hours a week is not intelligent. Neither is perpetuating bias or stereotypes with language models or failing to consider the impact of excessive automation. True intelligence takes into account long-term costs and consequences, historical and social context, and, as Sarah O’Connor put it, makes sure “the robots work for us, and not the other way around.”
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,
Khari Johnson
Senior AI Staff Writer
VentureBeat