
How to create space for ethics in AI

In a year that has seen decades’ worth of global shocks, bad news, and scandals squeezed into 12 excruciatingly long months, the summer already feels like a distant memory. In August 2020, the world was in the throes of a major social and racial justice movement, and I argued hopefully in VentureBeat that the term “ethical AI” was finally starting to mean something.

It was not a disinterested observation but an optimistic vision for coalescing the ethical AI community around notions of power, justice, and structural change. Yet in the intervening months it has proven to be, at best, an overly simplistic vision and, at worst, a naive one.

The piece critiqued “second wave” ethical AI as being preoccupied with technical fixes to problems of bias and fairness in machine learning. It observed that focusing on technical interventions to address ethical harms skewed the conversation away from issues of structural injustice and permitted the “co-option of socially conscious computer scientists” by big tech companies.

I realize now that this argument minimized the contribution of ethical AI researchers – scientists and researchers inside tech companies, and their collaborators – to the broader justice and ethics agenda. I saw only co-option and failed to highlight the critical internal pushback and challenges to entrenched power structures that ethical AI researchers mount, and the potential of their radical research to change the shape of technologies.

Ethics researchers contribute to this movement just by showing up to work every day, taking part in the everyday practice of making technology and championing a “move slow and fix things” agenda against a tide of productivity metrics and growth KPIs. Many of these researchers are taking a principled stand as members of minoritized groups. I was arguing that a focus on technical accuracy narrows the discourse on ethics in AI. What I didn’t recognize was that such research can itself undermine the technological orthodoxy that is at the root of unethical development of tech and AI.

Google’s decision to fire Dr. Timnit Gebru is clear confirmation that ethical tech researchers represent a serious challenge to the companies where they work. Dr. Gebru is a respected Black computer scientist whose most prominent work has championed technically targeted interventions to ethical harms. Her contract termination by Google has been the subject of much commentary and debate, and it reflects an important point: it doesn’t matter whether “ethical AI” is starting to mean something to those of us working to improve how tech impacts society; it only matters whether it means something to the most powerful companies in the world.

For that reason, Google’s decision to unceremoniously fire an expert, vocal, high-profile employee opens up a critical fault line in the ethical AI agenda and exposes the underbelly of big tech.

An ethical agenda holds that moral principles of right and wrong should shape the development of advanced technologies, even as those technologies are too embryonic, amorphous, or mercurial for existing regulatory frameworks to grasp or restrain at speed. “Ethical AI” aims to plug the gaps with a range of tools – analysis grounded in moral philosophy, critical theory and social science; principles, frameworks and guidelines; risk and impact assessments, bias audits and external scrutiny. It is not positioned as a substitute for law and regulation but as a placeholder for it or a complement to it. Thinking about the ethical issues AI raises should help us identify where regulation is needed, which research should not be pursued, and whether the benefits of technology accrue equitably and sustainably.

But for ethical AI to work, it has to happen where AI research and tech development actually occur: in research institutes, at universities, and especially in tech companies. Small companies building autonomous vehicles, medium-sized AI research labs, and the tech giants building the dominant commerce and communication platforms all need to recognize, internalize, and provide space for thinking about ethics if it is to make a difference. They must make principles of equity and diversity foundational by embracing perspectives, voices, and approaches from across society, particularly racial and gender diversity. Most importantly, they must give such work the weight it deserves by setting up ethics review processes with teeth, sanctioned and supported by senior leadership.

Until now, many companies have talked the talk. Google, Facebook, and DeepMind have all established ethics officers or ethics teams within their AI research departments. Ethics has become a more explicit part of the remit of chief compliance officers and trust and safety departments at many tech companies. Rhetorical commitments to ethics have become mainstream on tech podcasts and at tech conferences.

Outside of corporate structures, the AI research community has confronted head-on its own responsibility to ensure ethical AI development. Most notably, this year the leading AI conference, NeurIPS, required researchers submitting papers to account for the societal impact of their work, as well as any financial conflict of interest.

And yet, as a recent survey of 24 ethical AI practitioners demonstrates, even when companies appoint dedicated ethical AI researchers and practitioners, they consistently fail to create the space and conditions those practitioners need to do their work. Interviewees in the survey “reported being measured on productivity and contributions to revenue, with little value placed on preventing reputational or compliance harm and mitigating risk,” let alone ensuring societal benefit. The survey reveals that corporate actors are unable to operationalize the long-term benefits of ethical AI development when doing so comes at the expense of short-term profit metrics.

The survey also shows that ethical AI practitioners face a risk of retribution or harm for reporting ethical concerns. Some ethics teams report being firewalled from projects that deserved their attention or siloed into addressing narrow slices of much broader problems. Retributive dismissal is more than a theoretical peril for ethical AI researchers, as Dr. Gebru’s case demonstrates: Google fired her after she critiqued the harms and risks of large language models.

If one of the world’s most profitable, influential, and scrutinized companies can’t make space for ethical critique within its ranks, is there any hope for advancing truly ethical AI?

Not unless the structural conditions that underpin AI research and development fundamentally change. And that change begins when we no longer allow a handful of tech companies to maintain complete dominance of the raw materials of AI research: data.

Monopolistic strangleholds in the digital realm disincentivize ethical AI research. They allow a few powerful players to advance AI research that expands their own power and reach, edging out new entrants to the market that might compete. To the extent that consumers view ethical AI as more trustworthy, reliable, and societally legitimate, its adoption could be a byproduct of a more competitive market. But in an environment of constrained consumer choice and concentrated power, there are few business incentives to develop products designed to attract public trust and confidence.

For that reason, in 2021 the most important instruments of ethical AI will be tech regulation and competition reform. The writing is already on the wall: multiple antitrust lawsuits are now pending against the largest platforms in the United States; this week the European Commission announced a package of reforms that will fundamentally reshape platforms and the power they wield; and the UK government has signaled its own intention to impose a regulatory “duty of care” on platforms with regard to online harms. Such reforms should help correct the landscape of tech and AI development, permitting alternative avenues of innovation, stimulating new business models, and clearing away the homogeneity of the digital ecosystem.

However, they will not be a panacea. Big tech’s influence on academic research will not dissipate with competition reform. And while there is likely to be a protracted fight over regulating a handful of key actors, thousands of small and medium tech enterprises must urgently confront the ethical questions AI research provokes with respect to human agency and autonomy; fairness and justice; and labor, wellbeing, and the planet.

To create space for ethical research now — both inside and outside the tech sector — we cannot wait for big tech regulation. We must better understand the culture of technology companies, push for whistleblower protections for ethics researchers, help to upskill regulators, develop documentation and transparency requirements, and run audits and regulatory inspections. And there must be an industry-wide reckoning when it comes to addressing systemic racism and extractive labor practices. Only then will people building technologies be empowered to orient their work towards social good.


Author: Carly Kind
Source: VentureBeat
