Access Now resigns from Partnership on AI due to lack of change among tech companies

International digital and human rights organization Access Now has resigned in protest from its role as a member of the Partnership on AI (PAI), citing a lack of change among the businesses associated with the group and a failure to incorporate the views of civil society organizations. PAI was formed in September 2016 by a consortium of Big Tech companies and corporate giants like Apple, Amazon, Facebook, Google, IBM, and Microsoft. Since then, PAI has grown to include more than 100 member organizations, over half of which are now nonprofit, civic, or human rights-focused groups like Data & Society and Human Rights Watch.

“We have learned from the conversations with our peers, and PAI has afforded us the chance to contribute to the larger discussion on artificial intelligence in a new forum,” a letter published Tuesday reads. “However, we have found that there is an increasingly smaller role for civil society to play within PAI. We joined PAI hoping it would be a helpful forum for civil society to make an impact on corporate behavior and to establish evidence-based policies and best practices that will ensure that the use of AI systems is for the benefit of people and society. While we support dialogue between stakeholders, we did not find that PAI influenced or changed the attitude of member companies or encouraged them to respond to or consult with civil society on a systematic basis.”

Access Now also resigned because the group advocates for a ban on facial recognition and other biometric technologies that can be used for mass surveillance. Earlier this year, the Partnership on AI produced an educational resource on facial recognition for policymakers and the public, but PAI has taken no position on whether the technology should be used. Access Now joined PAI about a year ago, and in the letter addressed to the PAI leadership team, Access Now leaders concluded that PAI is unlikely to change its stance and support a ban on facial recognition.

“The events of this year, from the public health crisis to the global reckoning on racial justice, have only underscored the urgency of addressing the risks of these technologies in a meaningful way,” the letter reads. “As more government authorities around the world are open to imposing outright bans on technologies like facial recognition, we want to continue to focus our efforts where they will be most impactful to achieve our priorities.”

Government use of surveillance technology has been on the rise in democratic and authoritarian nations alike in recent years. The 2020 Freedom on the Net report released today by Freedom House found a year-over-year decline in internet freedom in many parts of the world, and that governments are increasingly using COVID-19 as a pretext to expand surveillance.

The American Civil Liberties Union (ACLU), Amnesty International, and the Electronic Frontier Foundation (EFF), all members of PAI, have led or supported facial recognition bans in major cities, state legislatures, and the U.S. Congress. Conversely, PAI members like Amazon and Microsoft are some of the best-known facial recognition vendors in the world. During the largest protests in U.S. history in June, Amazon and Microsoft announced temporary moratoriums on facial recognition sales to police in the United States. Reform efforts to address the privacy, racial bias, and free speech issues raised by facial recognition may be on the agenda for the next Congress.

More than two years after its founding, PAI began to engage with specific policy and AI ethics issues, such as advocating that governments create special visas for AI researchers. PAI also opposed the use of algorithms in pretrial risk assessments, like the kind the federal Bureau of Prisons used earlier this year to decide which prisoners would be released early due to COVID-19. PAI publicly shares the names of its members but rarely identifies which members contributed to the policy position papers produced by its staff.

In response to the Access Now resignation letter, PAI executive director Terah Lyons told VentureBeat that PAI works closely with tech companies to address and adjust their behavior, and that she hopes that work will come to fruition over the course of the next year. But, she said, engaging in a multi-stakeholder process and trying to reach consensus among diverse voices to ensure AI benefits people and society can be challenging and take time.

“It’s definitely been a learning journey for us,” she said. “It’s also something that takes a lot of time to accomplish to move industry practice in meaningful ways, and because we have just had program work for two years as a pretty young nonprofit organization, I anticipate it will still take us some time to really meaningfully move the needle in that respect, but I think the good news is that we’ve laid a lot of important groundwork, and we’re already starting to see evidence of that paying dividends and some of the incremental choices that our corporate members have made as a result of their engagement.”

Examples of the kinds of incremental change she refers to include companies like Facebook and Microsoft participating in the Deepfake Detection Challenge, which a PAI steering committee oversaw. She also pointed to examples from PAI’s work on fairness, accountability, and transparency but declined to name the specific companies or organizations that took part.

“A lot of the work we did with them on that issue set specifically I think really influenced how they thought about and internally addressed the challenges they face related to those questions, in addition to some of the other companies involved,” she said.

Lyons said PAI chose not to take a stand on facial recognition because the nonprofit assesses each topic on a case-by-case basis to determine where PAI can best have an impact.

“It’s not necessarily the case that on every single question, we are going to be in the best position to take a stance. But we do try to do our best to make sure that we’re providing some sort of service and value in support of making sure these debates as they unfold in public or private settings are as well informed and evidence-based as possible, and that we are equipping and empowering all of our organizations to really be in direct conversation with one another over these tough issues,” she said.

On other AI ethics and policy issues, Lyons said PAI has not produced any research or formed a steering committee to address the role AI plays in the concentration of power by tech companies. Last week, an antitrust subcommittee in the House of Representatives concluded a 16-month investigation with a lengthy report finding that Amazon, Apple, Facebook, and Google are monopolies and that the power consolidated by Big Tech companies threatens competitive markets and democracy. The report also describes artificial intelligence and the acquisition of startups in AI and emerging fields as instrumental to the continued growth of Big Tech companies’ competitive advantage. PAI did, however, create a shared prosperity initiative that will attempt to address how to more equally distribute power and wealth so that the continual concentration of power by tech companies is no longer seen as an inevitability. The shared prosperity group includes a number of noted AI ethics researchers and was detailed in a blog post last month.


Author: Khari Johnson
Source: Venturebeat
