AI & Robotics News

Former White House advisors and tech researchers co-sign new statement against AI harms

Two former White House AI policy advisors, along with more than 150 AI academics, researchers and policy practitioners, have signed a new “Statement on AI Harms and Policy” published by ACM FAccT (the ACM Conference on Fairness, Accountability and Transparency), which is currently holding its annual conference in Chicago.

Alondra Nelson, former deputy assistant to President Joe Biden and acting director of the White House Office of Science and Technology Policy, and Suresh Venkatasubramanian, a former White House advisor who co-authored the “Blueprint for an AI Bill of Rights,” both signed the statement. It comes just a few weeks after the widely shared Statement on AI Risk, in which top AI researchers and CEOs cited concern about human “extinction” from AI, and three months after an open letter called for a six-month “pause” on AI development at scales beyond OpenAI’s GPT-4.

Unlike those previous petitions, the ACM FAccT statement focuses on the current harmful impacts of AI systems and calls for policy grounded in existing research and tools. It says: “We, the undersigned scholars and practitioners of the Conference on Fairness, Accountability, and Transparency welcome the growing calls to develop and deploy AI in a manner that protects public interests and fundamental rights. From the dangers of inaccurate or biased algorithms that deny life-saving healthcare to language models exacerbating manipulation and misinformation, our research has long anticipated harmful impacts of AI systems of all levels of complexity and capability. This body of work also shows how to design, audit, or resist AI systems to protect democracy, social justice, and human rights. This moment calls for sound policy based on the years of research that has focused on this topic. We already have tools to help build a safer technological future, and we call on policymakers to fully deploy them.”

After sharing the statement on Twitter, Nelson cited the opinion of the AI Policy and Governance Working Group at the Institute for Advanced Study, where she currently serves as a professor, having stepped down from the Biden Administration in February.

“The AI Policy and Governance Working Group, representing different sectors, disciplines, perspectives, and approaches, agree that it is necessary and possible to address the multitude of concerns raised by the expanding use of AI systems and tools and their increasing power,” she wrote on Twitter. “We also agree that both present-day harms and risks that have been unattended to and uncertain hazards and risks on the horizon warrant urgent attention and the public’s expectation of safety.”

Other AI researchers who signed the statement include Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), as well as researchers from Google DeepMind, Microsoft, Stanford University and UC Berkeley.





Author: Sharon Goldman
Source: Venturebeat

