
AI Weekly: UN proposes moratorium on ‘risky’ AI while ICLR solicits blog posts



UN High Commissioner for Human Rights Michelle Bachelet this week called for a moratorium on the sale and use of AI systems that pose “a serious risk to human rights.” Bachelet said adequate safeguards must be put in place before development resumes on such systems and that any systems that can’t be used in compliance with international human rights law should be banned.

“AI can be a force for good, helping societies overcome some of the great challenges of our times. But AI technologies can have negative, even catastrophic, effects if they are used without sufficient regard to how they affect people’s human rights,” Bachelet said.

Of course, defining which systems pose a risk to human rights isn’t necessarily a straightforward task. The Human Rights Council outlines a few examples in a new report, including systems that “deepen privacy intrusions” through the increased use of personal data and “lead to discriminatory decisions.” But as recent comments submitted to the European Parliament and European Council suggest, definitions of “risky” can vary widely, depending on the stakeholder.

As Wired’s Khari Johnson recently wrote, some businesses responding to the European Union’s AI Act — which proposes oversight of “high-risk” AI — believe the legislation goes too far, with innovation-stifling and potentially costly rules. Meanwhile, human rights groups and ethicists maintain it doesn’t go far enough, leaving people vulnerable to those with the resources to deploy powerful algorithms.

While an agreed-upon definition of “risk” remains elusive, particularly as companies like Alphabet’s DeepMind and OpenAI work toward general-purpose, multitasking systems that defy conventional labels, the Human Rights Council’s report identifies ways to help prevent and limit the harms introduced by AI. For example, the report argues that AI development must be equitable and non-discriminatory, with participation and accountability embedded as core parts of the processes. In addition, it asserts that requirements of legality, legitimacy, necessity, and proportionality must be “consistently applied” to AI technologies, which should be deployed in a way “that facilitates the realization of economic, social, and cultural rights.”

“AI now reaches into almost every corner of our physical and mental lives and even emotional states. AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online,” Bachelet said. “We cannot afford to continue playing catch-up regarding AI — allowing its use with limited or no boundaries or oversight and dealing with the almost inevitable human rights consequences after the fact … Action is needed now to put human rights guardrails on the use of AI, for the good of all of us.”

ICLR introduces a blog post track

In other news this week, the International Conference on Learning Representations (ICLR), one of the largest machine learning conferences in the world, announced a call for contributions to the very first Blog Post Track. The goal is to solicit submissions in blog format, allowing researchers to discuss previously published research papers that have been accepted to ICLR.

“[Blog Post Track] recognizes and values summarization work as opposed to novel work,” Sebastien Bubeck, ICLR blog post chair and senior principal research manager at Microsoft Research, told VentureBeat via email. “For example, certain published papers might have difficult and technical mathematical proofs for quite abstract settings. Blog posts in that case might work out a specific subcase of that general theory, distilling the insights into some practical examples. Alternatively, a post might propose a new simpler proof of the same result, or perhaps connect the proof with ideas in other areas of computer science.”

Bubeck believes that encouraging researchers to revisit older, peer-reviewed work could help them highlight studies’ shortcomings and synthesize knowledge across the AI community. He traces the inspiration for the initiative to post–World War II France, when a collective of mathematicians writing under the pseudonym Nicolas Bourbaki set out to produce a series of textbooks on the foundations of mathematics.

“For more applied papers, blog posts might be a good way to revisit experiments, with the overall goal [of helping] with the reproducibility crisis in machine learning. In fact, … contrary to main conference papers, blog posts might focus on smaller-scale experiments, investigating whether certain [phenomena] are due to scale​ or whether they are intrinsic to the architecture or problem at hand,” Bubeck said.

AI, like many scientific fields, has a reproducibility problem. Studies often provide benchmark results in lieu of source code, which becomes problematic when the thoroughness of the benchmarks is in question. One recent report found that 60% to 70% of answers provided by natural language processing models were embedded somewhere in the benchmark training sets, indicating that the models were often simply memorizing the answers. Another study — a meta-analysis of over 3,000 AI papers — found that metrics used to benchmark AI and machine learning models tended to be inconsistent, irregularly tracked, and not particularly informative.
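The overlap problem described above can be checked in a crude way: scan a benchmark’s test answers for verbatim matches in the training corpus. The sketch below is purely illustrative (it is not the methodology of the studies cited, and the `contamination_rate` function and sample data are invented for this example); real contamination analyses use far more sophisticated matching than exact substring search.

```python
# Naive benchmark-contamination check: what fraction of test answers
# appear verbatim somewhere in the training corpus?

def contamination_rate(train_texts, test_answers):
    """Return the fraction of test answers found verbatim in training text."""
    corpus = " ".join(t.lower() for t in train_texts)
    hits = sum(1 for answer in test_answers if answer.lower() in corpus)
    return hits / len(test_answers) if test_answers else 0.0

# Toy data: two of the three answers leak from the training set.
train = ["the capital of france is paris", "water boils at 100 degrees"]
answers = ["Paris", "100 degrees", "Mount Everest"]
print(round(contamination_rate(train, answers), 2))  # -> 0.67
```

A model evaluated on the toy benchmark above could score well on two of three questions by memorization alone, which is the concern the report raises.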

For the first edition of the Blog Post Track at ICLR, Bubeck says the conference chairs will select the reviewers for the submitted posts. Going forward, he hopes to bring blog post tracks to more computer science conferences — not only those focused primarily on AI and machine learning.

“Blog posts provide an opportunity to informally discuss scientific ideas. They offer substantial value to the scientific community by providing a flexible platform to foster open, human, and transparent discussions about new insights or limitations of a scientific publication,” Bubeck said.

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat




