IBM unveils Policy Lab, advocates ‘precision regulation’ of AI

During a panel discussion at the World Economic Forum on Wednesday, hosted by IBM CEO Ginni Rometty and joined by Siemens CEO Joe Kaeser, White House Deputy Chief of Staff for Policy Coordination Chris Liddell, and OECD Secretary-General Angel Gurría, IBM formally announced the IBM Policy Lab, an initiative aimed at providing policymakers with recommendations on emerging problems in technology. The company also outlined a set of priorities to be taken into consideration when weighing AI regulation, including several that directly address compliance and explainability.

The Policy Lab — which soft-launched in November 2019 — serves as a forum providing a “vision” and actionable suggestions to “harness the benefits of innovation while ensuring trust,” according to press materials published this morning. It is led by co-directors Ryan Hagemann, a former senior policy fellow at the International Center for Law and Economics and the Niskanen Center, and Jean-Marc Leclerc, who currently vice-chairs the American Chamber of Commerce to the European Union’s Digital Economy Committee and chairs The Software Alliance’s Europe, Middle East, and Africa Policy Committee. The think tank convenes stakeholders and leaders in public policy, academia, civil society, and tech to formulate ideas that might help tackle global challenges.

Concretely, the IBM Policy Lab will publish studies and research to support industry and government decision-making, and it will develop “bold” policy positions that “look forward to the opportunities of tomorrow” but are intended to be implemented relatively quickly. “Our approach is grounded in the belief that tech can continue to disrupt and improve civil society while protecting individual privacy,” wrote Hagemann and Leclerc in a joint statement. “As technological innovation races ahead, our mission to raise the bar for a trustworthy digital future could not be more urgent.”

On the subject of AI, the IBM Policy Lab calls for what it describes as “precision regulation”: laws that make it incumbent on companies to develop and operate “trustworthy” systems. IBM’s proposed framework takes into account whether companies are providers or owners (or both) of AI systems, as well as the level of risk a given product presents, as determined by (1) the potential for harm associated with the intended use; (2) the level of automation and human involvement; and (3) how substantially the end user relies on the AI system in a given use case.
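
IBM describes this framework only at the policy level and does not publish scoring rules. Purely as an illustrative sketch — every name, scale, and threshold below is a hypothetical assumption, not anything IBM specifies — the three factors might combine into a risk tier along these lines:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AISystemProfile:
    # The three factors IBM's framework names, each scored 0.0-1.0.
    # The scoring scale is a made-up assumption for illustration.
    harm_potential: float      # potential for harm of the intended use
    automation_level: float    # 1.0 = fully automated, 0.0 = human decides
    end_user_reliance: float   # how much the end user depends on the output

def classify_risk(profile: AISystemProfile) -> RiskTier:
    """Map the three factors to a tier (thresholds are illustrative only)."""
    score = (profile.harm_potential
             + profile.automation_level
             + profile.end_user_reliance) / 3
    if score >= 0.66:
        return RiskTier.HIGH
    if score >= 0.33:
        return RiskTier.MEDIUM
    return RiskTier.LOW

# Example: a fully automated lending model that applicants rely on heavily.
lending = AISystemProfile(harm_potential=0.8, automation_level=1.0,
                          end_user_reliance=0.9)
print(classify_risk(lending))  # RiskTier.HIGH
```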

IBM advocates appointing AI ethics officials to ensure providers and owners comply with these expectations. These officials would be accountable for internal guidance and compliance mechanisms such as AI ethics boards, which would oversee risk assessments and harm mitigation strategies, and for improving public acceptance and trust of systems while driving commitments to responsible development, deployment, and stewardship of AI.

The IBM Policy Lab also posits different rules for different levels of system risk: companies would conduct high-level assessments of their AI’s potential for harm, followed by in-depth tests for high-risk applications. In the latter case, IBM says evaluations should be documented in auditable formats and retained for agreed-upon minimum periods of time.
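
IBM doesn’t prescribe what an “auditable format” would look like. One plausible sketch — the schema, field names, and five-year default retention are all assumptions made for illustration — is a timestamped record whose hash makes later tampering detectable:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_assessment(system_id: str, tier: str, findings: dict,
                      retention_years: int = 5) -> dict:
    """Build a tamper-evident assessment record (illustrative schema only).

    IBM's recommendation says only "auditable formats" retained for
    "agreed-upon minimum periods of time"; everything here is assumed.
    """
    record = {
        "system_id": system_id,
        "risk_tier": tier,
        "findings": findings,
        "assessed_at": datetime.now(timezone.utc).isoformat(),
        "retain_until_year": datetime.now(timezone.utc).year + retention_years,
    }
    # Hash the canonical JSON so any later edit is detectable in an audit.
    canonical = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(canonical).hexdigest()
    return record

print(json.dumps(record_assessment(
    "loan-scorer-v2", "high",
    {"harm_review": "completed", "human_oversight": "loan officer signs off"},
), indent=2))
```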

IBM recommends policies of transparency — that is, making the purpose of AI systems clear to consumers and businesses — while acknowledging that low-risk systems might not require exhaustive disclosures. That said, it asserts that any AI system on the market making determinations or recommendations with “potentially significant implications” for individuals should be able to explain and contextualize how and why it arrived at a particular conclusion.

Providers and owners of AI systems should maintain audit trails for their input and training data, according to the IBM Policy Lab, and operators of those systems should make available documentation detailing essential information for consumers (e.g., confidence measures, levels of procedural regularity, and error analysis). IBM further says companies should test their AI for fairness, bias, robustness, and security, and take remedial action both before deployment and after their systems are operationalized. It adds that companies should retain responsibility for ensuring that use of their systems complies with anti-discrimination laws, as well as statutes addressing safety, privacy, financial disclosure, consumer protection, employment, and other sensitive contexts.
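
The article doesn’t say how IBM would have companies test for fairness. One common, minimal check — not something IBM names here — is the disparate impact ratio drawn from U.S. employment guidelines, which compares favorable-outcome rates across groups; the group labels and data below are made up for illustration:

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Per-group rate of favorable outcomes from (group, approved) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Hypothetical loan decisions: group A approved 80/100, group B 55/100.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 55 + [("B", False)] * 45
rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
# The "four-fifths rule" from US employment guidance flags ratios below 0.8.
print(rates, f"ratio={ratio:.2f}", "FLAG" if ratio < 0.8 else "OK")
```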

IBM suggests this might be achieved at the government level by designating existing co-regulatory mechanisms, like the National Institute of Standards and Technology in the U.S., to identify definitions, frameworks, and benchmarks for standards in AI systems. Supporting minority-serving organizations and impacted communities in efforts to engage with academia and industry could accelerate the development of these criteria, and providing various levels of liability safe harbor protections could incentivize the adoption of new standards and validation regimes.

Finally, IBM says that any action or practice prohibited by anti-discrimination laws should continue to be prohibited when it involves an automated decision-making system. “Among companies building and deploying artificial intelligence, and the consumers making use of this technology, trust is of paramount importance,” continued Hagemann and Leclerc. “Companies want the comfort of knowing how their AI systems are making determinations, and that they are in compliance with any relevant regulations, and consumers want to know when the technology is being used and how (or whether) it will impact their lives.”

IBM’s announcements come a day after Google and parent company Alphabet CEO Sundar Pichai called for AI to be regulated with “international alignment,” and a week after it was revealed that the European Commission is considering a five-year ban on facial recognition technologies. Earlier this month, the White House published its own proposed regulatory principles and urged Europe to “avoid heavy-handed innovation-killing models,” and last September Jeff Bezos told a reporter that Amazon is drafting facial recognition legislation to pitch to lawmakers. Separately, Microsoft executives have called on lawmakers to investigate facial recognition and craft policies guiding its use.

Government regulation of AI systems is piecemeal at best globally, and some companies have struggled to adopt lasting policies around development. Notably, Google dissolved an external ethics board designed to monitor its use of AI just one week after forming it.

Author: Kyle Wiggers
Source: VentureBeat
