How Microsoft, OpenAI, and OECD are putting AI ethics principles into practice

Microsoft’s AI ethics committee helped shape the company’s internal policy on Department of Defense contracts, and G20 member nations wouldn’t have endorsed AI ethics principles if it weren’t for Japanese leadership. That’s according to a case study out this week examining projects at Microsoft, OpenAI, and the OECD.

Published Tuesday, the UC Berkeley Center for Long-Term Cybersecurity (CLTC) case study examines how organizations are putting AI ethics principles into practice. Ethics principles are often vaguely phrased rules that can be challenging to translate into the daily practices of an engineer or other frontline worker. CLTC research fellow Jessica Cussins Newman told VentureBeat that many AI ethics and governance debates have focused on what is needed, but less on the practices and policies necessary to implement the goals enshrined in those principles.

The study focuses on OpenAI’s staged rollout of GPT-2; the adoption of AI principles by the OECD and G20; and the creation of the AI, Ethics, and Effects in Engineering and Research (AETHER) committee at Microsoft. The OECD AI Policy Observatory launched in February to help its 36 member nations convert principles into practice. Newman said the case study includes previously unpublished information about the structure of Microsoft’s AETHER committee and its seven internal working groups, as well as the committee’s role in determining policy, such as the use of facial recognition in a U.S. federal prison.

Also new in the case study is an account of how and why G20 nations endorsed AI ethics principles identical to the OECD’s. Last year, the OECD created the first AI ethics principles to be adopted by the world’s democratic nations.

The study finds AI governance has gone through three stages since 2016: The release of about 85 ethics principles by tech companies and governments marked the first stage, followed by a second stage of consensus around themes like privacy, human control, explainability, and fairness. The third stage, which began in 2019 and continues today, is converting principles into practice. In this stage, Newman argues, businesses and nations that adopted principles will face pressure to keep their word.

“Decisions about how to operationalize AI principles and strategies are currently faced by nearly all AI stakeholders, and are determining practices and policies in a meaningful way,” the report reads. “There is growing pressure on AI companies and organizations to adopt implementation efforts, and those actors perceived to diverge from their stated intentions may face backlash from employees, users, and the general public. Decisions made today about how to operationalize AI principles at scale will have major implications for decades to come, and AI stakeholders have an opportunity to learn from existing efforts and to take concrete steps to ensure that AI helps us build a better future.”

The case study was compiled in part through interviews with leaders at each organization, including Microsoft chief scientist Eric Horvitz and OECD staff.

Due to organizational, technological, and regulatory lock-in effects, Newman believes early efforts like those from Microsoft and the OECD will be especially influential, and a growing universality of AI ethics principles will “lead to increased pressure to establish methods to ensure AI principles and strategies are realized.”

Newman stresses that each case offers lessons, like how AETHER illustrates the need for top-down ethical leadership, though not every approach may be an ideal model for other organizations to replicate.

For example, OpenAI faced pushback from researchers who called its staged release of GPT-2 over nine months a PR stunt or a betrayal of the core scientific process of peer review, but Newman believes OpenAI deserves credit for encouraging developers to consider the ethical implications of releasing a model. She notes that acknowledging and stating the social impact of an AI system is not yet the norm. For the first time this year, NeurIPS, the world’s largest AI research conference, will require authors to address their work’s impact on society and any financial conflicts of interest.

The study also compiles a list of recent initiatives to turn principles into practice, including frameworks, oversight boards, and tools like IBM’s explainable AI toolkit and Microsoft’s InterpretML, as well as privacy regulations like the CCPA in California and the GDPR in the European Union.
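To give a sense of what such tooling looks like in practice, here is a minimal sketch (not drawn from the case study) of using Microsoft’s open source InterpretML package to train a “glassbox” model and surface explanations of its behavior. The dataset and parameter choices are illustrative assumptions; the sketch assumes the interpret and scikit-learn packages are installed.

```python
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Illustrative dataset; the case study does not prescribe one.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable Boosting Machines are additive "glassbox" models, so each
# prediction decomposes into per-feature contributions.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

# Global view: which features matter most across the whole model.
show(ebm.explain_global())

# Local view: why the model scored these specific rows the way it did.
show(ebm.explain_local(X_test.head(5), y_test.head(5)))
```

Unlike post hoc explainers bolted onto a black-box model, this kind of inherently interpretable model lets a practitioner inspect exactly how each feature contributes to a decision, which is one concrete way the report’s “principles into practice” framing shows up in engineering workflows.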

Last month, a group of AI researchers from organizations like Google and OpenAI recommended that organizations implement measures like bias bounties or create a third-party auditing marketplace in order to turn ethics principles into practice, build more robust systems, and ensure that AI remains beneficial to humanity. In March, Microsoft Research leaders, together with AETHER and nearly 50 engineers from a dozen organizations, released an AI ethics checklist for AI practitioners.


Author: Khari Johnson.
Source: VentureBeat
