
AI Weekly: Microsoft’s new moves in responsible AI



Want AI Weekly for free each Thursday in your inbox? Sign up here.


We may be enjoying the first few days of summer, but whether it’s Microsoft, Google, Amazon or anything AI-powered, artificial intelligence news never takes a break to sit on the beach, walk in the sun or fire up the BBQ.

In fact, it can be hard to keep up. Over the past few days, for example, all this took place:

  • Amazon’s re:MARS announcements led to media-wide facepalms over possible ethical and security concerns (and overall weirdness) around Alexa’s newfound ability to replicate dead people’s voices.
  • Over 300 researchers signed an open letter condemning the deployment of GPT-4chan.
  • Google released yet another text-to-image model, Parti.
  • I booked my flight to San Francisco to attend VentureBeat’s in-person Executive Summit at Transform on July 19. (OK, that’s not really news, but I’m looking forward to seeing the AI and data community finally come together IRL. See you there?)

But this week, I’m focused on Microsoft’s release of a new version of its Responsible AI Standard – as well as its announcement that it plans to stop selling facial analysis tools in Azure.


Let’s dig in.

Sharon Goldman, senior editor and writer

This week’s AI beat

Responsible AI was at the heart of many of Microsoft’s Build announcements this year. And there’s no doubt that Microsoft has tackled issues related to responsible AI since at least 2018 and has pushed for legislation to regulate facial-recognition technology.

Microsoft’s release this week of version 2 of its Responsible AI Standard is a good next step, AI experts say, though there is more to be done. And while it was hardly mentioned in the Standard, Microsoft’s widely covered announcement that it will retire public access to facial recognition tools in Azure – due to concerns about bias, invasiveness and reliability – was seen as part of a larger overhaul of Microsoft’s AI ethics policies.

Microsoft’s ‘big step forward’ in specific responsible AI standards

According to computer scientist Ben Shneiderman, author of Human-Centered AI, Microsoft’s new Responsible AI Standard is a big step forward from Microsoft’s 18 Guidelines for Human-AI Interaction.

“The new standards are much more specific, shifting from ethical concerns to management practices, software engineering workflows, and documentation requirements,” he said.

Abhishek Gupta, senior responsible AI leader at Boston Consulting Group and principal researcher at the Montreal AI Ethics Institute, agrees, calling the new standard a “much-needed breath of fresh air, because it goes a step beyond high-level principles which have largely been the norm so far.”

Mapping previously articulated principles to specific sub-goals – and to the kinds of AI systems and phases of the AI lifecycle they apply to – makes it an actionable document, he explained. It also means that practitioners and operators “can move past the overwhelming degree of vagueness that they experience when trying to put principles to practice.”

Unresolved bias and privacy risks

Given the unresolved bias and privacy risks in facial-recognition technology, Microsoft’s decision to stop selling its Azure tool is a “very responsible one,” Gupta added. “It is the first stepping stone in my belief that instead of a ‘move fast and break things’ mindset, we need to adopt a ‘responsibly evolve fast and fix things’ mindset.”

But Annette Zimmerman, VP analyst at Gartner, said she believes that Microsoft is doing away with facial demographic and emotion detection simply because the company may have no control over how it’s used.

“It is the continued controversial topic of detecting demographics, such as gender and age, possibly pairing it with emotion and using it to make a decision that will impact this individual that was assessed, such as a hiring decision or selling a loan,” she explained. “Since the main issue is that these decisions could be biased, Microsoft is doing away with this technology including the emotion detection.”

Products like Microsoft’s – SDKs or APIs that can be integrated into applications Microsoft has no control over – are different from end-to-end solutions and dedicated products where there is full transparency, she added.

“Products that detect emotions for market research purposes, storytelling or customer experience – all cases where you don’t make a decision other than improving a service – will still thrive in this technology market,” she said.
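For context, here is roughly what the kind of capability being retired looked like in code – a minimal sketch using the older azure-cognitiveservices-vision-face Python SDK, with a placeholder endpoint, key and image URL rather than real credentials. Following this announcement, attribute requests like these are being phased out for Azure customers.

```python
# pip install azure-cognitiveservices-vision-face
# Minimal sketch of the kind of Face API call being retired; the endpoint,
# key and image URL below are placeholders, not real credentials.
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import FaceAttributeType
from msrest.authentication import CognitiveServicesCredentials

face_client = FaceClient(
    "https://<your-resource>.cognitiveservices.azure.com/",  # placeholder endpoint
    CognitiveServicesCredentials("<your-key>"),              # placeholder key
)

faces = face_client.face.detect_with_url(
    url="https://example.com/portrait.jpg",  # placeholder image
    return_face_attributes=[
        FaceAttributeType.age,      # demographic inference: being retired
        FaceAttributeType.gender,   # demographic inference: being retired
        FaceAttributeType.emotion,  # emotion inference: being retired
    ],
)
for face in faces:
    attrs = face.face_attributes
    print(attrs.age, attrs.gender, attrs.emotion.as_dict())
```

Calls like this, embedded in third-party applications, are exactly the scenario Zimmerman describes: once the SDK is integrated, Microsoft has no visibility into how the inferences are used.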

What’s missing from Microsoft’s Responsible AI Standard

Experts say there is still more work for Microsoft to do when it comes to responsible AI.

What’s missing, said Shneiderman, are requirements for things like audit trails or logging; independent oversight; public incident-reporting websites; availability of documents and reports to stakeholders, including journalists, public interest groups and industry professionals; open reporting of problems encountered; and transparency about Microsoft’s process for internal review of projects.
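To make the first of those gaps concrete, an audit trail for an AI system can be as simple as an append-only, structured log of every prediction. The sketch below is purely illustrative – not Microsoft’s practice – and the generic model object, file path and JSON-serializable inputs are all assumptions.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Purely illustrative audit-trail sketch: every prediction is appended to a
# structured log that an independent reviewer could inspect later.
# Assumes inputs and outputs are JSON-serializable.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))  # hypothetical path

def audited_predict(model, model_version, features):
    """Run a prediction and record an audit entry alongside it."""
    output = model.predict(features)  # generic model object (assumption)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the inputs rather than storing them raw, to limit privacy exposure.
        "input_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    audit_log.info(json.dumps(entry))
    return output
```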

One factor that deserves more attention is accounting for the environmental impacts of AI systems, “especially given the work that Microsoft does towards large-scale models,” said Gupta. “My recommendation is to start thinking about environmental considerations as a first-class citizen alongside business and functional considerations in the design, development, and deployment of AI systems.”

The future of responsible AI

Gupta predicted that Microsoft’s announcements will trigger similar actions from other firms over the next 12 months.

“We might also see the release of more tools and capabilities within the Azure platform that will make some of the standards mentioned in their Responsible AI Standard more broadly accessible to customers of the Azure platform, thus democratizing RAI capabilities towards those who don’t necessarily have the resources to do so themselves,” he said.

Shneiderman said that he hoped other companies would up their game in this direction, pointing to IBM’s AI Fairness 360 and related approaches as well as Google’s People and AI Research (PAIR) Guidebook.
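For a sense of what that tooling looks like in practice, here is a small sketch using IBM’s open-source AI Fairness 360 toolkit to compute disparate impact – the ratio of favorable-outcome rates between groups. The tiny dataframe and group encoding are made up purely for illustration.

```python
# pip install aif360 pandas
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data, made up for illustration: 1 = favorable label / privileged group.
df = pd.DataFrame({
    "gender": [0, 0, 0, 1, 1, 1],
    "label":  [0, 1, 0, 1, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["gender"],
)
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Ratio of favorable-outcome rates, unprivileged / privileged; 1.0 is parity.
print("Disparate impact:", metric.disparate_impact())
```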

“The good news is that large firms and smaller ones are moving from vague ethical principles to specific business practices by requiring some forms of documentation, reporting of problems, and sharing information with certain stakeholders/customers,” he said, adding that more needs to be done to make these systems open to public review: “I think there is a growing recognition that failed AI systems generate substantial negative public attention, making reliable, safe, and trustworthy AI systems a competitive advantage.”


Author: Sharon Goldman
Source: VentureBeat
