
AI Weekly: The road to ethical adoption of AI



As new principles emerge to guide the development of ethical, safe, and inclusive AI, the industry faces self-inflicted challenges. Guidelines are proliferating (the Organization for Economic Cooperation and Development’s AI repository alone hosts more than 100 documents), yet most remain vague and high-level. And while a number of tools are available, most come without actionable guidance on how to use, customize, and troubleshoot them.

This is cause for alarm because, as the coauthors of a recent paper write, AI’s impacts are hard to assess, especially when they have second- and third-order effects. Ethics discussions tend to focus on futuristic scenarios that may never come to pass and on generalizations so broad that the conversations become untenable. In particular, companies run the risk of engaging in “ethics shopping,” “ethics washing,” or “ethics shirking,” improving their standing with customers to build trust while minimizing accountability.

These points are salient in light of efforts by the European Commission’s High-Level Expert Group on AI (HLEG) and the U.S. National Institute of Standards and Technology, among others, to create standards for building “trustworthy AI.” In a paper, digital ethics researcher Mark Ryan argues that AI isn’t the kind of thing that can be trustworthy, because the category of “trust” simply doesn’t apply to it. As long as AI can’t be held responsible for its actions, he contends, it can’t have the capacity to be trusted.

“Trust is separate from risk analysis that is solely based on predictions based on past behavior,” he explains. “While reliability and past experience may be used to develop, confer, or reject trust placed in the trustee, it is not the sole or defining characteristic of trust. Though we may trust people that we rely on, it is not presupposed that we do.”

Responsible adoption

Productizing AI responsibly means different things to different companies. For some, “responsible” implies adopting AI in a manner that’s ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable — at least in theory.

Recognizing this, organizations must overcome misaligned incentives, disciplinary divides, unclear distributions of responsibility, and other blockers to adopting AI responsibly. Doing so requires an impact assessment framework that’s not only broad, flexible, iterative, possible to operationalize, and guided, but highly participatory as well, according to the paper’s coauthors. They emphasize the need to move beyond anticipating only the impacts assumed to be important and to become more deliberate about deployment choices. To normalize the practice, the coauthors advocate covering these ideas in documentation the same way topics like privacy and bias are covered today.

Another paper, this one from researchers at the Data & Society Research Institute and Princeton, posits “algorithmic impact assessments” as a tool to help AI designers analyze the benefits and potential pitfalls of algorithmic systems. Impact assessments can address transparency, fairness, and accountability by providing guardrails and accountability forums that can compel developers to make changes to AI systems.

This is easier said than done, of course. Algorithmic impact assessments focus on the effects of AI decision-making, which doesn’t necessarily capture harms and may even obscure them, since real harms can be difficult to quantify. But if the assessments are implemented with accountability measures, they can perhaps foster technology that respects, rather than erodes, dignity.
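Neither paper prescribes a specific format for such an assessment, but to make the idea concrete, here is a minimal sketch of how a team might record one alongside a system’s documentation. The field names, severity labels, and example values are hypothetical, invented for illustration rather than drawn from either paper.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Impact:
    """One anticipated impact of an algorithmic system."""
    description: str
    severity: str                  # e.g. "low", "medium", "high"
    affected_groups: List[str]
    mitigation: str = ""           # empty string means no mitigation recorded yet


@dataclass
class ImpactAssessment:
    """A lightweight record of an algorithmic impact assessment."""
    system_name: str
    decision_context: str          # what the system decides or recommends
    accountability_forum: str      # who reviews the assessment and can compel changes
    impacts: List[Impact] = field(default_factory=list)

    def unmitigated_high_severity(self) -> List[Impact]:
        """Return high-severity impacts that still lack a recorded mitigation."""
        return [i for i in self.impacts if i.severity == "high" and not i.mitigation]


# Example usage with invented values
assessment = ImpactAssessment(
    system_name="loan-screening-model",
    decision_context="Recommends approval or denial of small-business loan applications",
    accountability_forum="Internal model risk committee",
    impacts=[
        Impact(
            description="Historical lending data may encode bias against some applicants",
            severity="high",
            affected_groups=["applicants from underrepresented regions"],
        ),
    ],
)

for impact in assessment.unmitigated_high_severity():
    print(f"Needs review before deployment: {impact.description}")
```

The point of a structured record like this is less the data model itself than the accountability it enables: an unmitigated high-severity entry is something a review forum can see and act on, rather than a concern that quietly disappears during deployment.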

As Montreal AI ethics researcher Abhishek Gupta recently wrote in a column: “Design decisions for AI systems involve value judgements and optimization choices. Some relate to technical considerations like latency and accuracy, others relate to business metrics. But each require careful consideration as they have consequences in the final outcome from the system. To be clear, not everything has to translate into a tradeoff. There are often smart reformulations of a problem so that you can meet the needs of your users and customers while also satisfying internal business considerations.”
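Gupta’s point about reformulating a problem rather than accepting a tradeoff can be illustrated with a small, hypothetical model-selection step: instead of maximizing accuracy alone and absorbing whatever latency results, a team can filter candidates against both a latency budget and an accuracy floor, and document why the final choice was made. The candidate models and figures below are invented for illustration, not taken from the column.

```python
from typing import List, NamedTuple, Optional


class Candidate(NamedTuple):
    name: str
    accuracy: float      # validation accuracy, 0.0-1.0
    latency_ms: float    # p95 latency per request


def select_model(
    candidates: List[Candidate],
    latency_budget_ms: float,
    accuracy_floor: float,
) -> Optional[Candidate]:
    """Pick the most accurate candidate that satisfies both constraints.

    Returning None makes an unmet constraint explicit, so the decision
    (and any relaxation of it) has to be documented rather than made silently.
    """
    feasible = [
        c for c in candidates
        if c.latency_ms <= latency_budget_ms and c.accuracy >= accuracy_floor
    ]
    return max(feasible, key=lambda c: c.accuracy) if feasible else None


# Invented candidates for illustration
candidates = [
    Candidate("large-ensemble", accuracy=0.94, latency_ms=420.0),
    Candidate("distilled-model", accuracy=0.92, latency_ms=85.0),
    Candidate("linear-baseline", accuracy=0.86, latency_ms=12.0),
]

choice = select_model(candidates, latency_budget_ms=100.0, accuracy_floor=0.90)
print(choice)  # Candidate(name='distilled-model', accuracy=0.92, latency_ms=85.0)
```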

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat



Author: Kyle Wiggers
Source: VentureBeat
