
Responsible AI is a must for achieving AI at scale


This article is part of a VB special issue. Read the full series here: The quest for Nirvana: Applying AI at scale.

When it comes to applying AI at scale, responsible AI cannot be an afterthought, say experts.

“AI is responsible AI — there’s really no differentiating between [them],” said Tad Roselund, a managing director and senior partner with Boston Consulting Group (BCG).

And, he emphasized, responsible AI (RAI) isn’t something you just do at the end of the process. “It is something that must be included right from when AI starts, on a napkin as an idea around the table, to something that is then deployed in a scalable manner across the enterprise.”

Making sure responsible AI is front and center when applying AI at scale was the topic of a recent World Economic Forum article authored by Abhishek Gupta, senior responsible AI leader at BCG and founder of the Montreal AI Ethics Institute; Steven Mills, partner and chief AI ethics officer at BCG; and Kay Firth-Butterfield, head of AI and ML and member of the executive committee at the World Economic Forum.

“As more organizations begin their AI journeys, they are at the cusp of having to make the choice on whether to invest scarce resources toward scaling their AI efforts or channeling investments into scaling responsible AI beforehand,” the article said. “We believe that they should do the latter to achieve sustained success and better returns on investment.”

Responsible AI may look different for each organization

There is no single agreed-upon definition of RAI. The Brookings Institution defines it as “ethical and accountable” artificial intelligence, but notes that “[m]aking AI systems transparent, fair, secure, and inclusive are core elements of widely asserted responsible AI frameworks, but how they are interpreted and operationalized by each group can vary.”

That means that, at least on the surface, RAI could look a little different from one organization to the next, said Roselund.

“It has to be reflective of the underlying values and purpose of an organization,” he said. “Different corporations have different value statements.”

He pointed to a recent BCG survey that found that more than 80% of organizations think that AI has great potential to revolutionize processes.

“It’s being looked at as the next wave of innovation of many core processes across an organization,” he said.

At the same time, just 25% have fully deployed RAI.

To get it right means incorporating responsible AI into systems, processes, culture, governance, strategy and risk management, he said. When organizations struggle with RAI, it’s because the concept and processes tend to be siloed in one group.

Building RAI into foundational processes also minimizes the risk of shadow AI, or solutions outside the control of the IT department. Roselund pointed out that while organizations aren’t risk-averse, “they are surprise-averse.”

Ultimately, “you don’t want RAI to be something separate, you want it to be part of the fabric of an organization,” he said.

Leading from the top down

Roselund used an interesting metaphor for successful RAI: a race car.

One of the reasons a race car can go really fast and roar around corners is that it has appropriate brakes in place. When asked, drivers say they can zip around the track “because I trust my brakes.”

RAI works the same way for C-suites and boards, he said: when the right processes are in place, leaders can confidently encourage and unlock innovation.

“It’s the tone at the top,” he said. “The CEO [and] C-suite set the tone for an organization in signaling what is important.”

And there’s no doubt that RAI is all the buzz, he said. “Everybody is talking about this,” said Roselund. “It’s being talked about in boardrooms, by C-suites.”

It’s similar to when organizations get serious about cybersecurity or sustainability. Those that do these well have “ownership at the highest level,” he explained.

Key principles

The good news is that AI can ultimately be scaled responsibly, said Will Uppington, CEO of machine learning testing firm TruEra.

Many solutions to AI’s imperfections have been developed and organizations are implementing them, he said, incorporating explainability, robustness, accuracy and bias minimization from the outset of model development.

Successful organizations also have observability, monitoring and reporting methods in place on models once they go live to ensure that the models continue to operate in an effective, fair manner.
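
As a concrete illustration (not any vendor’s actual tooling), such monitoring can be as simple as a recurring job that recomputes accuracy and a demographic-parity gap over recent live predictions and raises an alert when policy thresholds are crossed. A minimal Python sketch, in which the field names and threshold values are illustrative assumptions:

```python
# Minimal live-model monitoring sketch. Thresholds and field names are
# illustrative assumptions, not any vendor's API.
from dataclasses import dataclass

@dataclass
class Prediction:
    predicted: int  # model output (1 = positive decision, 0 = negative)
    actual: int     # observed outcome, once known
    group: str      # protected attribute, used only for fairness auditing

ACCURACY_FLOOR = 0.85      # hypothetical policy threshold
PARITY_GAP_CEILING = 0.10  # hypothetical max gap in positive rates

def positive_rate(preds):
    return sum(p.predicted for p in preds) / len(preds)

def monitor(batch):
    """Return human-readable alerts for a batch of live predictions."""
    alerts = []
    accuracy = sum(p.predicted == p.actual for p in batch) / len(batch)
    if accuracy < ACCURACY_FLOOR:
        alerts.append(f"accuracy {accuracy:.2f} below floor {ACCURACY_FLOOR}")
    rates = {g: positive_rate([p for p in batch if p.group == g])
             for g in {p.group for p in batch}}
    gap = max(rates.values()) - min(rates.values())
    if gap > PARITY_GAP_CEILING:
        alerts.append(f"parity gap {gap:.2f} exceeds ceiling {PARITY_GAP_CEILING}")
    return alerts

# Synthetic batch: accurate overall, but group A gets far more approvals.
batch = ([Prediction(1, 1, "A")] * 8 + [Prediction(0, 0, "A")] * 2
         + [Prediction(1, 1, "B")] * 3 + [Prediction(0, 0, "B")] * 7)
print(monitor(batch))  # -> ['parity gap 0.50 exceeds ceiling 0.1']
```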

“The other good news is that responsible AI is also high-performing AI,” said Uppington.

He identified several emerging RAI principles:

  • Explainability (illustrated in the code sketch after this list)
  • Transparency and recourse
  • Prevention of unjust discrimination
  • Human oversight
  • Robustness
  • Privacy and data governance
  • Accountability
  • Auditability
  • Proportionality (that is, the extent of governance and controls is proportional to the materiality and risk of the underlying model/system)
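
To make the first of these principles concrete: explainability is often approached with model-agnostic techniques such as permutation importance, which scores each input feature by how much shuffling it degrades the model’s performance. A minimal sketch using scikit-learn, where the synthetic dataset and the choice of model are purely illustrative:

```python
# Explainability sketch via permutation importance: shuffle each feature
# and measure how much the model's score drops. Data and model here are
# synthetic placeholders for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, mean in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {mean:.3f}")
```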

Developing an RAI strategy

One generally agreed-upon guide is the RAFT framework.

“That means working through what reliability, accountability, fairness and transparency of AI systems can and should look like at the organization level and across different types of use cases,” said Triveni Gandhi, responsible AI lead at Dataiku.

Working at this level of scale is important, she said, because RAI has strategic implications for meeting a higher-order ambition, and can also shape how teams are organized.

She added that privacy, security and human-centric approaches must be components of a cohesive AI strategy. Managing rights over personal data, and deciding when it is fair to collect or use it, is becoming increasingly important. So is securing AI systems against misuse or manipulation by bad-faith actors.

And, “most importantly, the human-centric approach to AI means taking a step back to understand exactly the impact and role we want AI to have on our human experience,” said Gandhi.

Scaling AI responsibly begins by identifying goals and expectations for AI and defining boundaries on what kinds of impact a business wants AI to have within its organization and on customers. These can then be translated into actionable criteria and acceptable-risk thresholds, a signoff and oversight process, and regular review.
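
What those actionable criteria and signoff steps might look like in code: a lightweight pre-deployment gate that compares a candidate model’s evaluation metrics against policy thresholds and blocks release until every check passes. A hedged sketch; the metric names and threshold values below are assumptions an organization would set for itself:

```python
# Pre-deployment RAI signoff gate. Metric names and threshold values are
# illustrative assumptions, to be set by the organization's own RAI policy.
RAI_THRESHOLDS = {
    "min_accuracy": 0.90,         # baseline performance a model must meet
    "max_parity_gap": 0.05,       # fairness: max gap in positive rates
    "min_explain_coverage": 0.80  # share of decisions with reason codes
}

def signoff(metrics):
    """Check a candidate model's metrics against RAI policy thresholds."""
    failures = []
    if metrics["accuracy"] < RAI_THRESHOLDS["min_accuracy"]:
        failures.append("accuracy below policy floor")
    if metrics["parity_gap"] > RAI_THRESHOLDS["max_parity_gap"]:
        failures.append("demographic-parity gap above policy ceiling")
    if metrics["explain_coverage"] < RAI_THRESHOLDS["min_explain_coverage"]:
        failures.append("explainability coverage below policy floor")
    return (len(failures) == 0, failures)

approved, reasons = signoff(
    {"accuracy": 0.93, "parity_gap": 0.08, "explain_coverage": 0.85}
)
print("approved" if approved else f"blocked: {reasons}")
# -> blocked: ['demographic-parity gap above policy ceiling']
```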

Why RAI?

There’s no doubt that “responsible AI can seem daunting as a concept,” said Gandhi.

“In terms of answering ‘Why responsible AI?’: Today, more and more companies are realizing the ethical, reputational and business-level costs of not systematically and proactively managing risks and unintended outcomes of their AI systems,” she said.

Organizations that can build and implement an RAI framework in conjunction with larger AI governance are able to anticipate and mitigate — even ideally avoid — critical pitfalls in scaling AI, she added.

And, said Uppington, RAI can enable greater adoption by engendering trust that AI’s imperfections will be managed.

“In addition, AI systems can not only be designed to not create new biases, they can be used to reduce the bias in society that already exists in human-driven systems,” he said.

Organizations must consider RAI as critical to how they do business; it is about performance, risk management and effectiveness.

“It’s something that is built into the AI life cycle from the very beginning, because getting it right brings tremendous benefits,” he said.

The bottom line: For organizations that seek to succeed in applying AI at scale, RAI is nothing less than critical. Warned Uppington: “Responsible AI is not just a feel-good project for companies to undertake.”



Author: Taryn Plumb
Source: VentureBeat
