
Trustworthy AI: How to ensure trust and ethics in AI



A pragmatic and direct approach to ethics and trust in artificial intelligence (AI) — who would not want that? This is how Beena Ammanath describes her new book, Trustworthy AI.

Ammanath is the executive director of the Global Deloitte AI Institute. She has had stints at GE, HPE and Bank of America, in roles such as vice president of data science and innovation, CTO of artificial intelligence and lead of data and analytics.

AI we can trust

Discussions about ethics and trust in AI often have a very narrow focus, limited to fairness and bias, which Ammanath says can be frustrating for professionals in the industry. While fairness and bias are relevant aspects, they aren’t the only ones, or even the most important ones. There is a lot of nuance there, and that nuance is part of what Ammanath sets out to address.

What should be talked about when we discuss AI ethics, then? That can be a daunting question to contemplate. For organizations interested not in philosophical exploration but in practical approaches, terms such as “AI ethics” or “responsible AI” can feel convoluted and abstract.

The term “trustworthy AI” has been used in places ranging from the EU Commission to IBM and from the ACM to Deloitte. In her book, Ammanath lists and elaborates on the multiple dimensions she sees as collectively defining trustworthy AI.

The dimensions are: fair and impartial; robust and reliable; transparent; explainable; secure; safe; accountable; responsible; and privacy-preserving. While we couldn’t possibly do justice to all of them here, as the book devotes an entire chapter to each, we can endeavor to convey the thinking behind this definition, as well as the overall approach.

In her role at Deloitte, Ammanath works with organizations looking to apply AI in practice. With AI growing so rapidly, there is much noise around it as well. However, business executives have specific problems they are looking to apply AI to, such as optimizing a supply chain.

There are many options to consider: Perhaps there’s an existing product that can be bought off the shelf, or a startup working on the problem that could potentially be acquired, or a partnership with academia. Ammanath’s book attempts to bring all of that together.

Ammanath is quick to point out that while a lot of research is still happening in AI and the technology isn’t fully mature, AI is already being used in the real world because it drives a lot of value. That also means there are often unforeseen side effects.

While Ammanath strived to identify and elaborate on the dimensions of trustworthy AI, that doesn’t necessarily mean her definition is complete. There may well be additional dimensions that are relevant for specific use cases. And, by and large, prioritizing those dimensions is very much use-case specific too.

Fairness and bias are a good example. Even though those are probably the first things that come to mind when people talk about AI ethics, they aren’t always the most relevant ones, Ammanath points out.

“If you’re building an AI solution that is doing patient diagnosis, fairness and bias are super important. But if you’re building an algorithm that predicts jet engine failure, fairness and bias aren’t as important. Trustworthy AI is really a structure to get you started to think about the dimensions of trust within your organization. To start having those corporate discussions on what are the ways this could go wrong and how do we mitigate it,” said Ammanath.
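To make that prioritization concrete, here is a minimal sketch, in Python, of how a team might record the book’s dimensions and flag which ones warrant a dedicated review for a given use case. The dimension names come from the book; the scores, use cases and dimensions_to_review function are hypothetical illustrations, not something the book prescribes.

```python
# Hypothetical sketch: the trustworthy AI dimensions from the book,
# with illustrative per-use-case priorities (0 = low, 3 = critical).
# Scores and use cases are assumptions for illustration only.

DIMENSIONS = [
    "fair_and_impartial",
    "robust_and_reliable",
    "transparent",
    "explainable",
    "secure",
    "safe",
    "accountable",
    "responsible",
    "privacy_preserving",
]

USE_CASE_PRIORITIES = {
    # Ammanath's own contrast: patient diagnosis vs. jet engine failure.
    "patient_diagnosis": {
        "fair_and_impartial": 3, "privacy_preserving": 3,
        "explainable": 2, "safe": 3,
    },
    "jet_engine_failure": {
        "robust_and_reliable": 3, "safe": 3, "fair_and_impartial": 0,
    },
}

def dimensions_to_review(use_case: str, threshold: int = 2) -> list[str]:
    """Return the dimensions that deserve a dedicated review.

    Dimensions not explicitly scored default to 1 (worth a quick check).
    """
    scores = USE_CASE_PRIORITIES.get(use_case, {})
    return [d for d in DIMENSIONS if scores.get(d, 1) >= threshold]

print(dimensions_to_review("patient_diagnosis"))
# ['fair_and_impartial', 'explainable', 'safe', 'privacy_preserving']
print(dimensions_to_review("jet_engine_failure"))
# ['robust_and_reliable', 'safe']
```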

The importance of asking the right AI questions

Ammanath says that what happens in most businesses today is that ethics tends to be put into a separate bucket. What she has set out to do is help move the topic from the philosophy arena to the real-world arena and equip everyone from CIOs to data scientists to see their role in the big picture and ask the right questions.

The book’s structure reflects that goal. A fictitious company is used as the backdrop for the analysis of each dimension comprising trustworthy AI. At the beginning of each chapter, a scenario is put forth involving company leaders applying AI solutions to solve problems and create value for the company.

Those leaders may set out with the best intentions, but as the scenarios evolve, they find out that many things can go wrong when applying AI in the real world. Those scenarios have been inspired by Ammanath’s own experiences and are used to elaborate on the dimensions of trustworthy AI in a problem-analysis-solution fashion.

One of the key takeaways of the book, however, is precisely the fact that there are no one-size-fits-all solutions. Instead, what Ammanath advocates for is learning to ask the right questions, and that applies to everyone in the organization:

“There’s a belief that ethics is just for data scientists, or just for the IT team, and that’s not true. It’s relevant for the CHRO who might be buying a tool for recruitment. Or the CFO whose team might be using AI in account management or document management. It’s relevant for every C-suite executive. It’s not restricted just to the IT team, or just the CEO,” said Ammanath. “Every person within the organization needs to know what trustworthy AI means and how it applies to their organization. Even if you’re a marketing intern who is part of a vendor discussion, you should know what questions to ask beyond the functionality, such as — what kind of datasets have you trained on?”

In most AI projects, Ammanath added, the focus is on value creation and ROI — cost savings, new products and so on. She suggests that people also spend time thinking about the ways things could go wrong.

As for how this translates into staffing, again, it depends. For organizations building AI products themselves, it would probably make sense to appoint a role like a chief ethicist. For others that simply use AI products, having access to the right expertise may be enough. However, it’s important to remember that AI ethics is something that permeates organizations; it’s not a burden that can be offloaded to a single role.

Ammanath views trustworthy AI through the lens of socio-technical systems and proposes specific approaches organizations can adopt to embrace it. Those approaches are based on identifying the relevant dimensions and cultivating trust through people, processes and technologies.

Because this builds on existing practices, organizations don’t need to start from scratch. Ammanath advocates for amending existing training materials and processes. Simple measures can be put in place, such as a hotline for accessing experts, or adding AI risk factors to project management and software sourcing:

“Basic changes like that in the process are super important to make it relevant. Having these different buckets is a great way to start operationalizing trust, but you’ll never get it fully right,” Ammanath said. “Because it’s all going to be a learning [process] and there are so many different angles to it, you need so many different perspectives. But I think you’ll at least get started.”
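As an illustration of what adding AI risk factors to software sourcing might look like in practice, here is a hypothetical sketch of an existing vendor checklist extended with trustworthy AI questions; one of them is the training-data question from the marketing intern example above. The checklist structure and the other questions are illustrative assumptions, not a process the book mandates.

```python
# Hypothetical sketch: appending AI risk questions to an existing
# software-sourcing checklist, in the spirit of building on current
# processes rather than starting from scratch.

from dataclasses import dataclass, field

@dataclass
class SourcingChecklist:
    vendor: str
    # The existing, purely functional review questions.
    functional_questions: list = field(default_factory=lambda: [
        "Does the tool meet the functional requirements?",
        "What do licensing and support cost?",
    ])
    # AI risk factors appended to the same review (illustrative).
    ai_risk_questions: list = field(default_factory=lambda: [
        "What kind of datasets has the model been trained on?",
        "How are bias and fairness measured and mitigated?",
        "Can individual predictions be explained?",
        "How is user data stored, secured and retained?",
        "Who is accountable when the model gets it wrong?",
    ])

    def all_questions(self) -> list:
        # One combined review, so AI risk becomes part of the normal process.
        return self.functional_questions + self.ai_risk_questions

checklist = SourcingChecklist(vendor="ExampleRecruitingTool")
for question in checklist.all_questions():
    print("-", question)
```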



Author: George Anadiotis
Source: VentureBeat

