
Embracing responsibility with explainable AI



Explainability is not a technology issue — it is a human issue. 

Therefore, it is incumbent on humans to be able to explain and understand how AI models arrive at the inferences they do, said Madhu Narasimhan, EVP and head of innovation, strategy, digital and innovation at Wells Fargo.

“That’s a key part of why explainable AI becomes so important,” she emphasized to the audience during a fireside chat at today’s VentureBeat Transform 2023 event. 

Narasimhan explained to the crowd and moderator Jana Eggers, cofounder and CEO of synaptic intelligence platform Nara Logics, that Wells Fargo did a “tremendous amount” of post hoc testing on its Fargo virtual assistant to understand why the model was interpreting language the way that it was.


As it builds out models, the company builds out explainability in parallel, and an independent group of data scientists separately validates the models. 


“We use that as part of our testing to make sure that when a customer starts using the virtual assistant, it’s behaving exactly the way they expect it to,” said Narasimhan. “Because virtual assistants are so common, no other experience will be acceptable.”
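
Narasimhan did not detail the test suite itself, but the idea of post hoc behavioral testing can be sketched in a few lines of Python. Everything below is hypothetical: the classify_intent stub, the sample utterances and the expected intents are illustrative stand-ins, not Wells Fargo’s actual tests.

from typing import Callable

# Stand-in for the deployed assistant's intent classifier (hypothetical).
def classify_intent(utterance: str) -> str:
    return "check_balance" if "balance" in utterance.lower() else "unknown"

# Held-out utterances paired with the intent a human reviewer expects.
EXPECTED = [
    ("What's my checking balance?", "check_balance"),
    ("Show me my account balance", "check_balance"),
    ("Send $50 to my savings", "transfer_funds"),
]

def audit(classifier: Callable[[str], str]) -> list:
    """Return every case where the model's interpretation differs from expectation."""
    mismatches = []
    for text, expected in EXPECTED:
        got = classifier(text)
        if got != expected:
            mismatches.append((text, expected, got))
    return mismatches

for text, expected, got in audit(classify_intent):
    print(f"MISMATCH: {text!r} -> {got} (expected {expected})")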

Behaving the way a human would

Essentially, said Narasimhan, the goal is to have models that behave the way a human would, “because that’s the whole premise of AI.” 

One of the key challenges is that humans are biased, and it is critical to ensure that models are not biased. “We have to protect and manage the bias in the data,” she said. 

As part of its model development process, Wells Fargo looks at all data elements for bias, both at the attribute level and the dataset level, she explained. 
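
She did not walk through the mechanics on stage, but a bias review of that kind can be illustrated with a short, hypothetical Python sketch. The columns and data here are made up; this is one common way to surface skew, not Wells Fargo’s actual process.

import pandas as pd

# Toy data; column names are hypothetical.
df = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "region":   ["west", "east", "west", "east", "west", "east"],
    "approved": [1, 0, 1, 1, 1, 0],
})

SENSITIVE_ATTRIBUTES = ["age_band", "region"]
LABEL = "approved"

# Attribute level: how far apart are outcome rates across groups of each attribute?
for attr in SENSITIVE_ATTRIBUTES:
    rates = df.groupby(attr)[LABEL].mean()
    print(f"{attr}: outcome-rate gap across groups = {rates.max() - rates.min():.2f}")

# Dataset level: is any group so underrepresented that a model cannot learn it fairly?
for attr in SENSITIVE_ATTRIBUTES:
    shares = df[attr].value_counts(normalize=True)
    print(f"{attr}: smallest group share = {shares.min():.2%}")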

Eggers, for her part, noted that while removing bias is important, outright cleaning of data is not. 

“I always tell people, ‘Don’t clean your data,’ because lots of data is dirty and messy,” she said. “And that’s just life, and we have to have models that adjust to that.”

If a machine can tell people what it’s seeing in data, they can then go in and tell it to stop seeing a certain bias, she pointed out. 

“It’s not that I want to take data out,” said Eggers. “It’s that I want to tune and adjust, just like with a human where we want to bring awareness: ‘Hey, you have some bias.’”
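
Eggers’ “tune and adjust” framing can be sketched the same way. Rather than dropping rows, one option is to reweight underrepresented groups during training; the data, column names and model choice below are hypothetical, not her or Wells Fargo’s actual approach.

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy data with an imbalanced "region" group (hypothetical columns).
df = pd.DataFrame({
    "income":   [40, 85, 60, 30, 95, 55],
    "region":   ["east", "east", "east", "east", "west", "west"],
    "approved": [0, 1, 1, 0, 1, 1],
})

# Weight each row inversely to its group's share so no group dominates training.
group_share = df["region"].map(df["region"].value_counts(normalize=True))
sample_weight = 1.0 / group_share

# The data stays in; the model is adjusted instead.
X = pd.get_dummies(df[["income", "region"]])
model = LogisticRegression().fit(X, df["approved"], sample_weight=sample_weight)
print(dict(zip(X.columns, model.coef_[0].round(2))))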

Working together toward explainability

Ultimately, it is important to understand what generative AI can do, as well as its limits, said Narasimhan. Economic forces will keep driving the industry toward ever more complex models, so explainability will continue to be needed to account for unexpected inferences. 

To help support this across the board, Wells Fargo’s data scientists have created an open-access Python toolkit for interpretable machine learning, which the company has shared with other financial institutions. 

“That is what I’m excited about: Being able to develop a tool that’s available in an open access manner that allows everyone to look at how you can inherently explain models,” said Narasimhan. “The more you can explain how we get to the explainability of a model, I think it’s better for us all around.”
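
The toolkit itself was not named or demonstrated on stage. As a rough illustration of the same idea (post hoc, model-agnostic explanation of which inputs drive a model’s inferences), here is a minimal sketch using generic open-source tooling (scikit-learn); it is not Wells Fargo’s toolkit.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for any tabular modeling problem.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much the
# held-out score degrades. It is a simple, model-agnostic explanation technique.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")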



Author: Taryn Plumb
Source: VentureBeat
