Mastercard, eBay and Capital One talk equitable generative AI and innovation

Women in AI Breakfast

The Women in AI Breakfast, sponsored for the third year in a row by Capital One, kicked off this year’s VB Transform: Get Ahead of the Generative AI Revolution. Over one hundred attendees gathered live and the session was livestreamed to a virtual audience of over 4,000. Sharon Goldman, senior writer at VentureBeat, welcomed Emily Roberts, SVP, head of enterprise consumer product at Capital One, JoAnn Stonier, fellow of data and AI at Mastercard, and Xiaodi Zhang, VP, seller experience at eBay.

Last year, the open-door breakfast discussion tackled predictive AI, governance, minimizing bias and avoiding model drift. This year, generative AI kicked in the door, and it’s dominating conversations across industries — and breakfast events.

Building a foundation for equitable gen AI

Customers and executives alike are fascinated by the opportunity, but for most companies, generative AI still hasn’t fully taken shape, Roberts said.

“A lot of what we’ve been thinking about is how do you build continuously learning organizations?” she said. “How do you think about the structure in which you’re going to actually apply this to our thinking and in the day-to-day?”

And a huge part of the picture is ensuring that you’re building diversity of thought and representation into these products, she added. The sheer number of experts involved in creating these projects and seeing them to completion, from product managers, engineers and data scientists to business leaders across the organization, yields even more opportunity to make equity the foundation.

“A big part of what I want us to be really thinking about is how do we get the right people in the conversation,” Roberts said. “How do we be extraordinarily curious and make sure the right people are in the room, and the right questions are being asked so that we can include the right people in that conversation.”

Part of the issue is, as always, the data, Stonier noted, especially with public LLMs.

“I think now one of the challenges we see with the public large language models that is so fascinating to think about, is that the data it’s using is really, really historically crappy data,” she explained. “We didn’t generate that data with the use [of LLMs] in mind; it’s just historically out there. And the model is learning from all of our societal foibles, right? And all of the inequities that have been out there, and so those baseline models are going to keep learning and they’ll get refined as we go.”

The crucial thing for the industry to do is ensure the right conversations are taking place: drawing borders around what exactly is being built, what outcomes are expected, and how those outcomes will be assessed as companies build their own products on top of these models. It also means noting potential issues before they crop up, so that you’re never taken unaware, particularly in financial services and especially in terms of fraud.

“If we have bias in the data sets, we have to understand those as we’re applying this additional data set on a new tool,” Stonier said. “So, outcome-based [usage] is going to become more important than purpose-driven usage.”

It’s also crucial to invest in these guardrails right from the start, Zhang added, which right now means figuring out what those guardrails look like and how they can be integrated.

“How do we have some of the prompts in place and constraints in place to ensure equitable and unbiased results?” she said. “It is definitely a completely different sphere compared to what we are used to, so that it requires all of us to be continuously learning and being flexible and being open to experimenting.”
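The prompts and constraints Zhang describes could take many forms; as a purely illustrative sketch (all names, terms and policies here are invented, not eBay's actual approach), a guardrail layer might pair a fixed system prompt with an output-side check applied before any model response reaches a user:

```python
# Hypothetical guardrail sketch: a fixed system prompt plus an
# output-side constraint check. Terms and policy are illustrative only.

SYSTEM_PROMPT = (
    "Answer neutrally. Do not make assumptions about a person based on "
    "age, gender, race, or other protected attributes."
)

# Illustrative constraint: flag responses that introduce protected
# attributes the user never mentioned.
PROTECTED_TERMS = {"age", "gender", "race", "religion", "nationality"}

def violates_constraints(user_input: str, model_output: str) -> bool:
    """Return True if the output raises protected attributes unprompted."""
    asked = {t for t in PROTECTED_TERMS if t in user_input.lower()}
    produced = {t for t in PROTECTED_TERMS if t in model_output.lower()}
    return bool(produced - asked)

def guarded_response(user_input: str, raw_output: str) -> str:
    """Pass the model output through only if it clears the check."""
    if violates_constraints(user_input, raw_output):
        return "[response withheld pending review]"
    return raw_output
```

A real system would use far richer checks than keyword matching, but the shape is the point: the constraint sits between the model and the user, so biased output can be caught before it ships.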

Well-managed, well-governed innovation

While risks remain, companies are cautious about launching new use cases; instead, they’re investing time in internal innovation to get a better look at what’s possible. At eBay, for instance, a recent hackathon was entirely focused on gen AI.

“We really believe in the power of our teams, and I wanted to see what our employees can come up with, leveraging all the capabilities and just using their imagination,” Zhang said. “It was definitely a lot more than the executive team can even imagine. Something for every company to consider is leverage your hackathon, your innovation weeks and just focus on generative AI and see what your team members can come up with. But we definitely have to be thoughtful about that experimentation.”

At Mastercard, internal innovation is encouraged, but the company recognized the need to put up guardrails for experimentation and for the submission of use cases. It’s seeing applications like knowledge management, customer service and chatbots, advertising, and media and marketing services, as well as refining interactive tools for its customers. But it won’t put those in front of the public until it can eliminate the possibility of bias.

“This tool can do lots of powerful things, but what we’re finding is that there’s a concept of distance that we are trying to apply, where the more important the outcome, the more distance between the output and applying,” Stonier said. “For healthcare we would hate for the doctors’ decisions to be wrong, or a legal decision to be wrong.”
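Stonier's "distance" concept might be sketched as a review policy keyed to how consequential a decision is: the higher the stakes, the more human checkpoints sit between the model's output and any action taken on it. The tiers and step counts below are invented for illustration, not Mastercard's actual policy:

```python
# Illustrative sketch of the "distance" concept: more consequential
# outcomes require more human review before a model output is applied.
# Tiers and review counts are hypothetical.

REVIEW_STEPS = {
    "low": 0,     # e.g. internal draft copy: use output directly
    "medium": 1,  # e.g. customer-facing text: one human sign-off
    "high": 2,    # e.g. credit, legal or medical decisions: two sign-offs
}

def required_reviews(risk_tier: str) -> int:
    """Human review steps required before a model output may be applied."""
    return REVIEW_STEPS[risk_tier]

def may_apply(risk_tier: str, completed_reviews: int) -> bool:
    """A model output may only be acted on once enough reviews are done."""
    return completed_reviews >= required_reviews(risk_tier)
```

The design choice is that "distance" is a property of the outcome, not the model: the same output can ship directly in a low-stakes context and still be held for review in a high-stakes one.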

Regulations have already been modified to include generative AI, but at this point companies are still scrambling to understand what documentation will be required going forward: what regulators will be looking for as companies experiment, and how they will be required to explain their projects as those projects progress.

“I think you need to be ready for those moments as you launch — can you then demonstrate the thoughtfulness of your use case in that moment, and how you’re probably going to refine it?” Stonier said. “So I think that’s what we’re up against.”

“I think the technology has leapfrogged regular regulations, so we need to all be flexible and design in a way for us to respond to regulatory decisions that come down,” Zhang said. “Something to be mindful of, and indefinitely. Legal is our best friend right now.”

Roberts noted that Capital One rebuilt its fraud platform from the ground up to harness the power of the cloud, data and machine learning. Now more than ever, it’s about considering how to build the right experiments and ladder up to the right applications.

“We have many, many opportunities to build in this space, but doing so in a way that we can experiment, we can test and learn and have human-centered guardrails to make sure we’re doing so in a well-managed, well-governed way,” she explained. “Any emerging trend, you’re going to see potentially regulation or standards evolve, so I’m much more focused on how do we build in a well-managed, well-controlled way, in a transparent way.”



Author: VB Staff
Source: VentureBeat
