
Verizon exec reveals responsible AI strategy amid ‘Wild West’ landscape


Verizon is using generative AI applications to enhance customer support and experience for its more than 100 million phone customers, and is expanding its responsible AI team to mitigate its risks.

Michael Raj, a vice president overseeing AI for Verizon’s network enablement, said the company is implementing several measures as part of this initiative. These include requiring data scientists to register AI models with a central data team to ensure security reviews, and increasing scrutiny of the types of large language models (LLMs) used in Verizon’s applications to minimize bias and prevent toxic language.

AI auditing is like the “Wild West”

Raj spoke during the VentureBeat AI Impact event in New York City last week, where the focus was on how to audit generative AI applications, where the LLMs used can be notoriously unpredictable. He and other speakers agreed that the field of AI auditing is still in its early stages and that companies need to accelerate their efforts in this area, given that regulators have not yet set specific guidelines.

A steady drumbeat of high-profile mistakes by customer support AI agents, from big names like Chevy, Air Canada, and even New York City, as well as by leading LLM providers like Google, whose image generator notoriously depicted Black Nazis, has brought renewed focus to the need for greater reliability.

The technology is advancing so rapidly that government regulators are publishing only high-level guidelines, leaving private companies to fill in the details, said Justin Greenberger, a senior vice president at UiPath, which helps large companies with automation, including generative AI. “In some ways, it feels like the Wild West,” added Rebecca Qian, co-founder of Patronus AI, a company that helps firms audit their LLM projects.

Many companies are currently focused on the first step of AI governance—defining rules and policies for using generative AI. Audits are the next step, ensuring that applications adhere to these policies, but very few companies have the resources to do this correctly, the speakers noted.

A recent Accenture report found that while 96% of organizations support some level of government regulation around AI, only 2% have fully operationalized responsible AI across their operations.

Verizon’s focus is to support agents with smart AI

Raj stated that Verizon aims to be a leading player in applied AI, with a significant focus on equipping frontline employees with a smart conversational assistant to help them manage customer interactions. These customer support or Verizon store agents face an overload of information, but a generative AI-based assistant can alleviate this burden. It can instantly provide agents with personalized information about a customer’s plan and preferences and handle “80 percent of the repetitive stuff,” such as details about different devices and phone plans. This allows agents to focus on the “20 percent of issues where human intervention is actually needed” and make personalized recommendations.

Verizon is also using generative AI and other deep learning technologies to improve customer experience on its network and website, as well as in learning about its products and services. Raj mentioned that the company has implemented models to predict churn propensity among its more than 100 million customers. (See video of his full remarks below).

“One point of contact” for AI safety decisions

Verizon has made a substantial investment in AI governance, including tracking model drift and bias, Raj said. This has been facilitated by consolidating all governance functions into a single “AI and Data” organization, which includes the “Responsible AI” unit. Raj noted that this unit is “scaling up” to drive standards around privacy and respectful language. He said the unit is a necessary “single point of contact” for anything related to AI safety, and works closely with the CISO office as well as procurement executives. Verizon published its responsible AI roadmap earlier this year in a white paper in partnership with Northeastern University (pdf download).

To keep AI models properly managed, Verizon makes approved datasets available to developers and engineers so they can interact directly with vetted models rather than turning to unapproved ones, Raj said.
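Verizon has not published how its model registration works internally. As a purely hypothetical sketch of the pattern Raj describes, a central registry might record each model a data scientist submits and refuse to release it to applications until the central data team's security review has passed (all class and model names below are invented for illustration):

```python
from dataclasses import dataclass


@dataclass
class ModelRecord:
    name: str
    version: str
    owner: str
    security_reviewed: bool = False


class ModelRegistry:
    """Hypothetical central registry: data scientists register models,
    and a security review must pass before an application can use one."""

    def __init__(self):
        self._models = {}

    def register(self, record: ModelRecord) -> None:
        self._models[(record.name, record.version)] = record

    def approve(self, name: str, version: str) -> None:
        # Called by the central data team after its security review.
        self._models[(name, version)].security_reviewed = True

    def get(self, name: str, version: str) -> ModelRecord:
        record = self._models[(name, version)]
        if not record.security_reviewed:
            raise PermissionError(f"{name} v{version} has not passed security review")
        return record


registry = ModelRegistry()
registry.register(ModelRecord("churn-predictor", "1.2", "data-science"))
registry.approve("churn-predictor", "1.2")
print(registry.get("churn-predictor", "1.2").owner)  # data-science
```

The key design point is that `get` is the only path to a model, so an unreviewed model simply cannot reach production code; real registries (and the version control and auditing Greenberger describes below) add far more metadata, such as training data lineage and evaluation results.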

This trend of registering AI models is expected to become more established over time at other B2C companies, said UiPath’s Greenberger. Models will need to be “version controlled and audited,” similar to how pharmaceutical companies handle drugs. He suggested that companies should evaluate their risk profiles more frequently, given the rapid pace of technological change. Legislation to enforce model registration is being debated in the U.S. and other countries, given how these models have been trained on publicly available data, Greenberger added.

The emergence of “AI Governance” units

Most sophisticated companies are setting up centralized AI teams, similar to Verizon’s, Greenberger said. The emergence of “AI Governance” groups is also gaining momentum in many companies. Working with third-party LLM suppliers is also forcing enterprise companies to rethink their approach to collaboration, since each provider offers multiple LLMs with diverse and rapidly changing capabilities.

The nature of generative AI applications is fundamentally different from other technologies, making it difficult to legislate the auditing process. LLMs, by their very nature, provide unpredictable outputs, said Patronus AI’s Qian, leading to failures around safety, bias, hallucinations, and insecure output. This necessitates regulations for each category of these failures and industry-specific regulations, she said. In sectors like transportation or health, failures can mean “life or death,” while in e-commerce recommendations, the stakes are lower, Qian explained.
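Neither Patronus nor regulators have published a standard audit procedure, but the category-by-category framing Qian describes can be illustrated with a rough, hypothetical sketch: score a batch of LLM outputs against one check per failure category, so that high-stakes domains can demand stricter pass rates. The keyword toxicity screen and word-overlap grounding check below are deliberately crude stand-ins for the trained classifiers a real audit would use:

```python
def check_toxicity(output: str) -> bool:
    # Placeholder screen: real audits use trained classifiers, not keyword lists.
    banned = {"idiot", "stupid"}
    return not any(word in output.lower() for word in banned)


def check_grounding(output: str, source: str) -> bool:
    # Crude hallucination proxy: does the output share any words with the source?
    source_words = set(source.lower().split())
    return any(word in source_words for word in output.lower().split())


def audit(outputs: list[str], source: str) -> dict[str, float]:
    """Return the pass rate per failure category over a batch of outputs."""
    checks = {
        "toxicity": lambda o: check_toxicity(o),
        "grounding": lambda o: check_grounding(o, source),
    }
    return {cat: sum(fn(o) for o in outputs) / len(outputs)
            for cat, fn in checks.items()}


results = audit(
    ["Your plan includes 5G access.", "You idiot, read the docs."],
    source="The premium plan includes 5G access and hotspot data.",
)
print(results)  # {'toxicity': 0.5, 'grounding': 1.0}
```

Reporting a separate pass rate per category mirrors Qian’s point: a 50% toxicity pass rate might be tolerable nowhere, while the acceptable grounding threshold could differ between an e-commerce recommender and a health application.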

In the nascent field of AI auditing, creating transparency in models is a significant challenge. Traditional AI could be understood by examining its code, but generative AI is more complex. Even getting the basics of auditing right is a challenge most companies haven’t met, with only about 5% having completed pilot projects focusing on bias and responsible AI, estimated Greenberger.

As the AI landscape continues to evolve at a breakneck pace, Verizon’s commitment to responsible AI can serve as a benchmark for the industry, while the many ways LLMs can fail underscore the critical need for more governance, transparency, and ethical standards in their deployment. See the video of the full speaker Q&A below.

Full disclosure: UiPath sponsored this New York stop of VentureBeat’s AI Impact Tour, but the speakers from Verizon and Patronus were independently selected by VentureBeat. Check out our next stops on the AI Impact Tour, including how to apply for an invite to the next events in San Francisco on July 9-11 (our flagship, VB Transform) and Boston on August 7.

Author: Matt Marshall
Source: Venturebeat
Reviewed By: Editorial Team
