
Responsible AI must be a priority — now

Responsible artificial intelligence (AI) must be embedded into a company’s DNA. 

“Why is bias in AI something that we all need to think about today? It’s because AI is fueling everything we do today,” Miriam Vogel, president and CEO of EqualAI, told a live stream audience during this week’s Transform 2022 event. 

Vogel discussed AI bias and responsible AI in depth in a fireside chat led by Victoria Espinel of the trade group The Software Alliance.

Vogel has extensive experience in technology and policy, including at the White House, the U.S. Department of Justice (DOJ) and the nonprofit EqualAI, which is dedicated to reducing unconscious bias in AI development and use. She also serves as chair of the recently launched National AI Advisory Committee (NAIAC), mandated by Congress to advise the President and the White House on AI policy.


As she noted, AI is becoming ever more significant to our daily lives — and greatly improving them — but at the same time, we must understand its many inherent risks. Everyone — builders, creators and users alike — must make AI “our partner” and ensure that it is efficient, effective and trustworthy. 

“You can’t build trust with your app if you’re not sure that it’s safe for you, that it’s built for you,” said Vogel. 

Now is the time

We must address the issue of responsible AI now, said Vogel, as we are still establishing “the rules of the road.” What constitutes AI remains a sort of “gray area.”

And if it isn’t addressed? The consequences could be dire. People may not be given the right healthcare or employment opportunities as the result of AI bias, and “litigation will come, regulation will come,” warned Vogel. 

When that happens, “We can’t unpack the AI systems that we’ve become so reliant on, and that have become intertwined,” she said. “Right now, today, is the time for us to be very mindful of what we’re building and deploying, making sure that we are assessing the risks, making sure that we are reducing those risks.”

Good ‘AI hygiene’

Companies must address responsible AI now by establishing strong governance practices and policies and establishing a safe, collaborative, visible culture. This has to be “put through the levers” and handled mindfully and intentionally, said Vogel. 

For example, in hiring, companies can begin simply by asking whether platforms have been tested for discrimination. 

“Just that basic question is so extremely powerful,” said Vogel. 
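Vogel didn’t prescribe a particular test, but as a purely illustrative sketch, a first-pass screen might compare a hiring platform’s selection rates across demographic groups using the “four-fifths rule,” a screening heuristic from U.S. employment-selection guidance. The function names and data below are hypothetical, not part of any vendor’s actual tooling.

# A minimal, hypothetical sketch of one basic discrimination screen a
# hiring platform could run. Names and data are invented for illustration.
from collections import Counter

def selection_rates(candidates):
    """Hire rate per demographic group; `candidates` is a list of
    (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in candidates:
        totals[group] += 1
        selected[group] += was_selected  # True counts as 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the highest
    group's rate -- a screening signal, not a legal determination."""
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Hypothetical audit data: (group, was_selected)
data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]

rates = selection_rates(data)
print(rates)                     # {'A': 0.666..., 'B': 0.333...}
print(four_fifths_check(rates))  # {'A': True, 'B': False}

A check like this only surfaces a disparity; deciding what to do about it is exactly the kind of governance question Vogel argues must be owned beyond the engineering team.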

An organization’s HR team must be supported by AI that is inclusive and that doesn’t discount the best candidates from employment or advancement. 

It is a matter of “good AI hygiene,” said Vogel, and it starts with the C-suite. 

“Why the C-suite? Because at the end of the day, if you don’t have buy-in at the highest levels, you can’t get the governance framework in place, you can’t get investment in the governance framework, and you can’t get buy-in to ensure that you’re doing it in the right way,” said Vogel. 

Also, bias detection is an ongoing process: Once a framework has been established, a long-term process must be in place to continuously assess whether bias is creeping into systems. 

“Bias can embed at each human touchpoint,” from data collection, to testing, to design, to development and deployment, said Vogel. 
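As a continuation of the hypothetical sketch above (reusing its selection_rates and four_fifths_check helpers), continuous assessment might simply mean re-running the same screen on each new batch of decisions and alerting when a group slips below the threshold. The batching scheme and alert channel here are assumptions for illustration, not details from the talk.

def monitor_batch(candidates, alert):
    # Re-run the screening check on a fresh batch of (group, was_selected)
    # outcomes and alert on any group that fails the four-fifths heuristic.
    rates = selection_rates(candidates)
    for group, ok in four_fifths_check(rates).items():
        if not ok:
            alert(f"group {group} selection rate {rates[group]:.2f} is "
                  f"below 80% of the best-performing group's rate")

# Example: wire the monitor to a trivial alert sink.
monitor_batch([("A", True), ("A", True), ("B", True), ("B", False)], alert=print)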

Responsible AI: A human-level problem

Vogel pointed out that the conversation about AI bias and AI responsibility was initially limited to programmers, which she feels is “unfair.” 

“We can’t expect them to solve the problems of humanity by themselves,” she said. 

It’s human nature: people often imagine only as broadly as their experience or creativity allows. So the more voices that can be brought in, the better, both to determine best practices and to ensure that the age-old issue of bias doesn’t infiltrate AI. 

This is already underway, with governments around the world crafting regulatory frameworks, said Vogel. The EU is creating a GDPR-like regulation for AI, for instance. In the U.S., the Equal Employment Opportunity Commission (EEOC) and the DOJ recently issued an “unprecedented” joint statement on reducing discrimination against people with disabilities, something AI and its algorithms could worsen if left unchecked. The National Institute of Standards and Technology (NIST) has also been congressionally mandated to create a risk management framework for AI. 

“We can expect a lot out of the U.S. in terms of AI regulation,” said Vogel. 

This includes the recently formed committee that she now chairs. 

“We are going to have an impact,” she said.

Don’t miss the full conversation from the Transform 2022 event.



Author: Taryn Plumb
Source: VentureBeat

