
Responsible AI is a top management concern, so why aren’t organizations deploying it? 



Even though responsible artificial intelligence (AI) is considered a top management concern, a newly released report from Boston Consulting Group and MIT Sloan Management Review finds that few leaders are prioritizing initiatives to make it happen.

Of the 84% of respondents who believe that responsible AI should be a top management priority, only 56% said that it is, in fact, a top priority — with only 25% of those reporting that their organization has a fully mature program in place, according to the research.

Further, only 52% of organizations reported having a responsible AI program in place, and 79% of those programs are limited in scale and scope, the BCG/MIT Sloan report said. Among the less than half of organizations that view responsible AI as a top strategic priority, only 19% confirmed they have a fully implemented responsible AI program.

This indicates that responsible AI lags behind strategic AI priorities, according to the report.


Factors working against the adoption of responsible AI include a lack of agreement on what “responsible AI” means along with a lack of talent, prioritization and funding.

Meanwhile, AI systems across industries are susceptible to failures, with nearly a quarter of respondents stating that their organization has experienced issues ranging from mere lapses in technical performance to outcomes that put individuals and communities at risk, according to the research.

Why responsible AI isn’t happening and why it matters

Responsible AI is not being prioritized because of the competition for management’s attention, Steve Mills, chief AI ethics officer and a managing director and partner at BCG, told VentureBeat.

“Responsible AI is fundamentally about a cultural transformation and this requires support from everyone within an organization, from the top down,” Mills said. “But today, many issues compete for management’s attention — evolving ways of working, global economic conditions, lingering supply chain challenges — all of which can down-prioritize responsible AI.”

The regulatory environment also remains uncertain, even as AI-specific laws emerge in jurisdictions around the world, he said.

“On the surface, this should accelerate [the] adoption of responsible AI, but many regulations remain in draft form and specific requirements are still emerging. Until companies have a clear view of the requirements, they may hesitate to act,” Mills said.

He stressed that companies need to move quickly. Less than half of respondents reported feeling prepared to address emerging regulatory requirements — even among responsible AI leaders, only 51% reported feeling prepared.

“At the same time, our results show that it takes companies three years on average to fully mature responsible AI,” he said. “Companies cannot wait for regulations to settle before getting started.”

There is also a perception challenge.

“Much of the hesitation and skepticism regarding responsible AI revolves around a common misconception that it slows down innovation due to the need for additional checklists, reviews and expert engagement,” Mills said. “In fact, we see that the opposite is true. Nearly half of responsible AI leaders report that their responsible AI efforts already result in accelerated innovation.”

Responsible AI can be difficult to deploy

Mills acknowledged that responsible AI can be hard to implement, but said, “the payoff is real.”

Once leaders prioritize and give attention to responsible AI, they still need to provide appropriate funding and resources and build awareness, he said. “Even once those early issues are resolved, access to responsible AI talent and training present lingering challenges.”

Yet Mills made the case for companies to overcome these challenges, saying the rewards are clear. “Responsible AI yields products that are more trusted and better at meeting customer needs, producing powerful business benefits,” he said.

Having a leading responsible AI program in place reduces the risk of scaling AI, according to Mills.

“Companies that have leading responsible AI programs and mature AI report 30% fewer AI system failures than those with mature AI alone,” he said.

This makes sense, intuitively, Mills said, because as companies scale AI, more systems are deployed and the risk of failures increases.

A leading responsible AI program offsets that risk, reducing the number of failures and identifying them earlier, minimizing their impact.

Additionally, companies with mature AI and leading responsible AI programs report more than twice the business benefits of those with mature AI alone, Mills said.

“The human-centered approaches that are core to responsible AI lead to stronger customer engagement, trust and better-designed products and services,” he said.

“More importantly,” Mills added, “it’s simply the right thing to do and is a key element of corporate social responsibility.”



Author: Esther Shein
Source: VentureBeat
