
How responsible AI creates measurable ROI



Even in the midst of an economic downturn, artificial intelligence (AI) adoption in enterprises around the world is still climbing. IBM's recently released 2022 AI Adoption Index, for example, reports that the AI adoption rate is around 35%, up four percentage points from one year ago. It also found that despite rising adoption, 74% of companies admit they haven't taken any steps to ensure their AI is responsible and bias-free.

The question is, why not?

Navrina Singh, CEO and founder of Palo Alto-based Credo AI, which in April announced what it claims is the first responsible AI governance platform, says it's because companies are burnt out by the way the conversation around responsible AI is typically framed, and that getting more people on board begins with changing that conversation. While definitions of responsible AI vary, Accenture describes it as "the practice of designing, developing and deploying AI with good intention to empower employees and businesses, and fairly impact customers and society — allowing companies to engender trust and scale AI with confidence."

“I think it’s really talking about the ROI [return on investment] of responsible AI — the ROI of RAI,” she said. “Enterprises are not focusing on the positive aspects — or the ROI of it. I think we need to change the conversation from one of the soft metrics to actual ROI of trust and actual ROI of responsible AI.” Secondly, she added, organizations need to be shown a pathway to implementing responsible AI.


Balancing risk and trust with governance

Just as humans have a psychological fight-or-flight instinct, Singh explained, she has noticed a similar phenomenon in enterprise adoption of AI: a constant balancing of risk and trust.

“When I think about AI, it is literally risk or trust,” she said, adding that right now company leaders emphasize just getting by on compliance so they are not on anyone’s risk radar.

“I think that is a very dangerous mentality for enterprises to own, especially as they’re bringing more and more machine learning into their operations,” she said. “What my ask of this community is to sort of take a bet on themselves and on artificial intelligence so that they can understand what trust by design can bring to their organizations.” 

Singh is part of the National AI Advisory Committee, which advises President Biden and the national AI team on upcoming regulations and policies. She founded Credo AI in early 2020 after noticing, while working on AI products at Qualcomm and Microsoft, that conversations around governance were often happening too late in the game.

“I think there’s a misplaced notion that if you are adding those governance compliance risk checks earlier in the game, that is just going to slow down our technology,” she said. “What we started to see was there was actually an added benefit — one where not only were the systems performing better, but these systems were building trust at scale.” 

Under the hood

Credo AI has two offerings: one is a software-as-a-service (SaaS) tool that runs in the cloud and works with AWS and Azure; the other is an on-premises deployment for more regulated industries. The platform sits on top of an enterprise’s machine learning (ML) infrastructure and has three main components (sketched in code after the list):

  1. Setting requirements: The platform pulls in requirements that serve as guidelines for the tool. These can include any regulation, such as New York’s upcoming AI law, as well as company values or policies.
  2. Technical assessment: Next, the tool performs a technical assessment against these guidelines. Its open-source assessment framework, called Credo Lens, interrogates a company’s models and datasets against the guidelines to see what matches and where there may be pitfalls. The professionals responsible for the AI must then provide evidence against these requirements.
  3. Generating governance artifacts: Once the technical assessments have run against a company’s models, processes and datasets, per the regulations and parameters set, Credo AI creates reports, audits and documentation that can be easily shared among stakeholders for transparency.
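The article doesn't detail Credo AI's actual interfaces, so what follows is a minimal, hypothetical Python sketch of the general pattern those three components describe: requirements become machine-checkable rules, an assessment measures model outputs against them, and the results are serialized into a shareable governance artifact. Every name here (Requirement, assess, governance_report, parity_gap) is invented for illustration; none of it is Credo AI's or Lens's real API.

```python
import json
from dataclasses import asdict, dataclass
from typing import Callable, Dict, List

# --- 1. Setting requirements (hypothetical API) ---------------------------
# A regulation or company policy is reduced to a named, machine-checkable
# rule: a metric function plus an acceptable threshold.
@dataclass
class Requirement:
    name: str                                        # e.g., "demographic_parity_gap"
    threshold: float                                 # maximum acceptable value
    check: Callable[[Dict[str, List[int]]], float]   # computes the metric

# --- 2. Technical assessment -----------------------------------------------
# Interrogate model outputs against every requirement and record the result.
@dataclass
class AssessmentResult:
    requirement: str
    measured: float
    threshold: float
    passed: bool

def assess(outputs: Dict[str, List[int]],
           requirements: List[Requirement]) -> List[AssessmentResult]:
    results = []
    for req in requirements:
        measured = req.check(outputs)
        results.append(AssessmentResult(req.name, measured,
                                        req.threshold, measured <= req.threshold))
    return results

# --- 3. Generating governance artifacts ------------------------------------
# Serialize the results into a report that both technical and business
# stakeholders can read and share.
def governance_report(results: List[AssessmentResult]) -> str:
    return json.dumps([asdict(r) for r in results], indent=2)

# Toy fairness rule: absolute difference in positive-outcome rates between
# two groups of binary model decisions.
def parity_gap(outputs: Dict[str, List[int]]) -> float:
    rate_a = sum(outputs["group_a"]) / len(outputs["group_a"])
    rate_b = sum(outputs["group_b"]) / len(outputs["group_b"])
    return abs(rate_a - rate_b)

requirements = [Requirement("demographic_parity_gap", 0.10, parity_gap)]
outputs = {"group_a": [1, 0, 1, 1], "group_b": [1, 0, 0, 0]}
print(governance_report(assess(outputs, requirements)))  # gap = 0.5 -> fails
```

In a real deployment the metric functions would come from a fairness or robustness library and the report would feed an audit workflow, but the separation of rules, assessment and artifact is the core of the pattern described above.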

Singh says that companies that have adopted Credo AI have reported success in bridging the gap between technical and business stakeholders, while also visualizing risks in a more tangible and actionable way.

Credo AI is also gaining traction with investors. In May, the company raised $12.8 million in a Series A round, bringing its total funding to date to $18.3 million, according to Crunchbase.

Building an ecosystem of responsible AI

One tool isn’t going to solve the world’s responsible AI dilemma, but focusing on developing an ecosystem of responsible AI might be the place to start, Singh said. This was also a key point throughout Credo AI’s first Global Responsible AI Summit, held last week.

Overwhelmingly, the event’s sessions underscored that an ecosystem of responsible AI has to include multiple stakeholders and angles because it is “more than just a product at play,” according to Singh. 

Artificial intelligence is truly different from previous technological revolutions, she explained.

“It is going to impact everything you and I have seen, as well as our beliefs and understanding,” she said. “We should not be calling out the word ‘responsible,’ but right now, we are in a moment in time where it needs to be called out. But, it needs to become a fabric of not only design, development and communication, but also of how we serve our consumers.” 

Developing an ecosystem of responsibility around AI from the ground up isn’t easy. Although tools can help, experts say it starts with leadership.

In an article for McKinsey, analysts Roger Burkhardt, Nicolas Hohn and Chris Wigley write that the “CEO’s role is vital to the consistent delivery of responsible AI systems,” and that the CEO needs at least a strong working knowledge of AI development to ensure he or she is asking the right questions to prevent potential ethical issues.

Singh concurred, pointing out that as the economy tips toward a possible recession, C-suite leadership and AI education will become increasingly vital for the enterprise as companies look to automate and cut costs where they can.

“There needs to be an incentive alignment that the C-suite needs to push down between the technical stakeholders and the governance and risk teams to make sure that as more artificial intelligence is getting deployed,” Singh said. “They need to ensure that incentives exist for technical teams to build responsibly and incentives exist for compliance and governance teams, to not only manage risk but to build trust — which, by the way, is going to be the underpinning of the next wave after recession for these companies to thrive in.”



Author: Ashleigh Hollowell
Source: VentureBeat

