AI & RoboticsNews

73% of organizations are embracing gen AI, but far fewer are assessing risks


A new survey from PwC of 1,001 U.S.-based executives in business and technology roles finds that 73% of respondents currently use or plan to use generative AI in their organizations.

However, only 58% of respondents have started assessing AI risks. For PwC, responsible AI relates to value, safety and trust and should be part of a company’s risk management processes.

Jenn Kosar, U.S. AI assurance leader at PwC, told VentureBeat that six months ago it would have been acceptable for companies to begin deploying AI projects without a responsible AI strategy, but not anymore. 

“We’re further along now in the cycle so the time to build on responsible AI is now,” Kosar said. “Previous projects were internal and limited to small teams, but we’re now seeing large-scale adoption of generative AI.”

She added that gen AI pilot projects actually inform much of a responsible AI strategy, because they let enterprises determine what works best with their teams and how they use AI systems. 

Responsible AI and risk assessment have come to the forefront of the news cycle in recent days after Elon Musk’s xAI deployed a new image generation service through its Grok-2 model on the social platform X (formerly Twitter). Early users report that the model appears to be largely unrestricted, allowing users to create all sorts of controversial and inflammatory content, including deepfakes of politicians and pop stars committing acts of violence or in overtly sexual situations.

Priorities to build on

Survey respondents were asked about 11 capabilities that PwC identified as “a subset of capabilities organizations appear to be most commonly prioritizing today.” These include:

  1. Upskilling
  2. Getting embedded AI risk specialists
  3. Periodic training
  4. Data privacy
  5. Data governance
  6. Cybersecurity
  7. Model testing
  8. Model management
  9. Third-party risk management
  10. Specialized software for AI risk management
  11. Monitoring and auditing

According to the PwC survey, more than 80% of respondents reported progress on these capabilities. However, only 11% claimed to have implemented all 11, and PwC noted, “We suspect many of these are overestimating progress.”

It added that some of these markers for responsible AI can be difficult to manage, which could explain why organizations struggle to implement them fully. PwC pointed to data governance, which must define AI models’ access to internal data and put guardrails around it. “Legacy” cybersecurity methods could also be insufficient to protect the model itself against attacks such as model poisoning. 

Accountability and responsible AI go together

To guide companies undergoing the AI transformation, PwC suggested ways to build a comprehensive responsible AI strategy. 

One is to create ownership, which Kosar said was one of the challenges for those surveyed. It is important, she said, that accountability and ownership for responsible AI use and deployment can be traced to a single executive. That means treating AI safety as something beyond technology, with either a chief AI officer or a responsible AI leader who works with stakeholders across the company to understand its business processes. 

“Maybe AI will be the catalyst to bring technology and operational risk together,” Kosar said. 

PwC also suggests thinking through the entire lifecycle of AI systems: going beyond the theoretical to implement safety and trust policies across the whole organization, preparing for any future regulations by doubling down on responsible AI practices, and developing a plan for transparency with stakeholders. 

Kosar said what surprised her most about the survey were comments from respondents who believed responsible AI is a commercial value-add for their companies, which she believes will push more enterprises to think more deeply about it. 

“Responsible AI as a concept is not just about risk, but it should also be value creative. Organizations said that they’re seeing responsible AI as a competitive advantage, that they can ground services on trust,” she said. 


Author: Emilia David
Source: VentureBeat
Reviewed By: Editorial Team
