
How new regulation is driving the AI governance market



AI governance, or the process of defining policies to guide AI development, is a fast-growing market opportunity. A report from StrategyR highlights this, predicting that AI governance software and services could be worth $402 million by 2026, up from $49.3 million in 2020.

“Amid the COVID-19 crisis, the global market for AI governance [has grown significantly],” StrategyR wrote in a press release. “The report presents fresh perspectives on opportunities and challenges in a significantly transformed post-COVID-19 marketplace.”

AI governance adoption

The pandemic forced companies to rethink the models they use to manage AI risk, but many continue to face challenges. According to a Deloitte analysis, as of March, 38% of organizations either lacked a governance structure for handling data and AI models or had an insufficient one. And a Pegasystems survey suggests that if current trends hold, a lack of accountability in the private sector will lead governments to take over responsibility for AI regulation within the next five years.

Last year, the University of California, Berkeley Center for Long-Term Cybersecurity published a report positing that AI governance has gone through three stages since 2016. The first stage was marked by the release of ethics principles by tech companies and governments; the second by emerging consensus around themes like privacy, human control, explainability, and fairness. The third stage, which began in 2019, is converting those principles into practice.

Responsible AI practices, including governance, can deliver significant business value. A study by Capgemini found that customers and employees reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate on their behalf.

Even so, not all organizations have gotten on board. In a recent KPMG report, 94% of IT decision-makers said that firms need to focus more on corporate responsibility and ethics when developing their AI solutions. Analysts like StrategyR are betting that emerging rules, such as the European Union’s proposed framework for regulating AI and the “AI registries” launched in Amsterdam, Helsinki, and other cities, will spur companies into action, accelerating demand for AI governance solutions that ease the adoption of best governance practices.

“In jurisdictions worldwide, new policy initiatives and regulations concerning the governance of data and AI signal the end of self-regulation and the rise of new oversight,” researchers at KPMG wrote in the aforementioned report. “As the regulatory environment continues to evolve at traditional pace, leading organizations are addressing AI ethics and governance proactively rather than waiting for requirements to be enforced upon them.”



Author: Kyle Wiggers
Source: VentureBeat

