It’s time to train professional AI risk managers

Last year I wrote about how AI regulations will lead to the emergence of professional AI risk managers. This has already happened in the financial sector, where regulations patterned after the Basel rules created the financial risk management profession. Last week, the EU published a 108-page proposal to regulate AI systems, and it will lead to the same outcome: professional AI risk managers.

The proposal doesn’t treat all AI systems equally; the regulatory burden varies with how risky a specific system is deemed to be (a code sketch of the tiers follows this list):

  • Unacceptable-risk systems, like social credit scoring, are banned outright
  • High-risk systems, like financial credit scoring and resume screening, must be extensively audited
  • Limited-risk systems, like chatbots and deep fakes, face transparency requirements
  • Minimal-risk systems, like spam filters, carry no additional requirements
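
To make the tiering concrete, here is a minimal Python sketch of the four tiers as a data structure. Everything in it (the RiskTier enum, the EXAMPLE_TIERS mapping, the obligations helper) is a hypothetical illustration built from the examples above; the proposal defines these tiers in legal prose, not code.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers in the EU proposal, from most to least regulated."""
    UNACCEPTABLE = "banned outright"
    HIGH = "extensive audits (conformity assessments)"
    LIMITED = "transparency requirements"
    MINIMAL = "no additional requirements"

# Hypothetical mapping of the proposal's example use cases to tiers.
EXAMPLE_TIERS = {
    "social credit scoring": RiskTier.UNACCEPTABLE,
    "financial credit scoring": RiskTier.HIGH,
    "resume screening": RiskTier.HIGH,
    "chatbots": RiskTier.LIMITED,
    "deep fakes": RiskTier.LIMITED,
    "spam filters": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Look up the regulatory consequence for a known example use case."""
    tier = EXAMPLE_TIERS[use_case]
    return f"{use_case}: {tier.name} risk -> {tier.value}"

for case in EXAMPLE_TIERS:
    print(obligations(case))
```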

Since systems with unacceptable risks would be banned outright, most of the regulation concerns high-risk AI systems. So what counts as a high-risk system? From the European Commission:

  • Critical infrastructure that could put the life and health of citizens at risk (e.g. transport)
  • Educational or vocational training that may determine access to education and the professional course of someone’s life (e.g. scoring of exams)
  • Safety components of products (e.g. AI application in robot-assisted surgery)
  • Employment, workers management and access to self-employment (e.g. CV-sorting software for recruitment procedures)
  • Essential private and public services (e.g. credit scoring denying citizens opportunity to obtain a loan)
  • Law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence)
  • Migration, asylum and border control management (e.g. verification of authenticity of travel documents)
  • Administration of justice and democratic processes (e.g. applying the law to a concrete set of facts)

The business impact is significant: noncompliance carries fines of up to 6% of revenue, and compliance is the price of access to the $130 billion EU software market. Anyone who wants market access has to comply, although small vendors (those with fewer than 50 employees and under $10 million in revenue or balance-sheet total) are exempted.

Europe’s privacy regulation, GDPR, set the tone for privacy laws globally. Will its AI proposal now set the tone for broad AI regulation worldwide? We already know the topic is top of mind for US regulators: the Federal Trade Commission recently published AI guidelines that end with the point “Hold yourself accountable — or be ready for the FTC to do it for you.” Everyone will take this seriously. So what do vendors of high-risk systems need to do?

A lot. But I’ll focus here on what the proposal calls conformity assessments, or, simply put, audits. Audits certify that an AI system complies with the regulation. Some systems can be audited internally by the vendor’s own employees, while others, like credit scoring or biometric identification, need to be audited by a third party. For startups, this will be a whole-company effort with plenty of founder involvement. Large corporations will start setting up teams. And consulting firms will start knocking on their doors.

The audit is comprehensive and requires a team with an “in-depth understanding of artificial intelligence technologies, data and data computing, fundamental rights, health and safety risks, and knowledge of existing standards and legal requirements.” The audit covers the following (from the European Commission):

  • Adequate risk assessment and mitigation systems
  • High quality datasets feeding the system to minimize risks and discrimination
  • Logging of activity to ensure traceability of results (a sketch follows this list)
  • Detailed documentation providing all information necessary on the system and its purpose so that authorities can assess its compliance
  • Clear and adequate information to the user
  • Appropriate human oversight measures to minimize risk
  • High level of robustness, security, and accuracy
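
Take the logging requirement as an example. The proposal requires that activity be logged so results are traceable, but it does not prescribe a schema. The sketch below is a minimal, hypothetical take: each prediction is appended to a log file with a timestamp, a model version, and a hash of the inputs, so any individual decision can later be traced. The function name, record fields, and file format are all assumptions.

```python
import hashlib
import json
import time
import uuid

def log_prediction(model_version: str, features: dict, prediction,
                   log_path: str = "audit_log.jsonl") -> str:
    """Append one traceable record per prediction: what went in, what came
    out, and when. The schema is illustrative; the proposal mandates
    traceability, not this particular format."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash the raw features so the record can be matched to archived
        # inputs without duplicating potentially sensitive data in the log.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_id"]

# Example: a (hypothetical) credit-scoring model logging one decision.
rid = log_prediction("credit-model-1.3.0",
                     {"income": 52000, "tenure_months": 18}, "deny")
print(f"logged decision {rid}")
```

Hashing the inputs rather than storing them raw is one design choice among many: it lets an auditor match a logged decision to archived inputs without duplicating potentially sensitive data in the log itself.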

Even current financial risk managers in banks are not equipped to address the breadth of the audit. Just understanding how to measure the quality of a dataset is a university-level course by itself; one narrow slice of it is sketched below. Reading between the lines of the proposal, there is concern about a shortage of the talent needed to enforce the regulation. The proposed regulation will exacerbate the AI talent shortage. Consulting firms will be the stopgap.
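
To give a sense of why measuring dataset quality fills a course, here is one narrow slice of it: a hypothetical sketch that computes the gap in positive-label rates across groups, a demographic-parity-style check. It is one of many possible metrics, and the function and field names are illustrative, not anything the proposal specifies.

```python
from collections import defaultdict

def demographic_parity_gap(records, group_key, label_key):
    """Gap in positive-label rates across groups. A large gap can signal
    (though not prove) a dataset that would teach a model discriminatory
    behavior; it is one metric among many an auditor might compute."""
    positives = defaultdict(int)
    totals = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += int(r[label_key])
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Toy example: loan approvals in a hypothetical training set.
data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
gap, rates = demographic_parity_gap(data, "group", "approved")
print(f"approval rates: {rates}, parity gap: {gap:.2f}")
```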

While it will take years before the regulation is enforced (2024 at the earliest), it is time to address the talent gap now. A coalition of professional associations, industry practitioners, academics, and technology providers should collectively design a program to train the forthcoming domain-flexible AI risk managers, whether through professional certifications, like GARP’s FRM certification, or university degrees, like NYU’s MSc in risk management. (Full disclosure: I used to be a certified FRM but am no longer active, and I’m not affiliated with NYU.)

Kenn So is an associate at Shasta Ventures investing in AI and software startups. He previously worked as a financial risk consultant at Ernst & Young, building and auditing bank models, and was one of the financial risk managers who emerged out of the Basel standards.

Author: Kenn So
Source: VentureBeat
