
The emergence of the professional AI risk manager

When the 1970s and 1980s were colored by banking crises, regulators from around the world banded together to set international standards for managing financial risk. Those standards, now known as the Basel standards, define a common framework and taxonomy for how risk should be measured and managed. This led to the rise of the professional financial risk manager, which was my first job. The largest professional risk associations, GARP and PRMIA, now have over 250,000 certified members combined, and there are many more professional risk managers out there who haven’t gone through those particular certifications.

We are now beset by data breaches and data privacy scandals, and regulators around the world have responded with data regulations. GDPR is the current role model, but I expect a global group of regulators to expand the rules to cover AI more broadly and set the standard on how to manage it. The UK ICO just released a detailed draft guide on auditing AI. The EU is developing one as well. Interestingly, their approach is very similar to that of the Basel standards: specific AI risks should be explicitly identified and managed. This will lead to the emergence of professional AI risk managers.

Below I’ll flesh out the implications of a formal AI risk management role. But before that, there are some concepts to clarify:

  • Most of the data regulations around the world have focused on data privacy.
  • Data privacy is a subset of data protection; GDPR covers more than just privacy.
  • Data protection is a subset of AI regulation, which also covers algorithm/model development.

Rise of a global AI regulatory standard

The Basel framework is a set of international banking regulation standards developed by the Bank for International Settlements (BIS) to promote the stability of financial markets. By itself, BIS has no regulatory powers, but its position as the ‘central bank of central banks’ makes the Basel regulations the world standard. The Basel Committee on Banking Supervision (BCBS), which wrote the standards, formed at a time of financial crises around the world. It started with a group of 10 central bank governors in 1974 and is now composed of 45 members from 28 jurisdictions.

Given the privacy violations and scandals in recent times, we can see GDPR as a Basel standard equivalent for the data world. And we can see the European Data Protection Supervisor (EDPS) as the BCBS for data privacy. (EDPS is the supervisor of GDPR.) I expect a more global group will emerge as more countries enact data protection laws.

There is no leading algorithm regulation yet; GDPR covers only part of it. One reason is that algorithms are difficult to regulate in themselves; another is that algorithm regulation is already embedded in sectoral rules. For example, Basel regulates how algorithms should be built and deployed in banks, and there are similar regulations in healthcare. The potential for conflicting or overlapping rules makes a broader algorithmic regulation difficult to write. Nevertheless, regulators in the EU, UK, and Singapore are taking the lead in providing detailed guidance on how to govern and audit AI systems.

Common framework and methodologies

Basel I was written more than three decades ago, in 1988; Basel II followed in 2004 and Basel III in 2010. These regulations set the standards for how risk models should be built, what processes must support those models, and how risk affects a bank’s business. They provided a common framework to discuss, measure, and evaluate the risks that banks are exposed to. The same thing is now happening with the detailed guidance being published in the EU, UK, and Singapore. All take a risk-based approach and help define the specific risks of AI and the governance structures needed to manage them.

[Figure: The Basel II Framework]
[Figure: The UK ICO Framework]

New profession and C-level jobs

A common framework allows professionals to quickly share concepts, adhere to guidelines, and standardize practices. Basel led to the emergence of financial risk managers and professional risk associations. A new C-level position was also created, the Chief Risk Officer (CRO). Bank CROs are independent from other executives and often report directly to the CEO or board of directors.

GDPR jumpstarted this development for data privacy. It requires certain organizations, such as public authorities and those whose core activities involve large-scale processing of personal data, to appoint a data protection officer (DPO). This caused renewed interest in the International Association of Privacy Professionals. Chief Privacy and Data Officers (CPOs and CDOs) are also on the rise. With broader AI regulations coming, there will be a wave of professional AI risk managers and a global professional community forming around the role. DPOs are the first iteration.

What will a professional AI risk manager need or do?

The job will combine some of the duties and skill sets of financial risk managers and data protection officers. A financial risk manager needs the technical skills to build, evaluate, and explain models; one of their major tasks is to audit a bank’s lending models both during development and in deployment. DPOs have to monitor internal compliance, conduct data protection impact assessments (DPIAs), and act as the contact point for top executives and regulators. AI risk managers will have to be technically adept while also having a firm grasp of the regulations.
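
To make the auditing half of that job concrete, here is a minimal sketch of one check risk managers already run on deployed lending models: the population stability index (PSI), which flags drift between the data a model was developed on and the data it scores in production. The function below and its thresholds reflect common convention rather than any mandated standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a model's score distribution at development time
    ('expected') with its distribution in production ('actual').
    By convention, PSI < 0.1 is read as stable and PSI > 0.25 as a
    shift large enough to warrant investigation."""
    # Bin edges come from the development sample so both
    # distributions are measured on the same scale.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores

    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # A small floor avoids log(0) for empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

# Example: scores from model development vs. a recent production run.
rng = np.random.default_rng(0)
dev_scores = rng.beta(2, 5, size=10_000)
prod_scores = rng.beta(2.5, 5, size=10_000)  # the portfolio has drifted
print(f"PSI: {population_stability_index(dev_scores, prod_scores):.3f}")
```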

What does this mean for innovation?

AI development will be much slower. Regulation is the primary reason banks have not been at the forefront of AI innovation. Lending models often go unchanged for years simply to avoid the additional auditing work, from both internal and external parties, that changes would trigger.

But AI development will be much safer as well. AI risk managers will require that a model’s purpose be explicitly defined and that only the data required for that purpose be copied for training. No more sensitive data sitting on a data scientist’s laptop.
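
As a hypothetical illustration of what ‘explicit purpose plus data minimization’ could look like in practice, here is a small Python sketch: a manifest that declares why a model exists and the only fields it may train on, enforced before any data is copied. The manifest structure and field names are my own invention, not taken from any regulation or product.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelManifest:
    """Declares a model's purpose and the only fields it may consume."""
    name: str
    purpose: str                    # reviewed and signed off by the AI risk manager
    allowed_fields: frozenset[str]  # data minimization: nothing else gets copied

def extract_training_data(rows: list[dict], manifest: ModelManifest) -> list[dict]:
    """Copy only the fields the manifest allows; drop everything else."""
    return [{k: v for k, v in row.items() if k in manifest.allowed_fields}
            for row in rows]

manifest = ModelManifest(
    name="credit_line_increase_v1",
    purpose="Rank existing customers for credit line increase offers.",
    allowed_fields=frozenset({"utilization", "months_on_book", "payment_history"}),
)

rows = [{"utilization": 0.42, "months_on_book": 18,
         "payment_history": "current", "home_address": "10 Main St"}]
print(extract_training_data(rows, manifest))  # home_address is never copied out
```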

What does this mean for startups?

The emergence of the professional AI risk manager will be a boon to startups building in data privacy and AI auditing.

Data privacy. Developing models on personal data will automatically require a DPIA. Imagine data scientists having to ask for approval before they start a project. (Hint: not good.) To work around this, data scientists will want tools to anonymize data at scale or to generate synthetic data so they can avoid DPIAs. So the opportunities for startups are twofold: there will be demand for software to comply with regulations, and there will be demand for software that provides workarounds to those regulations, such as sophisticated synthetic data solutions.
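
For a sense of what the synthetic data workaround involves, here is a deliberately naive sketch: fit a multivariate Gaussian to the numeric columns of a real dataset and sample stand-in rows from it. Commercial solutions go far beyond this (differential privacy guarantees, deep generative models), so treat it only as the basic idea.

```python
import numpy as np

def naive_synthetic(data: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Sample synthetic rows from a Gaussian fitted to the real data,
    preserving each column's mean and the columns' correlations.
    No formal privacy guarantee; real products add much more."""
    rng = np.random.default_rng(seed)
    mean = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_samples)

# Example: three numeric features from (pretend) customer records.
real = np.random.default_rng(1).normal(size=(1_000, 3))
fake = naive_synthetic(real, n_samples=1_000)
print(fake.mean(axis=0))  # close to the real columns' means
```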

AI auditing. Model accuracy is one AI-related risk for which we already have common assessment techniques, but for other AI-related risks there are none. There is no standard for auditing fairness or transparency, and making AI models robust to adversarial attacks is still an active area of research. So this is an open space for startups, especially those in the explainable AI space, to help define the standards and become the preferred vendors.
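
To show how unsettled this space is, here is one candidate fairness check among many: the demographic parity gap, the difference in positive-outcome rates between two groups. Whether this, equalized odds, calibration, or something else should be the audited metric is exactly the kind of question a standard would settle; the sketch below is illustrative only.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between the two
    groups encoded in `group` (0/1). One of several competing fairness
    metrics; no auditing standard mandates it."""
    return float(abs(y_pred[group == 0].mean() - y_pred[group == 1].mean()))

# Example: loan approvals for two demographic groups.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=5_000)
approved = (rng.random(5_000) < np.where(group == 0, 0.60, 0.48)).astype(int)
print(f"Parity gap: {demographic_parity_gap(approved, group):.3f}")  # ~0.12
```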

Kenn So is an associate at Shasta Ventures investing in AI/smart software startups. He was previously an associate at Ernst & Young, building and auditing bank models, and was one of the financial risk managers who emerged out of the Basel standards.


Author: Kenn So
Source: VentureBeat
