
An AI regulation strategy that could really work

When even the companies developing AI agree that regulation is needed, it is time to stop debating abstract principles and get down to the business of how to regulate a rapidly advancing technology landscape.

It is clear that our regulatory system needs an update. If we try to regulate 21st-century technology and beyond with 20th-century tools, we’ll get none of the benefits of regulation and all of the downsides.

So, if we need to reinvent the rules to keep pace with the technological change advanced by the likes of Google, Amazon, and Facebook, where do we start?

Google’s Sundar Pichai is right that technology companies cannot simply build AI and leave it to the will of the market.

But what we can do is try to use the best traits of markets — competition, transparency, rapid iteration — to reform our regulatory system. Specifically, this means pairing strong government oversight with private sector “regulatory markets.”

We are already seeing governments essentially outsource their role as regulator, leaving matters to self-regulation on an increasing scale. European governments, for example, after establishing the right to be forgotten on search engines (a right that pre-dated GDPR), left the task of enforcing it to the search engines themselves. The reason? Governments lacked the technological competence, resources, and political coherence to do the job themselves.

A regulatory market is a new solution to a familiar problem: traditional regulatory agencies, invented for the nation-state manufacturing age, lack the capacity to keep up with the global digital age.

It combines the incentives that markets create to invent more effective and less burdensome ways of providing a service with hard government oversight, ensuring that whatever the regulatory market produces satisfies the goals and targets set by democratic governments.

So instead of writing detailed rules, governments set the goals: What accident rates are acceptable in self-driving cars? How much leakage from a confidential data set is too much? What factors must be excluded from an algorithmic decision?

Then, instead of tech companies deciding for themselves how they will meet those goals, the job is taken on by independent companies that move into the regulatory space, incentivized to invent streamlined ways to achieve government-set goals.

This might involve doing big data analysis to identify the real risk factors for accidents in self-driving cars, using machine learning to detect money-laundering transactions more effectively than current methods, or building apps that detect when another app is violating its own privacy policies.
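To make the second of those ideas concrete, here is a minimal, purely illustrative sketch of the kind of tool a private regulator might build: an unsupervised anomaly detector that flags unusual transactions for human review. Every feature, number, and threshold below is invented for illustration; a production system would be far more involved.

    # Purely illustrative sketch: an unsupervised anomaly detector of the kind
    # a private regulator might build to flag suspicious transactions.
    # All data, features, and parameters are invented for illustration.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(seed=0)

    # Synthetic features per transaction: [amount, hour of day, transfers in past week]
    normal = rng.normal(loc=[50.0, 13.0, 2.0], scale=[30.0, 4.0, 1.5], size=(5000, 3))
    unusual = rng.normal(loc=[9000.0, 3.0, 25.0], scale=[2000.0, 1.0, 5.0], size=(25, 3))
    transactions = np.vstack([normal, unusual])

    # Fit an isolation forest; `contamination` is the assumed share of anomalies.
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(transactions)

    flags = model.predict(transactions)  # -1 = anomalous, 1 = normal
    print(f"Flagged {np.sum(flags == -1)} of {len(transactions)} transactions for review")

The point is not this particular algorithm but the incentive: a regulator that catches more laundering with less burden on legitimate transactions wins business from its competitors.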

Independent private regulators would compete to provide the regulatory services tech companies are required by government to purchase.

How does this not become a race to the bottom, with private regulators trying to outbid each other to be as lenient as possible, the way that continued self-regulation might?

The answer is for governments to shift their oversight to regulating the regulators. A private regulator would require a license to compete, and could get and keep that license only by continuing to demonstrate that it is achieving the required goals.

The workability of the approach rests on this hard government oversight: private regulators have to fear losing their license if they cheat, get captured by the tech companies they regulate, or simply do a bad job.
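Schematically, the two-tier structure looks something like the toy sketch below. It is a cartoon of the idea, not a design; every class, name, and threshold in it is hypothetical.

    # Toy sketch of the two-tier oversight structure: governments license
    # private regulators and revoke the license if outcomes miss the targets.
    # Every class, name, and threshold here is hypothetical.
    from dataclasses import dataclass

    @dataclass
    class OutcomeTarget:
        """A government-set goal, e.g. a maximum acceptable accident rate."""
        metric: str
        max_value: float

    @dataclass
    class PrivateRegulator:
        name: str
        licensed: bool = True

        def measured_outcomes(self) -> dict[str, float]:
            # In reality: industry-wide data gathered from the firms it oversees.
            return {"accidents_per_million_miles": 0.08}

    def government_audit(regulator: PrivateRegulator, targets: list[OutcomeTarget]) -> None:
        # The government "regulates the regulator": the license survives only
        # if every measured outcome meets its government-set target.
        outcomes = regulator.measured_outcomes()
        regulator.licensed = all(
            outcomes.get(t.metric, float("inf")) <= t.max_value for t in targets
        )

    targets = [OutcomeTarget("accidents_per_million_miles", max_value=0.1)]
    regulator = PrivateRegulator("HypotheticalSafetyCo")
    government_audit(regulator, targets)
    print(f"{regulator.name} keeps its license: {regulator.licensed}")

What matters is the direction of accountability: regulated firms answer to the private regulator, and the private regulator answers, through its license, to the government.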

The failure of government oversight is, of course, the challenge that got us here in the first place, as when Boeing was left to self-certify safety standards on the ill-fated 737 Max.

But the oversight challenge in a regulatory market will often be easier to solve than in the traditional setting: governments have fewer regulators to watch than tech companies, regulators have a strong incentive to keep their licenses, and industry-wide data is available to check their performance.

And because private regulators could operate on a global scale, seeking licenses from multiple governments, they would be less likely to bend to the interests of a handful of domestic companies when doing so would put their ability to operate around the world at risk.

This approach may not solve every regulatory challenge or be appropriate in every context. But it could transform AI regulation into a more manageable and transparent problem.

At the very least, creating regulatory markets could redirect some of the venture capital and smart engineers now hard at work building AI toward independent companies dedicated to inventing the very technologies we will need to keep AI in line.

This model is not the only novel idea we can explore, and it will not work everywhere. But we need better ways to regulate AI, we need them soon, and then we need to build them.

Professor Gillian Hadfield is director of the Schwartz Reisman Institute for Technology and Society, University of Toronto, and author of Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy.


Author: Gillian Hadfield
Source: VentureBeat

