
AI Weekly: An outline for government regulation of AI

Governments face a range of policy challenges around AI technologies, many of which are exacerbated by a lack of sufficiently detailed information. A whitepaper published this week by AI ethicist Jess Whittlestone and former OpenAI policy director Jack Clark outlines a potential solution: investing in governments’ capacity to monitor the capabilities of AI systems. As the paper points out, the AI industry routinely produces a wealth of data and measurements, and if that data were synthesized, the resulting insights could improve governments’ understanding of the technology while helping them create tools to intervene.

“Governments should play a central role in establishing measurement and monitoring initiatives themselves while subcontracting out other aspects to third parties, such as through grantmaking, or partnering with research institutions,” Whittlestone and Clark wrote. “It’s likely that successful versions of this scheme will see a hybrid approach, with core decisions and research directions being set by government actors, then the work being done by a mixture of government and third parties.”

Whittlestone and Clark recommend that governments invest in initiatives to investigate aspects of AI research, deployment, and impacts, including analyzing already-deployed systems for any potential harms. Agencies could develop better ways to measure the impacts of systems where such measures don’t already exist. And they could track activity and progress in AI research by using a combination of analyses, benchmarks, and open source data.

“Setting up this infrastructure will likely need to be an iterative process, beginning with small pilot projects,” Whittlestone and Clark wrote. “[It would need to] assess the technical maturity of AI capabilities relevant to specific domains of policy interest.”

Whittlestone and Clark envision governments evaluating the AI landscape and using their findings to fund the creation of datasets that fill representation gaps. Governments could work to understand a country’s competitiveness in key areas of AI research and host competitions to make progress easier to measure. Beyond this, agencies could fund projects to improve assessment methods in specific “commercially important” areas. Moreover, governments could monitor the deployment of AI systems for particular tasks in order to better track, forecast, and ultimately prepare for the societal impacts of these systems.

“Monitoring concrete cases of harm caused by AI systems on a national level [would] keep policymakers up to date on the current impacts of AI, as well as potential future impacts caused by research advances,” Whittlestone and Clark say. “Monitoring the adoption of or spending on AI technology across sectors [would] identify the most important sectors to track and govern, as well as generalizable insights about how to leverage AI technology in other sectors. [And] monitoring the share of key inputs to AI progress that different actors control (i.e., talent, computational resources and the means to produce them, and the relevant data) [would help to] better understand which actors policymakers will need to regulate and where intervention points are.”

Slow progress

Some governments have already taken steps toward stronger governance and monitoring of AI systems. For example, the European Union’s proposed standards for AI would subject “high-risk” algorithms in recruitment, critical infrastructure, credit scoring, migration, and law enforcement to strict safeguards. Amsterdam and Helsinki have launched “algorithm registries” that list, for each deployed algorithm, the datasets used to train it, a description of how it is used, how humans act on its predictions, and other supplemental information. And China is drafting rules that would require companies to abide by ethics and fairness principles when deploying recommendation algorithms in apps and social media.
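To make the registry idea concrete, here is a minimal sketch of what one entry might look like as a data structure. This is purely illustrative: the field names and the sample entry are assumptions modeled on the information the Amsterdam and Helsinki registries are described as listing, not the cities’ actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlgorithmRegistryEntry:
    """Illustrative schema for one entry in a municipal algorithm registry.

    Field names are hypothetical; they mirror the information the
    Amsterdam and Helsinki registries are described as publishing.
    """
    name: str                     # the deployed system
    purpose: str                  # what the algorithm is used for
    training_datasets: List[str]  # datasets used to train the model
    human_oversight: str          # how humans use or review the predictions
    supplemental_info: dict = field(default_factory=dict)  # contacts, known risks, etc.

# A hypothetical entry, for illustration only:
entry = AlgorithmRegistryEntry(
    name="Automated parking control",
    purpose="Flags potentially unpermitted parked vehicles for review",
    training_datasets=["scan-car imagery (hypothetical)"],
    human_oversight="An inspector confirms every flag before a fine is issued",
)
```

Publishing entries like this in a consistent, machine-readable format would make the kind of cross-sector monitoring Whittlestone and Clark describe far easier to automate.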

But other efforts have fallen short, particularly in the U.S. While cities and states have banned facial recognition and restricted algorithms used in hiring and recruitment, federal legislation remains stalled. That includes the SELF DRIVE Act and the Algorithmic Accountability Act, the latter of which would require companies to study and fix flawed AI systems that produce inaccurate, unfair, biased, or discriminatory decisions affecting U.S. citizens.

If governments opt not to embrace oversight of AI, Whittlestone and Clark predict that private sector interests will exploit the lack of measurement infrastructure to deploy AI technology that has “negative externalities,” and that governments will lack the tools to address them. Information asymmetries between the government and the private sector could widen as a result, spurring harmful deployments that catch policymakers by surprise.

“Other interests will step in to fill the evolving information gap; most likely, the private sector will fund entities to create measurement and monitoring schemes which align with narrow commercial interests rather than broad, civic interests,” Whittlestone and Clark said. “[This would] lead to hurried, imprecise, and uninformed lawmaking.”

For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat



Author: Kyle Wiggers
Source: VentureBeat

