
The UK’s new AI transparency standard is a step closer to accountable AI

The UK government announced a new algorithmic transparency standard last week. The move follows a year in which the UK’s Test and Trace program could ping your phone at any point and cancel your weekend plans — and in which students took to the streets chanting “F*** the algorithm!” after exams were canceled and their university acceptances were instead decided by an algorithm.

The new standard is meant to push public agencies to create transparency about how, when, and where algorithmic decision-making is happening.

The use of algorithms to make decisions in public services is coming under increasing scrutiny around the world. Most recently in the UK, the Department for Work and Pensions faced a legal challenge after it was revealed that disabled people were being disproportionately targeted by an algorithm for a benefits fraud review. In the US, following years of researchers raising alarm bells, journalists have brought together the largest collection of evidence that crime-prediction software promising to be free of racial bias has in fact perpetuated it.

These cases often come as a shock to the public and a surprise to the communities they affect. Polling from the Centre for Data Ethics and Innovation found that 38% of the UK public was not aware that algorithmic systems were used to support decisions involving personal data, for example to assess the risk that someone might need social care support. Currently, when a decision is being made about you, it’s very hard to know whether it’s being made by a human or an algorithm.

Right now, often the only way to find out which systems are being used, and where, is through investigations by journalists, civil-society organizations, campaigners, or academic researchers. And even then, these investigators usually get answers only after many freedom of information (FOI) requests. It’s a slow, cumbersome method of gathering evidence — with lots of potential for obfuscation and confusion. Researchers report getting back comprehensive documentation of a local authority’s use of Microsoft Word — not quite the high-risk technology they were looking for.

Even among government departments, public bodies, and local authorities there’s a knowledge gap: they often don’t know which systems are being used and why, which prevents the sharing of information about benefits and harms that would support a healthy ecosystem of risk-aware progress.

This fundamental gap in information and knowledge is what led the Ada Lovelace Institute and others to suggest transparency registers for public-sector algorithmic decision-making systems — systems that use automation to make, or significantly support humans in making, decisions. A transparency register would bring together information on government systems in a single location, in a clear and accessible way, allowing for scrutiny by the public as well as by the journalists, campaigners, and academics who act on their behalf.

Transparency registers are being trialled in cities including Amsterdam, Helsinki, Nantes, and New York, providing detail on systems ranging from maternal healthcare chatbots to algorithms for identifying potentially illegal housing provision. These registers are one of a range of approaches and policies being explored by governments around the world to create greater accountability for the use of algorithms in public services; Canada, for example, recently introduced an algorithmic impact assessment for public-sector bodies.

The UK government’s new algorithmic transparency standard is the first step towards a transparency register in the UK. It provides a structured way to describe a system, and that structured information could be submitted as regular reporting to a register. The Cabinet Office described the standard as “… organized into two tiers. The first includes a short description of the algorithmic tool, including how and why it is being used, while the second includes more detailed information about how the tool works, the dataset/s that have been used to train the model, and the level of human oversight.”
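
To make that two-tier structure concrete, here is a minimal, hypothetical sketch in Python of what a single reporting record under the standard might look like. The class and field names are illustrative assumptions based on the Cabinet Office’s description above, not the official schema, and the example system is invented.

    # Hypothetical sketch of a record following the two-tier structure
    # described above. Names are illustrative, not the official schema.
    from dataclasses import dataclass

    @dataclass
    class Tier1:
        """Short, public-facing description of the algorithmic tool."""
        tool_name: str
        description: str  # what the tool does
        how_used: str     # how it fits into the decision process
        why_used: str     # rationale for using it

    @dataclass
    class Tier2:
        """More detailed technical and oversight information."""
        how_it_works: str             # model type, logic, key inputs
        training_datasets: list[str]  # dataset(s) used to train the model
        human_oversight: str          # level and nature of human review

    @dataclass
    class TransparencyRecord:
        organisation: str
        tier1: Tier1
        tier2: Tier2

    # An invented example: a council tool that flags housing listings
    record = TransparencyRecord(
        organisation="Example Council",
        tier1=Tier1(
            tool_name="Housing Risk Triage",
            description="Flags housing listings for possible inspection",
            how_used="Prioritizes cases for human caseworkers",
            why_used="Inspection capacity is limited",
        ),
        tier2=Tier2(
            how_it_works="Classifier scoring listings on known risk factors",
            training_datasets=["Historic inspection outcomes"],
            human_oversight="All flagged cases reviewed by an officer",
        ),
    )

Structured records in this shape are exactly the kind of information that could be submitted as regular reporting to a register and published for public scrutiny.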

The announcement follows the government’s recent consultation on the future of the UK’s data-protection regime, which included a proposal to introduce transparency reporting on public-sector use of algorithms in decision-making. Together, these signal a clear direction of travel towards greater transparency in the use of public-sector algorithms.

Some may ask what this means for the companies developing the software used in the public sector. At a time when the tech sector is awash with talk of trustworthy technology and explainable AI, a transparency standard offers those building government tech an opportunity to explain what they’re doing, why, and how, in a way that’s accessible to the public and demonstrates responsibility.

For the algorithmic transparency standard to deliver on its objectives, it will need to be trialled with government departments and public bodies, and undergo user testing with members of the public, community groups, journalists and academics. It will be essential that government departments and public-sector bodies publish completed standards to support modelling and development of good practice, and allow others to learn from them.

Algorithmic transparency standards for the public sector can also be seen as part of a wider move towards transparency standards in technology regulation. The European Union’s draft Digital Services Act, the UK’s draft Online Safety Bill, and similar proposed legislation in Canada and elsewhere require standardized transparency reporting from online tech platforms. In the US, Congress is discussing an updated version of the Algorithmic Accountability Act, first introduced in 2019, which would place regular reporting requirements on businesses using automated decision-making systems in sectors such as healthcare, education, housing, and recruitment. It may prove prudent for tech companies to start looking at how to describe their systems’ purposes, functions, data, and methods sooner rather than later.

The public want greater transparency: they want to know that information is available about the systems deciding whether they receive benefits or who is prioritized for attention by children’s social care services. The UK’s new algorithmic transparency standard is an important step towards regaining the public’s trust, and potentially catching the next scandal before it happens.

Jenny Brennan is Senior Researcher at the Ada Lovelace Institute, leading Ada’s work on ethics and accountability in practice. Her research includes methods for inspecting and assessing algorithmic systems and their impact on people and society.

Author: Jenny Brennan, Ada Lovelace Institute
Source: VentureBeat
