
What is AI governance? 



Computer algorithms must follow rules and operate within the bounds of societal law, just like the humans who create them. In many cases, the consequences of an algorithm’s decisions are so small that the idea of governing them isn’t worth considering. Lately, though, some artificial intelligence (AI) algorithms have been taking on roles so significant that scientists have begun to consider what it means to govern or control their behavior.

For example, AI algorithms are now informing sentencing decisions in criminal trials, deciding eligibility for housing and setting the price of insurance. All of these areas are heavily constrained by laws that the humans working on the technology must follow. There’s no reason why the AI algorithms shouldn’t follow the same regulations, or perhaps different ones all their own.

What’s different about governing an AI?

Some scientists like to strip away the word “artificial” and simply speak of governing “intelligence” or a “decision-making process.” That framing is simpler than trying to pin down where the algorithm ends and the human’s role begins.

Speaking only of an intelligent entity helps normalize AI governance with the time-tested human political process, but it hides the ways in which algorithms are not like humans. Some notable differences include:


  • Hyper-rational – While some AI algorithms are hard for humans to understand, at their core they remain mathematical operations implemented on machines that speak only in logic.
  • Governable – An AI can be trained to follow any governance process that can be expressed as logical rules. If the rules can be written down, the AI will follow them. The problems occur when the rules aren’t perfect or when we ask for outcomes the rules don’t produce.
  • Repeatable – Unless randomness is deliberately added, for instance in the search for fairness, an AI algorithm will make the same decision every time it is presented with the same data.
  • Inflexible – While repeatability is often a good trait, it is closely related to being inflexible and incapable of adapting.
  • Focused – The data presented to the AI controls the outcome. If you don’t want the algorithm to see certain data, it can simply be excluded (a minimal sketch of this follows the list). Of course, bias can hide in other parts of the data, but in principle the algorithm can be made to focus.
  • Literal-minded – The algorithm will do what it’s told, up to a point. If the training data carries biases, the algorithm will reproduce them just as literally.
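
To make the “focused” point concrete, here is a minimal sketch of excluding sensitive columns before training, assuming a pandas DataFrame and scikit-learn. The column names and the `train_without_sensitive_columns` helper are hypothetical illustrations, not any particular product’s API.

```python
# Minimal sketch: the "focused" property amounts to dropping columns the model
# should never see before training. All column names here are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression


def train_without_sensitive_columns(df: pd.DataFrame, target: str, excluded: list):
    """Train a classifier on df while withholding the columns listed in `excluded`."""
    features = df.drop(columns=excluded + [target])
    model = LogisticRegression(max_iter=1000)
    model.fit(features, df[target])
    return model, list(features.columns)  # record exactly which fields the model saw


# Hypothetical usage:
# model, columns_seen = train_without_sensitive_columns(
#     applications, target="approved", excluded=["race", "gender", "zip_code"]
# )
```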


Is AI governance for lawyers? 

The idea of governing algorithms involves laws, but not all of the work is strictly legal. Indeed, many developers use the word “governance” to refer to any means of controlling how the algorithms interact with people and with other systems. Database governance, for example, often includes decisions about who has access to the data and what control they can exert over it.

Artificial intelligence governance is similar. Some frequently asked questions are listed below, followed by a sketch of how a team might record the answers:

  • Who can train the model?
  • Who decides which data is included in the training set? 
  • Are there any rules on which data can be included? 
  • Who can examine the model after training?
  • When can the model be adjusted and retrained?
  • How can the model be tested for bias?
  • Are there any biases that must be defended against?
  • How is the model performing?
  • Are there new biases appearing? 
  • Does the model need retraining? 
  • How does performance compare to any ground truth? 
  • Do the data sources in the model comply with privacy regulations? 
  • Are the data sources used for training a good representation of the general domain in which the algorithm will operate?
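
Many of these questions boil down to keeping an auditable record alongside each model. A minimal sketch of what such a record might capture follows; the field names and values are hypothetical rather than any particular governance standard.

```python
# Minimal sketch of a governance record kept alongside a trained model.
# Field names and example values are illustrative, not a formal standard.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class ModelGovernanceRecord:
    model_name: str
    version: str
    trained_by: str                # who trained the model
    data_sources: list             # which data was included in the training set
    approved_reviewers: list       # who may examine the model after training
    last_retrained: datetime       # when the model was adjusted and retrained
    metrics: dict = field(default_factory=dict)  # performance vs. ground truth, bias audits


record = ModelGovernanceRecord(
    model_name="loan_approval",
    version="1.3.0",
    trained_by="ml-platform-team",
    data_sources=["applications_2021", "credit_bureau_extract"],
    approved_reviewers=["model-risk-office"],
    last_retrained=datetime(2022, 5, 1),
    metrics={"accuracy": 0.91, "selection_rate_gap": 0.04},
)
```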

What are the main challenges for AI governance? 

The work of AI governance is still being defined, but the initial efforts were motivated by some of the trickiest problems that arise when humans interact with AIs, including:

  • Explainability – How can the developers and trainers of the AI understand how the model is working? How can this understanding be shared with users who might be asked to accept the AI’s decisions? (A small sketch of one common technique follows this list.)
  • Fairness – Does the model satisfy the larger demands for fairness from society and from the people who must live with the AI’s decisions?
  • Safety – Is the model making decisions that protect humans and property? Is the algorithm designed with safeguards to prevent dangerous behavior?
  • Human-AI collaboration – How can humans use the results from the AI to guide their decisions? How can humans feed their insights back into the AI to improve the model?
  • Liability – Who must pay for mistakes? Is the structure of the business well-understood enough to correctly assign liability?
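
As one small illustration of the explainability challenge, a team might start with permutation importance, which measures how much a model’s score drops when each input is shuffled. The sketch below uses scikit-learn on a synthetic dataset; a real governance review would run it against the production model and data.

```python
# Minimal sketch of one explainability technique: permutation importance.
# The dataset and model here are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# a large drop suggests the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```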


What are the layers of AI governance? 

It can be helpful to break the governance of AI algorithms into layers. At the lowest level, closest to the process, are the rules about which humans have control over training, retraining and deployment. These issues of access and accountability are largely practical, and the rules exist to prevent unknown parties from changing the algorithm or its training set, perhaps maliciously.
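
In practice, this lowest layer often reduces to an access-control table stating which roles may train, retrain, deploy or inspect a model. A minimal sketch, with entirely hypothetical roles and actions:

```python
# Minimal sketch of the lowest governance layer: controlling which people
# may train, retrain, deploy or inspect a model. Roles are hypothetical.
PERMISSIONS = {
    "data_scientist": {"train", "retrain"},
    "ml_engineer": {"deploy"},
    "auditor": {"inspect"},
}


def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role is permitted to perform the action."""
    return action in PERMISSIONS.get(role, set())


assert is_allowed("data_scientist", "retrain")
assert not is_allowed("auditor", "deploy")  # auditors may look, but not change
```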

At the next level, there are questions about the enterprise that is running the AI algorithm. The corporate hierarchy that controls all actions of the corporation is naturally part of the AI governance because the curators of the AI fall into the normal reporting structure. Some companies are setting up special committees to consider ethical, legal and political aspects of governing the AI. 

Each entity also exists as part of a larger society, and many of society’s rule-making bodies are turning their attention to AI algorithms. Some are industry-wide coalitions or committees; some are local or national governments; others are nongovernmental organizations. All of these groups are talking about passing laws or creating rules for how AI can be reined in.

What are governments doing about AI governance? 

While the general challenge of AI governance extends well beyond the reach of traditional governments, questions about AI behavior are starting to demand governments’ attention. Most of these problems surface when some political faction is unhappy with how the AIs behave.

Globally, governments are starting to launch programs and pass laws explicitly designed to constrain and regulate artificial intelligence algorithms. Some notable recent examples from the United States include:

  • The White House established the National Artificial Intelligence (AI) Research Resource Task Force with the specific charge to “democratize access to research tools that will promote AI innovation and fuel economic prosperity.” 
  • The Commerce Department created the National Artificial Intelligence Advisory Committee to address a broad range of issues, including questions of accountability and legal rights.
  • The National AI Initiative runs AI.gov, a website that acts as a clearing house for government initiatives. In the announcement, the initiative is said to be “dedicated to connecting the American people with information on federal government activities advancing the design, development and responsible use of trustworthy artificial intelligence (AI).” 


How are major industry leaders addressing AI governance? 

Aside from governments, industry leaders are paying attention too. Google has been one of the leaders in developing what it calls “Responsible AI,” and governance is a major part of its program. The company’s tools, such as Explainable AI, Model Cards and the open-source TensorFlow toolkit, open up the insides of a model to promote understanding and make governance possible. Its Explainable AI offering provides data for tracking the performance of any model or system so that humans can make decisions and, perhaps, rein it in.

Additionally, Microsoft’s responsible AI effort relies on several company-wide teams that examine how AI solutions are developed and used and suggest different models for governance. Tools like Fairlearn and InterpretML can track how models are performing while checking that the technology is delivering fair answers. Microsoft also builds specific tools for governments, which often have more complex rules for governance.
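
As a rough illustration, Fairlearn’s MetricFrame groups a metric by a sensitive attribute so that gaps between groups become visible. The sketch below uses synthetic labels and predictions; a real assessment would use the production model’s outputs, and the exact API may differ between Fairlearn versions.

```python
# Minimal sketch of a Fairlearn-style group-fairness check on synthetic data.
# Everything below is illustrative; a real audit would use production predictions
# and a genuine sensitive attribute.
import numpy as np
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_pred = rng.integers(0, 2, size=200)
group = rng.choice(["A", "B"], size=200)  # hypothetical sensitive attribute

mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)

print(mf.overall)       # accuracy over everyone
print(mf.by_group)      # accuracy per group
print(mf.difference())  # gap between best- and worst-served group
```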

Many of Amazon’s tools are also directly focused on managing the teams that manage the AI. AWS Control Tower and AWS Organizations, for instance, manage the teams that work with all parts of the AWS environment, including the AI tools.

IBM, too, is building tools to help organizations automate many of the chores of AI governance. Users can track the creation of models, follow their deployment and assess their success. The process begins with careful curation and governance of data storage and follows through training of the model. Watson Studio, one of IBM’s tools for creating models, has tightly integrated features for governing the models it produces. Tools like AI Fairness 360, AI Explainability 360 and AI Adversarial Robustness 360 are particularly useful.

Further, Oracle’s tools for AI governance are often extensions of its general tools for governing databases. Identity Governance is a general solution for organizing teams and ensuring they can access only the right types of data. Cloud Governance likewise constrains who controls the software running in Oracle’s cloud, which includes many AI models. Many of the AI tools already offer features for evaluating models and their performance. The OML4Py Explainability module, for instance, can explore the weights and structure of any model it builds to support governance.

How are startups delivering AI governance? 

Many AI startups are following much the same approach as the market leaders. Their scale may be smaller and their focus narrower, but they attempt to answer many of the same questions about AI explainability and control.

Akira AI is just one example of a startup that has launched public discussions of the best ways for users to manage models and balance control.

One of the areas where governance is most crucial and complex is in the pursuit of safe self-driving cars. The potential market is huge, but the dangers of collision and death are daunting. All the companies are moving slowly and relying upon extensive testing in controlled conditions. 

The companies emphasize that the goal is to produce a tool that can deliver better results than a human. Waymo, for instance, cites the statistic that 94% of the 36,096 road deaths in the United States in 2019 involved human error. A good governance structure could match the best parts of human intelligence with the steadfast and tireless awareness of AI. The company also shares research data to encourage public discussion and scrutiny in order to build a shared awareness of the technology. 

Appropriately, AI Governance is the name of a startup that focuses directly on the larger job of training teams and setting policies. It offers courses and consulting for companies, governments and other organizations that must balance their interest in the technology with their responsibilities to stakeholders.


Why AI governance matters

AI governance matters most where the decisions are most contentious. While algorithms can provide at least the semblance of neutrality, they cannot simply eliminate human conflict. If people are unhappy with the result, a good governance mechanism can only reduce some of the acrimony.

Indeed, the success of governance is limited by the magnitude of the problems the AI is asked to solve. Larger problems with more far-reaching effects generate deeper conflict. While people may direct their acrimony at the algorithm, the source of the conflict is the larger process. Asking AIs to make decisions that affect people’s health, wealth or careers is asking for frustration.

There are also limits to the best practices for governance. Often, the rule structure just assigns control of particular elements to certain people. If the people end up being corrupt, foolish or wrong, their decisions will simply flow through the governance mechanism and make the AI behave in a way that’s corrupt, foolish or wrong. 

Another limitation appears when people ask an AI algorithm to explain its decision. The answers can be too complex to be satisfying. Governance mechanisms can only control and guide the AI; they cannot make the models easy to understand or change their internal workings.



Author: Peter Wayner
Source: Venturebeat
