
4 principles for responsible AI



Alejandro Saucedo is the engineering director at Seldon, and a chief scientist at the Institute for Ethical AI and Machine Learning, as well as the chair of the Linux Foundation’s GPU Acceleration Committee.

Artificial Intelligence (AI) is set to become ubiquitous over the coming decade – with the potential to upend our society in the process. Whether it be improved productivity, reduced costs or even the creation of new industries, the economic benefits of the technology are set to be colossal. In total, McKinsey estimates that AI will contribute more than $13 trillion to the global economy by 2030.

Like any technology, AI poses personal, societal, and economic risks. It can be exploited by malicious players in the market in a variety of ways that can substantially affect both individuals and organizations, infringe on our privacy, result in catastrophic errors, or perpetuate unethical biases along the lines of protected features such as age, sex, or race. Creating responsible AI principles and practices is critical.

So, what rules could the industry adopt to prevent this and ensure AI is used responsibly? The team at the Institute for Ethical AI and ML has assembled eight principles to guide teams toward responsible AI. I’d like to run through four of them: human augmentation, bias evaluation, explainability, and reproducibility.

Principles for responsible AI

1. Human augmentation

When a team looks at the responsible use of AI to automate existing manual workflows, it is important to start by evaluating the existing requirements of the original non-automated process. This includes identifying the risks of potentially undesirable outcomes that may arise at a societal, legal, or moral level. In turn, this allows for a deeper understanding of the processes and touchpoints where human intervention may be required, as the level of human involvement in processes should be proportional to the risk involved.

For example, an AI that serves movie recommendations carries far fewer risks of high-impact outcomes for individuals than an AI that automates loan approval processes. The former requires far less process and human intervention than the latter. Once a team has identified the risks involved in an AI workflow, it can assess the relevant touchpoints at which a human needs to be pulled in for review. We call such a paradigm a “human-in-the-loop” review process, known in short as ‘HITL.’

HITL ensures that when a process is automated via AI, clearly defined touchpoints exist at which humans check or validate the respective predictions from the AI and, where relevant, provide a correction or perform an action manually. This can involve teams of both technologists and subject-matter experts (e.g., in the loan scenario above, an underwriter) reviewing the decisions of AI models to ensure they’re correct, whilst also lining up with relevant use cases and industry-specific policies.
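
As a rough illustration, here is a minimal sketch of what such a HITL gate could look like in code, assuming a hypothetical loan-approval classifier that exposes prediction probabilities; the confidence threshold and routing logic are assumptions for illustration, not a prescribed implementation.

```python
# Minimal human-in-the-loop (HITL) sketch for a hypothetical loan-approval model.
# The threshold and routing rules below are illustrative assumptions.

from dataclasses import dataclass

REVIEW_THRESHOLD = 0.80  # assumed: below this confidence, an underwriter reviews the case


@dataclass
class Decision:
    approved: bool
    confidence: float
    needs_human_review: bool


def decide(model, application_features) -> Decision:
    """Automate the clear-cut cases; route low-confidence ones to a human reviewer."""
    proba = model.predict_proba([application_features])[0]  # e.g. a scikit-learn classifier
    approved = bool(proba[1] >= 0.5)
    confidence = float(max(proba))
    return Decision(
        approved=approved,
        confidence=confidence,
        needs_human_review=confidence < REVIEW_THRESHOLD,
    )
```

In practice, the threshold, the classes of decisions that always require human sign-off (for example, rejections), and the audit trail of reviewer overrides would all be agreed with the subject-matter experts who own the process.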

2. Bias evaluation

When addressing ‘bias’ in AI, we should remember how AI works in the first place: by learning the optimal way to discriminate towards the ‘correct’ answer. In this sense, completely removing bias from AI is impossible.

The challenge facing us in the field, then, is not to make AI ‘unbiased’. Instead, it is to ensure that undesired biases, and hence undesired outcomes, are mitigated through relevant processes, human intervention, best practices, responsible AI principles, and the right tools at each stage of the machine learning lifecycle.

To do this, we should always start with the data an AI model learns from. If a model only receives data whose distributions reflect existing undesired biases, the model itself will learn those undesired biases.

However, this risk is not limited to the training data. Teams must also develop processes and procedures to identify potentially undesirable biases across the AI’s training data, the training and evaluation of the model, and the operationalization lifecycle of the model. One example of a framework that can be followed is the eXplainable AI Framework from the Institute for Ethical AI & Machine Learning.
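
As a simple illustration of what one such check could look like, the sketch below compares a model’s positive-prediction rate and accuracy across groups of a protected feature; the column names and toy data are assumptions, and dedicated fairness metrics and tooling would be used in practice.

```python
# Illustrative bias evaluation: break model outcomes down by a protected feature.
# Column names ("sex", "label", "prediction") and the toy data are assumptions.

import pandas as pd


def per_group_report(df: pd.DataFrame, protected: str) -> pd.DataFrame:
    """Positive-prediction rate, accuracy, and group size per value of a protected feature."""
    df = df.assign(correct=(df["prediction"] == df["label"]).astype(int))
    grouped = df.groupby(protected)
    return pd.DataFrame({
        "positive_rate": grouped["prediction"].mean(),  # e.g. loan approval rate per group
        "accuracy": grouped["correct"].mean(),
        "n": grouped.size(),
    })


# Toy example: large gaps between groups are a signal to investigate further.
data = pd.DataFrame({
    "sex": ["F", "F", "F", "M", "M", "M"],
    "label": [1, 0, 1, 1, 0, 1],
    "prediction": [0, 0, 1, 1, 1, 1],
})
print(per_group_report(data, "sex"))
```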

3. Explainability

To ensure that an AI model is fit for the purpose of its use case, we also need to involve relevant domain experts. They can help teams make sure a model is evaluated with relevant performance metrics that go beyond simple statistical measures such as accuracy.

For this to work, though, it is also important to ensure that the predictions of the model can be interpreted by the relevant domain experts. However, advanced AI models often use state-of-the-art deep learning techniques that may not make it simple to explain why a specific prediction was made.

To address this and help domain experts make sense of an AI model’s decisions, organizations can leverage a broad range of machine learning explainability tools and techniques to interpret the predictions of AI models; a comprehensive and curated list of these tools is useful to reference.
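
To make this concrete, here is a minimal sketch of one widely used, model-agnostic technique, permutation feature importance, using scikit-learn; the dataset and model choice are placeholders, and many other approaches (for example, SHAP-style attributions) exist.

```python
# Illustrative explainability step: permutation feature importance (model-agnostic).
# Dataset, model choice, and feature ranking below are assumptions for the sketch.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature hurt performance on held-out data?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda pair: pair[1], reverse=True)[:5]:
    print(f"{name}: {importance:.3f}")
```

A ranking like this gives domain experts a starting point for asking whether the model is relying on features that make sense for the use case.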

The following phase is the operationalization of the responsible AI model, in which the model’s use is monitored by the relevant stakeholders. The lifecycle of an AI model only begins when it’s put into production, and AI models can suffer from divergence in performance as the environment changes. Whether it be concept drift or changes in the environment in which the AI operates, a successful AI requires constant monitoring once it is placed in its production environment. If you’d like to learn more, an in-depth case study is covered in this technical conference presentation.
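
As a simple illustration of such monitoring, the sketch below raises a drift alert when the distribution of live prediction scores diverges from a reference window, using a two-sample Kolmogorov-Smirnov test; the significance threshold, window sizes, and synthetic data are assumptions, and production deployments typically rely on dedicated monitoring tooling.

```python
# Illustrative drift check: compare live prediction scores to a reference window.
# The alert threshold and synthetic data below are assumptions for the sketch.

import numpy as np
from scipy.stats import ks_2samp

ALERT_P_VALUE = 0.01  # assumed significance threshold for raising a drift alert


def drift_alert(reference_scores: np.ndarray, live_scores: np.ndarray) -> bool:
    """Return True if live scores look significantly different from the reference window."""
    statistic, p_value = ks_2samp(reference_scores, live_scores)
    return p_value < ALERT_P_VALUE


# Example: scores drift upwards after deployment.
rng = np.random.default_rng(0)
reference = rng.beta(2, 5, size=5_000)   # scores observed at validation time
live = rng.beta(3, 4, size=5_000)        # scores observed in production
print("drift detected:", drift_alert(reference, live))
```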

4. Reproducibility

Reproducibility in AI refers to the ability of teams to repeatedly run an algorithm on a data point and obtain the same result. Reproducibility is a key quality for AI systems, as it ensures that a model’s prior predictions can be reproduced if the model is re-run at a later point.

But reproducibility is also a challenging problem due to the complex nature of AI systems. Reproducibility requires consistency on all of the following:

  1. The code used to compute the AI inference.
  2. The weights learned from the data.
  3. The environment/configuration used to run the code.
  4. The inputs and input structure provided to the model.

Changing any of these components can yield different outputs. For AI systems to become fully reproducible, teams need to implement each of these components robustly enough that they behave as atomic components, producing exactly the same result regardless of when the model is re-run.
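
A minimal sketch of what pinning these components down could look like: fixing random seeds and recording a fingerprint of the code version, weights, environment, and inputs alongside each inference. The field names and hashing scheme are illustrative assumptions rather than any standard format.

```python
# Illustrative reproducibility record: fingerprint the ingredients of an inference run.
# The fields and hashing scheme are assumptions for the sketch, not a standard format.

import hashlib
import json
import platform
import random

import numpy as np

SEED = 42
random.seed(SEED)     # pin Python's RNG
np.random.seed(SEED)  # pin NumPy's RNG (frameworks such as PyTorch/TensorFlow have their own seeds)


def sha256_of(obj) -> str:
    """Stable hash of a JSON-serialisable object (e.g. inputs or model weights)."""
    payload = json.dumps(obj, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


def inference_record(code_version: str, weights, inputs) -> dict:
    """Everything needed to reproduce one prediction later."""
    return {
        "code_version": code_version,              # e.g. a git commit hash
        "weights_hash": sha256_of(weights),
        "inputs_hash": sha256_of(inputs),
        "environment": platform.python_version(),  # extend with package versions, hardware, etc.
        "seed": SEED,
    }


record = inference_record("abc1234", weights=[0.1, -0.3, 0.7], inputs={"age": 34, "income": 52_000})
print(json.dumps(record, indent=2))
```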

This is a challenging problem, especially when tackled at scale with the broad and heterogeneous ecosystem of tools and frameworks involved in the machine learning space. Fortunately for AI practitioners, there is a broad range of tools that simplify the adoption of best practices to ensure reproducibility throughout the end-to-end AI lifecycle; many of them can be found in this list.

The above responsible AI principles are a guide for teams to follow to ensure the responsible design, development, and operation of AI systems. Through high-level principles like these, we can ensure that best practices are used to mitigate undesired outcomes of AI systems, and that the technology does not become a tool that disempowers the vulnerable, perpetuates unethical biases, or dissolves accountability. Instead, we can ensure that AI is used as a tool that drives productivity, growth, and common benefit.




Author: Alejandro Saucedo
Source: VentureBeat
