
How to make AI more ethical 

A recent Pew Research study found that a majority of experts and advocates worry AI will continue to focus on optimizing profits and social control and is unlikely to develop an ethical basis within the next decade. And in an academic study earlier this year, researchers from Cornell and the University of Pennsylvania found that two-thirds of machine learning researchers believe AI safety should be prioritized more than it is today. They also found that people are more willing to trust AI when it is backed by existing international bodies such as the UN or the EU.

Some of these worries are grounded in early AI models that showed unintended biases. For example, Twitter’s algorithm for selectively cropping image previews showed an apparent bias toward certain groups (Twitter later evaluated the algorithm and decided to retire it). Similar biases have been found not just in computer vision but in virtually every domain of machine learning.

We have seen several recent attempts to mitigate such problems. Last year, for example, the Department of Defense published five AI principles, recommending that AI technology be responsible, equitable, traceable, reliable, and governable. Google, Zendesk, and Microsoft have also issued guidelines, offering frameworks for reaching ambitious goals around ethical AI development. These are all good places to start.

Ethical AI is still in its infancy, but it is becoming increasingly important for companies to act on. My team approached ethical AI from first principles and augmented that work with research from other organizations. We arrived at the following principles while developing our own ethical AI framework, and we hope they are helpful to other teams:

1. Articulate the problem you’re trying to solve and identify the potential for bias

The first step in developing ethical AI is clearly articulating the problem you are trying to solve. If you are developing a credit scoring algorithm, for example, outline exactly what you’d like the algorithm to determine about an applicant and highlight any data points that may unintentionally introduce bias (e.g., racial confounders based on where someone lives). This also means understanding any implicit biases engineers or product managers may have and ensuring those biases don’t get enshrined in the code. One way to identify biases at the design stage is to involve team members with diverse perspectives from the very start, both in terms of their business functions (such as legal, product, and marketing) and in terms of their own experiences and backgrounds.
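To make the credit-scoring example concrete, here is a minimal sketch of a design-stage proxy-bias check. The file name and the column names (zip_code, race, approved) are hypothetical placeholders, not part of any real dataset; substitute whatever your own schema uses.

```python
# A minimal sketch of a design-stage proxy-bias check for the credit-scoring
# example. The file name and the column names ("zip_code", "race", "approved")
# are hypothetical placeholders for whatever your own schema uses.
import pandas as pd

applications = pd.read_csv("loan_applications.csv")  # hypothetical dataset

# How strongly does a seemingly neutral feature track a protected attribute?
# A heavily skewed cross-tabulation suggests the feature may act as a proxy.
proxy_check = pd.crosstab(
    applications["zip_code"], applications["race"], normalize="index"
)
print(proxy_check.head())

# Compare historical outcomes across groups before training anything, so the
# team can see whether the labels themselves already encode bias.
print(applications.groupby("race")["approved"].mean())
```

Even a quick check like this, run before any model exists, can surface features the team should drop or handle with care.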

2. Understand your underlying datasets and models

Once you’ve articulated the problem and identified potential sources of bias, you should study the bias quantitatively by establishing processes to measure diversity in your datasets and model performance across groups of interest. This means sampling training data to ensure it fairly represents those groups, and segmenting model performance by group to make sure you don’t see degraded performance for any of them. For example, when developing computer vision models, such as sentiment detection algorithms, ask yourself: Do they work equally well for both men and women? For various skin tones and ages? It is critical to understand the makeup of your dataset and any biases that may be inadvertently introduced, either in training or in production.
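One way to put this into practice is sketched below. It assumes a hypothetical validation DataFrame with ground-truth labels in a y_true column, model predictions in y_pred, and demographic columns such as gender or skin_tone; none of these names refer to a specific library or dataset.

```python
# A minimal sketch of measuring representation and per-group performance,
# assuming a hypothetical validation DataFrame with columns "y_true" (labels),
# "y_pred" (model predictions), and demographic columns such as "gender" or
# "skin_tone".
import pandas as pd
from sklearn.metrics import accuracy_score

def representation(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Share of the dataset belonging to each group of interest."""
    return df[group_col].value_counts(normalize=True)

def performance_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Accuracy computed separately for each value of group_col."""
    return df.groupby(group_col).apply(
        lambda g: accuracy_score(g["y_true"], g["y_pred"])
    )

# Example usage: a large gap between groups is a signal to revisit the
# training data or the model before shipping.
# print(representation(validation_df, "skin_tone"))
# print(performance_by_group(validation_df, "skin_tone"))
```

The exact metric matters less than the habit: report results segmented by group, every time, rather than a single aggregate number.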

3. Be transparent and approachable

AI teams should also seek to better understand their AI models and transparently share that understanding with the right stakeholders. This could have several dimensions but should focus primarily on what your AI models can and can’t do and on the underlying datasets they were built from. Consider a content recommender system: Can you articulate how much information it needs before it can surface relevant recommendations to your customers? What steps, if any, does it take to mitigate amplification of viewpoints and homogenization of the user experience? The more you understand the underlying AI technologies you are building, the better you can transparently explain them to your users and other teams internally. Google has provided a good example of this with model cards — simple explanations of its AI models that describe when the models work best (and when they don’t).
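For teams that want to try something similar, a bare-bones model card can start as nothing more than structured text. The sketch below is loosely inspired by the idea behind Google’s model cards; every field name and value is an illustrative assumption for the recommender example above, not a standard schema.

```python
# A bare-bones, hand-rolled model card for the recommender example above.
# Every field name and value here is an illustrative assumption.
model_card = {
    "model": "content-recommender-v2",  # hypothetical model name
    "intended_use": "Surface relevant articles to logged-in users",
    "works_best_when": "A user has a meaningful history of interactions",
    "known_limitations": [
        "Cold-start users receive generic, popularity-based results",
        "Can amplify already-popular viewpoints unless results are re-ranked",
    ],
    "training_data": "Describe the source, time range, and languages of the logs used",
    "performance_notes": "Report key metrics segmented by user group, not just overall",
}

# Publishing this alongside the model, in plain language, gives internal teams
# and customers a shared, honest picture of what the system can and can't do.
print(model_card["known_limitations"])
```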

Another element of transparency is making AI approachable to everyone, even those who aren’t technically versed in machine learning or statistics. This means writing content like model cards in approachable terms and providing simple explanations of how AI algorithms like convolutional neural networks work (without diving into the engineering or mathematical complexities).

4. Develop review processes to instill rigor

Ethical AI is not the responsibility of product and engineering teams alone. Groups across the company should weigh in on AI projects and do so in a systematic way. This means developing processes to ensure ethical rigor in AI projects and to surface any issues as early as possible. One way to accomplish this is to have one or more independent bodies review AI products while they are still in the design phase, and then again later in the lifecycle. We practice this at my company through a cross-functional Privacy and Ethics Board and a separate ethical AI subcommittee. These groups help us define corporate ethical principles and procedures as well as assess product ideas to provide tangible guidance on ethical AI development.

5. Ensure security and privacy of customer data

Finally, as with any data project, AI teams should operate with a privacy- and security-first mindset. This means ensuring that any AI effort follows all internal guidelines around data privacy and security and goes further when particularly sensitive data is involved (such as personally identifiable information or vision data).

Ensuring privacy also means having procedures for which algorithms and data you store and share, both internally and externally. For example, if facial recognition is important to an AI feature you’re building, it is critical to think through whether and how you store facial landmark information and who has access to that data. For sensitive data, access should be limited to the immediate engineering teams working on the feature (and even then, with audits in place to prevent misuse).
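As one illustration of what such a procedure might look like in code, here is a minimal sketch of gating access to facial-landmark records and auditing every attempt. The role names, function names, and storage call are all hypothetical; in practice this would sit on top of your identity and access management system and encrypted storage rather than live in application code alone.

```python
# A minimal sketch of restricting access to facial-landmark records and
# auditing every attempt. Role names, function names, and the storage call are
# hypothetical; a real system would enforce this through IAM policies and
# encrypted storage rather than application code alone.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("sensitive_data_access")

ALLOWED_ROLES = {"vision-feature-eng"}  # only the immediate engineering team

def load_landmarks(record_id: str) -> dict:
    """Placeholder for the real (encrypted) storage lookup."""
    return {"record_id": record_id, "landmarks": []}

def fetch_facial_landmarks(record_id: str, requester: str, role: str) -> dict:
    """Return landmark data only for allowed roles, and audit every attempt."""
    granted = role in ALLOWED_ROLES
    audit_log.info(
        "access=%s record=%s requester=%s role=%s at=%s",
        "granted" if granted else "denied",
        record_id,
        requester,
        role,
        datetime.now(timezone.utc).isoformat(),
    )
    if not granted:
        raise PermissionError(f"Role '{role}' may not read facial landmark data")
    return load_landmarks(record_id)
```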

Looking towards the future

While AI has made tremendous technical progress in the past few years, we are still in the early stages of defining the foundations of ethical AI. There are tangible ways we can make progress towards ethical AI in the near term. Over time, we will need to develop more comprehensive and systematic ways to embed ethical design principles into the very fabric of AI development. Ultimately, this will be a requirement for AI to deliver the strongest benefits to society as a whole.

Ali Akhtar is head of big data and machine learning at Samsara.



Author: Ali Akhtar, Samsara
Source: VentureBeat

