A doctor walks into a bar: Tackling image generation bias with Responsible AI

A doctor walks into a bar…

What does the setup for a likely bad joke have to do with image bias in DALL-E?

DALL-E is an artificial intelligence program developed by OpenAI that creates images from text descriptions. It uses a 12-billion-parameter version of the GPT-3 transformer model to interpret natural-language prompts and generate corresponding images. DALL-E can generate realistic images and is one of the most capable multimodal models available today.

Its inner workings and source code are not publicly available, but we can invoke it through an API by passing a text prompt describing the image to generate. It’s a prime example of a popular pattern called “model-as-a-service.” Naturally, for such an amazing model there was a long waitlist, and when I finally got access I wanted to try out all sorts of combinations.
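
That pattern boils down to a single authenticated API call: send a text prompt, get generated images back. Here is a minimal sketch, assuming the current OpenAI Python SDK and an API key in the environment; the client interface and model names have changed since this article was written, so treat the exact call shape and model name as illustrative.

```python
# Minimal "model-as-a-service" sketch: send a prompt, receive image URLs.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",                  # model name is an assumption; check current docs
    prompt="Doctor walks into a bar",  # the text description to render
    n=4,                               # number of candidate images
    size="512x512",
)

for image in response.data:
    print(image.url)  # each generation is returned as a hosted image URL
```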

One thing I wanted to probe was the inherent biases the model might exhibit. So I input two separate prompts; the results for each are shown in the illustration above.

From the text prompt “Doctor walks into a bar,” the model produced only male doctors. It intelligently placed each doctor, dressed in a suit with a stethoscope and a medical chart, inside a dimly lit bar. When I input the prompt “Nurse walks into a bar,” however, the results were exclusively female and more cartoonish, with the bar rendered more like a children’s playroom. Beyond the male and female bias attached to the terms “doctor” and “nurse,” notice how the rendering of the bar itself changed with the gender of the person.

How responsible AI can help tackle bias in machine learning models

OpenAI was quick to notice this bias and has made changes to the model to try to mitigate it. The team has been testing the model on populations under-represented in its training sets, such as a male nurse or a female CEO. This is an active approach to hunting for bias: measuring it and then mitigating it by adding more training samples in the affected categories.
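
To make that concrete, here is an illustrative sketch of such an audit (not OpenAI’s actual process): generate a batch of images for counterfactual prompts, annotate each image by hand or with a separate classifier, and compare how often each presentation appears. The prompts, annotations and the `representation_rates` helper below are hypothetical.

```python
# Illustrative bias audit: tally annotated gender presentation per prompt.
# The annotations are hypothetical stand-ins for human review or a classifier.
from collections import Counter

def representation_rates(annotations):
    """Fraction of generated images carrying each annotated label."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: round(count / total, 2) for label, count in counts.items()}

# Hypothetical annotations for two counterfactual prompts, for illustration only.
audit = {
    "Doctor walks into a bar": ["male"] * 8,
    "Nurse walks into a bar": ["female"] * 8,
}

for prompt, labels in audit.items():
    print(prompt, "->", representation_rates(labels))

# A skew this extreme flags categories that need additional, more balanced
# training samples before the next model update.
```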

While this activity makes sense for a widely popular model like DALL-E, it might not be implemented in many enterprise models unless specifically asked for. For example, it would take a lot of extra effort for banks to hunt for biases and actively work on mitigating them in their credit-line approval models.
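
Even a basic check can surface problems, though. The sketch below computes approval rates per demographic group and the gap between them (a simple demographic-parity check); the column names and toy data are hypothetical, not drawn from any real credit model.

```python
# Simple demographic-parity check for a credit-line approval model.
# Column names ("group", "approved") and the toy data are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

approval_rates = decisions.groupby("group")["approved"].mean()
parity_gap = approval_rates.max() - approval_rates.min()

print(approval_rates)
print(f"Demographic parity gap: {parity_gap:.2f}")  # large gaps warrant investigation
```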

The discipline that organizes this effort, and makes bias analysis a standard part of model development, is called Responsible AI.

Just as DevOps and MLOps focus on making development agile, collaborative and automated, Responsible AI focuses on the ethics and bias issues of ML and helps actively address them across the ML development lifecycle. Working on bias early can save the much greater effort of hunting for it after release, as OpenAI had to do with DALL-E. A Responsible AI strategy also gives customers much more confidence in an organization’s ethical standards.

A Responsible AI strategy

Every company building AI today needs a Responsible AI strategy. It should cover aspects including:

  • Checking training data for bias
  • Evaluating algorithms for levels of interpretability
  • Building explanations for ML models
  • Reviewing deployment strategy for models
  • Monitoring for data and concept drift (see the sketch after this list)
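
As an example of the last item, drift monitoring can start as simply as comparing a production feature’s distribution against its training baseline. Below is a minimal sketch using a two-sample Kolmogorov-Smirnov test from SciPy; the synthetic data and the 0.05 threshold are illustrative only.

```python
# Minimal data-drift check: compare a production feature's distribution
# against its training baseline with a two-sample Kolmogorov-Smirnov test.
# The synthetic data and 0.05 threshold are illustrative, not recommendations.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # training baseline
production_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)  # incoming data, mean shifted

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.05:
    print(f"Possible data drift (KS statistic={statistic:.3f}, p={p_value:.4f})")
else:
    print("No significant drift detected")
```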

Attention to these aspects helps ensure that AI systems are built with reproducibility, transparency and accountability. Not every issue can be mitigated, so a model card should be released to document the model’s known limitations. My experiment with DALL-E surfaced an example that was seemingly benign. However, unchecked image bias in ML models used across a variety of industries can have significant negative consequences. Mitigating these risks is definitely no joke.

Dattaraj Rao is chief data scientist with Persistent Systems.

Author: Dattaraj Rao, Persistent Systems
Source: Venturebeat
