
How to approach AI more responsibly, according to a top AI ethicist



Enterprise AI platform DataRobot says it builds 2.5 million models a day, and Haniyeh Mahmoudian is personally invested in making sure they’re all as ethically and responsibly built as possible. Mahmoudian, a winner of VentureBeat’s Women in AI responsibility and ethics award, literally wrote the code for it.

An astrophysicist turned data science researcher turned the company’s first “global AI ethicist,” she has raised awareness about the need for responsible AI in the broader community. She also speaks on panels, like at the World Economic Forum, and has been driving change within her organization.

“As a coworker, I cannot stress how impactful her work has been in advancing the thinking of our engineers and practitioners to include ethics and bias measures in our software and client engagements,” said DataRobot VP of trusted AI Ted Kwartler, who was among those who nominated her for the award.

In this past year of crisis, Mahmoudian’s work found an even more relevant avenue. The U.S. government tapped her research into risk level modeling to improve its COVID-19 forecasting, and Moderna used it for vaccine trials. Eric Hargan, the U.S. Department of Health and Human Services’ deputy secretary at the time, said “Dr. Mahmoudian’s work was instrumental in assuring that the simulation was unbiased and fair in its predictions.” He added that the impact statement her team created for the simulation “broke new ground in AI public policy” and is being considered as a model for legislation.

For all that she has accomplished, VentureBeat is pleased to honor Mahmoudian with this award. We recently sat down (virtually) to further discuss her impact, as well as AI regulation, “ethics” as a buzzword, and her advice for deploying responsible AI.

VentureBeat: How would you describe your approach to AI? What drives your work?

Haniyeh Mahmoudian: For me, it’s all about learning new things. AI is becoming more and more a part of our day-to-day lives. And when I started working as a data scientist, it was always fascinating to me to learn new use cases and ideas. But at the same time, it gave me the perspective that while this area holds a lot of potential, there are certain areas you need to be cautious about.

VentureBeat: You wrote the code for statistical parity in DataRobot’s platform, as well as natural language explanations for users. These have helped companies in sectors from banking and insurance to tech, manufacturing, and CPG root out bias and improve their models. What does this look like and why is it important?

Mahmoudian: When I started my journey toward responsible AI, one of the things I noticed was that generally, you can’t really talk to non-technical people about the technical aspects of how the model behaves. They need to have a language that they understand. But just telling them “your model is biased” doesn’t solve anything either. And that’s what the natural language aspect of it helps with — not only telling them the system exhibits some level of bias, but helping navigate that. “Look at the data XYZ. Here is what we found.”

This works at the individual case level as well as at the general level. There are many different definitions of bias and fairness, and it can be really hard to navigate which one you should be using, so we want to make sure you’re using the most relevant definition. In hiring use cases, for example, you’d probably be more interested in having a diverse workforce, so equal representation is what you’re looking for. But in a health care scenario, you probably don’t care about representation as much as making sure the model isn’t wrongfully denying access to patients.
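To make that distinction concrete, here is a minimal illustrative sketch (not DataRobot’s implementation; the function and variable names are hypothetical). It contrasts a statistical-parity ratio, which checks whether two groups receive positive predictions at similar rates, with an equal-opportunity gap, which checks whether qualified people in each group are approved at similar rates.

```python
import numpy as np

def statistical_parity_ratio(y_pred, group):
    """Ratio of positive-prediction rates between two groups (hypothetical helper).

    Values near 1.0 mean both groups receive positive predictions at similar
    rates -- the 'equal representation' notion relevant to a hiring use case.
    """
    rate_a = y_pred[group == "A"].mean()
    rate_b = y_pred[group == "B"].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between two groups (hypothetical helper).

    A small gap means people who should qualify are approved at similar rates
    in both groups -- closer to the 'not wrongfully denying access' concern
    in a health care scenario.
    """
    tpr = {}
    for g in ("A", "B"):
        qualified = (group == g) & (y_true == 1)
        tpr[g] = y_pred[qualified].mean()
    return abs(tpr["A"] - tpr["B"])

# Toy example: binary predictions for two demographic groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(statistical_parity_ratio(y_pred, group))    # representation-style check
print(equal_opportunity_gap(y_true, y_pred, group))  # access-style check
```

In a hiring setting, the first number is the one to watch; in the health care example Mahmoudian describes, the second is closer to the concern she raises.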

VentureBeat: Aside from your work helping mitigate algorithmic bias in models, you’ve also briefed dozens of Congressional offices on the issues and are committed to helping policymakers get AI regulations right. How important do you believe regulation is in preventing harm caused by AI technologies?

Mahmoudian: I would say that regulations are definitely important. Companies are trying to deal with AI bias specifically, but there are gray areas; there’s no standardization, and there’s a lot of uncertainty. For these types of things, having clarification would be helpful. For example, the new EU regulations try to clarify what it means to have a high-risk use case and, in those use cases, what the expectations are (conformity assessments, auditing, things like that). These are the types of clarifications regulation can bring, which would really help companies understand the processes and also reduce their risk.

VentureBeat: There’s so much talk about responsible AI and AI ethics these days, which is great because it’s really, really important. But do you fear — or already feel like — it’s becoming a buzzword? How do we make sure this work is real and not a facade or box to check off?

Mahmoudian: To be honest, it is used as a buzzword in industry. But I would also say that as much as it’s used for marketing, companies are genuinely starting to think about it, because it’s actually benefiting them. When you look at the surveys around AI bias, one of the fears companies have is that they’re going to lose their customers. If a negative headline about their company were to come out, it’s their brand that would be jeopardized. These types of things are on their minds. So they’re also thinking that having a responsible AI system and framework can actually protect them from that type of business risk. I would give them the benefit of the doubt: they are thinking about it and they are working on it. You could say it’s a little bit late, but it’s never too late. So it is a buzzword, but there’s a lot of genuine effort as well.

VentureBeat: What often gets overlooked in the conversations about ethical and responsible AI? What needs more attention?

Mahmoudian: Sometimes when you’re talking with people about ethics, they link it directly to bias and fairness. And sometimes it might be viewed as one group trying to push its ideas onto others. So I think we need to move past that framing and make sure ethics is understood as being about the whole process, not just about bias. If you put out a model that simply doesn’t perform well and your customers are using it, that can affect people, and some might consider that unethical. There are many different ways to include ethics and responsibility in various aspects of the AI and machine learning pipeline, so it’s important for us to have that conversation. It’s not just about the endpoint of the process; responsible AI should be embedded throughout the whole pipeline.

VentureBeat: What advice do you have for enterprises building or deploying AI technologies about how to approach it more responsibly?

Mahmoudian: Have a good understanding of your process, and have a framework in place. Each industry and each company may have its own specific criteria and types of projects. So pick the processes and dimensions that are relevant to your work and can guide you throughout the process.



Author: Sage Lazzaro
Source: VentureBeat

