
How to operationalize AI ethics

Last week, I moderated a panel at TWIMLcon on how teams and organizations can operationalize responsible AI, bringing together three people from different corners of the tech and AI community.

Rachel Thomas is best known as cofounder of fast.ai, a popular free online deep learning course. In recent months, Thomas was named director of the Center for Applied Data Ethics, a new organization at the University of San Francisco that mixes research, policy, and education.

Guillaume Saint-Jacques is a senior software engineer at LinkedIn’s Fairness Project, an applied research team that assesses the performance of the company’s AI systems.

Parinaz Sobhani is director of machine learning at Georgian Partners, an investor in SaaS startups with its own applied research lab to help portfolio companies apply machine learning.

The 30-minute discussion ranged from the power dynamics at play in AI ethics to how the “move fast and break things” ethos in Silicon Valley has evolved in the age of AI.

To lay the groundwork for incorporating AI ethics into a team’s or organization’s operations, Saint-Jacques said, it helps to begin by forging a company culture that puts the user first.

At LinkedIn, “We found that we can’t really have just one central team that writes one piece of code and fixes fairness for the whole company, and so it’s really everyone’s problem,” he said. “Everyone is involved in this, and so what that means is that you really have to have the right alignment in terms of values and culture.

“Of course, to get started practically you also need the right tools and the ways to measure and the right processes, but without the culture and values, it’s actually very hard to do anything, given the fact that it’s a broad effort that everyone should take ownership of.”

Sobhani believes businesses have to begin by defining the vision of the company they want to become, examining the company culture, and questioning whether they have the right team in place. But AI ethics also involves diversity.

Diversity is a word that comes up so often in AI ethics conversations that it may seem like an obvious answer, but Sobhani means the kind of diversity that serves users best, which can go beyond ethnic or gender diversity.

“Maybe you need a sociologist, maybe you need someone with a legal and compliance background, so it’s not only about, ‘OK, let’s have an equal number of women and men or from different ethnicities,’ but also about diversity in the background of the team. And at the end, there’s the question of what kind of processes need to be in place to enable a team to produce responsible, ethical AI products,” she said.

Sobhani’s urging echoes calls this year for a broader definition of AI teams from groups like OpenAI, whose researchers penned a paper titled “AI Safety Needs Social Scientists.”

One AI ethics success story Sobhani offered came from her work with a business whose AI identifies plagiarism in text. As a non-native English speaker herself, she was able to spot potential bias in the system toward labeling ESL writers as plagiarists.

Citing an idea from Medium’s trust and safety lead Alex Feerst, Thomas thinks some organizations can embed their trust and safety team with engineering and product design teams.

“Trust and safety is kind of seeing what can go wrong and what happens when it does, and then engineering, product, and design tend to live in a bit more optimistic world, and having those communication channels open between groups and also having all the necessary stakeholders, and everyone who’s going to be impacted downstream, involved,” she said.

One of Thomas’ favorite AI ethics resources comes from the Markkula Center for Applied Ethics at Santa Clara University: a toolkit that recommends a number of processes to implement.

“The key one is ethical risk sweeps, periodically scheduling times to really go through what could go wrong and what are the ethical risks. Because I think a big part of ethics is thinking through what can go wrong before it does and having processes in place around what happens when there are mistakes or errors,” she said.

To root out bias, Sobhani recommends the What-If visualization tool from Google’s People + AI Research (PAIR) initiative as well as FairTest, a tool for “discovering unwarranted association within data-driven applications” from academic institutions like EPFL and Columbia University. She also endorses privacy-preserving AI techniques like federated learning to ensure better user privacy.
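
To make this concrete, here is a minimal sketch (not from the panel) of the kind of group-wise error check that tools like FairTest automate, applied to a hypothetical audit of a plagiarism detector like the one Sobhani described. The data, group labels, and function names are illustrative assumptions, not a real system.

```python
# Minimal sketch: compare false positive rates across writer groups.
# All names and data below are hypothetical, for illustration only.
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, predicted_positive, actually_positive) tuples."""
    counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
    for group, predicted, actual in records:
        if not actual:                      # only true negatives can become false positives
            counts[group]["negatives"] += 1
            if predicted:
                counts[group]["fp"] += 1
    return {
        group: c["fp"] / c["negatives"] if c["negatives"] else 0.0
        for group, c in counts.items()
    }

# Hypothetical plagiarism-detector audit: (writer group, flagged?, actually plagiarized?)
sample = [
    ("native", True, True), ("native", False, False), ("native", False, False),
    ("esl", True, False), ("esl", True, True), ("esl", False, False),
]
print(false_positive_rate_by_group(sample))  # {'native': 0.0, 'esl': 0.5} -- a gap worth investigating
```

In a real audit, a persistent gap like the one printed above would prompt a closer look at the training data and features before shipping.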

In addition to resources recommended by panelists, AlgorithmWatch maintains a running list of AI ethics guidelines. Last week, the group found that guidelines released in March 2018 by IEEE, the world’s largest association for professional engineers, have seen little adoption at Facebook and Google.

Algorithmic bias against women and people of color by facial recognition software has attracted a lot of attention in recent years, but Saint-Jacques thinks people should focus more on harm and less on bias.

“I think sometimes we get focused on algorithms and whether there’s bias, and I think there’s a sense in which every algorithm can be biased somewhat, but you might see very different types of harm,” he said.

Thomas agrees that, while bias is an important issue, it’s one factor among many in overall harm, particularly considering the kinds of things that are already happening, like algorithms that have incorrectly cut off Medicaid benefits or gotten teachers fired.

As more businesses incorporate AI into their operations and it scales, it may reshape how employees think, shifting them from a deterministic mindset to a probabilistic one.

While AI at scale may begin to change how businesses operate, a general lack of understanding that AI isn’t magic — just statistics and math and predictions — can hamper reasonable expectations.

In a VentureBeat article published last week, AI practitioner Anna Metsäranta from Finnish company Solita expressed concern that a poor understanding of what’s possible may be distorting investors’ views and driving which companies get funded.

Perfection doesn’t exist, Sobhani said, and conceptions about what AI systems can actually accomplish today should be fact-based for end users, product leads, and the general ecosystem.

“One of the problems I have so far is some people who believe these systems are perfect or they can be perfect down the road, but I personally believe because they are probabilistic systems and we are using them for deterministic decision making, we might never reach 100% performance. So it’s really important to have guardrails,” she said.
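
As an illustration of what such a guardrail could look like in practice, here is a minimal, hypothetical sketch: a classifier’s probability is only acted on automatically when the model is very confident, and everything in between is routed to human review. The thresholds and action names are assumptions for the example, not anything the panelists described.

```python
# Minimal sketch of a guardrail around a probabilistic model (illustrative only).
# The thresholds and routing labels are hypothetical, not from the panel.

ACCEPT_THRESHOLD = 0.95   # act automatically only on very confident predictions
REJECT_THRESHOLD = 0.05

def decide(probability_positive: float) -> str:
    """Turn a model score into an action, deferring to a human in the gray zone."""
    if probability_positive >= ACCEPT_THRESHOLD:
        return "auto_approve"
    if probability_positive <= REJECT_THRESHOLD:
        return "auto_reject"
    return "route_to_human_review"   # the guardrail: no irreversible automated decision

for score in (0.99, 0.50, 0.02):
    print(score, decide(score))
```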

Saint-Jacques agrees the perception that machine intelligence can be perfect presents a problem.

“Seems like maybe part of the process as well is teaching, or at least having public education stuff out there, so that people understand that these are probabilistic systems and they’re making predictions based on data,” he said.

In some instances, Thomas believes, tech companies bear responsibility for overhyping what they’re selling, giving people unrealistic expectations.

“Often people purchasing these products don’t have the understanding of probability that they need or have these misconceptions that AI is 100% accurate, but I think we also have a real responsibility to not overpromise or oversell the capabilities of what we’re doing,” she said.

The notion of a checklist like the kind Microsoft introduced this spring has drawn criticism from some in the AI ethics community who feel that a step-by-step document can lead to a lack of thoughtful analysis of specific use cases.

In a response delivered offstage, Sobhani said checklists work when used in the context of the product you’re making and when protected classes are defined.

Thomas says checklists can be one part of a larger, ongoing process, and pointed to a data ethics checklist released earlier this year by former White House chief data scientist DJ Patil and Cloudera general manager of machine learning Hilary Mason. The list contains questions like “Have we explained clearly what users are consenting to?” and “Have we listed how this technology can be attacked or abused?”

“I understand the concern that ethics can never be reduced to just a checklist — and I agree that ethics must be implemented within a thoughtful and more comprehensive framework — but I think it could serve as a useful reminder. However, it is important that everyone realize that even if every box is checked, we are never done iterating on and considering the ethical impact of our work,” Thomas said.

Saint-Jacques believes establishing values is more important than a list of necessary questions to answer, cautioning that nobody should be overconfident that their technical solutions address fairness.

“A checklist may give you a sense of false confidence, which is actually why we also emphasize monitoring post deployment. So even if we’ve done everything, we still monitor, and also experimentation, because as I mentioned, you may think you’re fair, and your users might think otherwise, and so you want to know,” he said.
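
A minimal sketch of what post-deployment monitoring along these lines might look like appears below; the metric, group names, and tolerance are hypothetical assumptions and not drawn from LinkedIn’s actual tooling.

```python
# Minimal sketch of post-deployment fairness monitoring (illustrative only;
# metric names, segments, and thresholds are hypothetical).

def fairness_gap(metrics_by_group: dict) -> float:
    """Largest difference in a per-group metric (e.g. positive rate or recall)."""
    values = list(metrics_by_group.values())
    return max(values) - min(values)

def check_deployment(metrics_by_group: dict, tolerance: float = 0.05) -> None:
    gap = fairness_gap(metrics_by_group)
    if gap > tolerance:
        # In a real system this would page the owning team or open a ticket.
        print(f"ALERT: fairness gap {gap:.2f} exceeds tolerance {tolerance:.2f}")
    else:
        print(f"OK: fairness gap {gap:.2f} within tolerance")

# Hypothetical weekly snapshot of recall per user segment
check_deployment({"segment_a": 0.91, "segment_b": 0.82})  # gap 0.09 -> alert
```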

The “move fast and break things” ethos used to be the standard for Silicon Valley. When I asked how startups that once embraced that ethos have evolved in the age of AI, Sobhani recognized that startups still face short-term demands to demonstrate fast growth.

Incorporating AI ethics into a long-term strategy can be a profitable decision, however, since systems that perform worse for some groups of people can miss out on business opportunities, Saint-Jacques said.

“If you’re very biased, you might only cater to one population, and eventually that limits the growth of your user base, so from a business perspective you actually want to have everyone come on board, so it’s actually a good business decision in the long run,” he said.


Author: Khari Johnson
Source: VentureBeat
