AI needs systemic solutions to systemic bias, injustice, and inequality

At the Diversity, Equity, and Inclusion breakfast at VentureBeat’s AI-focused Transform 2020 event, a panel of AI practitioners, leaders, and academics discussed the changes that need to happen in the industry to make AI safer, more equitable, and more representative of the people to whom AI is applied.

The wide-ranging conversation was hosted by Krystal Maughan, a Ph.D. candidate at the University of Vermont, who focuses on machine learning, differential privacy, and provable fairness. The group discussed the need for higher accountability from tech companies, inclusion of multiple stakeholders and domain experts in AI decision making, practical ways to adjust AI project workflows, and representation at all stages of AI development and at all levels — especially where the power brokers meet. In other words, although there are systemic problems, there are systemic solutions as well.

Tech company accountability

The old Silicon Valley mantra “move fast and break things” has not aged well in the era of AI. It presupposes that tech companies exist in some sort of amoral liminal space, apart from the rest of the world where everything exists in social and historical contexts.

“We can see all around the world that tech is being deployed in a way that’s pulling apart the fabric of our society. And I think the reason why is because … tech companies historically don’t see that they’re part of that social compact that holds society together,” said Will Griffin, chief ethics officer at Hypergiant.

Justin Norman, vice president of data science at Yelp, agreed, pointing out the power that tech companies wield because they possess tools that can be incredibly dangerous. “And so not only do they have an ethical responsibility, which is something they should do before they’ve done anything wrong, they also have a responsibility to hold themselves accountable when things go wrong.”

But, Norman added, we — all of us, the global community — have a responsibility here as well. “We don’t want to simply accept that any kind of corporation has unlimited power against us, any government has unlimited power over us,” he said, asserting that people need to educate themselves about these technologies so when they encounter something dubious, they know when to push back.

Both Griffin and Ayodele Odubela, a data scientist at SambaSafety, pointed out the strength of the accountability that communities can bring to bear on seemingly immovable institutions. Griffin called Black Lives Matter activists “amazing.” He said, “Those kids are right now the leaders in AI as well, because they’re the ones who identified that law enforcement was using facial recognition, and through that pressure on institutional investors — who were the equity holders of these large corporations — it forced IBM to pull back on facial recognition, and that forced Microsoft and Amazon to follow suit.” That pressure, which surged in the wake of the police killing of George Floyd, has apparently also begun to topple the institution of law enforcement as we know it by amplifying the movement to defund the police.

Odubela sees law enforcement’s waning power as an opportunity for good. Defunding the police actually means funding things like social services, she argues. “One of the ideas I really like is trying to take some of these biased algorithms and really repurpose them to understand the problems that we may be putting on the wrong kind of institutions,” she said. “Look at the problems we’re putting on police forces, like mental illness. We know that police officers are not trained to deal with people who have mental illnesses.”

These social and political victories should ideally lead to policy changes. In response to Maughan’s question about what policy changes could encourage tech companies to get serious about addressing bias in AI, Norman pulled it right back to the responsibility of citizens in communities. “Policy and law tell us what we must do,” he said. “But community governance tells us what we do, and that’s largely an ethical practice.”

“I think that when people approach issues of diversity, or they approach issues of ethics in the discipline, they don’t appreciate the challenge that we’re up against, because … engineering and computer science is the only discipline that has this much impact on so many people that does not have any ethical reasoning, any ethical requirements,” Griffin added. He contrasted tech with fields like medicine and law, which have made ethics a core part of their educational training for centuries, and where practitioners are required to hold a license issued by a governing body.

Where it hurts

Odubela took these thoughts a step beyond the need for policy work by saying, “Policy is part of it, but a lot of what will really force these companies into caring about this is if they see financial damages.”

For businesses, their bottom line is where it hurts. One could argue that it’s almost crass to think about effecting change through capitalist means. On the other hand, if companies are profiting from questionable or unjust artificial intelligence products, services, or tools, it follows that justice could come by eliminating that incentive.

Griffin illustrated this point by talking about facial recognition systems that big tech companies have sold, especially to law enforcement agencies — how none of them were vetted, and now the companies are pulling them back. “If you worked on computer vision at IBM for the last 10 years, you just watched your work go up in smoke,” he said. “Same at Amazon, same at Microsoft.”

Another example Griffin gave: A company called Practice Fusion digitizes electronic health records (EHR) for smaller doctors’ offices and medical practices, runs machine learning on those records and other outside data, and helps provide prescription recommendations to caregivers. Allscripts bought Practice Fusion for $100 million in January 2018. But a Department of Justice (DoJ) investigation discovered that Practice Fusion had been taking kickbacks from a major opioid company in exchange for recommending that company’s opioids to patients. In January 2020, the DoJ levied a $145 million fine in the case. On top of that, as a result of the scandal, “Allscripts’ market cap dropped in half,” Griffin said.

“They walked themselves straight into the opioid crisis. They used AI really in the worst way you can use AI,” he added.

He said that although that’s one specific case that was fully litigated, there are more out there. “Most companies are not vetting their technologies in any way. There are land mines — AI land mines — in use cases that are currently available in the marketplace, inside companies, that are ticking time bombs waiting to go off.”

There’s a reckoning growing on the research side, too, as in recent weeks both the ImageNet and 80 Million Tiny Images data sets have been called to account over bias concerns.

It takes time, thought, and expense to ensure that your company is building AI that is just, accurate, and as free of bias as possible, but the “bottom line” argument for doing so is salient. Any AI system failures, especially around bias, “cost a lot more than implementing this process, I promise you,” Norman said.

Practical solutions: workflows and domain experts

These problems may seem intractable, but they are not. There are practical solutions companies can employ, right now, to radically improve equity and safety in the ideation, design, development, testing, and deployment of AI systems.

A first step is bringing more stakeholders, like domain experts, into projects. “We have a pretty strong responsibility to incorporate learnings from multiple fields,” Norman said, noting that social science experts are a great complement to the skill sets practitioners and developers possess. “What we can do as a part of our own power as people who are in the field is incorporate that input into our designs, into our code reviews,” he said. At Yelp, a project must pass an ethics and diversity check at every level of the process. Norman said that as they go, they’ll pull in a data expert, someone from user research, statisticians, and those who work on the actual algorithms to add some interpretability. If they don’t have the right expertise in-house, they’ll work with a consultancy.

“From a developer standpoint, there actually are tools available for model interpretability, and they’ve been around for a long time. The challenge isn’t necessarily always that there isn’t the ability to do this work — it’s that it’s not emphasized, invested in, or part of the design development process,” Norman said. He added that it’s important to make space for the researchers who are studying the algorithms themselves and are the leading voices in the next generation of design.
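
Norman didn’t name specific libraries, but open source interpretability tools such as SHAP illustrate how low the barrier is. A minimal sketch, in which the dataset and model are our own illustrative choices rather than anything discussed on the panel:

    import shap
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    # Illustrative stand-ins: any trained model on tabular data works.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # SHAP attributes each prediction to individual input features,
    # so reviewers can see which features drive a given decision.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:100])

    # Plot which features matter most, and in which direction.
    shap.summary_plot(shap_values, X.iloc[:100])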

Griffin said that Hypergiant has a heuristic for its AI projects called “TOME,” for “top of mind ethics,” which they break down by use case. “With this use case, is there a positive intent behind the way we intend to use the technology? Step two is where we challenge our designers, our developers, [our] data scientists … to broaden their imaginations. And that is the categorical imperative,” he said. They ask what the world would look like if everyone in their company, the industry, and in the world used the technology for this use case — and they ask if that is desirable. “Step three requires people to step up hardcore in their citizenship role, which is [asking the question]: Are people being used as a means to an end, or is this use case designed to benefit people?”
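
To Griffin’s later point that ethics only sticks if it lives in the workflow, here is one hypothetical way a team might encode a TOME-style review as a hard gate in a project pipeline. This is our sketch, not Hypergiant’s actual implementation, and the question wording merely paraphrases Griffin’s three steps:

    # Hypothetical TOME-style checklist encoded as a workflow gate.
    TOME_QUESTIONS = [
        "Is there a positive intent behind how we intend to use this technology?",
        "If everyone (our company, the industry, the world) used the technology"
        " this way, would that world be desirable?",
        "Is this use case designed to benefit people, rather than using them"
        " as a means to an end?",
    ]

    def tome_gate(answers: dict) -> bool:
        """A use case proceeds only if every question is answered 'yes' (True)."""
        return all(answers.get(q, False) for q in TOME_QUESTIONS)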

Yakaira Núñez, a senior director at Salesforce, said there’s an opportunity right now to change the way we do software development. “That change needs to consider the fact that anything that involves AI is now a systems design problem,” she said. “And when you’re embarking upon a systems design problem, then you have to think of all of the vectors that are going to be impacted by that. So that might be health care. That might be access to financial assistance. That might be impacts from a legal perspective, and so on and so forth.”

She advocates that teams “increase the discovery and the design time that’s allocated to these projects and these initiatives to integrate things like consequence scanning, like model cards, and actually hold yourself accountable to the findings … during your discovery and your design time. And to mitigate the risks that are uncovered when you’re doing the systems design work.”
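
Model cards, one of the artifacts Núñez mentions, are structured summaries of a model’s intended uses, limitations, and subgroup evaluations that travel with the model. A minimal sketch of the idea, with all field names and values hypothetical:

    from dataclasses import dataclass, field

    # Hypothetical, minimal model card; real model cards typically also
    # document training data, metrics, and per-subgroup evaluation results.
    @dataclass
    class ModelCard:
        name: str
        intended_use: str
        out_of_scope_uses: list = field(default_factory=list)
        evaluated_subgroups: list = field(default_factory=list)
        known_risks: list = field(default_factory=list)
        mitigations: list = field(default_factory=list)

    card = ModelCard(
        name="loan-review-ranker-v1",
        intended_use="Rank applications for human review; never automated denial.",
        out_of_scope_uses=["fully automated credit decisions"],
        evaluated_subgroups=["age band", "gender", "race/ethnicity", "ZIP code"],
        known_risks=["historical lending bias reflected in training data"],
        mitigations=["subgroup error-rate audit before each release"],
    )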

Odubela brought up the issue of how to uncover the blind spots we all have. “Sometimes it does take consulting with people who aren’t like us to point these [blind spots] out,” she said. “That’s something that I’ve personally had to do in the past, but taking that extra time to make sure we’re not excluding groups of people, and we’re not baking these prejudices that already exist in society straight into our models — it really does come [down] to relying on other people, because there are some things we just can’t see.”

Núñez echoed Odubela, noting that “as a leader you’re responsible for understanding and reflecting, and being self-aware enough to know that you have your biases. It’s also your responsibility to build a board of advisors that keeps you in check.”

“The key is getting it into the workflows,” Griffin noted. “If it doesn’t get into the workflow, it doesn’t get into the technology; if it doesn’t get into the technology, it won’t change the culture.”

Representation

Not much of this is possible, though, without improved representation of underrepresented groups in critical positions. As Griffin pointed out, this particular panel comprises leaders who have the decision-making power to implement practical changes in workflows right away. “Assuming that [the people on this panel] are in a position to flat-out stop a use case, and say ‘Listen, nope, this doesn’t pass muster, not happening’ — when developers, designers, data scientists know that they can’t run you over, they think differently,” he said. “All of a sudden everyone becomes a brilliant philosopher. Everybody’s a social scientist. They figure out how to think about people when they know their work will not go forward.”

But that’s not the case within enough companies, even though it’s critically important. “The subtext here is that in order to execute against this, this also means that you have to have a very diverse team applying the lens of the end user, the lens of those impacted into that development lifecycle. Checks and balances have to be built in from the start,” Núñez said.

Griffin offered an easy-to-understand benchmark to aim for: “For diversity and inclusion, when you have African Americans who have equity stakes in your company — and that can come in the form of founders, founding teams, C-suite, board seats, allowed to be investors — when you have diversity at the cap table, you have success.”

And that needs to happen fast. Griffin said that although he’s seeing lots of good programs and initiatives coming out of the companies whose boards he sits on, like boot camps, college internships, and mentorship programs, they’re not going to be immediately transformative. “Those are marathons,” he said. “But nobody on these boards I’m with got into tech to run a marathon — they got in to run a sprint. … They want to raise money, build value, and get rewarded for it.”

But we are in a unique moment that portends a wave of change. Griffin said, “I have never in my lifetime seen a time like the last 45 days, where you can actually come out, use your voice, have it be amplified, without the fear that you’re going to be beaten back by another voice saying, ‘We’re not thinking about that right now.’ Now they’re thinking about it.”


Author: Seth Colaner.
Source: VentureBeat
