
AI Weekly: Surveillance, structural racism, and the Biden 2020 presidential campaign

In the United Kingdom, there has been landmark news recently about government use of AI. First, South Wales Police’s use of facial recognition was ruled unlawful by the Court of Appeal, in part because it violated privacy and human rights and because police failed to verify that the technology did not exhibit race or gender bias. How the U.K. treats facial recognition matters because London has more CCTV cameras than any major city outside of China.

Then U.K. government officials used an algorithm to assign exam grades that ended up benefiting kids who go to private schools and downgrading students from disadvantaged backgrounds. Prime Minister Boris Johnson defended the algorithm’s grading results as “robust” and “dependable for employers.” Students burned exam results in Parliament Square and chanted “Fuck the algorithm.” Ultimately, the government let students choose between a teacher assessment and the algorithm’s score.

In the U.S., government use of AI was also called into question as police in Miami and New York used facial recognition to identify protesters at Black Lives Matter protests. Both Democrats and Republicans in Congress have talked at length about the notion that facial recognition shouldn’t be used at protests because it can chill people’s constitutional right to freedom of speech. Around the time news emerged last week that Clearview AI is working with ICE, the Government Accountability Office (GAO) released a report that found an increase in facial recognition use by businesses and recommended Congress take action. A prominent group representing facial recognition companies also put forward its own ethical guidelines, which some companies have already violated.

Amid all these controversies came this week’s Democratic National Convention, where technology was both a topic of policy discussion and the enabler of the event itself. It was an entirely virtual convention, where roll call votes to nominate the candidates, musical performances, and major speeches were recorded or performed live around the country.

In his acceptance speech Thursday, Biden talked about his plan to address COVID-19 if elected and said the U.S. faces four historic crises: COVID-19, the worst economic crisis since the Great Depression, calls for racial justice reform, and climate change. He talked about the idea of this generation wiping out the stains of racism from the American character. He also talked about a plan to create 5 million new jobs in manufacturing and technology. A plan laid out last month includes a $300 billion research and development investment in areas like 5G and artificial intelligence.

In her acceptance speech, Democratic vice presidential candidate Kamala Harris, the first Black woman and the first woman of Indian descent to be a major party’s vice presidential nominee, introduced herself to the country. She mentioned the 100th anniversary of the passage of the 19th Amendment, which granted some women the right to vote; talked about how President Trump is unfit for office; and acknowledged that Black, Latinx, and Indigenous people are dying of COVID-19 at disproportionately higher rates.

“This is not a coincidence. It is the effect of structural racism, of inequities in education and technology, health care and housing, job security and transportation,” Harris said.

A closer look at the Biden 2020 campaign’s plan to address racial inequity reveals action items across a range of areas, from fair government contracting and opportunities for entrepreneurs and small businesses to closing the racial wealth gap, ending housing discrimination, and eliminating cash bail systems.

Artificial intelligence also appears in multiple parts of the platform, including the Biden education plan to bring computing education to grade schools so children can grow up to get jobs in AI, and the Biden foreign policy plan, which says, “We must ensure the technologies of the future like AI are bound by laws and ethics and promote greater shared prosperity and democracy.”

The foreign policy plan goes on to say, “Technology companies — which benefit from the fruits of democracy — should make concrete pledges for how they can ensure their algorithms and platforms are not empowering the surveillance state, facilitating repression in China and elsewhere, spreading hate, spurring people to violence, and remaining susceptible to misuse.”

The platform also pledges that as president, Joe Biden would charge the Consumer Financial Protection Bureau with working to ensure that algorithms used for credit scoring are free of discrimination.

Algorithmic bias is mentioned in the context of credit scores and social media platforms. However, the Biden plans to address racial inequity and criminal justice do not appear to support or propose any limits on surveillance technology, nor do they acknowledge the history of cooperation between big tech companies and police.

The Democratic Party’s 2020 platform mentions artificial intelligence four times, primarily in the context of research and development, but also in connection with military investments necessary to “meet the threats of the future.” It also says, “Democrats believe that algorithms and platforms should empower people instead of the surveillance state.”

By contrast, addressing surveillance is part of the Movement for Black Lives policy platform. Organizers created the first platform in 2016 and updated it earlier this month ahead of the Black National Convention, an approach akin to the Black national agenda drawn up at a similar gathering first held in 1972. More than 50 organizations from the Black community endorsed the plan.

Like the Biden 2020 campaign, the Movement for Black Lives calls for an end to the cash bail system, but it also calls for an end to pretrial detention and pretrial risk assessments; the Partnership on AI said last year that the algorithms that automate bail decisions are not yet fit for that use.

Based on the Biden 2020 plan and Democratic Party platform, we get an idea of what Harris meant when she said technology plays a role in structural racism. VentureBeat reached out to the Biden 2020 campaign for more details.

It has been supposed in the past that if another AI winter comes along, it will be the result of something like compute constraints or research progress hitting a wall. But what if the next winter comes about because the wider public’s negative interactions with, or negative perceptions of, tech like facial recognition sour its willingness to accept or trust the results? To some degree, it seems healthy for people to be reminded that algorithms make mistakes, but when people in power use algorithms to make decisions about people’s lives, mistrust settles in, the kind a recent NYU-Stanford study of AI use by the U.S. federal government alluded to.

The Biden-Harris campaign has shared some of its views on AI: there is certainly an awareness of algorithmic bias in the platform, and Kamala Harris has a history of questioning racial bias built into AI, including facial recognition. She also has an extensive history with Silicon Valley and has proposed legislation like the AI in Government Act to remove hurdles to government adoption of the technology.

Depending on how Election Day goes on November 3, a facial recognition moratorium, a facial recognition ban, or a national biometric privacy law like the one recently introduced in the U.S. Senate could be on the way from Congress or the next presidential administration. There’s a lot to be decided between now and then, but no matter who wins the next presidential election, the use of predictive policing, pretrial risk assessments, facial recognition, and other AI applications will remain a controversial part of racial justice reform. Justified fear that these systems will only exacerbate and amplify existing injustices means that deploying them, even after extensive testing, risks reducing trust in AI and governments alike.

For AI coverage, send news tips to Kyle Wiggers and Khari Johnson — and be sure to bookmark our AI Channel.

Thanks for reading,

Khari Johnson

Senior AI Staff Writer


Author: Khari Johnson
Source: VentureBeat
