
Government audit of AI with ties to white supremacy finds no AI



In April 2020, news broke that Banjo CEO Damien Patton, once the subject of profiles by business journalists, was previously convicted of crimes committed with a white supremacist group. According to OneZero’s analysis of grand jury testimony and hate crime prosecution documents, Patton pled guilty to involvement in a 1990 shooting attack on a synagogue in Tennessee.

Amid growing public awareness about algorithmic bias, the state of Utah halted a $20.7 million contract with Banjo, and the Utah attorney general’s office opened an investigation into matters of privacy, algorithmic bias, and discrimination. But in a surprise twist, an audit and report released last week found no bias in the algorithm because there was no algorithm to assess in the first place.

“Banjo expressly represented to the Commission that Banjo does not use techniques that meet the industry definition of artificial Intelligence. Banjo indicated they had an agreement to gather data from Twitter, but there was no evidence of any Twitter data incorporated into Live Time,” reads a letter Utah State Auditor John Dougall released last week.

The incident, which VentureBeat previously referred to as part of a “fight for the soul of machine learning,” demonstrates why government officials must evaluate claims made by companies vying for contracts, and how failure to do so can cost taxpayers millions of dollars. As the incident underlines, companies selling surveillance software can make false claims about their technologies’ capabilities or turn out to be charlatans or white supremacists, constituting a public nuisance or worse. The audit result also suggests that a lack of scrutiny can undermine public trust in AI and in the governments that deploy it.

Dougall carried out the audit with help from the Commission on Protecting Privacy and Preventing Discrimination, a group his office formed weeks after news broke of the company’s white supremacist associations and its Utah state contract. Banjo had previously claimed that its Live Time technology could detect active shooter incidents, child abduction cases, and traffic accidents from video footage or social media activity. In the wake of the controversy, Banjo appointed a new CEO and rebranded under the name safeXai.

“The touted example of the system assisting in ‘solving’ a simulated child abduction was not validated by the AGO and was simply accepted based on Banjo’s representation. In other words, it would appear that the result could have been that of a skilled operator as Live Time lacked the advertised AI technology,” Dougall states in a seven-page letter sharing audit results.

According to Vice, which previously reported that Banjo used a secret company and fake apps to scrape data from social media, Banjo and Patton had gained support from politicians like U.S. Senator Mike Lee (R-UT) and Utah State Attorney General Sean Reyes. In a letter accompanying the audit, Reyes commended the results of the investigation and said the finding of no discrimination was consistent with the conclusion the state attorney general’s office reached because there simply wasn’t any AI to evaluate.

“The subsequent negative information that came out about Mr. Patton was contained in records that were sealed and/or would not have been available in a robust criminal background check,” Reyes said in a letter accompanying the audit findings. “Based on our first-hand experience and close observation, we are convinced the horrible mistakes of the founder’s youth never carried over in any malevolent way to Banjo, his other initiatives, attitudes, or character.”

Alongside those conclusions are a series of recommendations for Utah state agencies and employees involved in awarding such contracts. Recommendations for anyone considering AI contracts include questions they should be asking third-party vendors and the need to conduct an in-depth review of vendors’ claims and the algorithms themselves.

“The government entity must have a plan to oversee the vendor and vendor’s solution to ensure the protection of privacy and the prevention of discrimination, especially as new features/capabilities are included,” reads one of the listed recommendations. Among other recommendations are the creation of a vulnerability reporting process and evaluation procedures, but no specifics were provided.

While some cities have put surveillance technology review processes in place, local and state adoption of private vendors’ surveillance technology is currently happening in many places with little scrutiny. This lack of oversight could also become an issue for the federal government. The Government by Algorithm report Stanford University and New York University jointly published last year found that roughly half of the algorithms used by federal government agencies come from third-party vendors.

The federal government is currently funding an initiative to create tech for public safety, like the kind Banjo claimed to have developed. The National Institute of Standards and Technology (NIST) routinely assesses the quality of facial recognition systems and has helped assess the role the federal government should play in creating industry standards. Last year, it introduced ASAPS, a competition in which the government is encouraging AI startups and researchers to create systems that can tell if an injured person needs an ambulance, whether the sight of smoke and flames requires a firefighter response, and whether police should be alerted in an altercation. These determinations would be based on a dataset incorporating data ranging from social media posts to 911 calls and camera footage. Such technology could save lives, but it could also lead to higher rates of contact with police, which can also cost lives. It could even fuel repressive surveillance states like the kind used in Xinjiang to identify and control Muslim minority groups like the Uyghurs.

Best practices for government procurement officers seeking contracts with third parties selling AI were introduced in 2018 by U.K. government officials, the World Economic Forum (WEF), and companies like Salesforce. Hailed as one of the first such guidelines in the world, the document recommends defining public benefit and risk and encourages open practices as a way to earn public trust.

“Without clear guidance on how to ensure accountability, transparency, and explainability, governments may fail in their responsibility to meet public expectations of both expert and democratic oversight of algorithmic decision-making and may inadvertently create new risks or harms,” the British-led report reads. The U.K. released official procurement guidelines in June 2020, but weeks later a grading algorithm scandal sparked widespread protests.

People concerned about the potential for things to go wrong have called on policymakers to implement additional legal safeguards. Last month, a group of current and former Google employees urged Congress to adopt strengthened whistleblower protections in order to give tech workers a way to speak out when AI poses a public harm. A week before that, the National Security Commission on Artificial Intelligence called on Congress to give federal government employees who work for agencies critical to national security a way to report misuse or inappropriate deployment of AI. That group also recommends tens of billions of dollars in investment to democratize AI and create an accredited university to train AI talent for government agencies.

In other developments at the intersection of algorithms and accountability, the documentary Coded Bias, which calls AI part of the battle for civil rights in the 21st century and examines government use of surveillance technology, started streaming on Netflix today.

Last year, the cities of Amsterdam and Helsinki created public algorithm registries so citizens know which government agency is responsible for deploying an algorithm and have a mechanism for accountability or reform if necessary. And as part of a 2019 symposium about common law in the age of AI, NYU professor of clinical law Jason Schultz and AI Now Institute cofounder Kate Crawford called for businesses that work with government agencies to be treated as state actors and considered liable for harm the way government employees and agencies are.



Author: Khari Johnson
Source: VentureBeat

