In early June, border officials “quietly deployed” the mobile app CBP One at the U.S.-Mexico border to “streamline the processing” of asylum seekers. While the app will reduce manual data entry and speed up processing, it also relies on controversial facial recognition technologies and stores sensitive information about asylum seekers before they enter the U.S. The issue here is not the use of artificial intelligence per se, but what it means for the Biden administration’s pre-election promises on civil rights in technology, including addressing AI bias and protecting data privacy.
When the Democrats took control of both the House and the Senate in January, onlookers were optimistic that there was an appetite for a federal privacy bill and for legislation to stem bias in algorithmic decision-making systems. This is long overdue, said Ben Winters, an Equal Justice Works fellow at the Electronic Privacy Information Center (EPIC) who works on matters related to AI and the criminal justice system. “The current state of AI legislation in the U.S. is disappointing, [with] a majority of AI-related legislation focused almost solely on investment, research, and maintaining competitiveness with other countries, primarily China,” Winters said.
Legislation moves forward
But there is some promising legislation waiting in the wings. The Algorithmic Justice and Online Platform Transparency bill, introduced by Sen. Edward Markey and Rep. Doris Matsui in May, would clamp down on harmful algorithms, promote transparency around websites’ content amplification and moderation practices, and launch a cross-government investigation into discriminatory algorithmic processes throughout the economy.
Local bans on facial recognition are also picking up steam across the U.S. So far this year, bills or resolutions related to AI have been introduced in at least 16 states. They include California and Washington (accountability from automated decision-making apps); Massachusetts (data privacy and transparency in AI use in government); Missouri and Nevada (technology task force); and New Jersey (prohibiting “certain discrimination” by automated decision-making tech). Most of these bills are still pending, though some have already failed, such as Maryland’s Algorithmic Decision Systems: Procurement and Discriminatory Acts.
The Wyden Bill from 2019 and more recent proposals, such as the one from Markey and Matsui, provide much-needed direction, said Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “Companies are looking to the federal government for guidance and standards-setting,” Lin said. “Likewise, AI laws can protect technology developers in the new and tricky cases of liability that will inevitably arise.”
Transparency is still a huge challenge in AI, Lin added: “[AI systems are] black boxes that seem to work OK even if we don’t know how … but when they fail, they can fail spectacularly, and real human lives could be at stake.”
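To make the transparency problem concrete: even when a model’s internals are opaque, auditors can probe it from the outside. The sketch below is a minimal illustration, assuming scikit-learn and synthetic data; permutation importance is a standard auditing technique, but nothing here is drawn from the bills or systems discussed in this article.

```python
# Probing a black-box model from the outside with permutation importance:
# shuffle each input feature in turn and measure how much test accuracy
# drops. Features the model leans on heavily cause large drops. All data
# here is synthetic and purely illustrative.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a decision system's training data.
X, y = make_classification(n_samples=1000, n_features=5,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a model, then treat it as an opaque decision system.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Score the drop in test accuracy when each feature is scrambled.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {imp:.3f}")
```

A check like this doesn’t open the black box, but it does reveal which inputs a deployed system actually depends on, one ingredient of the kind of accountability regulators are discussing.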
Compliance standards and policies expand
The Wyden Bill is a good starting point, Winters said: it would give the Federal Trade Commission broader authority and require impact assessments that consider data sources, bias, fairness, privacy, and more. Still, it would help to expand compliance standards and policies. “The main benefit to [industry] would be some clarity about what their obligations are and what resources they need to devote to complying with appropriate regulations,” he said. But there are drawbacks too, especially for companies that rely on fundamentally flawed or discriminatory data, as “it would be hard to accurately comply without endangering their business or inviting regulatory intervention,” Winters added.
Another drawback, Lin said, is that even if established players support a law to prevent AI bias, it isn’t clear what bias looks like in terms of machine learning. “It’s not just about treating people differently because of their race, gender, age, or whatever, even if these are legally protected categories,” Lin said. “Imagine if I were casting for a movie about Martin Luther King, Jr. I would reject every actor who is a teenage Asian girl, even though I’m rejecting them precisely because of age, ethnicity, and gender.” Algorithms, however, don’t understand context.
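A toy check illustrates Lin’s point. Demographic parity, one common statistical definition of bias, simply compares positive-outcome rates across groups. The sketch below uses only hypothetical group labels and numbers (nothing from any bill or real system), and it would flag the MLK-casting scenario exactly as it would flag a genuinely discriminatory hiring model, because the metric carries no notion of context.

```python
# A minimal demographic-parity check: compare positive-outcome rates across
# groups and report the largest gap. All group labels and decisions below
# are hypothetical, for illustration only.

from collections import defaultdict

def selection_rates(decisions):
    """Positive-outcome rate per group, from (group, selected) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def demographic_parity_gap(decisions):
    """Largest difference in selection rates across groups (0 = parity)."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log of (group, was_selected) decisions.
audit = [("group_a", True), ("group_a", True), ("group_a", False),
         ("group_b", False), ("group_b", False), ("group_b", True)]

print(selection_rates(audit))         # {'group_a': ~0.67, 'group_b': ~0.33}
print(demographic_parity_gap(audit))  # ~0.33: flagged, context unknown
```

The check reports a gap either way; deciding whether that gap reflects unlawful discrimination or legitimate context is exactly what the statistics alone cannot do.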
The EU’s General Data Protection Regulation (GDPR) is a good example to emulate, even though it’s aimed not at AI specifically but at underlying data practices. “GDPR was fiercely resisted at first … but it’s now generally regarded as a very beneficial regulation for individual, business, and societal interests,” Lin said. “There is also the coercive effect of other countries signing an international law, making a country think twice or three times before it acts against the treaty and elicits international condemnation. … Even if the US is too laissez-faire in its general approach to embrace guidelines [like the EU’s], they still will want to consider regulations in other major markets.”
Author: Payal Dhar
Source: VentureBeat