According to a new report released by the Pew Research Center and Elon University's Imagining the Internet Center, experts doubt that ethical AI design will be broadly adopted within the next decade. In a survey of 602 technology innovators, business and policy leaders, researchers, and activists, a majority worried that the evolution of AI by 2030 will continue to be primarily focused on optimizing profits and social control, and that stakeholders will struggle to achieve a consensus about ethics.
Implementing AI ethically means different things to different companies. For some, “ethical” implies adopting AI — which people are naturally inclined to trust even when it’s malicious — in a manner that’s transparent, responsible, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “ethical AI” promises to guard against the use of biased data or algorithms, providing assurance that automated decisions are justified and explainable.
Pew and Elon University asked survey-takers: "By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?" Sixty-eight percent predicted ethical principles intended to support the public good won't be employed in most AI systems by 2030, while only 32% believed these principles will be incorporated into systems by then.
“These systems are … primarily being built within the context of late-stage capitalism, which fetishizes efficiency, scale, and automation,” Danah Boyd, a principal researcher at Microsoft, told Pew and Elon University. “A truly ethical stance on AI requires us to focus on augmentation, localized context, and inclusion, three goals that are antithetical to the values justified by late-stage capitalism. We cannot meaningfully talk about ethical AI until we can call into question the logics of late-stage capitalism.”
Internet pioneer Vint Cerf, who participated in the survey, anticipates that while there will be a “good-faith effort” to adopt ethical AI design, good intentions won’t necessarily result in the desired outcomes. “Machine learning is still in its early days, and our ability to predict various kinds of failures and their consequences is limited,” he said. “The machine learning design space is huge and largely unexplored. If we have trouble with ordinary software whose behavior is at least analytic, machine learning is another story.”
Uphill battle
The respondents’ sentiments reflect the slow progress of industry and regulators to curtail the use of harmful AI. Key federal legislation in the U.S. remains stalled, including prohibitions on facial recognition and discriminatory social media algorithms. Less than half of organizations have fully mature, responsible AI implementations, according to a recent Boston Consulting Group survey. And 65% of companies can’t explain how AI predictions are made, while just 38% have bias mitigation steps built into their model development processes, a FICO report found.
"For AI, just substitute 'digital processing.' We have no basis on which to believe that the animal spirits of those designing digital processing services, bent on scale and profitability, will be restrained by some internal memory of ethics, and we have no institutions that could impose those constraints externally," Susan Crawford, a professor at Harvard Law School and former special assistant for Science, Technology, and Innovation Policy in the Obama White House, noted in the Pew and Elon University report.
Despite the setbacks, recent developments suggest the tide may be shifting — at least in certain areas. In April, the European Commission, the executive branch of the European Union, announced regulations on the use of AI, including strict safeguards on recruitment, critical infrastructure, credit scoring, migration, and law enforcement algorithms. Cities like Amsterdam and Helsinki have launched AI registries that detail how each city government uses algorithms to deliver services. And the National Institute of Standards and Technology (NIST), a U.S. Department of Commerce agency that promotes measurement science, has proposed a method for evaluating user trust in AI systems.
But experts like Douglas Rushkoff believe it will be an uphill battle. “Why should AI become the very first technology whose development is dictated by moral principles? We haven’t done it before, and I don’t see it happening now,” the media theorist and professor at City University of New York told Pew and Elon University. “Most basically, the reasons why I think AI won’t be developed ethically is because AI is being developed by companies looking to make money — not to improve the human condition. So, while there will be a few simple AIs used to optimize water use on farms or help manage other limited resources, I think the majority is being used on people.”
Author: Kyle Wiggers
Source: VentureBeat