Our 2020 presidential candidates will be questioned about their stance on artificial intelligence (AI) policy, especially with regard to the job displacement AI could cause in manufacturing, transportation, and other industries. Over-regulating AI could hand technical superiority to countries like China and Russia, with ripple effects on America’s GDP and even national security. But under-regulation could lead to a massive consolidation of power among a handful of American technology companies, millions of jobs lost without replacement planning, and algorithms that show bias based on age, race, gender, and more.
We’re certain to hear statements about upskilling — the process of helping displaced workers acquire new skills so they can find other employment — and about taxing robots to slow down job loss. But the candidates will need to offer up more than a few soundbites. They’ll need to truly think through the complex, long-tail consequences of an AI policy that will be very much needed in the next administration. The following five areas are ones that candidates will have to understand thoroughly before framing new policy:
1. Without increased R&D investment, the US is likely to fall behind China in AI leadership during the next administration
China is moving very aggressively to lead the world in AI, and new research projects that China will surpass the US in AI research paper output within the next few years. While research papers may seem like a soft metric for gauging the loss of technological leadership, the country producing the most cutting-edge research in AI will have advantages in military application, government application, and private sector innovation.
The National Security Commission on Artificial Intelligence (NSCAI), created by Congress in 2018, raised concerns about the progress China has made in this area, noting “we are concerned that America’s role as the world’s leading innovator is threatened. We are concerned that strategic competitors and non-state actors will employ AI to threaten Americans, our allies, and our values.”
Another group, The Center for New American Security (CNAS), reported in December of 2019 that advances in AI technology are enabling future malign uses, such as launching sophisticated influence attacks against democratic nations. The CNAS recommended the US boost government funding of AI R&D to $25 billion by 2025. The CNAS concluded that “on its current trajectory, with a shrinking share of global R&D spending, human capital shortfalls, and the rapid rise of a near-peer competitor, the United States cannot continue to coast. America’s ability to harness AI to the fullest extent possible is at stake. Falling short would squander economic and societal benefits and expose the United States to avoidable risks and challenges.”
2. Students and highly-skilled workers immigrating to the US can provide an advantage
One area where the US could have a strong advantage over China and other countries is using immigration to attract more students and highly-skilled workers to boost AI leadership. Immigration in America has always been a scientific and economic catalyst, and candidates need to consider increasing visas for AI researchers and workers in the next administration. Three key points here:
- Sciences: Immigrants are responsible for 39% of the Nobel Prizes won by Americans in chemistry, medicine, and physics.
- Economy: According to the Center for American Entrepreneurship, while immigrants account for less than 14% of the population, nearly a quarter of all new businesses — nearly one-third of venture-backed companies and half of Silicon Valley high-tech startups — are started by immigrants.
- Immigrant Founders at American Universities: Satya Nadella, CEO of Microsoft, was born in India and studied at the University of Wisconsin-Milwaukee. Sergey Brin, co-founder of Google, was born in Russia and studied at Stanford. Dara Khosrowshahi, CEO of Uber, was born in Iran and studied at Brown University. According to the Kauffman Foundation, more than half of America’s “unicorn” companies have an immigrant cofounder.
In 2017, 79% of full-time graduate students in computer science programs at U.S. universities were international students. From personal experience, I watched many of my Ph.D. classmates at UC Berkeley complete advanced degrees in the hottest fields and receive government grants and fellowships, paid for partly by American taxpayers, only to have to leave the country after graduation because our immigration policies wouldn’t allow them to gain employment here.
Think about that: Our UC system is educating some of the finest minds of our generation and turning them away when they try to contribute to the economy in some of the hardest-to-staff roles on the planet. My view is we should be stapling green cards to diplomas and begging them to stay. The presidential candidates have a big opportunity to change this nonsensical cycle.
3. “Upskilling” has to be more than a buzzword. It has to be a detailed plan.
The 2016 election saw incredibly broad strokes being touted as AI policy. Upskilling is in fact a strong policy idea, but just saying the word does not mean a policy has been formed. Similarly, noting that “we need to work together,” whether addressing American businesses or the entire American population, is not a plan.
One of the few things AI technologists agree on is that some job displacement is inevitable. Some will argue that the jobs we’ll lose are not fulfilling for humans anyway, or that advancements in AI will create millions of other, new jobs. According to a McKinsey report on workforce transitions in a time of automation, an estimated 375 million workers globally will be displaced by automation by 2030. At the same time, the World Economic Forum predicts that up to 133 million new roles may emerge as companies embrace automation and uncover new opportunities for humans to work alongside machines.
Regardless of how, when, and which exact jobs we lose to AI and automation, our 2020 candidates will have to dive deeper into the issue than they have in the past. Taxing robots and pushing universal basic income (UBI) are viable, albeit incomplete, policy ideas; they don’t explicitly address retraining programs. We need policies that make clear who is responsible for retraining a person who has lost a job to automation. The government? The company that displaced the worker? The workers themselves, who could, for example, enroll in a local community college for free? Who will pay for it? We can all work together better to help the displaced when these answers are clear.
4. We need more nuanced policies around averting bias
America’s biggest banks, credit firms, mortgage loan businesses, insurance companies, and more will use AI to process information about people that can lead to biased outcomes based on age, race, sex, geography, education level, and more.
Just in the last few years there have been incidents of extreme bias by AI algorithms. Amazon used an AI system to screen job applications before finding it favored applicants who used the words “captured” and “executed” — words found predominantly on men’s resumes. And in November, Apple and Goldman Sachs came under fire after the Apple Card gave several men far higher credit limits than their wives. In one instance, David Heinemeier Hansson, the CTO of Basecamp, received 20 times the credit limit of his wife, even though they filed joint tax returns and she had a higher credit score. Goldman Sachs said that gender wasn’t an input applicants had to provide before the algorithm assigned a credit limit. However, many speculated that excluding gender as an explicit input doesn’t prevent an algorithm from picking up on proxies for gender elsewhere in the data, effectively letting it use gender as a deciding factor anyway.
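To see why dropping the protected attribute isn’t enough, consider a minimal sketch with entirely hypothetical data: a “gender-blind” model never sees the gender column, yet a single correlated proxy feature (say, a spending-category score) is enough to recover gender most of the time — which means the model can still learn it.

```python
import random

random.seed(0)

# Hypothetical toy data: each applicant has a gender (dropped before
# modeling) and a "proxy" feature that happens to correlate with it,
# e.g. a spending-category score. Values here are invented.
applicants = []
for _ in range(10_000):
    gender = random.choice(["M", "F"])
    # The proxy is noisy but shifted by gender.
    proxy = random.gauss(1.0 if gender == "M" else 0.0, 0.5)
    applicants.append((gender, proxy))

# A "gender-blind" model never receives the gender column...
# but a simple threshold on the proxy alone recovers it most of the time.
recovered = sum(
    (proxy > 0.5) == (gender == "M") for gender, proxy in applicants
)
accuracy = recovered / len(applicants)
print(f"gender recovered from proxy alone: {accuracy:.0%}")
```

With this synthetic setup the threshold recovers gender roughly 84% of the time, illustrating how “we didn’t collect that attribute” fails as a guarantee of fairness.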
Bias is a problem that goes deeper than not understanding the role of proxies in algorithms; it raises larger questions about what to do when current policies don’t account for the effects of AI. For example, the Equal Credit Opportunity Act was put in place to avert discrimination. It barred financial businesses in the US from collecting information on race or gender. In the age of AI, how should candidates think about combating such bias or adjusting policies?
The big problem is that there is no simple answer, because even the definition of AI bias is the subject of heavy debate. A recent survey paper released by researchers at USC’s Information Sciences Institute identified 23 types of bias in machine learning systems, ranging from data bias to algorithmic bias to funding bias and many more. In 2018, Princeton’s Arvind Narayanan identified 21 definitions of fairness at the predecessor to the ACM’s Fairness, Accountability and Transparency Conference.
Blanket regulation of fairness in AI is likely to either do nothing or stifle innovation. Instead, regulators should consider an outcome-oriented approach — one that evaluates results rather than restricting the use of specific techniques or algorithms.
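An outcome-oriented approach can be sketched concretely: rather than auditing a model’s internals, a regulator would compare outcomes across groups. The decisions and threshold below are hypothetical; the disparity measure shown is the widely used demographic-parity gap, just one of the many competing fairness definitions noted above.

```python
# Hypothetical (group, approved) decisions emitted by some credit model.
# An outcome-oriented audit never inspects the model itself.
decisions = [
    ("F", True), ("F", False), ("F", False), ("F", True), ("F", False),
    ("M", True), ("M", True), ("M", False), ("M", True), ("M", True),
]

def approval_rate(group: str) -> float:
    """Fraction of applicants in `group` whose applications were approved."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic-parity gap: difference in approval rates between groups.
# A regulator could flag any gap above a published policy threshold.
gap = abs(approval_rate("M") - approval_rate("F"))
print(f"approval-rate gap: {gap:.2f}")
```

Here the gap is 0.40 (80% vs. 40% approval), the kind of measurable outcome a rule could target without dictating which algorithms firms may use.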
5. Autonomous vehicles and healthcare need special consideration
Not all AI is created equal, and not all AI regulations will work for each industry they are applied to. Two industries that need particularly careful regulation are autonomous vehicles and healthcare.
Autonomous vehicles (AVs) have not delivered on their promise to date, but that doesn’t mean we aren’t approaching a self-driving world. AVs could provide profound benefits to automotive safety, city traffic, carbon emissions, and US manufacturing. Recently, the U.S. Department of Transportation published “Ensuring American Leadership in Automated Vehicle Technologies: Automated Vehicles 4.0.” The report was created to promote American innovation and provide a standardized federal approach to AVs. While the report was exciting for many in the AV industry, the Senate has yet to pass federal legislation creating national parameters for testing and deploying AVs. The next administration will have to be more aggressive in pushing federal regulations for AVs if America is to stay on the cutting edge of AI and transportation.
The danger of misregulated AVs is extreme. Under-regulated AVs could easily lead to dangerous systems being deployed on America’s roadways, causing massive numbers of avoidable fatalities. On the other hand, the risks of over-regulating AVs are just as grave. If AVs reach the point where they cause fewer fatalities than human drivers, we’ll have a moral imperative to deploy them widely. Of course, determining when AVs have crossed that threshold is very difficult and will require deep examination, testing, and standards.
Healthcare is perhaps the industry where AI can add the most value over the next decade, economically speaking. The US healthcare system now accounts for 18% of GDP, and an estimated 25% of that spend — $760 to $935 billion — is waste. Regulators are right not to open the floodgates on AI in healthcare, as there is obviously sensitive personal data to consider. But they must also leave room for responsible AI innovation from the private sector. Today, the single biggest factor contributing to this massive waste in the US healthcare system is administrative complexity ($265.6 billion). AI alone is unlikely to fix these problems, but it can certainly help automate and streamline some of these processes.
AI can also revolutionize patient treatment and is already assisting in some hospitals, helping radiologists spot anomalies in X-rays or scans for early detection of cancer and other diseases. Other hospitals use AI to predict when rooms will open up and to improve patient management. Healthcare is always a big topic of discussion in presidential elections, but in 2020 it should not be all about whether universal healthcare is the answer — because even if it is, we are dealing with one of the most economically inefficient industries in history.
Wrapping up
The issues I’ve raised here are not the only AI-related considerations US presidential candidates must address, but if they understand these five issues they’ll have a solid foundation from which to form smart policy ideas that keep American leadership in AI and innovation strong for decades to come.
Evan Sparks is CEO of Determined AI. As a member of the AMPLab while at UC Berkeley, he contributed to the design and implementation of much of the large-scale machine learning ecosystem around Apache Spark, including MLlib and KeystoneML.
Source: VentureBeat