- Brazil, India, Canada, Singapore, and South Africa saw the highest levels of AI hiring from 2016 to 2020, according to data provided by LinkedIn.
- Total global investment in AI, including private investment and mergers and acquisitions, grew 40% in 2020. But for the third year in a row, that funding went to a shrinking number of startups.
- In 2019, about 2 out of 3 graduates with a Ph.D. in AI in North America went into industry, up from 44% in 2010.
- The majority of AI Ph.D. graduates in the United States come from abroad, and 4 out of 5 stay in the country after graduating.
- A news analysis of 500,000 blogs and 60,000 English language news stories found that AI ethics stories were among the most popular AI-related stories in 2020, including coverage of topics like Google firing Timnit Gebru and ethics initiatives introduced by the European Commission, the United Nations, and the Vatican.
- Attendance at major AI research conferences doubled in 2020 as most groups chose to hold virtual gatherings.
- Women made up 18% of AI Ph.D. graduates, according to a 2020 Computing Research Association survey.
- China overtook the U.S. in total paper citations, but the U.S. has maintained a two-decade lead in citations at AI research conferences.
- Based on total number of GitHub stars, TensorFlow is the most popular AI software library, followed by Keras and PyTorch (a short sketch of how such a star comparison can be pulled from the GitHub API appears after this list).
- AI-related papers on arXiv grew from roughly 5,500 in 2015 to nearly 35,000 in 2020.
- A Queer in AI 2020 member survey found that roughly half of respondents had experienced harassment or discrimination and encountered issues around inclusiveness.
- Academic institutions lead in the total number of AI papers published worldwide. But corporate research ranks second in the U.S., while government research ranks second in Europe and China.
- From 2004 to 2019, Carnegie Mellon University, the Georgia Institute of Technology, and the University of Washington lost the most faculty members to industry (16, 14, and 12 departures, respectively).
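To make the GitHub stars comparison above concrete, here is a minimal Python sketch of how such a ranking can be pulled from the public GitHub REST API. The repository list is an illustrative assumption chosen to match the libraries named in the list, not the report's own methodology.

```python
import json
import urllib.request

# Repositories for the libraries named in the AI Index finding (illustrative list).
REPOS = ["tensorflow/tensorflow", "keras-team/keras", "pytorch/pytorch"]

def star_count(repo: str) -> int:
    """Fetch a repository's current star count from the public GitHub REST API."""
    url = f"https://api.github.com/repos/{repo}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["stargazers_count"]

# Print the libraries ranked by current star count (subject to API rate limits).
for repo in sorted(REPOS, key=star_count, reverse=True):
    print(repo, star_count(repo))
```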
The portion of the report dedicated to progress toward technical challenges highlights advances in computer vision systems and language models, as well as AI for tasks like drug discovery or effective chemical and molecular synthesis.
The AI Index report shows progress in AI systems that can be used for surveillance, like the object detection system YOLO. Considerable progress has also been made on VoxCeleb, a benchmark that measures a system's ability to identify a speaker's voice within a dataset of 6,000 people. The AI Index charts a decline in equal error rate on that benchmark from about 8% in 2017 to less than 1% in 2020.
“This metric is telling us that AI systems have gone from having an 8% equal error rate to about 0.5%, which tells you that this capability is going to be being deployed quietly across the world,” Clark said.
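For readers unfamiliar with the metric Clark is describing, here is a minimal Python sketch of how an equal error rate is typically estimated for a verification system: it is the operating point where the false acceptance rate equals the false rejection rate. The `equal_error_rate` helper and the toy scores below are illustrative assumptions, not code from the report.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Estimate the equal error rate (EER) from similarity scores and
    ground-truth labels (1 = same speaker, 0 = different speaker)."""
    positives = labels == 1
    negatives = labels == 0
    best_gap, eer = np.inf, None
    # Sweep candidate decision thresholds over the observed scores.
    for t in np.sort(np.unique(scores)):
        accept = scores >= t
        far = np.mean(accept[negatives])    # false acceptance rate
        frr = np.mean(~accept[positives])   # false rejection rate
        if abs(far - frr) < best_gap:       # keep the point where FAR ≈ FRR
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Toy usage: synthetic scores where matching pairs tend to score higher.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = labels + rng.normal(0, 0.7, size=1000)
print(f"EER ≈ {equal_error_rate(scores, labels):.3f}")
```

A drop from an 8% to a sub-1% equal error rate means both kinds of mistakes, false matches and false rejections, became far rarer at the system's balanced operating point, which is why Clark reads it as a sign the capability is ready for quiet, wide deployment.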
A panel of experts on technical progress cited AlphaFold’s ability to predict how proteins fold and GPT-3 as two of the most talked-about AI systems of 2020. Though the AI Index acknowledges the few- and zero-shot learning gains made by GPT-3, it cites a paper by former Google Ethical AI team co-lead Timnit Gebru and others that takes a critical look at large language models and their ability to perpetuate bias. It also mentions a paper published last month by OpenAI and Stanford on the need to address large language models’ societal impact before it’s too late. In an interview with VentureBeat in 2019, AI Index founding director Yoav Shoham expressed doubts about the value of judging language models based on their performance on narrow tasks.
VentureBeat has reported extensively on both of the research papers named in the index. Other cited reports VentureBeat has covered include McKinsey’s State of AI report, which found little progress among business leaders when it comes to addressing risks associated with deploying AI, and another that warned about the de-democratization of AI in the age of deep learning, which its coauthors say can perpetuate inequality.
The AI Index report also includes a call for more benchmarks and testing in the fields of computer vision, ethics, and NLP. As demonstrated by benchmarks like GLUE and SuperGLUE, Clark said, “We’re running out of tests as fast as we can build them.” The creation of new benchmarks and tests is also an opportunity to build metrics that reflect people’s values and measure progress toward addressing grand challenges, like deforestation.
“I think one of the ways to get holistic accountability in a space is to have the same test that you run everything against, or the same set of tests. And until we have that, it’s going to be really fuzzy to talk about biases and other ethical issues in these systems, which I think would just hold us back as a community and also make it easier for people who want to pretend these issues don’t exist to continue to pretend they don’t exist or not mention them,” he said.
In previous years, the AI Index expanded to include tools like an arXiv monitor for searching preprint papers. The AI Index’s Global Vibrancy Tool, which serves up comparisons between national AI initiatives, now works for 26 countries across 23 categories.
Perhaps as interesting as what’s included in the report is what’s missing. This year, the report dropped data related to progress on self-driving cars, and Clark said it also does not include information about fully autonomous weaponry, due to a lack of data.