
Stanford report shows that ethics challenges continue to dog AI field as funding climbs



Private investors are pouring more money into AI startups than ever before. At the same time, AI systems are becoming more affordable to train — at least when it comes to certain tasks, like object classification. Troublingly, though, language models in the same vein as OpenAI’s GPT-3 are exhibiting greater bias and generating more toxic text than the simpler models that preceded them.

Those are the top-level findings of the 2022 AI Index report out of Stanford’s Institute for Human-Centered AI (HAI), an academic research center focused on the human impact of AI technologies. Now in its fifth year, the AI Index highlights major developments in the AI industry from 2020 to 2021, paying special attention to R&D, technical performance, technical AI ethics, the economy and education, and policy and governance. 

“This year’s report shows that AI systems are starting to be deployed widely into the economy, but at the same time they are being deployed, the ethical issues associated with AI are becoming magnified,” the coauthors wrote. “This is bound up with the broad globalization and industrialization of AI — a larger range of countries are developing, deploying, and regulating AI systems than ever before, and the combined outcome of these activities is the creation of a broader set of AI systems available for people to use, and reductions in their prices.”

This year’s edition of the AI Index shows that private investment in AI soared while investment concentration intensified. Private investment in AI in 2021 totaled around $93.5 billion, more than double the total in 2020, while the number of newly funded AI companies continued to drop — from 1,051 companies in 2019 and 762 companies in 2020 to 746 companies in 2021.

“Among companies that disclosed the amount of funding, the number of AI funding rounds that ranged from $100 million to $500 million more than doubled in 2021 compared to 2020, while funding rounds that were between $50 million and $100 million more than doubled as well,” the Stanford coauthors note. “In 2020, there were only four funding rounds worth $500 million or more; in 2021, that number grew to 15. Companies attracted significantly higher investment in 2021, as the average private investment deal size in 2021 was 81.1% higher than in 2020.”

The 2022 AI Index’s findings align with a recent report from consulting firm Forrester, which pegged the size of the AI market as lower than many analysts had previously estimated. According to Forrester, as AI becomes essential to enterprise software and large tech companies add AI to their product portfolios, startups will lose market share — and could end up as targets of mergers and acquisitions. For example, last year PayPal snapped up AI-powered payment startup Paidy for $2.7 billion, while Microsoft acquired voice recognition company Nuance for nearly $20 billion.

As shown by the 2022 AI Index, companies specializing in data management, processing, and cloud technologies received the greatest amount of investment in 2021, followed by medical and fintech startups. Broken down geographically, in 2021, the U.S. led the world in both total private investment in AI and the number of newly funded AI companies — three and two times higher than China, respectively, the next country on the ranking.

Decreasing training costs

This year’s AI Index pushes back against the notion that AI systems remain expensive to train — at least in certain domains. The coauthors found that the cost to train a basic image classification model has decreased by 63.6%, while training times for such systems have improved by 96.3%.

The report isn’t the first to assert that costs for certain AI development tasks are coming down, thanks in part to improvements in hardware and architectural design approaches. A 2020 OpenAI survey found that since 2012, the amount of compute needed to train a model to the same performance on classifying images in a popular benchmark — ImageNet — has been decreasing by a factor of two every 16 months. And Alphabet-backed research lab DeepMind’s recent language model, RETRO, matches the performance of models 25 times its size.
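To put that halving rate in perspective, a back-of-the-envelope calculation — assuming the 16-month halving period held steadily over the roughly nine years (108 months) from 2012 to 2021 — implies about a 100x reduction in compute for the same ImageNet performance:

```python
# Back-of-the-envelope: relative compute needed to reach a fixed level of
# ImageNet performance, assuming the 16-month halving period from the
# 2020 OpenAI analysis cited above holds steadily.
HALVING_PERIOD_MONTHS = 16

def relative_compute(months_elapsed: float) -> float:
    """Fraction of the original training compute needed after `months_elapsed`."""
    return 0.5 ** (months_elapsed / HALVING_PERIOD_MONTHS)

print(f"{relative_compute(108):.4f}")  # 2012 to 2021 -> ~0.0093, about 100x less
```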

But many state-of-the-art AI systems remain too costly for all but the best-funded labs and companies to train, much less deploy into production. DeepMind is estimated to have spent $35 million training a model to learn chess, shogi, and the Chinese board game Go. Meanwhile, a 2020 study from startup AI21 Labs pegged the cost of training a text-generating system roughly 116 times smaller than GPT-3 at between $80,000 and $1.6 million.

The AI Index’s coauthors acknowledge the advantages wielded by large private sector actors, including access to enormous, terabyte-scale datasets for AI training. (AI models learn to perform tasks by processing large numbers of examples.) In fact, they say, top results across technical benchmarks increasingly rely on extra, difficult-to-obtain training data to set new state-of-the-art results. As of 2021, nine out of 10 state-of-the-art AI systems in the 2022 AI Index were trained with extra data. One recently published study estimated that only a dozen universities and corporations are responsible for creating the datasets used more than 50% of the time in machine learning.

“The use of extra training data has taken over object detection, much as it has with other domains of computer vision,” the coauthors write. “This … implicitly favors private sector actors with access to vast datasets.”

The rise of regulation — and ethics

In a brighter shift, the 2022 AI Index reported evidence that AI ethics — the study of the fairness of and bias in AI systems, among other aspects — is entering the mainstream. Researchers with industry affiliations contributed 71% more publications year over year at fairness-focused conferences and workshops, while publications on AI fairness and transparency have increased fivefold over the past four years, the coauthors say.

While the trend is encouraging, it’s worth noting that companies like Google — which infamously dissolved an AI advisory board in 2019 just one week after forming it — have attempted to limit internal research that might portray their technologies in a bad light. And reports have described many AI ethics teams at large corporations, like Meta (formerly Facebook), as largely toothless and ineffective. IBM, for example — which heavily promotes its “fairness” tools designed to check for “unwanted bias” in AI — once secretly collaborated with the New York Police Department to train facial recognition and racial classification models for video surveillance systems.

As Leiden University assistant professor Rodrigo Ochigame, who studies the intersection of science, technology, and society, explained in a 2019 piece for The Intercept, corporations generally support two kinds of regulatory possibilities for a technology: (1) no legal regulation at all, leaving ethical principles as merely voluntary, or (2) moderate regulation encouraging — or requiring — technical adjustments that don’t conflict significantly with profits. Most oppose the third option: restrictive legal regulation curbing or banning deployment of the technology.

“The corporate-sponsored discourse of ‘ethical AI’ enables precisely this position,” Ochigame writes. “Some big firms may even prefer … mild legal regulation over a complete lack thereof, since larger firms can more easily invest in specialized teams to develop systems that comply with regulatory requirements.”

Indeed, efforts to address ethical concerns associated with using AI in industry remain limited. According to a McKinsey survey, while 29% and 41% of respondents’ companies recognize “equity and fairness” and “explainability,” respectively, as risks of adopting AI, only 19% and 27% are taking steps to mitigate those risks.

This bodes poorly for efforts to address the growing bias problems with AI systems. While labs like OpenAI claim to have made progress in reducing bias, the 2022 AI Index shows that there’s much to do: a state-of-the-art language-generating model in 2021 was 29% more likely to output toxic text than a smaller, simpler model considered state-of-the-art in 2018. This suggests that the increase in toxicity corresponds with the increase in general capabilities.
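Comparisons like that are typically made by generating completions from each model and scoring them with a toxicity classifier. Below is a minimal sketch of that general approach — not the AI Index’s actual methodology — where the GPT-2 stand-in model, the prompts, and the 0.5 toxicity threshold are all illustrative assumptions:

```python
# Minimal sketch: estimate how often a text generator produces toxic output.
# Assumptions (not from the AI Index): the `transformers` and `detoxify`
# packages, GPT-2 as a stand-in generator, and a 0.5 toxicity threshold.
from transformers import pipeline
from detoxify import Detoxify

generator = pipeline("text-generation", model="gpt2")  # illustrative model
scorer = Detoxify("original")  # off-the-shelf toxicity classifier

prompts = ["The new neighbors are", "People from that city are"]
completions = [generator(p, max_new_tokens=30)[0]["generated_text"] for p in prompts]

# Fraction of completions scored above the (illustrative) toxicity threshold.
scores = scorer.predict(completions)["toxicity"]
toxic_rate = sum(s > 0.5 for s in scores) / len(scores)
print(f"toxic completions: {toxic_rate:.0%}")
```

Running the same loop over a larger and a smaller model and comparing the resulting rates is the shape of the measurement behind the 29% figure.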

“Larger and more complex and capable AI systems can generally do better on a broad range of tasks while also displaying a greater potential for ethical concerns,” the Stanford coauthors wrote. “[R]esearchers and practitioners are reckoning with [the] real-world harms, [including] commercial facial recognition systems that discriminate on race, resume screening systems that discriminate on gender, and AI-powered clinical health tools that are biased along socioeconomic and racial lines … As startups and established companies race to make language models broadly available through platforms and APIs, it becomes critical to understand how the shortcomings of these models will affect safe deployment.”



Author: Kyle Wiggers
Source: VentureBeat
