
A look back at recent AI trends — and what 2022 might hold

2021 was an eventful year for AI. With the advent of new techniques, robust systems that can understand the relationships not only between words but also between words and photos, videos, and audio became possible. At the same time, policymakers — growing increasingly wary of AI’s potential harm — proposed rules aimed at mitigating the worst of AI’s effects, including discrimination.

Meanwhile, AI research labs — while signaling their adherence to “responsible AI” — rushed to commercialize their work, either under pressure from corporate parents or investors. But in a bright spot, organizations ranging from the U.S. National Institutes of Standards and Technology (NIST) to the United Nations released guidelines laying the groundwork for more explainable AI, emphasizing the need to move away from “black-box” systems in favor of those whose reasoning is transparent.

As for what 2022 might hold, the renewed focus on data engineering — designing the datasets used to train, test, and benchmark AI systems — that emerged in 2021 seems poised to remain strong. Innovations in AI accelerator hardware are another shoo-in for the year to come, as is a climb in the uptake of AI in the enterprise.

Looking back at 2021

Multimodal models

In January, OpenAI released DALL-E and CLIP, two multimodal models that the research lab claims are “a step toward systems with [a] deeper understanding of the world.” DALL-E, its name a nod to Salvador Dalí, was trained to generate images from simple text descriptions, while CLIP (short for “Contrastive Language-Image Pre-training”) was taught to associate visual concepts with language.
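
To make CLIP’s approach concrete, here is a minimal sketch of zero-shot image classification with the publicly released checkpoint, assuming the Hugging Face transformers and Pillow libraries are installed (the image path and captions are placeholders):

```python
# Minimal sketch of CLIP-style zero-shot classification. CLIP embeds the
# image and each caption in a shared space; the logits are scaled cosine
# similarities between the two embeddings.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder path
captions = ["a photo of a dog", "a photo of a cat", "a landscape painting"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # shape: (1, len(captions))

for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.3f}  {caption}")
```

Because the candidate captions are plain text, the same model can rank arbitrary labels without task-specific retraining.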

DALL-E and CLIP turned out to be the first in a series of increasingly capable multimodal models in 2021. Beyond reach a few years ago, multimodal models are now being deployed in production environments, improving everything from hate speech detection to search relevancy.

Google in June introduced MUM, a multimodal model trained on a dataset of documents from the web that can transfer knowledge between different languages. MUM, which doesn’t need to be explicitly taught how to complete a task, is able to answer questions in 75 languages, including “I want to hike to Mount Fuji next fall — what should I do to prepare?” while realizing that “prepare” could encompass things like fitness as well as weather.

Not to be outdone, Nvidia recently released GauGAN2, the successor to its GauGAN model, which lets users create lifelike landscape images that don’t actually exist. Combining techniques like segmentation mapping, inpainting, and text-to-image generation, GauGAN2 can create photorealistic art from a mix of words and drawings.

Large language models

Large language models (LLMs) came into their own in 2021, too, as interest grew in AI for workloads like generating marketing copy, processing documents, translation, conversation, and other text tasks. Previously the domain of well-resourced organizations like OpenAI, Cohere, and AI21 Labs, LLMs suddenly came within reach of startups looking to commercialize them, thanks in part to the work of volunteer efforts like EleutherAI. Corporations like DeepMind still muscled their way to the top of benchmarks, but a growing cohort of companies — among them CoreWeave, NLP Cloud, and Neuro — began serving models with features akin to OpenAI’s GPT-3 to customers.

Motivated in equal parts by sovereignty and competition, large international organizations took it upon themselves to put massive computational resources toward LLM training. Former OpenAI policy director Jack Clark, in an issue of his Import AI newsletter, said that these models are a part of a general trend of “different nations asserting their own AI capacity [and] capability via training frontier models like GPT-3.”

Naver, the company behind the eponymous South Korean search engine, created a Korean-language equivalent to GPT-3 called HyperCLOVA. For their part, Huawei and Baidu developed PanGu-Alpha (stylized PanGu-α) and PCL-BAIDU Wenxin (Ernie 3.0 Titan), respectively, which were trained on terabytes of Chinese-language ebooks, encyclopedias, and social media.

Pressure to commercialize

In November, Google’s parent company, Alphabet, established a subsidiary focused on AI-powered drug discovery called Isomorphic Labs, helmed by DeepMind cofounder Demis Hassabis. The launch of Isomorphic underlined the increasing pressure in 2021 on corporate-backed labs to pursue research with commercial, as opposed to purely theoretical, applications.

For example, while DeepMind remains engaged in prestigious projects like systems that can beat champions at StarCraft II and Go, the lab has turned its attention to more practical domains in recent years, like weather forecasting, materials modeling, atomic energy computation, app recommendations, and datacenter cooling optimization. Similarly, OpenAI — which started as a nonprofit in 2016 but transitioned to a “capped-profit” in 2019 — made GPT-3 generally available through its paid API in late November following the launch of Codex, its AI-powered programming assistant, in August.

The emphasis on commercialization in 2021 is at least partially attributable to the academic “brain drain” in AI over the last decade. One paper found that corporate ties — either funding or affiliation — in AI research doubled to 79% between 2008-2009 and 2018-2019. And from 2006 to 2014, the proportion of AI publications with a corporate-affiliated author increased from about 0% to 40%, reflecting the steady movement of researchers from academia to industry.

The academic process isn’t without flaws of its own. There’s a concentration of compute power at elite universities, for example. But it’s been shown that commercial projects unsurprisingly tend to underemphasize values such as beneficence, justice, and inclusion.

Increased funding — and compute

Dovetailing with the commercialization trend, investors poured more money into AI startups than at any point in history. According to a November 2021 report from CB Insights, AI startups around the world raised $50 billion across more than 2,000 deals — surpassing 2020 levels by 55%.

Cybersecurity and processor companies led the wave of newly minted unicorns (companies with valuations over $1 billion), with finance, insurance, retail, and consumer packaged goods following close behind. Health care AI continued to command the largest deal share, which isn’t surprising considering that the market for AI in health care is projected to grow from $6.9 billion to $67.4 billion by 2027.

Driving the funding, in part, is the rising cost of state-of-the-art AI systems. DeepMind reportedly set aside $35 million to train an AI system to learn Go; OpenAI is estimated to have spent $4.6 million to $12 million training GPT-3. Meanwhile, companies developing autonomous vehicle technologies have spun off, merged, agreed to be acquired, or raised hundreds of millions in venture capital to cover operating and R&D costs.

While relatively few startups are developing their own massive, costly AI models, running models can be just as expensive. One estimate pegs the price of running GPT-3 on a single Amazon Web Services instance at a minimum of $87,000 per year. APIs can be cheaper than self-hosted options, but not dramatically so. A hobbyist site powered by OpenAI’s GPT-3 API was forced to consider shutting down after estimating that it would have to pay a minimum of $4,000 monthly.
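
Which option is cheaper depends mostly on volume. A back-of-envelope sketch, with the traffic and pricing figures below as illustrative assumptions rather than vendor quotes:

```python
# Back-of-envelope comparison: metered API vs. self-hosted model serving.
TOKENS_PER_REQUEST = 500       # assumed average prompt + completion length
REQUESTS_PER_MONTH = 200_000   # assumed traffic
API_RATE_PER_1K_TOKENS = 0.06  # assumed USD rate per 1,000 tokens
SELF_HOST_PER_YEAR = 87_000    # the minimum AWS estimate cited above, USD

api_monthly = REQUESTS_PER_MONTH * TOKENS_PER_REQUEST / 1_000 * API_RATE_PER_1K_TOKENS
self_host_monthly = SELF_HOST_PER_YEAR / 12

print(f"API:       ${api_monthly:,.0f}/month")        # $6,000/month
print(f"Self-host: ${self_host_monthly:,.0f}/month")  # $7,250/month
```

At this assumed volume the API comes out ahead, but not dramatically; heavier traffic flips the comparison.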

Regulation and guidelines

In light of the accelerating commercialization of AI, policymakers have responded with rules to rein in — and make more transparent — AI systems. Corporate disregard for ethics forced regulators’ hands in some cases. After firing high-profile ethicists Timnit Gebru and Margaret Mitchell, Google tried to reassure employees that it remained committed to its AI ethics principles while at the same time attempting to limit internal research that showed its technologies in a bad light. Reports have described Meta’s (formerly Facebook’s) AI ethics team, too, as largely toothless and ineffective.

In April, the European Union proposed regulations to govern the use of AI across the bloc’s 27 member states. They impose bans on the use of biometric identification systems in public, like facial recognition (with some exceptions). And they prohibit AI in social credit scoring, the infliction of harm (such as in weapons), and subliminal behavior manipulation.

Following suit, the U.N.’s Educational, Scientific, and Cultural Organization (UNESCO) in November approved a series of recommendations for AI ethics that aim to recognize that AI can “be of great service” while raising “fundamental … concerns.” UNESCO’s 193 member countries, including Russia and China, agreed to conduct AI impact assessments and put in place “strong enforcement mechanisms and remedial actions” to protect human rights.

While the policy is nonbinding, China’s support is significant because of the country’s stance on the use of AI surveillance technologies. According to the New York Times, the Chinese government has piloted the use of predictive technology to sweep a person’s transaction data, location history, and social connections to determine whether they’re violent. Chinese companies such as Dahua and Huawei have developed facial recognition technologies, including several designed to target Uighurs, an ethnic minority widely persecuted in China’s Xinjiang province.

Spurred by vendors like Clearview AI, bans on technologies like facial recognition also picked up steam across the U.S. in 2021 — at least at the local level. California lawmakers passed a law that will require warehouses to disclose the algorithms that they use to track workers. And NYC recently banned employers from using AI hiring tools unless a bias audit can show that they won’t discriminate.

Elsewhere, the U.K.’s Centre for Data Ethics and Innovation (CDEI) recommended this year that public sector organizations using algorithms be mandated to publish information about how the algorithms are being applied, including the level of supervision. Even China has tightened its oversight of the algorithms that companies use to drive certain parts of their business.

Rulemaking in AI for defense remains murkier territory. For some companies, like Oculus cofounder Palmer Luckey’s Anduril and Peter Thiel’s Palantir, military AI contracts have become a top revenue source. While the U.S., France, the U.K., and others have developed autonomous defense technologies, countries like Belgium and Germany have expressed concerns about the implications.

Staking out its position, the U.S. Department of Defense published a whitepaper in December — circulated among the National Oceanic and Atmospheric Administration, the Department of Transportation, ethics groups at the Department of Justice, the General Services Administration, and the Internal Revenue Service — outlining “responsible … guidelines” that establish processes intended to “avoid unintended consequences” in AI systems. NATO also this year released an AI strategy listing the organization’s principles for “responsible use [of] AI,” as the U.S. National Institute of Standards and Technology began working with academia and the private sector to develop AI standards.

“[R]egulators are unlikely to step completely aside” anytime soon, analysts at Deloitte wrote in a recent report examining trends in the AI industry. “It’s a nearly foregone conclusion that more regulations over AI will be enacted in the very near term. Though it’s not clear exactly what those regulations will look like, it is likely that they will materially affect AI’s use.”

Predictions for 2022

Milestone moments

Looking ahead to 2022, technical progress is likely to accelerate in the multimodal and language domains. Funding, too, could climb sharply as investors’ appetites grow for commercialized AI in call center analytics, personalization, and cloud usage optimization.

“This past year, we saw AI capable of generating its own code to construct increasingly complex AI systems. We will continue to see growth in both AI that can write its own code in different programming languages, as well as AI that allows people to simply speak their instructions,” Yoav Schlesinger of Salesforce’s ethical AI practice said in a recent blog post. “These speech-to-code engines will generate images, video, and code using natural commands without worrying about syntax, formatting, or symbols. Say ‘I’d like an image of a purple giraffe with orange spots, wings, and wheels instead of legs’ and watch what the AI generates.”

Os Keyes, an AI ethics researcher at the University of Washington, believes that the pandemic has brought attention to the broader implications of AI on working conditions and inequality. That includes the conditions underpinning much of AI’s development, Keyes says, which often depends on low-paid, low-skilled, and crowdsourced work. For example, a growing body of research points to the many problems with datasets and benchmarks in machine learning, including sociocultural and institutional biases — in addition to missteps introduced by human annotators.

“I think there’s a real opportunity here to push for changes in how we conceive of automation and the deployment of technology in daily life, as we’re pushing for changes in how that life is financed,” Keyes told VentureBeat via email.

At the same time, Keyes cautions that the pandemic and its effects “[have] been a godsend” to companies that see new opportunities to “monetize the rot.” Keyes points to the spread of facial recognition for social distancing and opportunities to exploit organizations’ desires to be lean, efficient, and low in headcount, like workplace monitoring software.

“There are a ton of places where half-baked tools — which describes both the software and its developers — are being dangled in front of finance folk[s]. Algorithms, after all, don’t ask for pension contributions,” Keyes added. “I worry that without sustained attention, we’ll flub the opportunities for regulation, ethical standards, and reimagining technology that this crisis moment has catalyzed. It’s all too easy for those who already have money to ‘ethics wash’ practices, and to a degree, we can see that already happening with the nonsensical NIST work on AI trust.”

Mike Cook, an AI researcher and game designer, believes that 2022 might see bigger research labs like DeepMind and OpenAI look for a new “milestone moment” in AI. He also thinks that AI will continue to pop up more in everyday products, especially in photography and video, and that some companies will try to blend NFTs, the metaverse, and AI into the same product.

“It’s been a while since something truly headline-grabbing happened from the pure AI labs, especially on a par with the AlphaGo and Lee Sedol match in 2016, for instance … [We could see] AI that can invent a cure for something, synthesize a new drug to treat an illness, or prove a mathematical conjecture, for example,” Cook said. “[Elsewhere, if] we look at what Photoshop, TikTok and other image-driven apps are using AI for currently, we can see we’re not too far off the ability to have AI insert our friends into group photographs that they missed out on, or change the pose and expression of people in selfies … I can [also] imagine us seeing some pitches for metaverse-ready AI companions that follow us from one digital world to the next, like if Alexa could play Mario Kart with you.”

Continuous learning

Joelle Pineau, the managing director at Meta AI Research, Meta’s (formerly Facebook’s) AI research division, says that 2022 will bring new AI datasets, models, tasks, and challenges that “embrace the rich nature” of the real world, as well as augmented and virtual reality. (It should be noted that Meta has a vested interest in the success of augmented and virtual reality technologies, having pledged to spend tens of billions of dollars on their development in its quest to create the metaverse.)

“[I foresee new] work on AI for new modalities, including touch, which enables our richer sensory interaction with the world,” Pineau told VentureBeat via email. “[I also expect] new work embracing [the] use of AI for creativity that enhances and amplifies human expression and experience; advances in egocentric perception to build more useful AI assistants and home robots of the future; and advances in new standards for responsible deployment of AI technology, which reflects better alignment with human values and increased attention to safety, fairness, [and] transparency.”

More sophisticated multimodal systems could improve the quality of AI-generated videos for marketing purposes, for example, along the lines of what startups like Synthesia, Soul Machines, and STAR Labs currently offer. They could also serve as artistic tools, enabling users in industries such as film and game design to iterate and refine artwork before sending it to production.

Pineau also anticipates an increased focus on techniques like few-shot learning and continual learning, which she believes will enable AI to quickly adapt to new tasks. That could result in more systems like WebGPT and BlenderBot 2.0, the recent language models from OpenAI and Meta that surf the web to retrieve up-to-date answers to the questions posed to them.
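
In the GPT-3 sense, few-shot learning means adapting a model with examples placed in the prompt rather than with gradient updates. A minimal sketch using the 2021-era openai Python client (the task, examples, and key are illustrative, and the client interface has since changed):

```python
# Minimal sketch of few-shot prompting: the model picks up the sentiment
# task from three in-prompt examples, with no fine-tuning involved.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day. Sentiment: Positive
Review: It broke after a week. Sentiment: Negative
Review: Setup took five minutes and it just works. Sentiment: Positive
Review: The screen flickers constantly. Sentiment:"""

response = openai.Completion.create(
    engine="davinci",   # 2021-era completion endpoint
    prompt=prompt,
    max_tokens=3,
    temperature=0,
    stop="\n",
)
print(response.choices[0].text.strip())  # expected: "Negative"
```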

“[Most work] remains focused on passive data, collected in large (relatively) homogeneous and stable batches. This approach may be suitable for internet-era AI models, but will need to be rethought as we look to continue to bring the power of AI to the metaverse in support of the fast-changing societies in which we live,” she said.

Indeed, many experts believe 2022 will see the focus shift further from modeling to the underlying data used to develop AI systems. As the spotlight turns to the lack of open data engineering tools for building, maintaining, and evaluating datasets, a growing movement — data-centric AI — aims to address the lack of best practices and infrastructure for managing data in AI systems. Data-centric AI consists of systematically changing and enhancing datasets to improve the accuracy of an AI system, work that has historically been overlooked or treated as a one-off.
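
In practice, a data-centric iteration often looks less like tuning a model and more like auditing labels. A minimal sketch of one such loop with scikit-learn, where the dataset and model are stand-ins for a real pipeline:

```python
# Minimal sketch of a data-centric loop: use out-of-sample predicted
# probabilities to surface likely label errors for human review, then retrain.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = load_digits(return_X_y=True)      # stand-in dataset
clf = LogisticRegression(max_iter=2000)  # stand-in model

# Out-of-sample probability that each example's *given* label receives.
probs = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
given_label_prob = probs[np.arange(len(y)), y]

# The lowest-confidence labels are the best candidates for re-annotation.
suspects = np.argsort(given_label_prob)[:20]
print("Indices to re-check:", suspects)
# After a human corrects any bad labels in y, retrain and repeat.
```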

Tangibly, this might mean more compute-efficient approaches to LLM development (such as mixture-of-experts architectures; a sketch follows below) or the use of synthetic datasets. Despite its drawbacks, synthetic data — AI-generated data that can stand in for real-world data — is already coming into wider use, with 89% of tech execs in a recent survey saying that they believe it’s the key to staying ahead.

Gartner has predicted that by 2024, synthetic data will account for 60% of all data used in AI development.
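
As for the mixture-of-experts idea mentioned above: a learned gate routes each token to one small expert network, so only a fraction of the model’s parameters is active per token. The PyTorch layer below is a simplified illustration (top-1 routing, without the load-balancing machinery production systems add):

```python
# Minimal sketch of a mixture-of-experts layer with top-1 routing.
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)  # learned router
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.ReLU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, d_model)
        weights = self.gate(x).softmax(dim=-1)  # (tokens, n_experts)
        top_w, top_idx = weights.max(dim=-1)    # pick one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                # Each expert processes only the tokens routed to it.
                out[mask] = top_w[mask, None] * expert(x[mask])
        return out

moe = TinyMoE()
print(moe(torch.randn(10, 64)).shape)  # torch.Size([10, 64])
```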

“While AI has transformed the software internet industry, much work remains to be done to have it similarly help other industries,” Andrew Ng, the founder of Landing AI and cofounder of Coursera, told VentureBeat in a recent interview. “Data-centric AI — the discipline of systematically engineering the data used to build AI systems — is a rapidly rising technology that will be key to democratizing access to cutting edge AI systems.”

Enterprise uptake

Eric Boyd, corporate VP of Microsoft’s Azure AI platform, thinks that the data-centric AI movement will bolster the demand for managed solutions in 2022 among businesses that lack data expertise. O’Reilly’s latest AI Adoption in the Enterprise report found that a lack of skilled people, difficulty hiring, and a lack of quality data topped the list of challenges in AI, with 19% of companies citing the skills gap as a “significant” barrier in 2021.

“Demand for AI solutions is increasing faster now than ever before, as businesses from retail to healthcare tap data to unlock new insights. Businesses are eager to apply AI across workloads to improve operations, drive efficiencies, and reduce costs,” Boyd told VentureBeat.

Rob Gibbon, a product manager at Canonical, expects that AI will play a larger role this year in supporting software development at the enterprise level. Extending beyond code generation and autocompletion systems like Copilot and Salesforce’s CodeT5, Gibbon says that AI will be — and has been, in fact — applied to tasks like app performance optimization, adaptive workload scheduling, performance estimation and planning, automation, and advanced diagnostics. Supporting Gibbon’s assertion, 50% of companies responding to a January 2021 Algorithmia survey said that they planned to spend more on AI for these purposes, with 20% saying they would be “significantly” increasing their budgets.

The uptake, along with growing recognition of AI’s large ecological footprint, could spur new hardware (and software) to accelerate AI workloads along the lines of Amazon’s Trainium and Graviton3 processors, Google’s fourth-generation tensor processing units, Intel-owned Habana’s Gaudi, Cerebras’ CS-2, various accelerator chips at the edge, and perhaps even photonics components. The edge AI hardware market alone is expected to reach $38.87 billion by 2030, growing at a compound annual growth rate of 18.8%, according to Valuates Reports.

“AI will play an increasing role in both the systems software engineers create and in the process of creation,” Gibbon said. “AI has finally come of age, and that’s down in no small part to collaborative open source initiatives like [Google’s] TensorFlow, Keras, [Meta’s] PyTorch, and MXNet deep learning projects. Continuing into 2022, we will see ever broader adoption of machine learning and AI in the widest variety of applications imaginable — from the most trivial and mundane to those that are truly transformative.”



Author: Kyle Wiggers
Source: VentureBeat

