Big tech companies and venture capitalists are in the midst of a gold rush, investing astronomical sums into leading AI labs that are creating generative models. Last week, Amazon announced a $4 billion investment in AI lab Anthropic. Earlier this year, Microsoft invested a staggering $10 billion in OpenAI, which is now reportedly in discussions with investors to sell shares at a valuation of $80-90 billion.
Large language models (LLMs) and generative AI have become hot areas of competition, prompting tech giants to strengthen their talent pools and gain access to advanced models through partnerships with AI labs. These partnerships and investments benefit both the AI labs and the tech companies that invest in them. However, they also have less savory implications for the future of AI research that are worth exploring.
Accelerated research and product integration
LLMs require substantial computational resources to train and run, resources that most AI labs lack. Partnerships with big tech companies provide these labs with the cloud servers and GPUs they need to train their models.
OpenAI, for instance, has been leveraging Microsoft’s Azure cloud infrastructure to train and serve its models, including ChatGPT, GPT-4, and DALL-E. Anthropic will now have access to Amazon Web Services (AWS) and its custom Trainium and Inferentia chips for training and serving its AI models.
The impressive advances in LLMs in recent years owe a great deal to the investments of big tech companies in AI labs. In return, these tech companies can integrate the latest models into their products at scale, bringing new experiences to users. They can also provide tools for developers to use the latest AI models in their products without the technical overhead of setting up large compute clusters.
This feedback cycle will help both the labs and the companies identify the shortcomings of these models and address them at a faster pace.
Less transparency and more secrecy
However, as AI labs become embroiled in the competition between big tech companies for a larger share of the generative AI market, they may become less inclined to share knowledge.
Previously, AI labs would collaborate and publish their research. Now, they have incentives to keep their findings secret to maintain their competitive edge.
This shift is evident in the change from releasing full papers with model architectures, weights, data, code, and training recipes to releasing technical reports that provide little information about the models. Models are no longer open-sourced but are instead released behind API endpoints. Very little is made known about the data used to train the models.
The direct effect of less transparency and more secrecy is a slower pace of research. Institutions may end up working on similar projects in secret without building on each other’s achievements — needlessly duplicating work.
Diminished transparency also makes it more difficult for independent researchers and institutions to audit models for robustness and harmfulness, as they can only interact with the models through black-box API interfaces.
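As a concrete illustration, the sketch below shows what black-box probing can look like in practice: it sends paraphrases of the same question to a hosted model through the OpenAI Python client and flags answers that diverge. This is a minimal sketch, not an actual audit protocol; the model name, prompts, and consistency check are all illustrative placeholders.

```python
# Minimal sketch of black-box auditing: without weights or training data,
# an external researcher can only send inputs and inspect outputs.
# Assumes the official OpenAI Python client (v1 API); the model name and
# prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Paraphrases of the same question: a robust model should answer consistently.
probes = [
    "Is it safe to mix bleach and ammonia?",
    "What happens if household bleach is combined with ammonia?",
    "Can I clean with bleach and ammonia together?",
]

answers = []
for prompt in probes:
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder: any API-only model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise between probes
    )
    answers.append(response.choices[0].message.content)

# Crude consistency check: flag probes whose answers drift sharply in length.
baseline = len(answers[0])
for prompt, answer in zip(probes, answers):
    drift = abs(len(answer) - baseline) / max(baseline, 1)
    print(f"{drift:.0%} length drift | {prompt}")
```

Even a simple probe like this depends entirely on what the provider chooses to expose through the endpoint; anything deeper, such as inspecting training data or attention patterns, is off the table.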
Less diversity in AI research
As AI labs become beholden to the interests of investors and big tech companies, they may be incentivized to focus more on research with direct commercial applications. This focus could come at the expense of other areas of research that might not yield commercial results in the short term yet could provide long-term breakthroughs for computer science, industry, and humanity.
The commercialization of AI research is evident in the news coverage of research labs, which is becoming increasingly focused on their valuations and revenue generation. This is a far cry from their original mission to advance the frontiers of science in a way that serves humanity and reduces the risks and harms of AI.
Achieving this goal requires research across a range of fields, some of which might take years or even decades of effort. For example, deep learning became mainstream in the early 2010s, but it was the culmination of decades of work by several generations of researchers who persisted with an idea that was, until recently, mostly ignored by investors and the commercial sector.
The current environment risks overshadowing these other areas of research that might provide promising results in the longer term. Big tech companies are also more likely to fund research on AI techniques that rely on huge datasets and compute resources, which will give them a clear advantage over smaller players.
Brain drain toward big tech
The growing interest in commercial AI will push big tech companies to leverage their wealth to draw the limited AI talent pool toward their own organizations. Big tech companies and the AI labs they fund can offer stellar salaries to top AI researchers, a luxury that non-profit AI labs and academic institutions can’t afford.
While not every researcher is interested in working for for-profit organizations, many will be drawn to them, which will again come at the cost of AI research that has scientific value but little commercial use. It will also centralize power within a few very wealthy companies and make it very difficult for startups to compete for AI talent.
Silver linings
As the AI arms race between big tech companies reshapes the AI research landscape, not everything is gloomy. The open-source community has been making impressive progress in parallel with closed-source AI services. There is now a full range of open-source language models that come in different sizes and can run on everything from cloud-hosted GPUs to laptops.
Techniques such as parameter-efficient fine-tuning (PEFT) enable organizations to customize LLMs with their own data on very small budgets and datasets (a rough sketch follows after this paragraph). There is also encouraging research in areas beyond language models, such as liquid neural networks by MIT scientists, which address some of the fundamental challenges of deep learning, including lack of interpretability and the need for huge training datasets. At the same time, the neuro-symbolic AI community continues to work on new techniques that might bear fruit in the future.
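To make the PEFT point concrete, here is a minimal sketch using Hugging Face’s transformers and peft libraries: it wraps a small open-source model with a LoRA adapter so that only the adapter weights train while the base model stays frozen, which is what keeps the budget small. The model name and hyperparameters are illustrative placeholders, not recommendations.

```python
# Sketch of parameter-efficient fine-tuning (LoRA) on an open-source LLM.
# Assumes Hugging Face transformers + peft; model name and hyperparameters
# are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "facebook/opt-350m"  # placeholder: any small open model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA injects small trainable low-rank matrices into the attention layers;
# the original pretrained weights stay frozen.
config = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)

# Typically reports a trainable fraction well under 1% of total parameters.
model.print_trainable_parameters()

# From here, the wrapped model trains like any transformers model,
# e.g. with transformers.Trainer on a small domain-specific dataset.
```

Because only a tiny fraction of parameters is updated, the adapter can be trained on a single consumer GPU and stored separately from the base model, which is why open-source models plus PEFT have become a practical alternative to API-only services.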
It will be interesting to see how the research community adapts to the shifts caused by the accelerating generative AI gold rush of big tech.
Author: Ben Dickson
Source: VentureBeat