Why the open source community might be the biggest winner of the OpenAI fallout

This morning, Microsoft made the surprise move of hiring Sam Altman and Greg Brockman, the former CEO and President of OpenAI, respectively. This strategic decision appears to be Microsoft’s attempt to salvage what it can from the chaos that engulfed the leading AI research lab just before the weekend, when the board of directors of the non-profit overseeing OpenAI decided to fire Altman.

However, the final chapter of the OpenAI coup has yet to be written. Several researchers have already quit, and hundreds of employees as well as top executives of OpenAI are in revolt against the board’s decision. The relationship between Microsoft and OpenAI is also uncertain, as Microsoft plans to launch an internal research arm with Altman and Brockman, which will undoubtedly compete with OpenAI.

One thing is clear, however: OpenAI will never be the same. The same can be said for its products, including ChatGPT and its API platform. This upheaval serves as a reminder of the fluid state of the bleeding-edge AI industry. Scientists, engineers, and philosophers will continue to argue over the risks of advanced AI systems and the existential threats of artificial general intelligence (AGI).

Such clashes will likely occur again, particularly in AI labs that attempt to balance the dual mission of research and product development.

Therefore, enterprises that have built products and applications on top of OpenAI’s platform will need to reassess their strategies as the future of the company hangs in the balance.

In this context, the market for open source models may be the biggest winner. Unlike closed-source systems like OpenAI’s platform, open source models give full control and responsibility to those who use them in their products. They have no single point of failure, such as an API server or a feuding board that can’t decide whether to ship products faster or hit the brakes and weigh existential risks.

More than 100 OpenAI customers reached out to competitors such as Anthropic, Google Cloud, Cohere, and Microsoft Azure over the weekend, according to The Information.

Enterprises can decide where and how to run open source models, whether on their own servers, in a public cloud, or on a model-serving platform. Most major cloud platforms, including Microsoft Azure AI Studio and Amazon Bedrock, provide ready-made access to open source models such as Llama 2, Mistral, Falcon, and MPT, and numerous startups offer easy access to hosted versions of them. This range of options lets enterprises run models in whatever way fits their existing infrastructure.
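
As an illustration, here is a minimal sketch of what self-hosting can look like in practice, assuming the Hugging Face transformers library and the openly available Mistral-7B-Instruct checkpoint (the model ID and prompt are illustrative; the same code would run on a private server, a cloud VM, or a managed GPU instance):

```python
# Minimal self-hosting sketch: load an open source chat model and generate a reply.
# Assumes: pip install torch transformers accelerate, plus a GPU with enough memory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # any open checkpoint works here

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to fit on a single GPU
    device_map="auto",           # let accelerate place layers on available devices
)

prompt = "[INST] Summarize our refund policy in two sentences. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same weights can instead be called through Azure AI Studio, Amazon Bedrock, or a hosted-inference startup with little change to the application logic, which is what gives enterprises the freedom to move between infrastructures.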

Furthermore, open source models typically offer more stable performance than private models. Over the past year, there have been numerous reports of OpenAI model performance degrading (or, more precisely, changing) as the company continues to retrain, tweak, and alter safeguard measures. These models are effectively black boxes within black boxes, making it difficult to obtain stable outputs.

In contrast, open source models offer stable performance: enterprises decide when models are updated and what the safeguards are, and they avoid panic lockdowns triggered by random users posting jailbreaks online. The open source landscape is also progressing rapidly, thanks to knowledge sharing between researchers and developers.

There are now many tools and techniques for customizing open source large language models (LLMs) for specific applications that are not available for private models. Enterprises can use quantization to cut the cost of running models, or low-rank adaptation (LoRA) to fine-tune them at a fraction of the cost of full fine-tuning, allowing thousands of fine-tuned variants to be served from a single GPU. Open source models can be fitted to all kinds of applications and budgets.
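
For instance, the common pattern of loading a model in 4-bit precision and attaching low-rank adapters might look roughly like the following sketch, assuming the bitsandbytes and peft libraries on top of transformers (the model ID and hyperparameters are illustrative choices, not a recommendation):

```python
# Sketch: 4-bit quantization (to cut memory and cost) plus LoRA adapters (cheap fine-tuning).
# Assumes: pip install torch transformers accelerate bitsandbytes peft
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # illustrative; any supported open model

# Quantize the base weights to 4 bits so the model fits on a single GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small low-rank adapter matrices; only these are trained during fine-tuning.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections, a common choice
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
```

Because each trained adapter is only a few megabytes, many fine-tuned variants can share one quantized base model in memory, which is what makes serving large numbers of customized models from a single GPU economical.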

The issue with companies like OpenAI is that they are trying to pursue two things at once: achieving AGI and delivering profitable products to fund that research. These two goals can sometimes be diametrically opposed, as the OpenAI saga demonstrates.

In reality, most enterprises don’t want AGI. And in most cases, they don’t need state-of-the-art models with trillions of parameters. What they need is a solid foundation on which to build stable LLM applications, even if that foundation is a model with a few billion parameters. This is the opportunity the open source ecosystem provides. As the fallout at OpenAI continues to unfold, more enterprises will likely flock to open source LLMs.

Platforms like ChatGPT will remain useful for fast prototyping and for exploring the capabilities of cutting-edge AI. But once they find the right application, enterprises will be better served by investing in a technology that remains available regardless of the politics of the company that develops it.

Author: Ben Dickson
Source: VentureBeat
Reviewed By: Editorial Team
