
The missing link of the AI safety conversation

In a Nutshell

The conversation around AI development has shifted to focus on the cost of AI and the need for democratization. While advancements in AI have been made through increased scale and computing power, this has created a barrier to entry for many individuals and companies due to the high costs involved. Democratizing AI is crucial for promoting safety and maximizing its positive impact. Centralization also poses risks, as reliance on a single model can lead to product outages or governance failures. To improve accessibility, investments should be diversified, data ownership should be considered, and open-source models should be utilized more effectively.

In light of recent events at OpenAI, the conversation on AI development has morphed into a debate about acceleration versus deceleration and the alignment of AI tools with humanity.

The AI safety conversation has also quickly become dominated by a futuristic and philosophical debate: Are we approaching artificial general intelligence (AGI), where AI becomes advanced enough to perform any task the way a human could? Is that even possible?

While that aspect of the discussion is important, it is incomplete if we fail to address one of AI’s core challenges: It’s incredibly expensive. 

AI needs talent, data, scalability

The internet revolution had an equalizing effect: software was available to the masses, and the main barriers to entry were skills. Those barriers got lower over time with evolving tooling, new programming languages and the cloud.

When it comes to AI and its recent advancements, however, we have to recognize that most of the gains have so far come from adding more scale, which requires more computing power. We have not reached a plateau here, hence the billions of dollars that the software giants are throwing at acquiring more GPUs and optimizing compute.

To build intelligence, you need talent, data and scalable compute. Demand for the latter is growing exponentially, meaning AI has very quickly become a game for the few who have access to these resources. Most countries cannot afford to be part of the conversation in a meaningful way, let alone individuals and companies. The costs come not just from training these models but from deploying them, too.

Democratizing AI

According to Coatue's recent research, the demand for GPUs is only just beginning. The investment firm predicts that the shortage may even stress our power grid. Increasing GPU usage will also mean higher server costs. Imagine a world where the capabilities we see today are the worst these systems will ever be: they are only going to get more powerful, and unless we find solutions, more and more resource-intensive.

With AI, only companies with the financial means can build models and capabilities, and we have only had a glimpse of the pitfalls of this scenario. To truly promote AI safety, we need to democratize it. Only then can we implement the appropriate guardrails and maximize AI's positive impact.

What’s the risk of centralization?

From a practical standpoint, the high cost of AI development means companies are more likely to rely on a single model to build their product, but a product outage or governance failure at the provider can then ripple across everything built on top of it. What happens if the model you've built your company on no longer exists or has degraded? Thankfully, OpenAI continues to exist today, but consider how many companies would be out of luck if OpenAI lost its employees and could no longer maintain its stack.

Another risk is relying heavily on systems that are inherently probabilistic. We are not used to this; the world we live in has so far been engineered and designed to produce definitive answers. Even if OpenAI continues to thrive, its models are fluid in their output, and the company constantly tweaks them, which means the code you have written around them, and the results your customers rely on, can change without your knowledge or control.
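
One way teams guard against this kind of silent drift is to pin a set of golden test cases and re-run them against the provider on a schedule. Below is a minimal sketch in Python; the `call_model` client, the golden cases and the similarity threshold are all illustrative assumptions, not a prescribed setup.

```python
import difflib

# Hypothetical golden cases: prompts paired with known-good answers.
GOLDEN_CASES = [
    ("What is 2 + 2?", "4"),
    ("Name the capital of France.", "Paris"),
]

def similarity(a: str, b: str) -> float:
    """Crude string similarity; real harnesses often use semantic metrics."""
    return difflib.SequenceMatcher(None, a.strip().lower(), b.strip().lower()).ratio()

def check_drift(call_model, threshold: float = 0.8) -> list:
    """Return the prompts whose outputs no longer match the golden answers.

    `call_model` is an assumed callable wrapping the hosted model's API.
    """
    failures = []
    for prompt, expected in GOLDEN_CASES:
        output = call_model(prompt)
        if similarity(output, expected) < threshold:
            failures.append(prompt)
    return failures
```

Run on a schedule or in CI, a failing case flags that the upstream model has changed underneath you before your customers notice.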

Centralization also creates safety issues. These companies act in their own best interest. If there is a safety or risk concern with a model, you have much less control over fixing the issue and far fewer alternatives to turn to.

More broadly, if we live in a world where AI is costly and narrowly owned, we will widen the gap in who can benefit from this technology and multiply existing inequalities. A world where some have access to superintelligence and others do not assumes a completely different order of things and will be hard to balance.

One of the most important things we can do to improve AI's benefits and safety is to bring down the cost of large-scale deployments. We have to diversify investments in AI and broaden who has access to the compute resources and talent needed to train and deploy new models.

And, of course, everything comes down to data. Data and data ownership will matter: the more unique, high-quality and available the data, the more useful it will be.

How can we make AI more accessible?

While there are current gaps in the performance of open-source models, we are going to see their utilization take off, assuming the White House enables open source to truly remain open. 

In many cases, models can be optimized for a specific application. The last mile of AI will be companies building routing logic, evaluations and orchestration layers on top of different models, specializing them for different verticals.

With open-source models, it's easier to take a multi-model approach, and you have more control. However, the performance gaps are still there. I expect we will end up in a world with junior models optimized to perform less complex tasks at scale, while larger, super-intelligent models act as oracles for updates and increasingly spend compute on solving harder problems. You do not need a trillion-parameter model to respond to a customer service request.
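
As a minimal sketch of what such routing logic might look like, consider the Python below. The `small_model` and `large_model` callables and the complexity heuristic are hypothetical placeholders; production routers typically use learned classifiers or cost and latency budgets rather than keyword matching.

```python
def classify_complexity(prompt: str) -> str:
    """Toy heuristic: short, formulaic requests go to the junior model."""
    simple_markers = ("order status", "reset password", "refund", "store hours")
    if len(prompt) < 200 and any(m in prompt.lower() for m in simple_markers):
        return "simple"
    return "complex"

def route(prompt: str, small_model, large_model) -> str:
    """Send cheap, common tasks to a junior model; escalate the rest.

    `small_model` and `large_model` are assumed callables wrapping, say,
    a distilled open-source model and a frontier API respectively.
    """
    if classify_complexity(prompt) == "simple":
        return small_model(prompt)  # high-throughput, low-cost path
    return large_model(prompt)      # expensive "oracle" path
```

The design mirrors the junior/oracle split: the cheap path absorbs the volume, and the expensive model is reserved for queries that actually need it.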

We have seen AI demos, AI funding rounds, AI collaborations and releases. Now we need to bring this AI to production at very large scale, sustainably and reliably. There are emerging companies working on this layer, making cross-model multiplexing a reality. As a few examples, many businesses are working on reducing inference costs via specialized hardware, software and model distillation, as sketched below. As an industry, we should prioritize more investment here, as it will make an outsized impact.
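
Model distillation, one of the cost levers mentioned above, trains a small "student" model to mimic a large "teacher." A common formulation blends a soft loss against the teacher's temperature-scaled output distribution with a hard loss against ground-truth labels. The PyTorch sketch below is a generic illustration of that idea, with the temperature and mixing weight as assumed hyperparameters.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      T: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Hinton-style distillation: blend soft teacher targets with hard labels."""
    # Soft loss: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale to keep gradient magnitudes comparable across temperatures
    # Hard loss: standard cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

The resulting student can then serve the high-volume tier at a fraction of the teacher's inference cost.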

If we can successfully make AI more cost-effective, we can bring more players into this space and improve the reliability and safety of these tools. We can also achieve a goal most people in this space hold: bringing value to the greatest number of people.

Naré Vardanyan is the CEO and co-founder of Ntropy.

