
Nvidia VP Manuvir Das says enterprise AI is at ‘an inflection point’ as GTC gets underway

Amid the dizzying array of product announcements at Nvidia GTC today, I was particularly interested in finding out how the news could impact the adoption and application of AI in the enterprise in 2023.

I was pleased to get the scoop straight from Manuvir Das, vice president of enterprise computing at Nvidia, who I spoke with last month for my deep dive into Nvidia’s rise to AI dominance over the past decade. This one-on-one interview has been edited and condensed for clarity.

>>Follow VentureBeat’s ongoing Nvidia GTC spring 2023 coverage<<

VentureBeat: How would you characterize the GTC announcements in terms of how they’re going to impact AI in the enterprise? 

Manuvir Das: I would say that enterprise AI is at an inflection point. Generative AI and ChatGPT have made it so obvious now that AI can transform every business. We’ve seen this coming for some time, and [have pushed] companies to adopt AI, but I think ChatGPT has really created that iPhone moment, if you will, for enterprise companies to say ‘I need to adopt this into my business and my products, otherwise I’m going to be left behind.’ 

If you look at many of our announcements, they are in line with this moment. DGX Cloud [which democratizes Nvidia DGX AI supercomputers and is optimized to run Nvidia AI Enterprise] is a place where you can get the capacity to do your training.

Nvidia AI Foundations is a place where you can customize generative AI for your company and then deploy it. Nvidia AI Enterprise provides the software, including the NeMo framework, that you can use to train your own models in the cloud.

The new portfolio of chips that we’ve announced is all about inference for generative AI. Our Hopper H100 was designed in large part specifically to do a really good job on the Transformer models behind generative AI, and Hopper is fully in production and becoming available through various channels, like different clouds and OEMs.

So on the enterprise AI front, the whole theme of GTC is to say this is the moment for enterprises to really embrace AI. For all those enterprises on the fence, dabbling and wondering whether there is a compelling business use case to justify the investment in AI: I think the answer is now staring them in the face.

VB: What were the biggest pain points for enterprises that these different announcements addressed? 

Das: The first pain point is that AI is a complex thing. For the very small company or the single data scientist, there are very nice turnkey solutions. You can use SageMaker on AWS or Vertex AI on Google Cloud and get going pretty quickly. But the moment you go beyond a single GPU or a single node, those solutions don’t really scale.

On the other hand, you can be a really large practitioner of AI consuming thousands of GPUs, like a Meta or an OpenAI, and you have your own team of hundreds of engineers who can deal with all the complexities and do all the engineering work needed to actually use that large-scale environment.

The gap is the typical enterprise company trying to adopt AI. You need a solution in the middle, say 50 or 100 GPUs, that scales but is relatively turnkey, so you can just do your data science without having to do the engineering. That’s what we addressed with DGX Cloud.
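To make the engineering gap Das describes concrete, here is a minimal sketch, purely illustrative and not Nvidia’s code, of the boilerplate that PyTorch distributed training already demands the moment you move past a single GPU; it is this class of work that DGX Cloud aims to absorb:

```python
# Minimal multi-GPU training loop with PyTorch DistributedDataParallel.
# Illustrative sketch only. Launch with: torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in model
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 1024, device=local_rank)      # stand-in batch
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()   # gradients are all-reduced across GPUs here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Multiply this by fault tolerance, data sharding, checkpointing and scheduling across 50 or 100 GPUs, and the appeal of a turnkey middle ground becomes clear.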

The second pain point is that generative AI sounds great. ChatGPT is awesome. It’s just one giant model for the whole world. But an enterprise company needs a model that is useful in their context, with their data, their customer information and their employees. So they need a way to build their own custom models, but they can’t just start from scratch.

The kind of model they need requires that company’s information so it can answer questions with the right answers, but it also needs the general conversational intelligence that ChatGPT has learned from the internet.

So what was needed in the market, we believe, is a system for enterprise customers. We’ve got some pretrained models that we’ve trained on enough of the internet that they have the general intelligence and conversational ability to go back and forth. And then we give you a way to customize this stuff with your data, so that the model can actually be used in your processes to provide the right answers, and express them properly, intelligently. That’s why we built Nvidia AI Foundations.
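The customization pattern Das outlines, starting from a pretrained model and adapting it on proprietary data, looks roughly like the sketch below. This is an illustration using the open-source Hugging Face transformers, peft and datasets libraries, not the NeMo or AI Foundations API; the base model and data file are placeholders:

```python
# Illustrative LoRA fine-tuning of a pretrained causal LM on company text.
# Not Nvidia's API; the base model and "company_docs.txt" are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "EleutherAI/gpt-neo-125m"       # small stand-in for a foundation model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Freeze the base model and train only small LoRA adapter matrices, so the
# general conversational ability is kept and the company data is layered on.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))

data = load_dataset("text", data_files="company_docs.txt")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments("out", per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```

The design point mirrors Das’s description: the pretrained weights carry the general intelligence, while the lightweight adapters carry the company-specific knowledge.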

The third pain point was that for enterprise companies in general, if they’re adopting AI in production, they need some reliable platforms. For me as an enterprise customer, I use SAP HANA or VMware or Microsoft Office 365 or Windows Server or what have you. These are platforms that I can rely on — the bugs will be fixed, there’s a release cadence, things are certified. Whereas today, a lot of it is I pick up this from open source, that from open source, tie it all together and kind of hope for the best. 

We address that by taking our Nvidia AI Enterprise software platform and putting it into the cloud, so that you can consume it when you spin up instances on Google Cloud, Azure, AWS or wherever you choose to do your work. So you can actually be running a reliable piece of software that has done all the work of bringing all the parts together.

VB: Which of today’s Nvidia GTC announcements are you most excited about personally? 

Das: It’s hard to play favorites, but I definitely think that DGX Cloud is going to be a big benefit to enterprise customers, because it’s going to be much easier for them to actually do the work at that middle-ground scale. Nobody’s really paying attention to them, because they’re not the unicorn renting 10,000 GPUs that everybody’s focused on. But I think enterprise customers are the ones who will really bring the benefits of AI to bear for their customers. They need some love; they need better solutions.

VB: Obviously GTC is historically about talking to the millions of developers that are going to be using these tools. But I’m curious — if you had a group of C-suite leaders in the room, people who know more or less about Nvidia, what would you say to them about what they need to know about enterprise AI and Nvidia right now?

Das: I think what they need to know is, number one, it’s a full-stack problem. It’s not just about the hardware. You need the hardware and the right software, all optimized together, to make it work. And I would tell them that, perhaps unknown to them, Nvidia has spent the last decade exploring all of the different use cases that AI can enable, and building the software stacks to enable those use cases.

So if they’re looking for a one-stop-shop company to talk to or to work with to figure out how to conduct the AI journey, there’s no better bet for them than Nvidia, because we’re the ones who’ve done it all. You talk about a company like OpenAI, which has ChatGPT, and kudos to them. How did they get that? Because they worked very closely with Nvidia to design the right architecture that could scale, so that they could do all of their training and build their models.

VB: What about the people out there melting their GPUs right now? I mean, are there enough GPUs to make this all happen?

Das: It’s our job to provide lots of GPUs. But it’s interesting that you mention melting, because another thing that’s different from traditional enterprise computing is the conventional wisdom, built up over the years, that you over-provision. You don’t want your CPU running at more than 50 to 60% of its capacity; you leave plenty of headroom. And just think about the waste that comes from that.

But because GPUs were designed from the beginning to solve a performance problem, we always put them in a mode where they’re meant to be fully utilized. When we do all our burn-in testing for every new GPU we come out with, before we put it into production, we subject those things to intense load, running at maximum for long periods of time, because we know that’s how people are going to use them. So I think that’ll be the other benefit as GPUs come in. I’m sure people will be trying to melt them down, but I think we’re good.
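For readers who want to see where their own hardware sits on that utilization spectrum, here is a small sketch using Nvidia’s NVML Python bindings (installable as the nvidia-ml-py package) to poll per-GPU load, the figure enterprises have traditionally capped at 50 to 60% for CPUs:

```python
# Poll GPU compute and memory utilization via Nvidia's NVML bindings.
# Requires: pip install nvidia-ml-py  (imported as pynvml)
import time
import pynvml

pynvml.nvmlInit()
handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
           for i in range(pynvml.nvmlDeviceGetCount())]

for _ in range(5):  # take five one-second samples
    for i, h in enumerate(handles):
        util = pynvml.nvmlDeviceGetUtilizationRates(h)  # .gpu/.memory in percent
        print(f"GPU {i}: compute {util.gpu}% | memory {util.memory}%")
    time.sleep(1)

pynvml.nvmlShutdown()
```

Sustained readings near 100% are, by Das’s account, exactly what the hardware is qualified for.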

Author: Sharon Goldman
Source: VentureBeat
