Jensen Huang interview — Nvidia can shake off rivals that have complicated and untested AI solutions

Intel just announced Ponte Vecchio, its first discrete graphics processing unit (GPU) for supercomputers, built to compete with Nvidia in handling AI workloads. But Nvidia CEO Jensen Huang isn’t worried.

I spoke with Huang on a brief call after his company posted $3.01 billion in revenue for the quarter ended October 31. Both gaming graphics chips and hyperscale datacenter GPUs are going strong, enabling the company to beat analyst expectations this week.

But as the Intel news (confirmed this afternoon) suggests, there’s still plenty of competition. Huang shook that off, saying he wants to see the Intel design but “we have our own tricks up our sleeves.” He said it’s tough for competitors to come into markets such as AI and try to make a solution that works, isn’t too complicated, and uses the software developers want.

Here’s an edited transcript of our interview.

Above: Jensen Huang, CEO of Nvidia, at CES 2019. | Image Credit: Dean Takahashi

GamesBeat: You had another good quarter.

Jensen Huang: We did. I’m so happy, because we’re now officially back. It looks pretty promising going forward.

GamesBeat: What was surprising about it? Anything unusually strong that you got excited about?

Huang: RTX is doing great. RTX notebooks are doing great. The thing that’s great about the gaming notebook is that between RTX and Max-Q, we’ve defined a new category. Gaming notebooks haven’t been that abundant. Not that many gamers have been able to play on notebooks. Now they have the power and the lightness to enjoy gaming on a notebook.

This last quarter we had strong sales in gaming, and we also had strong sales in hyperscale. The standout was inference. Over this last quarter, we sold a record number of V100s and T4s, our hyperscale GPUs. This is also the first quarter where the T4 sales exceeded V100, and the quarter where we swept the ML benchmarks. We’ve shown people that not only are we good at training, but we’re good at inference as well.

The underlying dynamics–hundreds of millions of gamers are going to upgrade to ray tracing. Gaming notebooks are a new category. It’s very clear that conversational AI, deep learning recommenders, and natural language understanding breakthroughs are all encouraging hyperscalers to deploy accelerators into their data centers for deep learning. The future of inference requires acceleration. If you look at those dynamics, they’re pretty fundamental. We’re enjoying the success. These foundational trends are clear.

Above: Nvidia Jetson Xavier NX | Image Credit: Nvidia

GamesBeat: What do you think of the rumor that Intel’s GPU is surfacing?

Huang: I’m anxious to see it, just like you probably are. I enjoy looking at other people’s products and learning from them. We take all of our competitors very seriously, as you know. You have to respect Intel. But we have our own tricks up our sleeves. We have a fair number of surprises for you guys as well. I look forward to seeing what they’ve got.

GamesBeat: The other strange thing out there is Cerebras, the 1.2 trillion-transistor wafer-scale chip from Andrew Feldman’s company. That seems like new competition for Nvidia as well.

Huang: Oh, boy. The list of competitors is really deep. We have a fair number of competitors. The part that I think people misunderstand–where they don’t have the runway we’ve had–is that the software stack on top of these chips is really complicated.

It’s not logical that this piece of software would be easy to write. The program in question was written by a supercomputer, took a week to write, and is the largest single program in the world–so large and so complicated that no one can read it. Somehow a compiler is supposed to compile that software, this computational graph, keep its accuracy, prune it down, and in fact improve its performance dramatically on top of a GPU, on top of any chip. It’s just not sensible.

The challenge with what everybody’s seeing in deep learning is that the software richness is really quite high. In training, you have to wait days and weeks before it comes back to tell you whether your model works or not. And in the beginning, none of them work. You have to iterate on these things hundreds of thousands of times as you search through the hyperparameters and tune your network. You keep feeding it with more data. The data has to be of certain types. You’re learning about all this. You’re not sure if you have the right data or the right model or the right hyperparameters. The last thing you need is to also not know whether the computer underneath is doing its job.

That’s the challenge in deep learning today. In the area of training, the barrier to adoption is very high. People just don’t–why would they trust the solution? Why would they trust a system to scale out to 100 million, 200 million, sight unseen? It’s just too complicated, I think. In inference, the challenge is there’s so many different types of models, so many species, and they’re coming out of everywhere. First we had a breakthrough in image recognition, and now we have a breakthrough in natural language understanding. The next generation is multimodality and multidomain learning. It’s going to get harder and harder.


Author: Dean Takahashi
Source: VentureBeat
