Nvidia reported this week that its revenue for the fourth quarter ended January 29 was $6.05 billion, down 21% from a year ago. But Jensen Huang, CEO of Nvidia, said that generative AI is expected to create a significant opportunity that will accelerate later this year.
Nvidia’s stock price rose on the opportunity, and Huang noted how generative AI will be front and center at its GTC event the week of March 20. (I will moderate a panel on the enterprise metaverse at Nvidia’s GTC event on March 22).
VentureBeat’s Sharon Goldman wrote a long piece on how Nvidia came to dominate the AI market, describing “the 2023 AI hype explosion, as large language models like ChatGPT and DALL-E 2 have launched generative AI into the public’s consciousness in a way not seen, perhaps, since the dawn of the iPhone in 2007.”
I spoke with Huang this week, and he said the combination of user-generated content and generative AI will help create content for the metaverse at an even faster pace than previously expected. We discussed that and other gaming trends in our conversation. I also asked him whether he has a shot at landing Nvidia chips in a future Nintendo game console.
Here’s an edited transcript of our interview.
GamesBeat: Generative AI is having its moment. You’ve been talking about this for a relatively short time. What’s convinced you that it’s more important?
Jensen Huang: Of course, ChatGPT has been here for just a short time, about three months. But as you know, when it comes to large language models, the industry has been playing with them for some time. Look at some of the breakthroughs from the last year, whether it’s image generative models, which started with our ProGAN work and now our GauGAN work, all of the GAN work we did, and the variational autoencoders that we did. A cousin of the variational autoencoder became the diffusion model. The stability of it, the scalability, it’s turned out to be incredible. All of that came together last year.
Now we have generative models for proteins. We have generative models for chemicals. We have generative models for language, for text. We have generative models for images and video. As you know, we’re working on generative models for 3D. You won’t be able to populate the world’s Omniverse, the metaverse, with human-engineered content. It has to either be perceived through computer vision, or generated, or a combination of both.
GamesBeat: What I’ve seen in the last month or so is five different startups combining generative AI and user-generated content in games. Even Roblox, last Friday, showed a demo of that. It seems like users aren’t terribly talented professionals, but if you give them generative AI, they can create things that are usable or playable.
Huang: It’s exactly like you say. You can generate the first version and then let me modify it. I don’t think I can create a model from the ground up, but I bet I can modify one you put in front of me. It’s no different from people who use clip art in PowerPoint slides. They’re always combining and modifying other people’s work. It’s a lot easier to create something that way. I think it’s going to turbocharge content creation.
GamesBeat: It’s more believable now that we have these different things. We have Omniverse. We have generative AI. We have UGC. All of it contributes to the metaverse.
Huang: That’s right, exactly. The pieces are coming together. It’s very exciting. The characters–you can ask them questions, right? These are characters you can really converse with. They can talk in different languages. They can understand what you mean. Using retrieval models, a company building a medieval game, a sci-fi game, or a Battlefield game can take the entire knowledge base of the story, the narrative of that game, and teach an AI only that. The entire worldview of the game AI could be completely medieval and narrowly focused on the gameplay. It’ll be safe. It’ll be game-specific. We now have all the necessary pieces of technology to help people do that.
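What Huang describes here resembles retrieval-augmented generation: the model answers only from passages pulled out of the game’s own lore. Below is a minimal sketch of that pattern, with hypothetical lore snippets and a hand-rolled word-overlap retriever standing in for a real vector search and language model; it illustrates the idea, not Nvidia’s implementation.

```python
# Minimal sketch of retrieval-grounded NPC dialogue (hypothetical example,
# not Nvidia's implementation). The idea: retrieve passages only from the
# game's own lore, then hand them to a language model as the sole context.

LORE = [
    "The kingdom of Eldra fell when the river dried up in the Year of Ash.",
    "Blacksmith Tamsin forges blades only for those who bring star iron.",
    "The northern pass is closed until the Frost Queen's truce is renewed.",
]

def retrieve(question: str, passages: list[str], top_k: int = 2) -> list[str]:
    """Score passages by simple word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Constrain the model to the retrieved lore; refuse anything outside it."""
    context = "\n".join(retrieve(question, LORE))
    return (
        "Answer only from the game lore below. If the lore does not cover it, "
        "say you do not know.\n\n"
        f"Lore:\n{context}\n\nPlayer: {question}\nNPC:"
    )

if __name__ == "__main__":
    # In a real game this prompt would go to a language model fine-tuned or
    # instructed on the game's narrative; here we just print it.
    print(build_prompt("Why is the northern pass closed?"))
```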
GamesBeat: Do you think we’re getting close to having a gaming metaverse and an enterprise metaverse benefiting each other through something like Omniverse?
Huang: I’m less certain about that. We don’t spend as much time on the consumer side. But on the industrial side, the energy is really high. Now, with the combination of generative models and proprietary models coming together, you can get up and running really fast. The industrial metaverse’s time is right around the corner.
GamesBeat: For the gaming results today, you noted that gaming is in a recovery. It’s up 16 percent from the previous quarter, but still down 46 percent from a year ago. What is the pattern we’ve seen in gaming with respect to the results you report?
Huang: We subtract out the entire COVID time. Say, from the end of 2019 through 2020, 2021, and 2022. The business of gaming sell-through was averaging around $1.5 billion a quarter going into COVID. We’re in a world today where the sell-through is likely to be $2.5 billion a quarter in the coming year. A little lower in the beginning as we continue to normalize the channel, and then probably higher than that in the second half of the year as we have seasonality. The difference between $1.5 billion and $2.5 billion, that’s basically, if you will, subtracting out the pandemic.
If you look at what’s going on at Steam, it’s approximately that. The growth of Steam, the number of active players on Steam, is kind of reflective of that. And then of course China is going to recover again, which we’re quite excited about. New games are being approved now. China is back in recovery. We’ll see how it turns out. I think the gaming market has undoubtedly grown in the last three years.
GamesBeat: Do you think there’s an opportunity to get into the next Nintendo console?
Huang: We’ll keep our fingers crossed. We’re very good at building energy-efficient gaming systems, number one. Number two, we’re quite convinced that the next generation of video games is heavily ray traced, if not fully ray traced, and based on generative AI. You know we’re doing a lot of work–RTX is really based on two fundamental technologies: ray tracing and AI. I think you’ll find that the next generation of video games is going to increasingly use those two technologies. We’re incredibly good at that. I hope it happens.
Author: Dean Takahashi
Source: VentureBeat