
Nvidia’s Omniverse adds AR/VR viewing, AI training, and AI avatar creation

Nvidia’s Omniverse, billed as a “metaverse for engineers,” has grown to more than 700 companies and 70,000 individual creators working on projects to build digital twins that replicate real-world environments in a virtual space.

The Omniverse is Nvidia’s simulation and collaboration platform delivering the foundation of the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. Omniverse is now moving from beta to general availability, and it has been extended to software ecosystems that put it within reach of 40 million 3D designers.

And today during Nvidia CEO Jensen Huang’s keynote at the Nvidia GTC online conference, Nvidia said it has added features such as Omniverse Replicator, which makes it easier to train deep learning neural networks, and Omniverse Avatar, which makes it simple to create virtual characters that can be used in the Omniverse or other worlds.

“You will see Omniverse emerge as a foundational platform for digital twins across all of our different industries,” said Richard Kerris, vice president of Omniverse platform development, in a press briefing. “We’re seeing and hearing lots about virtual worlds and things like that. But really what’s going to bring this all together is the connectivity that brings these worlds to have a consistent foundation, or consistent plumbing to make them all work together. And what’s making that happen is Omniverse, our platform that enhances and extends our existing workflows.”


Among the companies using or evaluating Omniverse are BMW Group, Ericsson, SHoP Architects, South Park, Adobe, Bentley Systems, Esri, CannonDesign, Epigraph, architectural firms HKS and KPF, Sony Pictures, Substance 3D, CLO Virtual Fashion, Golaem, Maxon, Notch, Wacom, and Lockheed Martin.

Since the launch of its open beta in December 2020, Omniverse has widened its reach from engineers to just about anyone who can use the Blender open source 3D software, which has more than three million users. Other new extensions for it now include Daz3D, eon software’s PlantFactory, PlantCatalog and VUE, Radical, Reallusion, Replica, Style3D, and TwinBru.

“We believe that these virtual worlds are going to be the thing that enables the next era of innovation, whether it’s doing visualization and layouts for cities, doing earth simulations for weather patterns, digital production, or synthetic data generation for autonomous vehicles,” Kerris said. “Virtual worlds are essential for the next generation of innovation. And we built Omniverse to serve this opportunity.”

Omniverse is adding new features such as augmented reality, virtual reality, and multi-GPU (graphics processing unit) rendering, as well as integrations for infrastructure and industrial digital-twin applications with software from Bentley Systems and Esri. Nvidia has also launched Nvidia CloudXR, which lets developers stream Omniverse experiences interactively to users’ mobile AR and VR devices.

And Omniverse VR introduces the world’s first full-image, real-time ray-traced VR — enabling developers to build their own VR-capable tools on the platform, and end users to enjoy VR capabilities directly. Omniverse Remote provides AR capabilities and virtual cameras, enabling designers to view their assets fully ray traced through iOS and Android devices. Omniverse Farm lets teams use multiple workstations or servers together to power jobs like rendering, synthetic data generation or file conversion.

The Omniverse Showroom, available as an app in Omniverse Open Beta, lets non-technical users play with Omniverse tech demos that showcase the platform’s real-time physics and rendering technologies.

“Showroom is really important because it brings the capabilities of Omniverse to tens of millions of RTX users,” Kerris said. “Everybody out there that has an RTX GeForce card or gaming card will be able to tap into the power and easily understand what Omniverse can do for them. We think that’s going to inspire the next generation of developers.”

The Omniverse Enterprise subscription is available at $9,000 a year through global computer makers Boxx Technologies, Dell, HP, Lenovo, and Supermicro, and distribution partners Arrow, Carahsoft Technology Corp, ELSA, Ingram-China, Leadtek, OCS, PNY and SB C&S. Customers can purchase Nvidia Omniverse Enterprise subscriptions directly from these resellers or via their preferred business partners.

Omniverse background

Above: Nvidia Omniverse Drive can test self-driving cars in a virtual world.

Image Credit: Nvidia

Omniverse is based on Pixar’s widely adopted Universal Scene Description (USD), the leading format for universal interchange between 3D applications. Pixar used it to make animated movies. The platform also uses Nvidia technology, such as real-time photorealistic rendering, physics, materials, and interactive workflows between industry-leading 3D software products.

“We think of USD as the HTML of 3D,” said Kerris. “And that’s an important element because it allows for all of these existing software products to take advantage of the virtual worlds that we’re talking about. You can see the output capabilities of Omniverse, whether you’re streaming to an AR or VR device, or you’re going out to a tablet that allows you to peer into the 3D world.”

Omniverse enables collaboration and simulation that could become essential for Nvidia customers working in robotics, automotive, architecture, engineering, construction, manufacturing, media, and entertainment.

The Omniverse makes it possible for designers, artists, and reviewers to work together in real time across leading software applications in a shared virtual world from anywhere. Kerris said Nvidia is taking input from developers, partners, and customers and is advancing the platform so everyone from individuals to large enterprises can work with others to build amazing virtual worlds that look, feel, and behave just as the physical world does.

Nvidia defines the metaverse as an immersive and connected shared virtual world. Here, artists can create one-of-a-kind digital scenes,  architects can create beautiful buildings, and engineers can design new products for homes. These creations — often called digital twins — can then be taken into the physical world after they have been perfected in the digital world.

“Digital twins essentially are a way to take stuff that’s in the real world, real portions of the real world and represent them in the virtual world,” said Rev Lebaredian, vice president of simulation technology and Omniverse engineering at Nvidia, in a press briefing. “Once you have an accurate representation of something that’s in the real world inside the virtual world, and you have a simulator that can accurately simulate how the world behaves, you can do some pretty amazing things.”

He added, “Inside the virtual world, you can jump to any point in that world, and you can feel it and perceive it as if you were there, using the new technologies that are being developed to make us more immersed inside these virtual worlds, from VR to AR, or the more traditional ways of viewing it through our screens. With simulation, we also have the potential of time travel inside these worlds. We can record what has happened in the past inside the virtual world and rewind to play back what happened in your factory.”

Key to Omniverse’s industry adoption is Pixar’s open source USD — the foundation of Omniverse’s collaboration and simulation platform — enabling large teams to work simultaneously across multiple software applications on a shared 3D scene. Engineers can work on the same part of the simulated imagery at the same time. The USD open standard foundation gives software partners multiple ways to extend and connect to Omniverse, whether through USD adoption and support or building a plugin or via an Omniverse Connector.
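For a concrete sense of how USD lets multiple tools author the same scene, here is a minimal sketch using the open source pxr Python API that ships with Pixar's USD distribution. The file names and prim paths are illustrative only and are not drawn from any Omniverse project.

```python
# A minimal sketch of authoring a shared scene with Pixar's USD Python API (the
# `pxr` module from the open source USD distribution). File names and prim paths
# here are illustrative placeholders.
from pxr import Usd, UsdGeom

# One application (say, a layout tool) creates the base stage.
stage = Usd.Stage.CreateNew("factory_layout.usda")
UsdGeom.Xform.Define(stage, "/Factory")
UsdGeom.Cube.Define(stage, "/Factory/ConveyorHousing")
stage.GetRootLayer().Save()

# A second application contributes its changes as a separate layer, so both
# tools can edit the same scene without overwriting each other's work.
override = Usd.Stage.CreateNew("lighting_pass.usda")
override.OverridePrim("/Factory/ConveyorHousing")
override.GetRootLayer().Save()

# Composing the layers yields one shared view of the scene.
composed = Usd.Stage.Open("factory_layout.usda")
composed.GetRootLayer().subLayerPaths.append("lighting_pass.usda")
for prim in composed.Traverse():
    print(prim.GetPath())
```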

Apple, Pixar, and Nvidia collaborated to bring advanced physics capabilities to USD, embracing open standards to provide 3D workflows to billions of devices. Lockheed Martin is working with Omniverse for wildfire simulation, prediction, and suppression. Its AI center of excellence has launched an initiative with Nvidia called Cognitive Mission Management to develop a strategy for emergency response and fire suppression efforts. The aim is to predict the ways wildfires will spread based on variables like wind, moisture, and ground cover. That could be pretty useful to the communities in my part of California.

And South Park, the long-running, Emmy Award-winning animated television series, is exploring Omniverse to enable several of its artists to collaborate on scenes and optimize their extremely limited production time.

Omniverse Avatar

Above: Omniverse Showroom

Image Credit: Nvidia

Omniverse Avatar generates interactive AI avatars that combine technologies in speech AI, computer vision, natural language understanding, recommendation engines, and simulation. It was used to create an avatar of Huang, which was featured in the keynote.

Omniverse Avatar opens the door to the creation of AI assistants that are easily customizable for virtually any industry. These could help with the billions of daily customer service interactions — restaurant orders, banking transactions, making personal appointments and reservations, and more — leading to greater business opportunities and improved customer satisfaction. Of course, that means these AI assistants may be contending with humans to do jobs in the future. That’s something we should all be concerned about.
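To make that architecture concrete, the following is a hedged sketch of how such an assistant's turn-by-turn loop could be wired together, with speech recognition feeding language understanding, which in turn drives a spoken, animated response. All class and function names here are hypothetical placeholders, not the Omniverse Avatar API.

```python
# A hypothetical sketch of an interactive avatar pipeline: speech recognition
# feeds language understanding, which drives a response that is spoken and
# animated. None of these names come from Nvidia's software.
from dataclasses import dataclass

@dataclass
class AvatarResponse:
    text: str
    audio: bytes
    viseme_track: list  # mouth-shape keyframes driving the 3D character

def transcribe(audio: bytes) -> str:
    """Placeholder for a speech-to-text model."""
    return "I'd like two veggie burgers and a lemonade"

def understand(utterance: str) -> dict:
    """Placeholder for natural language understanding: intent plus entities."""
    return {"intent": "place_order",
            "items": ["veggie burger", "veggie burger", "lemonade"]}

def respond(intent: dict) -> AvatarResponse:
    """Placeholder for response generation, text-to-speech, and facial animation."""
    text = f"Sure, that's {len(intent['items'])} items. Anything else?"
    return AvatarResponse(text=text, audio=b"...", viseme_track=[])

def handle_turn(mic_audio: bytes) -> AvatarResponse:
    # One conversational turn: hear, understand, answer.
    return respond(understand(transcribe(mic_audio)))

if __name__ == "__main__":
    print(handle_turn(b"raw-microphone-frames").text)
```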

“The dawn of intelligent virtual assistants has arrived,” said Huang, in the speech. “Omniverse Avatar combines Nvidia’s foundational graphics, simulation, and AI technologies to make some of the most complex real-time applications ever created. The use cases of collaborative robots and virtual assistants are incredible and far-reaching.”

Huang showed off Project Tokkio for customer support and Project Maxine for video conferencing.

In the first demonstration of Project Tokkio, Huang showed colleagues engaging in a real-time conversation with an avatar crafted as a toy replica of himself — conversing on such topics as health care diagnosis and climate science.

In the next Project Tokkio demo, he highlighted a customer-service avatar in a restaurant kiosk, able to see, converse with and understand two customers as they ordered veggie burgers, fries, and drinks. The demonstrations were powered by Nvidia AI software and Megatron-Turing NLG 530B, Nvidia’s generative language model, which is currently the largest in the world.

Separately, he showed Project Maxine’s ability to add state-of-the-art video and audio features to virtual collaboration and content creation applications. An English-language speaker is shown on a video call in a noisy cafe but can be heard clearly without background noise. As she speaks, her words are transcribed and translated in real time into French, German, and Spanish in her own voice and intonation.

Omniverse Replicator

Above: Ericsson is testing how 5G network signals bounce around a city.

Image Credit: Ericsson/Nvidia

And the Omniverse Replicator is a synthetic data-generation engine that produces 3D and 4D data to accelerate the training of deep neural networks.

In its first implementations of the engine, the company introduced two applications for generating synthetic data: one for Nvidia Drive Sim, a virtual world for hosting the digital twin of self-driving cars, and another for Nvidia Isaac Sim, a virtual world for the digital twin of manipulation robots.

These two physically-based Omniverse virtual worlds allow developers to bootstrap AI models, fill real-world data gaps, and label the ground truth in ways humans cannot. Data generated in these virtual worlds can cover a broad range of diverse scenarios, including rare or dangerous conditions that can’t regularly or safely be experienced in the real world.

AVs and robots built using this data can master skills across an array of virtual environments before applying them in the physical world.

Omniverse Replicator augments costly, laborious human-labeled real-world data, which can be error prone and incomplete, with the ability to create large and diverse physically accurate data tailored to the needs of AV and robotics developers. It also enables the generation of ground truth data that is difficult or impossible for humans to label, such as velocity, depth, occluded objects, adverse weather conditions, or tracking the movement of objects across sensors.

Lebaredian said that engineers often need data on “corner cases,” situations that occur very infrequently but must still be designed for, such as accidents. That data can’t be captured in the real world, at least not ethically, so simulation is used to generate these improbable situations, as in self-driving car scenarios.
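As an illustration of the underlying idea, here is a small, self-contained sketch of domain-randomized synthetic data generation that deliberately oversamples rare conditions and emits exact ground-truth labels alongside each scene. It is not the Omniverse Replicator API; every name in it is a placeholder.

```python
# A minimal sketch of synthetic data generation with domain randomization.
# This is NOT the Omniverse Replicator API; it only illustrates how randomized
# scene parameters and automatically derived ground-truth labels can be emitted
# together, with rare "corner case" scenarios oversampled on purpose.
import json
import random

SCENARIOS = ["clear_day", "night_rain", "dense_fog", "sun_glare"]
RARE = {"dense_fog", "sun_glare"}  # hard to capture safely in the real world

def sample_scene(oversample_rare: float = 0.4) -> dict:
    """Randomize scene parameters; rare conditions get a fixed share of samples."""
    if random.random() < oversample_rare:
        weather = random.choice(sorted(RARE))
    else:
        weather = random.choice(SCENARIOS)
    return {
        "weather": weather,
        "sun_elevation_deg": round(random.uniform(-5, 60), 1),
        "num_vehicles": random.randint(0, 12),
    }

def render_with_ground_truth(scene: dict) -> dict:
    """Stand-in for the renderer: a real engine would output images plus labels
    (depth, velocity, occlusion) that are exact because the simulator knows them."""
    labels = [
        {"id": i,
         "depth_m": round(random.uniform(2, 80), 2),
         "velocity_mps": round(random.uniform(0, 30), 2),
         "occluded": random.random() < 0.3}
        for i in range(scene["num_vehicles"])
    ]
    return {"scene": scene, "objects": labels}

if __name__ == "__main__":
    dataset = [render_with_ground_truth(sample_scene()) for _ in range(5)]
    print(json.dumps(dataset[0], indent=2))
```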

In addition to Nvidia Drive and Isaac, Omniverse Replicator will be made available next year to developers to build domain-specific data-generation engines.

Fighting wildfires

Above: Lockheed Martin is using Omniverse to simulate wildfires.

Image Credit: Nvidia

Nvidia and Lockheed Martin have teamed up to use Nvidia’s DGX supercomputers and Omniverse to fight wildfires, which caused $20 billion in damage in the western U.S. last year alone.

To better understand wildfires and stop their spread, Nvidia and Lockheed Martin announced today that they are working with the U.S. Department of Agriculture Forest Service and the Colorado Division of Fire Prevention & Control (DFPC), using AI and digital-twin simulation.

The companies are also announcing that they are building the world’s first AI-centric lab dedicated to predicting and responding to wildfires. The lab will use Nvidia AI infrastructure and the Nvidia Omniverse advanced visualization and digital-twin simulation platform to gauge a fire’s magnitude and forecast its progress. By recreating the fire in a physically accurate digital twin, the system will be able to suggest actions to best suppress the blaze.

“We need better tools to predict what’s going to happen when a wildfire starts,” Lebaredian said. “And we need to equip our firefighters with the means for the most efficient path for suppressing them.”

While wildfires have torn through more than 6.5 million acres in the U.S. so far this year, the problem is far broader than that. Vastly more land has been scorched in Siberia, and epic blazes have swept through the Mediterranean region and Australia.

The AI development lab, to be based in Silicon Valley, will include Lockheed Martin’s Cognitive Mission Manager (CMM) system, an end-to-end AI-driven planning and orchestration platform that combines real-time sensor data about the fire with other data sources on fuel vegetation, topography, wind, and more to predict the fire’s spread. The CMM provides course-of-action recommendations to incident command teams to decrease response time and increase the effectiveness of wildfire suppression and humanitarian actions.
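To give a rough feel for what predicting a fire's spread involves, here is a toy cellular-automaton sketch in which fuel values and a prevailing wind bias how a fire moves across a grid. It is purely illustrative and bears no relation to the actual CMM or to any Nvidia software.

```python
# A toy cellular-automaton sketch of wildfire spread, assuming a simple grid of
# fuel values and a prevailing wind. Illustrative only; not Lockheed Martin's
# Cognitive Mission Manager or an Nvidia component.
import random

SIZE = 20
UNBURNED, BURNING, BURNED = 0, 1, 2
WIND = (0, 1)  # wind blowing east: cells downwind of a fire ignite more easily

def step(grid, fuel):
    """Advance the fire one time step."""
    nxt = [row[:] for row in grid]
    for y in range(SIZE):
        for x in range(SIZE):
            if grid[y][x] == BURNING:
                nxt[y][x] = BURNED
            elif grid[y][x] == UNBURNED:
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx_ = y + dy, x + dx
                        if 0 <= ny < SIZE and 0 <= nx_ < SIZE and grid[ny][nx_] == BURNING:
                            p = 0.3 * fuel[y][x]  # drier/denser fuel burns more readily
                            if (dy, dx) == (-WIND[0], -WIND[1]):  # burning neighbor is upwind
                                p *= 2.0
                            if random.random() < min(p, 1.0):
                                nxt[y][x] = BURNING
    return nxt

if __name__ == "__main__":
    fuel = [[random.uniform(0.2, 1.0) for _ in range(SIZE)] for _ in range(SIZE)]
    grid = [[UNBURNED] * SIZE for _ in range(SIZE)]
    grid[SIZE // 2][SIZE // 2] = BURNING  # ignition point
    for _ in range(10):
        grid = step(grid, fuel)
    print(sum(row.count(BURNED) + row.count(BURNING) for row in grid), "cells affected")
```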

The lab will serve as an open collaboration space for industry and users to concentrate talent and resources for CMM design and rapidly develop prototypes in Nvidia Omniverse. Lockheed Martin is providing domain expertise in applied AI, platform integration, planning, and oversight as well as several dozen application engineers and data scientists. Nvidia is contributing data analysis and simulation expertise to help develop, model, test and deploy in the lab.

To crunch the data and train AI models, Lockheed Martin uses Nvidia DGX systems. To visualize the fires and predict how they might spread, Lockheed Martin uses Nvidia Omniverse Enterprise.

“The combination of Lockheed Martin and Nvidia technology has the potential to help crews respond more quickly and effectively to wildfires while reducing risk to fire crews and residents,” said Shashi Bhushan, principal AI architect at Lockheed Martin, in a statement.

The simulation allows fire behavior analysts to see the predictions in a digital twin of the environment. Using the real-time, multi-GPU scalable simulation platform, Lockheed Martin creates visualizations of the predicted fire movements and studies their flow dynamics across a digital replica of the landscape.



Author: Dean Takahashi
Source: VentureBeat

