
Nvidia will bring AI to every industry, says CEO Jensen Huang in GTC keynote: ‘We are at the iPhone moment of AI’



As Nvidia’s annual GTC conference gets underway, founder and CEO Jensen Huang, in his characteristic leather jacket and standing in front of a vertical green wall at Nvidia headquarters in Santa Clara, California, delivered a highly anticipated keynote focused almost entirely on AI. He announced partnerships with Google, Microsoft and Oracle, among others, to bring new AI, simulation and collaboration capabilities to “every industry.”

“The warp drive engine is accelerated computing, and the energy source is AI,” Huang said. Generative AI capabilities, he said, have “created a sense of urgency for companies to reimagine their products and business models. Industrial companies are racing to digitalize and reinvent into software-driven tech companies to be the disrupter and not the disrupted.”

Huang’s keynote kicked off with the iconic “I am AI” opening (first launched in 2017), with music that this time around was apparently composed by AI and arranged by composer John Naesano. Huang then launched into a dizzying array of announcements, spanning training and deployment for cutting-edge AI services, new semiconductors and software libraries, and a complete set of systems and services for startups and enterprises.

The announcements at GTC, which targets Nvidia’s community of over 4 million developers, come in the context of Nvidia’s continued AI dominance, particularly in the latest era of generative AI.

As detailed in VentureBeat’s recent in-depth feature story, Nvidia got a massive AI head start when the hardware and software company helped power the deep learning “revolution” of a decade ago, and Nvidia shows few signs of losing its lead as generative AI explodes with tools like ChatGPT.

In fact, Nvidia powers ChatGPT: According to UBS analyst Timothy Arcuri, ChatGPT used 10,000 Nvidia GPUs to train the model.

Nvidia’s technologies are fundamental to AI, said Huang, recounting how Nvidia was there at the very beginning of the generative AI revolution. Back in 2016, he hand-delivered the first Nvidia DGX AI supercomputer to OpenAI — the engine behind the large language model breakthrough powering ChatGPT.

Nvidia DGX supercomputers, originally used as AI research instruments, are now running 24/7 at businesses across the world to refine data and process AI, Huang reported. Half of all Fortune 100 companies have installed DGX AI supercomputers. “DGX supercomputers are modern AI factories,” Huang said.

Nvidia calls DGX the blueprint for AI infrastructure

The latest version of DGX features eight Nvidia H100 GPUs linked together to work as one giant GPU. “Nvidia DGX H100 is the blueprint for customers building AI infrastructure worldwide,” Huang said, sharing that Nvidia DGX H100 is now in full production.

H100 AI supercomputers are already coming online, he added. Oracle Cloud Infrastructure announced the limited availability of new OCI Compute bare-metal GPU instances featuring H100 GPUs. Additionally, Amazon Web Services announced its forthcoming EC2 UltraClusters of P5 instances, which can scale in size up to 20,000 interconnected H100 GPUs. This follows Microsoft Azure’s private preview announcement last week for its H100 virtual machine, ND H100 v5.

Meta has now deployed its H100-powered Grand Teton AI supercomputer internally for its AI production and research teams. And OpenAI will be using H100s on its Azure supercomputer to power its continuing AI research.

Nvidia DGX Cloud to bring AI supercomputers “to every company”

To speed DGX capabilities to startups and enterprises building new products and developing AI strategies, Huang announced Nvidia DGX Cloud. Through partnerships with Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure, Nvidia DGX Cloud will bring Nvidia DGX AI supercomputers “to every company, from a browser.”

DGX Cloud is optimized to run Nvidia AI Enterprise, the world’s leading acceleration software suite for end-to-end development and deployment of AI. Nvidia is partnering with leading cloud service providers to host DGX Cloud infrastructure, starting with Oracle Cloud Infrastructure. Microsoft Azure is expected to begin hosting DGX Cloud next quarter, and the service will soon expand to Google Cloud.

This partnership brings Nvidia’s ecosystem to cloud service providers while amplifying Nvidia’s scale and reach, Huang said. Enterprises will be able to rent DGX Cloud clusters on a monthly basis.

Custom LLMs and generative AI for enterprises

To accelerate the work of those seeking to harness generative AI, Huang announced Nvidia AI Foundations, a family of cloud services for customers needing to build, refine and operate custom LLMs and generative AI trained with their proprietary data and for domain-specific tasks.

AI Foundations services include Nvidia NeMo for building custom text-to-text generative language models; Picasso, a visual language model-making service for customers who want to build custom models trained with licensed or proprietary content; and BioNeMo, to help researchers in the $2 trillion drug discovery industry.

Huang announced an Adobe-Nvidia partnership to build a set of next-generation AI capabilities; Getty Images is collaborating with Nvidia to train responsible generative text-to-image and text-to-video foundation models; and Shutterstock is working with Nvidia to train a generative text-to-3D foundation model to simplify the creation of detailed 3D assets.

Nvidia invented accelerated computing for AI, including deep learning

Nvidia invented accelerated computing to solve problems that normal computers can’t, said Huang. “It requires full-stack invention from chips, systems, networking, acceleration libraries, to refactoring the applications.”

Each optimized stack, he explained, accelerates an application domain — from graphics, imaging, and quantum physics to machine learning. “The application can enjoy incredible speed-up as well as scale-up across many computers. This enabled us to achieve a million x for many applications over the past decade,” he said.

The most famous application of Nvidia’s accelerated computing, he noted, was deep learning.

In 2012, Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton needed an insanely fast computer to train the AlexNet computer vision model. The researchers trained AlexNet, Huang explained, on 14 million images using GeForce GTX 580 GPUs, a job that required 262 quadrillion floating-point operations. The trained model won the ImageNet challenge by a wide margin and, Huang said, “ignited the big bang of AI.”

A decade later, the Transformer model was invented, and Sutskever, now at OpenAI, trained the GPT-3 large language model to predict the next word. Training GPT-3 required 323 sextillion floating-point operations, Huang said, a million times more than it took to train AlexNet.
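
As a quick sanity check on that comparison, here is the arithmetic using only the figures quoted above and the standard U.S. short scale (a quadrillion is 10^15, a sextillion is 10^21):

323 sextillion / 262 quadrillion = (3.23 × 10^23) / (2.62 × 10^17) ≈ 1.2 × 10^6

So, taking Huang’s numbers at face value, the two training runs do indeed differ by roughly a factor of a million.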

“The result is ChatGPT, the AI heard around the world,” he said.

Huang and Sutskever will surely discuss it all, and more, at their fireside chat, scheduled for tomorrow at 9 a.m. PT.



Author: Sharon Goldman
Source: VentureBeat
