Oracle loops in Nvidia AI for end-to-end model development

Oracle just made another move to simplify AI development and deployment for its customers. This week, the company co-founded by Larry Ellison announced it is bringing the Nvidia AI stack to its marketplace.

The move gives Oracle customers access to the most sought-after, top-of-the-line GPUs for training foundation models and building generative AI applications.

Under the partnership, the company said it is opening access to Nvidia’s DGX Cloud AI supercomputing platform and AI Enterprise software.

This gives eligible enterprises the option to purchase the tools directly from the marketplace and start training models for deployment on Oracle Cloud Infrastructure (OCI). Both Nvidia AI offerings are now available, along with the choice of a private offer, Oracle said.

“We have worked closely with Nvidia for years to provide organizations with an accelerated compute infrastructure to run Nvidia software and GPUs. The addition of Nvidia AI Enterprise and Nvidia DGX Cloud to OCI further strengthens this collaboration and will help more organizations bring AI-fueled services to their customers faster,” Karan Batta, senior vice president for Oracle Cloud Infrastructure, said in a statement.

How will the Nvidia AI stack help Oracle Cloud customers?

Today, enterprises across sectors use Oracle Cloud Infrastructure to build and run business applications and services. The OCI marketplace gives developers a catalog of add-on solutions and services to enhance their products.

Nvidia DGX Cloud and AI Enterprise software are the latest two additions to this storefront. This way, customers building apps on OCI can use their existing universal cloud credits to integrate Nvidia’s AI supercomputing platform and software into their development and deployment pipelines. 

Nvidia DGX Cloud is an AI-training-as-a-service platform that offers a serverless experience for multi-node training of custom generative AI models. It supports near-limitless scaling of GPU resources with an architecture based on Nvidia’s DGX technology (each DGX Cloud instance consists of eight Nvidia Tensor Core GPUs).
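
For readers who want a sense of what such a multi-node training job looks like in practice, the sketch below uses plain PyTorch DistributedDataParallel as a generic illustration. It is not the DGX Cloud interface itself, and the model, hyperparameters and launcher environment variables are hypothetical placeholders assumed for the example.

```python
# Generic multi-node data-parallel training sketch (illustration only, not the DGX Cloud API).
# Launch with a distributed launcher, e.g.: torchrun --nnodes=<N> --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU across all nodes joins the same process group;
    # RANK, WORLD_SIZE and LOCAL_RANK are supplied by the launcher.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)   # stand-in for a real model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):                                     # toy loop with random data
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()     # DDP all-reduces gradients across every GPU on every node
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

A managed service like DGX Cloud handles the node provisioning, interconnect and job scheduling around a script of this shape, which is what makes the experience "serverless" from the user's point of view.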

Meanwhile, Nvidia AI Enterprise is the enterprise-grade toolkit that helps teams accelerate the deployment of models to production. It includes the Nvidia NeMo framework to build, customize and deploy generative AI end to end; RAPIDS to accelerate data science; the open-source TensorRT-LLM library to optimize inference performance; and Triton Inference Server to standardize AI model deployment and execution.
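
As a rough illustration of the "standardized deployment" point, the sketch below shows a client querying a model served by Triton Inference Server over HTTP using the open-source tritonclient package. The server address, model name and tensor names are hypothetical placeholders, not details from the Oracle or Nvidia announcement.

```python
# Minimal Triton Inference Server client sketch (hypothetical model and tensor names).
import numpy as np
import tritonclient.http as httpclient

# Connect to a running Triton server (assumed to listen on localhost:8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a request tensor matching the model's declared input signature.
data = np.random.rand(1, 16).astype(np.float32)
infer_input = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

# The same request/response contract works for any backend Triton hosts
# (TensorRT, PyTorch, ONNX Runtime, ...), which is the standardization benefit.
response = client.infer(model_name="my_model", inputs=[infer_input])
print(response.as_numpy("OUTPUT0"))
```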

Notably, Nvidia AI Enterprise is available as a standalone listing on the marketplace, but it also comes included with DGX Cloud. This streamlines the transition from training on DGX Cloud to deploying AI applications into production via Nvidia AI Enterprise on OCI.

Among the “many” companies using Nvidia’s AI stack on OCI are digital engagement company Gemelo.ai and the University at Albany, in upstate New York. 

“We are excited to put the dual resources of OCI and the Nvidia AI Enterprise suite to use in building our next-generation AI-driven applications and ever more useful digital twins,” Paul Jaski, CEO of Gemelo, said in a statement.

What about Oracle’s own gen AI efforts?

While the addition of Nvidia’s AI stack will accelerate the deployment of generative AI apps on OCI, the question remains where Oracle stands with its own AI efforts, and whether it will build its own LLM to help cloud customers integrate generative AI into applications.

So far, the company, known for its database technology, has mostly focused on industry partnerships. Back in June, Ellison announced that Oracle is working with Toronto-based AI company Cohere to develop a service that will make it easy for enterprise customers to train their own custom LLMs on private data while protecting data privacy and security. He further noted that Oracle’s internal application development teams are also using the service.

Since then, the company has announced generative AI capabilities across many of its products and solutions, including those aimed at HR and healthcare professionals.

Author: Shubham Sharma
Source: Venturebeat
Reviewed By: Editorial Team
