DataStax has been steadily expanding its data platform in recent years to meet the growing needs of enterprise AI developers. Today the company is taking the next step forward with the launch of the DataStax AI Platform, Built with Nvidia AI. The new platform integrates DataStax's existing database technology, including DataStax Astra for cloud-native deployments and the DataStax Hyper-Converged Database (HCD) for self-managed ones. It also includes the company's Langflow technology, which is used to build out agentic AI workflows.

The Nvidia enterprise AI components include technologies that accelerate and improve organizations' ability to rapidly build and deploy models. Among the Nvidia enterprise components in the stack are NeMo Retriever, NeMo Guardrails and NIM Agent Blueprints.
According to DataStax, the new platform can reduce AI development time by 60% and handle AI workloads 19 times faster than current solutions.
“Time to production is one of the things we talk about, building these things takes a bunch of time,” Ed Anuff, Chief Product Officer at DataStax told VentureBeat. “What we’ve seen has been that a lot of folks are stuck in development hell.”
How Langflow enables enterprises to benefit from agentic AI
Langflow, DataStax’s visual AI orchestration tool, plays a crucial role in the new AI platform.
Langflow allows developers to visually construct AI workflows by dragging and dropping components onto a canvas. These components represent various DataStax and Nvidia capabilities, including data sources, AI models and processing steps. This visual approach significantly simplifies the process of building complex AI applications.
“What Langflow allows us to do is surface all of the DataStax capabilities and APIs, as well as all of the Nvidia components and microservices as visual components that can be connected together and run in an interactive way,” Anuff said.
Langflow is also the critical technology that brings agentic AI to the new DataStax platform. According to Anuff, the platform facilitates the development of three main types of agents:
Task-oriented agents: These agents can perform specific tasks on behalf of users. For example, in a travel application, an agent could assemble a vacation package based on user preferences.
Automation agents: These agents operate behind the scenes, handling tasks without direct user interaction. They often involve APIs communicating with other APIs and agents, facilitating complex automated workflows.
Multi-agent systems: This approach involves breaking down complex tasks into subtasks handled by specialized agents.
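The multi-agent pattern described above can be illustrated with a minimal sketch. This is not DataStax's or Langflow's implementation; all names here are hypothetical, and the travel example follows the article's vacation-package scenario:

```python
# Minimal sketch of the multi-agent pattern: a coordinator decomposes a
# request into subtasks and routes each one to a specialized agent.
# Agent names and logic are illustrative, not the Langflow API.

def flight_agent(destination: str) -> str:
    return f"booked flight to {destination}"

def hotel_agent(destination: str) -> str:
    return f"reserved hotel in {destination}"

# Registry mapping each subtask type to the agent that handles it.
AGENTS = {"flight": flight_agent, "hotel": hotel_agent}

def coordinator(destination: str, subtasks: list[str]) -> list[str]:
    """Break a vacation-package request into subtasks and collect
    each specialized agent's result."""
    return [AGENTS[task](destination) for task in subtasks]

results = coordinator("Lisbon", ["flight", "hotel"])
```

In a real deployment each agent would typically wrap a model call or an API; the point of the pattern is that the coordinator only knows about subtask types, not how each agent does its work.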
What the Nvidia DataStax combination enables for enterprise AI
The combination of the Nvidia capabilities with DataStax’s data and Langflow will help enterprise AI users in a number of different ways, according to Anuff.
He explained that the Nvidia integration will allow enterprise users to more easily invoke custom language models and embeddings through a standardized NIM microservices architecture. By using Nvidia’s microservices, users can also tap into Nvidia’s hardware and software capabilities to run these models efficiently.
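NIM microservices expose an OpenAI-compatible HTTP interface, so invoking a custom model reduces to posting a standard payload. The sketch below only assembles that payload; the host, port and model name are placeholders, not real services:

```python
# Hedged sketch: building an OpenAI-style chat-completions request for a
# NIM microservice endpoint. No network call is made here; the endpoint
# URL and model name are placeholders.

import json

def build_chat_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Assemble a chat-completions payload in the standardized format
    NIM endpoints accept."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

payload = build_chat_request("meta/llama-3.1-8b-instruct", "Summarize last quarter.")
body = json.dumps(payload)  # POST to http://<nim-host>:8000/v1/chat/completions
```

Because the interface is standardized, swapping one model for another is a change to the `model` field and the endpoint URL, not to application code.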
Guardrails support is another key addition that will help DataStax users to prevent unsafe content and model outputs.
“The guardrails capability is one of the features that I think probably has the most developer and end user impact,” Anuff said. “Guardrails are basically a sidecar model that is able to recognize and intercept unsafe content that is coming from the user, from ingestion or from stuff retrieved from databases.”
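The sidecar idea Anuff describes can be sketched as a lightweight check that screens content on every path into the model. The keyword filter below stands in for a real safety classifier such as NeMo Guardrails; the function names and markers are illustrative only:

```python
# Conceptual sketch of a guardrails "sidecar": screen content coming from
# the user, from ingestion, or from database retrieval before it reaches
# the main model. A real system would use a trained safety model here,
# not a keyword list.

UNSAFE_MARKERS = {"ignore previous instructions", "reveal system prompt"}

def sidecar_check(text: str) -> bool:
    """Return True if the content is safe to pass through."""
    lowered = text.lower()
    return not any(marker in lowered for marker in UNSAFE_MARKERS)

def guarded_pipeline(user_input: str, retrieved_docs: list[str]) -> list[str]:
    """Intercept unsafe content on both the input and retrieval paths."""
    if not sidecar_check(user_input):
        raise ValueError("unsafe user input blocked")
    # Drop any retrieved documents that fail the safety check.
    return [doc for doc in retrieved_docs if sidecar_check(doc)]
```

The design point is that the same check runs on both directions of traffic, which is what distinguishes a sidecar from a simple input filter.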
The Nvidia integration will also help enable continuous model improvement. Anuff explained that NeMo Curator lets enterprise AI users identify additional content that can be used for fine-tuning.
The overall impact of the integration is to help enterprises benefit from AI faster and more cost-efficiently. Anuff noted that the approach doesn't necessarily have to rely entirely on GPUs, either.
“The Nvidia enterprise stack actually is able to execute workloads on CPUs as well as GPUs,” Anuff said. “GPUs will be faster and generally are going to be where you want to put these workloads, but if you want to offload some of the stuff to CPUs for cost savings in areas where it doesn’t matter, it lets you do that as well.”
Author: Sean Michael Kerner
Source: VentureBeat
Reviewed By: Editorial Team