The secret to making data analytics as transformative as generative AI

Presented by SQream


The challenges of AI compound as it hurtles forward: the demands of data preparation, large data sets and data quality; the time sink of long-running queries and batch processes; and more. In this VB Spotlight, William Benton, principal product architect at NVIDIA, and others explain how your org can uncomplicate the complicated today.

Watch free on-demand!


The soaring transformative power of AI is hamstrung by a very earthbound challenge: not just the complexity of analytics processes, but the endless time it takes to get from running a query to accessing the insight you’re after.

“Everyone’s worked with dashboards that have a bit of latency built in,” says Deborah Leff, chief revenue officer at SQream. “But you get to some really complex processes where now you’re waiting hours, sometimes days or weeks for something to finish and get to a specific piece of insight.”

In this recent VB Spotlight event, Leff was joined by William Benton, principal product architect at NVIDIA, and data scientist and journalist Tianhui “Michael” Li, to talk about how organizations of any size can overcome the common obstacles to enterprise-level data analytics. They also explain why investing in today’s powerful GPUs is crucial to the speed, efficiency and capabilities of analytics processes, and why that investment will lead to a paradigm shift in how businesses approach data-driven decision-making.

The acceleration of enterprise analytics

While there’s a tremendous amount of excitement around generative AI, and it’s already having a powerful impact on organizations, enterprise-level analytics have not evolved nearly as much over the same time frame.

“A lot of people are still coming at analytics problems with the same architectures,” Benton says. “Databases have had a lot of incremental improvements, but we haven’t seen this revolutionary improvement that impacts everyday practitioners, analysts and data scientists to the same extent that we see with some of these perceptual problems in AI, or at least they haven’t captured the popular imagination in the same way.”

Part of the challenge is that incredible time sink, Leff says, and to this point, solutions to those issues have been prohibitively expensive.

Adding more hardware and compute resources in the cloud is expensive and adds complexity, she says. A combination of brains (the CPU) and brawn (GPUs) is what’s required.

“The GPU you can buy today would have been unbelievable from a supercomputing perspective 10 or 20 years ago,” Benton says. “If you think about supercomputers, they’re used for climate modeling, physical simulations — big science problems. Not everyone has big science problems. But that same massive amount of compute capacity can be made available for other use cases.”

Instead of just tuning queries to shave off a few minutes, organizations can slash the time the entire analytics process takes from start to finish, accelerating everything from the network and data ingestion to query execution and presentation.
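To see why end-to-end acceleration beats query tuning alone, it helps to run the arithmetic: if only the query stage gets faster, the untouched stages cap the overall gain, which is just Amdahl's law applied to the pipeline. A minimal sketch, using hypothetical stage times rather than measured benchmarks:

```python
# Illustrative only: why accelerating one stage caps end-to-end gains.
# Stage times (minutes) are hypothetical assumptions, not benchmarks.
stages = {"ingest": 30, "prepare": 60, "query": 90, "present": 10}

total = sum(stages.values())  # 190 minutes end to end

# Speed up only the query stage 10x:
query_only = total - stages["query"] + stages["query"] / 10   # 109 min
print(f"query-only speedup: {total / query_only:.2f}x")       # ~1.74x

# Speed up every stage 10x (the end-to-end approach):
all_stages = sum(t / 10 for t in stages.values())             # 19 min
print(f"end-to-end speedup: {total / all_stages:.2f}x")       # 10.00x
```

Even a 10x faster query engine yields less than a 2x overall gain in this example; only accelerating the whole pipeline delivers the full improvement.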

“What’s happening now with technologies like SQream, which leverage GPUs together with CPUs to transform the way analytics are processed, is that they can harness the same immense brute force and power that GPUs bring to the table and apply it to traditional analytics. The impact is an order of magnitude.”

Accelerating the data science ecosystem

Unstructured and ungoverned data lakes, often built around the Hadoop ecosystem, have become the alternative to traditional data warehouses. They’re flexible and can store large amounts of semi-structured and unstructured data, but they require an extraordinary amount of preparation before the model ever runs. To address the challenge, SQream turned to the power and high throughput capabilities of the GPU to accelerate data processes throughout the entire workload, from data preparation to insights.
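In practice, a GPU-accelerated SQL engine is typically reached through an ordinary database connection, so the SQL itself does not have to change. The sketch below shows that pattern; the gpu_db module, connection parameters and raw_orders table are hypothetical placeholders standing in for a vendor's DB-API driver, not SQream's actual client API:

```python
# Minimal sketch of querying a GPU-accelerated SQL engine from Python.
# "gpu_db" is a hypothetical stand-in for a vendor DB-API 2.0 driver;
# the host, credentials and table below are placeholders.
import gpu_db  # hypothetical connector module

conn = gpu_db.connect(host="analytics.example.com", port=5000,
                      database="lake", user="analyst", password="...")
cur = conn.cursor()

# The same SQL you would run against a CPU warehouse; the engine
# parallelizes the scan and aggregation across the GPU.
cur.execute("""
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM raw_orders
    GROUP BY region
    ORDER BY revenue DESC
""")
for region, orders, revenue in cur.fetchall():
    print(region, orders, revenue)

conn.close()
```

The point of the pattern is that acceleration lands underneath existing tooling: dashboards, BI connectors and scripts keep issuing the same queries.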

“The power of GPUs allows them to analyze as much data as they want,” Leff says. “I feel like we’re so conditioned — we know our system cannot handle unlimited data. I can’t just take a billion rows if I want and look at a thousand columns. I know I have to limit it. I have to sample it and summarize it. I have to do all sorts of things to get it to a size that’s workable. You completely unlock that because of GPUs.”

RAPIDS, NVIDIA’s open-source suite of GPU-accelerated data science and AI libraries, also accelerates performance by orders of magnitude at scale across data pipelines. It takes the massive parallelism that’s now possible and lets organizations apply it to the Python and SQL data science ecosystems, adding enormous power underneath familiar interfaces.
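As a concrete example, RAPIDS' cuDF library mirrors much of the pandas API, so familiar dataframe code can run on the GPU with few changes. A minimal sketch, assuming RAPIDS is installed, a CUDA-capable GPU is available, and a hypothetical events.parquet file with the columns shown:

```python
# Minimal cuDF sketch: pandas-style analytics executed on the GPU.
# Assumes RAPIDS is installed and a CUDA-capable GPU is present;
# "events.parquet" and its columns are hypothetical.
import cudf

df = cudf.read_parquet("events.parquet")   # decoded on the GPU

# Familiar pandas-style filter/groupby/aggregate, run as GPU kernels.
summary = (
    df[df["status"] == "complete"]
      .groupby("customer_id")
      .agg({"amount": "sum", "event_id": "count"})
      .sort_values("amount", ascending=False)
)

print(summary.head(10).to_pandas())  # copy a small result back to the CPU
```

RAPIDS also ships a cudf.pandas accelerator mode that runs existing pandas scripts on the GPU where it can and falls back to the CPU where it can't, so much of this speedup arrives without rewriting code at all.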

Unlocking new levels of insight

But it’s not just making these individual steps of the process faster, Benton adds.

“What makes a process slow? It’s communication across organizational boundaries. It’s communication across people’s desks, even. It’s the latency and velocity of feedback loops,” he says. “That’s the exciting benefit of accelerating analytics. If we’re looking at how people interact with a mainframe, we can dramatically improve the performance by reducing the latency when the computer provides responses to the human, and the latency when the human provides instructions to the computer. We get a superlinear benefit by optimizing both sides of that.”
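Benton's feedback-loop point can be made concrete with back-of-the-envelope arithmetic: the number of question-and-answer iterations an analyst completes in a day depends on the sum of human think-time and machine response time, so shrinking both compounds. The latencies below are illustrative assumptions, not measurements:

```python
# Illustrative arithmetic for the feedback-loop argument.
# All latencies are hypothetical assumptions, in seconds.
WORKDAY = 8 * 3600

def iterations(human_s, machine_s):
    """Analysis loops per day: each loop is think-time plus query-time."""
    return WORKDAY // (human_s + machine_s)

print(iterations(300, 1800))  # 30-minute queries:    13 loops/day
print(iterations(300, 1))     # sub-second queries:   95 loops/day
print(iterations(60, 1))      # analyst stays in flow: 472 loops/day
```

The jump in the last line is the superlinear effect he describes: once answers come back immediately, the analyst stays in flow, so the human side of the loop shrinks too.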

Getting to sub-second response speeds means answers are returned immediately, and data scientists stay in a flow state, remaining as creative and productive as possible. Apply that same concept to the rest of the organization, where a vast array of business leaders make decisions every single day that drive revenue, reduce costs and avoid risks, and the impact is profound.

With CPUs as the brains and GPUs as the brawn, organizations can realize the full power of their data, Leff says. Queries that were previously too complex or too much of a time sink suddenly become feasible, and from there, anything is possible.

“For me, this is the democratization of acceleration that’s such a game changer,” she says. “People are limited by what they know. Even on the business side, a business leader who’s trying to make a decision — if the architecture team says, yes, it will take you eight hours to get this information, we accept that. Even though it could actually take eight minutes.”

“We’re stuck in this pattern with a lot of business analytics, saying, I know what’s possible because I have the same database that I’ve been using for 15 or 20 years,” Benton says. “We’ve designed our applications around these assumptions that aren’t true anymore because of this acceleration that technologies like SQream are democratizing access to. We need to set the bar a little higher. We need to say, hey, I used to think this wasn’t possible because this query didn’t complete after two weeks. Now it completes in half an hour. What should I be doing with my business? What decisions should I be making that I couldn’t make before?”

For more on the transformative power of data analytics, including a look at the cost savings, a dive into the power and insight that’s possible for organizations now and more, don’t miss this VB Spotlight.

Watch on-demand now!

Agenda

  • Technologies to dramatically shorten the time-to-market for product innovation
  • Increasing the efficiencies of AI and ML systems and reducing costs, without compromising performance
  • Enhancing data integrity, streamlining workflows and extracting maximum value from data assets
  • Strategic solutions to transform data analytics and innovations driving business outcomes

Speakers:

  • William Benton, Principal Product Architect, NVIDIA
  • Deborah Leff, Chief Revenue Officer, SQream
  • Tianhui “Michael” Li, Technology Contributor, VentureBeat (Moderator)


Author: VB Staff
Source: VentureBeat
Reviewed By: Editorial Team

