
DataRobot aims to accelerate AI delivery and operationalize low-code dev



When the management team at DataRobot looks to the future, it sees a world where AI is part of every enterprise business decision. If the computer is not operating completely autonomously, it augments the intelligence of a human, whispering advice in their ear. Today, DataRobot is releasing its DataRobot AI Cloud Platform, built to run everywhere and handle a diverse collection of roles in classification, detection, and decision making.

The new release enhances the options for creating a pipeline that turns incoming data into business decisions. It runs on premises or in any of the major clouds, and companies that prefer a SaaS option can pay by the call.

This 7.2 version adds features to a platform that's known as a good, low-code way to experiment with artificial intelligence. In goes data and out comes a model that can be deployed as a service module in the enterprise stack.

DataRobot also expands the options by extending Pathfinder, a collection of new, pre-coded routines that simplify many of the common use cases. Jobs like predicting loan defaults, choosing how much stock to order for a store branch, or flagging insurance fraud are pre-built and ready to run with only a bit of customization.
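Pathfinder's packaged routines are DataRobot's own, but as a rough illustration of what a task like loan-default prediction boils down to, here is a minimal scikit-learn sketch with invented column names and data; in practice, the "bit of customization" is largely mapping your own fields onto features like these.

```python
# Illustrative only: a bare-bones loan-default model in scikit-learn,
# not DataRobot's Pathfinder code. All column names and values are invented.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Tiny hypothetical training set: applicant features plus a default flag.
loans = pd.DataFrame({
    "income":       [52_000, 31_000, 78_000, 45_000, 60_000, 28_000, 95_000, 38_000],
    "loan_amount":  [10_000, 15_000,  8_000, 20_000, 12_000, 18_000,  5_000, 22_000],
    "credit_score": [690, 580, 740, 610, 700, 560, 780, 600],
    "defaulted":    [0, 1, 0, 1, 0, 1, 0, 1],
})

model = GradientBoostingClassifier(random_state=0)
model.fit(loans.drop(columns="defaulted"), loans["defaulted"])

# Score two new applications; the customization step is mostly mapping your
# own columns onto features like these.
new_apps = pd.DataFrame({
    "income": [40_000, 85_000],
    "loan_amount": [19_000, 6_000],
    "credit_score": [590, 760],
})
print(model.predict_proba(new_apps)[:, 1])  # estimated probability of default
```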

The new version will also offer more options for composition and deployment. At the same time, it can monitor decisions for potential bias and — perhaps — correct it.

To understand a bit more about this new release, we sat down with Nenshad Bardoliwalla, DataRobot’s senior vice president of product responsible for the new launch.

VentureBeat: Let's start out with the big picture. First, you're unfolding a big umbrella: DataRobot is creating one platform that unites experimentation with daily deployment on the front lines.

Nenshad Bardoliwalla: We like to talk about the fact that most of the AI projects that people do today are what we call experimental AI. They pull some data sets, they run a few experiments, but they never wind up deploying the model into production or making it part of their business process. There are a lot of failed projects in the wake of AI investments. We think that literally every opportunity in business can be an AI opportunity and that basically every person in a business can have the powers of a data scientist — if you build software that democratizes these capabilities.

So we're going to be announcing AI Cloud. It's a single system for accelerating the delivery of AI into production for every organization. We are a company that started in 2012, and so we've spent close to a decade and more than 1.5 million engineering hours bringing this platform to market. We also have a very interesting distinction: we're one of the few companies, if not the only one, that have helped other companies put many, many solutions into production. Many of the hundreds of DataRobot customers actually have successful AI and ML initiatives. The key tenet that we look for in what we believe is definitional for AI Cloud is you need a single platform for a wide diversity of different user types.

VentureBeat: So what does that mean for the user who wants to turn data into decisions?

Bardoliwalla: The idea behind a singular platform is that we want to make it as easy as possible for each of the constituent parts to flow from one end of the life cycle to the other. If you use our automated machine learning capabilities, then in a single click you can deploy that in our MLOps capability. There is an end-to-end life cycle. However, if you choose to use a different solution to build your models — let's say you're using an open source library to build your models — you can still deploy those models and get very good management for them in MLOps. But you won't have the one-click experience that you get from being part of the platform.
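DataRobot's MLOps APIs aren't reproduced here, but the following minimal Python sketch illustrates the general idea of bringing an externally built model (scikit-learn in this example) under management: wrap it so every prediction is logged for later drift and accuracy monitoring. The wrapper class and log format are invented for illustration.

```python
# Minimal sketch of an externally built model plus monitoring hooks; this is a
# generic illustration, not DataRobot's MLOps API.
import json, time
from sklearn.linear_model import LogisticRegression

class MonitoredModel:
    """Wraps any model exposing .predict() and logs each call for later review."""
    def __init__(self, model, log_path="predictions.log"):
        self.model = model
        self.log_path = log_path

    def predict(self, X):
        preds = self.model.predict(X)
        record = {"ts": time.time(),
                  "inputs": [list(row) for row in X],
                  "outputs": [int(p) for p in preds]}
        with open(self.log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return preds

# Model trained entirely outside the platform...
external_model = LogisticRegression().fit([[0.0], [1.0], [2.0], [3.0]], [0, 0, 1, 1])
# ...then placed under management so drift and accuracy can be tracked later.
managed = MonitoredModel(external_model)
print(managed.predict([[0.5], [2.5]]))
```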

VentureBeat: When I think of DataRobot, I think of a low-code tool that offers plenty of hand holding for a desktop system. How is that changing or growing?

Bardoliwalla: Historically, you're absolutely right that our primary user has been a citizen data scientist in that low-code, graphical user experience. We have made substantial investments, especially with this launch of 7.2 in AI Cloud, to close the gap so that people can actually use code as well. So with this release, we have three new capabilities that span the spectrum of different ways that coders can actually participate in the platform.

The first is cloud-hosted notebooks. … We believe that the world is polyglot. So from a programming language perspective, we have the ability to stitch together in a single notebook R code, Python code, SQL code, and even Scala, all in different paragraphs inside the notebook. Now you can use whichever purpose-built language is best for the task at hand.

VentureBeat: So that’s at the Notebook level. Can you go deeper?

Bardoliwalla: Yes! In our Automated Machine Learning product, we've introduced a capability called Composable ML, which allows you to go deeper. DataRobot's automation will generate a pipeline for you that has feature preprocessing steps as well as the specific algorithm. Because again, we want to mix the best of both humans as well as machines, we now allow you to take any of those blocks inside the platform and replace it with your own code.

So you can say, "Oh, I don't like the way DataRobot does one-hot encoding. I'm going to click on that block and upload my own R or Python code into the system, substituting it for something that DataRobot had pre-built."
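The mechanism for uploading code into Composable ML is DataRobot-specific, but as a sketch of the kind of drop-in block a user might substitute, here is a custom one-hot encoder written as a scikit-learn-style transformer; the class and its rare-category behavior are invented for illustration.

```python
# Sketch of a user-supplied preprocessing block: a custom one-hot encoder
# written as a scikit-learn transformer. How such code is uploaded into
# Composable ML is DataRobot-specific and not shown here.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class CappedOneHotEncoder(BaseEstimator, TransformerMixin):
    """One-hot encode a column of categories, lumping rare values into 'other'."""
    def __init__(self, min_count=2):
        self.min_count = min_count

    def fit(self, X, y=None):
        values, counts = np.unique(np.asarray(X).ravel(), return_counts=True)
        self.categories_ = [v for v, c in zip(values, counts) if c >= self.min_count]
        return self

    def transform(self, X):
        col = np.asarray(X).ravel()
        out = np.zeros((len(col), len(self.categories_) + 1), dtype=int)
        for i, v in enumerate(col):
            if v in self.categories_:
                out[i, self.categories_.index(v)] = 1
            else:
                out[i, -1] = 1  # 'other' bucket for rare or unseen categories
        return out

enc = CappedOneHotEncoder(min_count=2)
print(enc.fit_transform([["red"], ["blue"], ["red"], ["teal"]]))
```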

VentureBeat: And if you still want more control?

Bardoliwalla: We're introducing the DataRobot Pipelines product, which allows you to set up complex inference and training pipelines that, again, can span multiple languages from SQL to Python and stitch all of that together into a reproducible, high-fidelity pipeline environment. We've added that to the portfolio as part of AI Cloud and the 7.2 release. It's a big, big investment for us.
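The DataRobot Pipelines product itself isn't shown here, but the pattern it targets — a SQL step feeding a Python training step — can be sketched with stock sqlite3, pandas, and scikit-learn. The tables, features, and labels below are invented.

```python
# Sketch of the multi-language pattern DataRobot Pipelines targets: a SQL step
# feeding a Python training step. This uses stock sqlite3/pandas/scikit-learn,
# not the DataRobot Pipelines product itself.
import sqlite3
import pandas as pd
from sklearn.linear_model import LogisticRegression

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INT, amount REAL, returned INT);
    INSERT INTO orders VALUES (1, 20.0, 0), (1, 35.0, 0), (2, 15.0, 1),
                              (2, 40.0, 1), (3, 55.0, 0), (3, 12.0, 1);
""")

# Step 1 (SQL): aggregate raw rows into per-customer features.
features = pd.read_sql_query("""
    SELECT customer_id,
           AVG(amount)   AS avg_amount,
           SUM(returned) AS total_returns
    FROM orders GROUP BY customer_id
""", conn)

# Step 2 (Python): train a model on the SQL output.
labels = [0, 1, 0]  # hypothetical churn labels for customers 1..3
model = LogisticRegression().fit(features[["avg_amount", "total_returns"]], labels)
print(model.predict(features[["avg_amount", "total_returns"]]))
```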

VentureBeat: That’s all during development. Tell me your plans for working with the deployment — and building a feedback loop so your AI can learn from the deployed code.

Bardoliwalla: When the model is deployed, it's put on production-class infrastructure with a web service front end that allows you to send input data to the model deployment and call it to return predictions, right? But where it gets really interesting is that models actually can get stale over time. You train something today, but the world changes and the data changes with it.
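Calling such a deployment typically looks like an HTTP request against that web service front end. The sketch below is purely illustrative: the URL, auth header, and payload shape are placeholders, not DataRobot's actual prediction API.

```python
# Sketch of scoring against a deployed model's web service front end.
# The URL, header, and payload shape are placeholders, not DataRobot's real
# prediction API; replace DEPLOYMENT_URL with a real scoring endpoint to run.
import requests

DEPLOYMENT_URL = "https://example.invalid/deployments/1234/predict"  # placeholder

rows = [{"income": 40_000, "loan_amount": 19_000, "credit_score": 590}]
resp = requests.post(DEPLOYMENT_URL,
                     json={"rows": rows},
                     headers={"Authorization": "Bearer <token>"},  # placeholder
                     timeout=10)
print(resp.json())  # expected: a prediction for each submitted row
```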

So what we've introduced is this really powerful capability that we call Continuous AI, which is also part of this release. The idea is that when you deploy the model, you actually are monitoring all the different aspects of the model: Is there data drift, is the accuracy changing, is the service latency changing? And you can set thresholds for when, say, the model starts to produce poor predictions. Then, in Continuous AI, we will actually (and this again speaks to the platform story and the integration) go and automatically launch a new set of training routines so that we can find better models with the freshest data, models that can then be substituted into the MLOps deployment. So this life cycle of continuously improving the quality of your models, or at the very least maintaining a certain level of performance, is something that's unique to DataRobot.
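DataRobot's drift metrics and retraining triggers are internal to the platform, but the control flow Bardoliwalla describes can be sketched generically: compute a drift statistic on a monitored feature, compare it to a threshold, and kick off retraining when it trips. Everything below — the KS test, the threshold, the data — is an assumption for illustration.

```python
# Minimal sketch of the Continuous AI pattern: watch for data drift and, past a
# threshold, kick off retraining on fresh data. DataRobot's own drift metrics
# and retraining logic are internal; this only illustrates the control flow.
import numpy as np
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.05  # assumed threshold; tune per deployment policy

def retrain(fresh_X, fresh_y):
    print(f"Retraining on {len(fresh_X)} fresh rows...")  # stand-in for real training

rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, 1_000)    # feature as seen at training time
production_feature = rng.normal(0.8, 1.0, 1_000)  # same feature, drifted in production

result = ks_2samp(training_feature, production_feature)
if result.pvalue < DRIFT_P_VALUE:
    # Distribution shift detected: trigger the training routine on fresh data.
    retrain(production_feature.reshape(-1, 1), np.zeros(len(production_feature)))
```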

VentureBeat: This feedback can take many forms, right? I notice that you’re starting to talk about monitoring the AI for bias.

Bardoliwalla: So bias monitoring is really, really interesting. What we introduced last year is a capability that allows you, while training a model, to have the system look for potential bias. The data scientist end user can label protected classes of data, for example ethnic group or gender, inside their data set. Then DataRobot will go ahead and say, "You know what, depending on which fairness metric you use, let's say it's proportionality, we notice you are disproportionately favoring people in one ethnic group versus another."
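As a generic sketch of the proportionality-style check described here (not DataRobot's implementation), the snippet below compares favorable-outcome rates across a protected attribute and applies the common "80% rule" as an assumed threshold; the data is invented.

```python
# Generic sketch of a proportional-parity check across a protected attribute.
# Data and threshold are invented for illustration.
import pandas as pd

scored = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

rates = scored.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()  # favorable-outcome rate, worst group vs. best group
print(rates.to_dict(), "parity ratio:", round(ratio, 2))

if ratio < 0.8:  # the common "80% rule" threshold, used here as an assumption
    print("Model disproportionately favors one group over another.")
```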

VentureBeat: This becomes part of the business process, and the users can take notice and act upon it, right?

Bardoliwalla: Yes, we want to be able to do this while customers are actually in production, when they're actively getting new requests for predictions. They want to know if the model is starting to generate biased results. So in this release, we actually have the ability to monitor changes in the model behavior, where the model starts treating some populations unfairly.

The second we find that it crosses a certain threshold, we can start sending alerts to the deployment owner saying, "Hey, your models are not behaving in the way that you want them to, based on the policies and ethical regulations in your organization."
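The in-production version of that check can be sketched as a rolling window over recent predictions with an alert hook; the window size, threshold, and alert mechanism below are all assumptions, not DataRobot's implementation.

```python
# Sketch of in-production bias monitoring: track group-level outcome rates over
# a rolling window of recent predictions and alert when parity crosses a
# threshold. The alert hook is a placeholder for whatever notifies deployment owners.
from collections import deque

WINDOW = 500            # assumed size of the rolling window of predictions
PARITY_THRESHOLD = 0.8  # assumed policy threshold

recent = deque(maxlen=WINDOW)  # (protected_group, favorable_outcome) pairs

def send_alert(message):
    print("ALERT:", message)   # stand-in for email/pager/webhook integration

def check_parity():
    counts, favs = {}, {}
    for group, favorable in recent:
        counts[group] = counts.get(group, 0) + 1
        favs[group] = favs.get(group, 0) + favorable
    if len(counts) < 2 or max(favs.values()) == 0:
        return
    rates = {g: favs[g] / counts[g] for g in counts}
    ratio = min(rates.values()) / max(rates.values())
    if ratio < PARITY_THRESHOLD:
        send_alert(f"parity ratio {ratio:.2f} below {PARITY_THRESHOLD}: {rates}")

# Simulated traffic: group "B" starts receiving far fewer favorable outcomes.
for i in range(200):
    recent.append(("A", 1))
    recent.append(("B", 1 if i % 4 == 0 else 0))
check_parity()
```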



Author: Peter Wayner
Source: VentureBeat

