
DataRobot exec talks ‘humble’ AI, regulation



Organizations of all sizes have accelerated the rate at which they employ AI models to advance digital business transformation initiatives. But in the absence of any clear-cut regulations, many of these organizations don’t know with any certainty whether those AI models will one day run afoul of new AI regulations.

Ted Kwartler, vice president of Trusted AI at DataRobot, talked with VentureBeat about why it’s critical for AI models to make predictions “humbly” to make sure they don’t drift or, one day, potentially run afoul of government regulations.

This interview has been edited for brevity and clarity.

VentureBeat: Why do we need AI to be humble?

Ted Kwartler: An algorithm needs to demonstrate humility when it’s making a prediction. If I’m classifying an ad banner, a 50% probability is very different from a 99% probability, and it’s that middle range that matters. Normally you have one single cutoff threshold: above that line you get one outcome, below it you get another. In reality, we’re saying there’s a space in between where you apply some caveats, so a human has to go review it. We call that humble AI, in the sense that the algorithm is demonstrating humility when it’s making that prediction.
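Kwartler doesn’t walk through an implementation, but the two-threshold idea he describes maps to a short sketch in Python. The function name, labels, and threshold values below are illustrative assumptions, not DataRobot’s product behavior:

    # Sketch of the "humble AI" idea: instead of one cutoff, use two
    # thresholds and route the uncertain middle band to a human reviewer.
    # Threshold values and labels are illustrative assumptions.

    def humble_predict(probability, lower=0.35, upper=0.80):
        """Automate only the confident predictions; defer the rest."""
        if probability >= upper:
            return "approve"        # confident positive: act automatically
        if probability <= lower:
            return "reject"         # confident negative: act automatically
        return "human_review"       # uncertain middle band: defer to a person

    # A 0.55 score falls in the middle band, so it is flagged for review.
    print(humble_predict(0.55))     # -> "human_review"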

VentureBeat: Do you think organizations appreciate the need for humble AI?

Kwartler: I think organizations are waking up. They’re becoming much more sophisticated in their forethought around brand and reputational risk. These tools have an ability to amplify. The team that I help lead is really focused on what we call applied AI ethics, where we help educate our clients about thinking through the impacts, not just the math. Senior leaders maybe don’t understand the math. Maybe they don’t understand the implementation. But they definitely understand the implications at the strategic level, so I do think it’s an emerging field. I think senior leaders are starting to recognize that there’s more reputational and brand risk.

VentureBeat: Do you think government regulatory agencies are starting to figure this out as well? And if so, what are the concerns?

Kwartler: It’s interesting. If you’ve read the Algorithmic Accountability Act, it’s written very broadly. That’s tough because you have an evolving technological landscape. And there are thresholds in that bill, around $50 million in revenue, that require an impact assessment if your algorithm is going to impact people. I like the idea of high-risk use cases being clearly defined. That’s a little prescriptive, but in a good way. I also like that it’s collaborative, because this is an evolving space. You want this stuff to be aligned with societal values, not to build tools of oppression. At the same time, you can’t just clamp it all down, because it has driven economic progress. We all benefit from AI technology. It’s a balancing act.

VentureBeat: Do business executives have a vested interest in encouraging governments to define the AI rules sooner than later?

Kwartler: Ambiguity is the tough spot. Organizations are willing to write impact assessments. They’re willing to get third-party audits of the models they have in production. They’re willing to have different monitoring tools in place. A lot of monitoring and model risk management already exists, just not for AI, so there are mechanisms by which this can happen. As the technology and use cases evolve, how do you then adjust what constitutes high risk? There is a need to balance economic prosperity with the guardrails that govern it.

VentureBeat: What do you make of the European Union’s efforts to regulate AI?

Kwartler: I think next-generation technologists welcome that collaboration. It gives us a path forward. The one thing I really liked about it is that it didn’t seem overreaching. It seemed like it was balancing prosperity with security. It seemed like it was trying to be prescriptive enough about high-risk use cases. It seemed like a very reasoned approach. It wasn’t slamming the door and saying “no more AI.” Slamming the door would just leave AI development to governments and organizations that operate in the dark. You don’t want that either.

VentureBeat: Do you think the U.S. will move in a similar direction?

Kwartler: We will interpret it for our own needs. That’s what we’ve done in the past. In the end, we will have some form of regulation. I think that we can envision a world where some sort of model auditing is a real feature.

VentureBeat: That would be a preferable alternative to 50 different states attempting to regulate AI?

Kwartler: Yes. There are even regulations coming out in New York City itself. There are regulations in California and Washington that by themselves can effectively set the rules for the whole country. I would be in favor of anything that helps clear up ambiguity so that the whole industry can move forward.

VentureBeat: Do you think there’s going to be an entire school of law built around AI regulations and enforcement?

Kwartler: I suspect that there’s an opportunity for good regulation to really help as a protective measure. I’m certainly no legal expert, so I wouldn’t know whether there’s going to be ambulance chasing or not. I do think there is existing precedent for good regulation protecting companies. Once that regulation is in place, you remove the ambiguity. That’s a safer space for organizations that want to do good in the world using this technology.

VentureBeat: Do we have the tools needed to monitor AI?

Kwartler: I would say the technology for monitoring models, along with the mathematical measures of algorithmic bias, already exists. You can also apply algorithms to characterize data and run data quality checks. And you can apply heuristics once the model is in production and making predictions to mitigate biases or risks. Algorithms, heuristics, and mathematical measures can be used throughout that workflow.
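The interview doesn’t specify what those data quality checks look like, but they can be as simple as the Python sketch below; the column names, the 5% missingness cutoff, and the expected ranges are all hypothetical:

    import pandas as pd

    # Illustrative data-quality check: flag columns with heavy missingness or
    # values outside an expected range before the data reaches a model.
    def quality_report(df, expected_ranges):
        issues = []
        for col in df.columns:
            missing = df[col].isna().mean()
            if missing > 0.05:
                issues.append(f"{col}: {missing:.0%} missing values")
            if col in expected_ranges:
                lo, hi = expected_ranges[col]
                out_of_range = ((df[col] < lo) | (df[col] > hi)).mean()
                if out_of_range > 0:
                    issues.append(f"{col}: {out_of_range:.0%} of rows outside [{lo}, {hi}]")
        return issues

    frame = pd.DataFrame({"age": [25, 32, None, 210],
                          "income": [40_000, 52_000, 61_000, -5]})
    print(quality_report(frame, {"age": (18, 100), "income": (0, 1_000_000)}))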

VentureBeat: Bias may not be a one-time event. Do we need to continuously evaluate the AI model for bias as new data becomes available? Do we need some sort of set of best practices for evaluating these AI models?

Kwartler: As soon as you build a model, no matter what, it’s going to be wrong. The pandemic has also shown us the input data that you use to train the model does not always equate to the real world. The truth of the matter is that data actually drifts. And once it’s in production, you have data drift, or in the case of language, you have what’s called a concept drift. I do think that there’s a real gap right now. Our AI executive survey showed a very small number of organizations were actually monitoring models in production. I think that is a huge opportunity to help inform these guardrails to get the right behavior. I think the community’s focused a lot on the model behavior, when I think we need to migrate to monitoring and MLOps (machine learning operations) to engender trust way downstream and be less technical.
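As one concrete example of the production monitoring Kwartler describes, a common drift check compares a feature’s training distribution against what the model sees in production. The population stability index (PSI) below, along with the bin count and the 0.2 alert threshold, are conventional rules of thumb rather than anything specific to DataRobot:

    import numpy as np

    # Population stability index (PSI): a common data-drift check comparing a
    # feature's training distribution to its production distribution.
    def population_stability_index(train_values, prod_values, bins=10):
        edges = np.histogram_bin_edges(train_values, bins=bins)
        train_counts, _ = np.histogram(train_values, bins=edges)
        prod_counts, _ = np.histogram(prod_values, bins=edges)
        train_pct = np.clip(train_counts / train_counts.sum(), 1e-6, None)
        prod_pct = np.clip(prod_counts / prod_counts.sum(), 1e-6, None)
        return float(np.sum((prod_pct - train_pct) * np.log(prod_pct / train_pct)))

    rng = np.random.default_rng(0)
    training_data = rng.normal(0.0, 1.0, 10_000)    # feature as seen at training time
    production_data = rng.normal(0.4, 1.2, 10_000)  # same feature, drifted in production
    psi = population_stability_index(training_data, production_data)
    if psi > 0.2:   # common heuristic for "significant drift": investigate or retrain
        print(f"Drift alert: PSI = {psi:.2f}")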

VentureBeat: Do you think there is a danger a business will evaluate AI models based on their optimal result and then simply work backward to force that outcome?

Kwartler: In terms of model evaluation, I think that’s where a good collaboration on AI regulation can come in and say, for instance, if you’re working in hiring, you need to use statistical parity to make sure you have equal representation across protected classes. That’s a very specific, targeted metric. I think that’s where we need to go: organizations should be held to a common, mandated benchmark. We want this type of speed and this type of accuracy, but how does the model deal with outliers? How does it deal with the set of design choices currently left to the data scientist as the expert? Let’s bring more people to the table.
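Statistical parity itself is straightforward to compute: compare the rate of favorable outcomes across groups. The sketch below uses made-up group labels and the conventional four-fifths rule as the review trigger; neither comes from the interview or from any particular regulation:

    import numpy as np

    # Statistical parity check: compare favorable-outcome rates across groups.
    def selection_rates(decisions, groups):
        decisions = np.asarray(decisions)
        groups = np.asarray(groups)
        return {g: decisions[groups == g].mean() for g in np.unique(groups)}

    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = advanced to interview
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    rates = selection_rates(decisions, groups)
    parity_ratio = min(rates.values()) / max(rates.values())
    print(rates, f"parity ratio = {parity_ratio:.2f}")
    if parity_ratio < 0.8:   # four-fifths rule of thumb for disparate impact
        print("Selection rates differ enough to warrant review")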

VentureBeat: We hear a lot about AutoML frameworks being employed to make AI more accessible to end users. What is the role of data scientists in an organization that adopts AutoML?

Kwartler: I’m very biased in the sense that I have an MBA and learned data science. I believe that data scientists operating in a silo don’t deliver the value, because their speed to market with a model is much slower than if you do it with AutoML. Data scientists don’t always see the real desire of the business person trying to sponsor the project. They’ll want to build the model and optimize it to six decimal places, when in reality that makes no difference unless you’re at some massive scale. I’m a firm believer in AutoML because it allows the data scientist doing a forecast for a call center to go sit with a call center agent and learn from them. I tell data scientists to go see where the data is actually being made. You’ll see all sorts of data integrity issues that will inform your model. That’s harder to do when it takes six months to build a bespoke model. If I can use AutoML to speed up the velocity to value, then I have the luxury to go deeper into the weeds.

VentureBeat: AI adoption is still relatively slow. Is AutoML about to speed things up?

Kwartler: I’ve worked in large, glacial companies. They move slower. The models themselves took maybe a year or more to move into production. I would say there is going to be data drift if it takes six months to build the model and six months to implement it. The model is not going to be as accurate as I think it is. We need to increase that velocity to democratize it for people who are closer to the business problem.



Author: Michael Vizard
Source: VentureBeat
