
How no-code AI development platforms could introduce model bias

AI deployment in the enterprise skyrocketed as the pandemic accelerated organizations’ digital transformation plans. Eighty-six percent of decision-makers told PricewaterhouseCoopers in a recent survey that AI is becoming a “mainstream technology” at their organization. A separate report by The AI Journal finds that most executives anticipate that AI will make business processes more efficient and help to create new business models and products.

The emergence of “no-code” AI development platforms is partly fueling that adoption. Designed to abstract away the programming typically required to create AI systems, no-code tools enable non-experts to develop machine learning models that can, for example, predict inventory demand or extract text from business documents. Given the growing data science talent shortage, use of no-code platforms is expected to climb in the coming years, with Gartner predicting that 65% of app development will be low-code/no-code by 2024.

But there are risks in abstracting away data science work — chief among them, making it easier to forget the flaws in the real systems underneath.

No-code development

No-code AI development platforms — which include DataRobot, Google AutoML, Lobe (which Microsoft acquired in 2018), and Amazon SageMaker, among others — vary in the types of tools that they offer to end-customers. But most provide drag-and-drop dashboards that allow users to upload or import data to train, retrain or fine-tune a model and automatically classify and normalize the data for training. They also typically automate model selection by finding the “best” model based on the data and predictions required, tasks that would normally be performed by a data scientist.

Using a no-code AI platform, a user could upload a spreadsheet of data into the interface, make selections from a menu, and kick off the model creation process. The tool would then create a model that could spot patterns in text, audio or images, depending on its capabilities — for example, analyzing sales notes and transcripts alongside marketing data in an organization.
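To make that abstraction concrete, here is a rough sketch of the kind of automated model selection such a platform might run behind its menus, written with scikit-learn. The file name and column names are hypothetical placeholders, not any vendor's actual pipeline.

```python
# Rough sketch of automated model selection, the step a no-code platform
# hides behind a drag-and-drop interface. "sales.csv" and the column
# names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("sales.csv")                      # the "uploaded spreadsheet"
X, y = df.drop(columns=["converted"]), df["converted"]

candidates = {
    "logistic_regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score each candidate with cross-validation and keep the best performer --
# the choice a data scientist would normally scrutinize rather than accept.
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "-> selected:", best)
```

Nothing in that loop checks whether the training data is representative or the problem framing is sound; the highest cross-validation score simply wins.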

No-code development tools offer ostensible advantages in their accessibility, usability, speed, cost and scalability. But Mike Cook, an AI researcher at Queen Mary University of London, notes that while most platforms imply that customers are responsible for any errors in their models, the tools can cause people to de-emphasize the important tasks of debugging and auditing the models.

“[O]ne point of concern with these tools is that, like everything to do with the AI boom, they look and sound serious, official and safe. So if [they tell] you [that] you’ve improved your predictive accuracy by 20% with this new model, you might not be inclined to ask why unless [they tell] you,” Cook told VentureBeat via email. “That’s not to say you’re more likely to create biased models, but you might be less likely to realize or go looking for them, which is probably important.”

It’s what’s known as automation bias — the propensity for people to trust the output of automated decision-making systems. Too much transparency about a machine learning model and people — particularly non-experts — become overwhelmed, as a 2018 Microsoft Research study found. Too little, however, and people make incorrect assumptions about the model, which gives them a false sense of confidence. A 2020 paper from the University of Michigan and Microsoft Research showed that even experts tend to over-trust and misread overviews of models presented via charts and data plots — regardless of whether the visualizations make mathematical sense.

The problem can be particularly acute in computer vision, the field of AI that deals with algorithms trained to “see” and understand patterns in the real world. Computer vision models are extremely susceptible to bias — even variations in background scenery can affect model accuracy, as can the varying specifications of camera models. If trained with an imbalanced dataset, computer vision models can disfavor darker-skinned individuals and people from particular regions of the world.
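Whether or not a no-code tool surfaces it, this kind of skew can be checked by breaking accuracy out per subgroup. A minimal illustration, with invented data and column names chosen purely for the example:

```python
# Minimal check for imbalance-driven skew: compare accuracy per subgroup.
# The data and column names below are invented for illustration.
import pandas as pd

results = pd.DataFrame({
    "skin_tone":  ["light", "light", "light", "dark", "dark", "dark"],
    "label":      [1, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 1, 0, 1, 1],
})

per_group_accuracy = (
    results.assign(correct=results["label"] == results["prediction"])
           .groupby("skin_tone")["correct"]
           .mean()
)
print(per_group_accuracy)  # a large gap between groups is a red flag worth auditing
```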

Experts likewise attribute many errors in facial recognition, language, and speech recognition systems to flaws in the datasets used to develop the models. Natural language models — which are often trained on posts from Reddit — have been shown to exhibit prejudices along racial, ethnic, religious, and gender lines, associating Black people with more negative emotions and struggling with “Black-aligned English.”

“I don’t think the specific way [no-code AI development tools] work makes biased models more likely per se. [A] lot of what they do is just jiggle around system specs and test new model architectures, and technically we might argue that their primary user is someone who should know better. But [they] create extra distance between the scientist and the subject, and that can often be dangerous,” Cook continued.

The vendor perspective

Vendors, unsurprisingly, feel differently. Jonathon Reilly, cofounder of no-code AI platform Akkio, says that anyone creating a model should “understand that their predictions will only be as good as their data.” While he concedes that AI development platforms have a responsibility to educate users about how models make decisions, he places the onus on users to understand the nature of bias, data, and data modeling.

“Eliminating bias in model output is best done by modifying the training data — ignoring certain inputs — so the model does not learn unwanted patterns in the underlying data. The best person to understand the patterns and when they should be included or excluded is typically a subject-matter expert — and it is rarely the data scientist,” Reilly told VentureBeat via email. “To suggest that data bias is a shortcoming of no-code platforms is like suggesting that bad writing is a shortcoming of word processing platforms.”
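Reilly’s point about “ignoring certain inputs” can be illustrated with a short sketch: dropping a sensitive column is the easy part, but a subject-matter expert would also want to know whether the remaining features act as proxies for it. The dataset and column names here are hypothetical.

```python
# Sketch: exclude sensitive inputs from training data, then check whether
# the remaining features can still predict them (i.e., act as proxies).
# "loan_applications.csv" and all column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("loan_applications.csv")
sensitive_cols = ["gender", "ethnicity"]      # inputs the domain expert excludes
target_col = "approved"

X = pd.get_dummies(df.drop(columns=sensitive_cols + [target_col]))
y = df[target_col]

for col in sensitive_cols:
    proxy_score = cross_val_score(
        RandomForestClassifier(n_estimators=100, random_state=0), X, df[col], cv=3
    ).mean()
    # High accuracy here means dropping the column alone did not remove the signal.
    print(f"{col}: recoverable from remaining features with accuracy {proxy_score:.2f}")
```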

Bill Kish, founder of no-code computer vision startup Cogniac, similarly believes that bias, in particular, is a dataset problem rather than a tooling problem. Bias, he says, is a reflection of “existing human imperfection” that platforms can mitigate but aren’t responsible for fully eliminating.

“The problem of bias in computer vision systems is due to the bias in the ‘ground truth’ data as curated by humans. Our system mitigates this through a process where uncertain data is reviewed by multiple people to establish ‘consensus,’” Kish told VentureBeat via email. “[Cogniac] acts as a system of record for managing visual data assets, [showing] … the provenance of all data and annotations [and] ensuring the biases inherent in the data are visually surfaced, so they can be addressed through human interaction.”
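Cogniac hasn’t published the details of that consensus step, but the general idea is straightforward to sketch: keep labels that a clear majority of reviewers agree on and route ambiguous items back for human review. The annotation data below is invented for illustration.

```python
# Sketch of consensus labeling: accept labels with strong reviewer agreement,
# flag the rest for further human review. Annotation data is invented.
from collections import Counter

annotations = {
    "img_001": ["defect", "defect", "defect"],
    "img_002": ["defect", "ok", "defect"],
    "img_003": ["ok", "defect", "unsure"],
}

AGREEMENT_THRESHOLD = 2 / 3

consensus, needs_review = {}, []
for item, labels in annotations.items():
    label, count = Counter(labels).most_common(1)[0]
    if count / len(labels) >= AGREEMENT_THRESHOLD:
        consensus[item] = label
    else:
        needs_review.append(item)

print("consensus:", consensus)                 # img_001, img_002
print("sent back for review:", needs_review)   # img_003
```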

It might be unfair to place the burden of dataset creation on no-code tools, considering users often bring their own datasets. But as Cook points out, some platforms specialize in automatically processing and harvesting data, which could cause the same problem of making users overlook data quality issues. “It’s not cut and dry, necessarily, but given how bad people already are at building models, anything that lets them do it in less time and with less thought is probably going to lead to more errors,” he said.

Then there’s the fact that model biases don’t only arise from training datasets. As a 2019 MIT Tech Review piece lays out, companies might frame the problem that they’re trying to solve with AI (e.g., assessing creditworthiness) in a way that doesn’t factor in the potential for fairness or discrimination. They — or the no-code AI platform they’re using — might also introduce bias during the data preparation or model selection stages, impacting prediction accuracy.

Of course, users can always probe for bias in various no-code AI development platforms themselves, based on their relative performance on public datasets like Common Crawl. And no-code platforms claim to address the problem of bias in different ways. For example, DataRobot has a “humility” setting that essentially lets users tell a model that if its predictions sound too good to be true, they probably are. “Humility” instructs the model to either alert a user or take corrective action, such as overwriting its predictions with an upper or lower bound, if those predictions land outside certain bounds.
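DataRobot’s exact implementation isn’t described here, but the general guardrail pattern — flagging predictions that fall outside a plausible range and clamping them to a bound — can be sketched as a thin wrapper around any fitted model. The class name, bounds, and wrapped model below are illustrative placeholders, not DataRobot’s API.

```python
# Sketch of a "humility"-style guardrail: warn when predictions fall outside
# a plausible range and clamp them to the bound. The class name, bounds and
# wrapped model are illustrative placeholders, not DataRobot's API.
import numpy as np

class GuardedRegressor:
    def __init__(self, model, lower, upper):
        self.model, self.lower, self.upper = model, lower, upper

    def predict(self, X):
        raw = np.asarray(self.model.predict(X), dtype=float)
        out_of_bounds = (raw < self.lower) | (raw > self.upper)
        if out_of_bounds.any():
            print(f"warning: {out_of_bounds.sum()} prediction(s) outside "
                  f"[{self.lower}, {self.upper}]; clamping")
        return np.clip(raw, self.lower, self.upper)

# Usage with any fitted scikit-learn-style regressor (placeholders):
# guarded = GuardedRegressor(fitted_model, lower=0, upper=10_000)
# predictions = guarded.predict(X_new)
```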

There’s a limit to what these debiasing tools and techniques can accomplish, however. And without an awareness of the potential — and the reasons — for bias, the chances that problems crop up in models increase.

Reilly believes that the right path for vendors is improving education, transparency and accessibility while pushing for clear regulatory frameworks. Businesses using AI models should be able to easily point to how a model makes its decisions with backing proof from the AI development platform, he says — and feel confident in the ethical and legal implications of their use.

“How good a model needs to be to have value is very much dependent on the problem the model is trying to solve,” Reilly added. “You don’t need to be a data scientist to understand the patterns in the data the model is using for decision-making.”



Author: Kyle Wiggers
Source: Venturebeat
