
How to manage risk as AI spreads throughout your organization



As AI spreads throughout the enterprise, organizations are having a difficult time balancing the benefits against the risks. AI is already baked into a range of tools, from IT infrastructure management to DevOps software to CRM suites, but most of those tools were adopted without an AI risk-mitigation strategy in place. 

Of course, it’s important to remember that the list of potential AI benefits is every bit as long as the list of risks, which is why so many organizations skimp on risk assessments in the first place.

Many organizations have already made serious breakthroughs that wouldn’t have been possible without AI. For instance, AI is being deployed throughout the health-care industry for everything from robot-assisted surgery to reduced drug dosage errors to streamlined administrative workflows. GE Aviation relies on AI to build digital models that better predict when parts will fail, and of course, there are numerous ways AI is being used to save money, such as having conversational AI take drive-thru restaurant orders.

That’s the good side of AI.


Now, let’s take a look at the bad and ugly. 

The bad and ugly of AI: bias, safety issues, and robot wars

AI risks are as varied as the many use cases its proponents hype, but three areas have proven to be particularly worrisome: bias, safety, and war. Let’s look at each of these problems separately. 

Bias

While HR departments originally thought AI could be used to eliminate bias in hiring, the opposite has occurred. Models trained on data with implicit bias baked in end up actively discriminating against women and minorities.

For instance, Amazon had to scrap its AI-powered automated résumé screener because it filtered out female candidates. Similarly, when Microsoft used tweets to train a chatbot to interact with Twitter users, they created a monster. As a CBS News headline put it, “Microsoft shuts down AI chatbot after it turned into a Nazi.” 

These problems may seem inevitable in hindsight, but if market leaders like Microsoft and Google can make these mistakes, so can your business. In Amazon’s case, the AI had been trained on résumés that came overwhelmingly from male applicants. The one positive thing you can say about Microsoft’s chatbot experiment is that at least they didn’t use 8chan to train it. Spend five minutes swimming through the toxicity of Twitter and you’ll understand what a terrible idea it was to use that data set to train anything.

Safety issues

Uber, Toyota, GM, Google, and Tesla, among others, have been racing to make fleets of self-driving vehicles a reality. Unfortunately, the more researchers experiment with self-driving cars, the further the vision of full autonomy recedes into the distance.

In 2016, the first death caused by a self-driving car occurred in Florida. According to the National Highway Traffic Safety Administration, a Tesla in Autopilot mode failed to stop for a tractor trailer making a left turn at an intersection. The Tesla crashed into the big rig, killing the driver.

This is just one of a long list of errors made by autonomous vehicles. Uber’s self-driving cars didn’t realize that pedestrians could jaywalk. A Google-powered Lexus sideswiped a municipal bus in Silicon Valley, and in April a partially autonomous TuSimple semi-truck swerved into a concrete center divider on I-10 near Tucson, Arizona, because the driver hadn’t properly rebooted the autonomous driving system, causing the truck to follow outdated commands.

In fact, federal regulators report that self-driving cars were involved in nearly 400 accidents on U.S. roadways in less than a year (from July 1, 2021 to May 15, 2022). Six people died in those 392 accidents and five were seriously injured. 

Fog of war

If self-driving vehicle crashes aren’t enough of a safety concern, consider autonomous warcraft. 

Autonomous drones powered by AI are now making life-and-death decisions on the battlefield, and the risks associated with possible mistakes are complex and contentious. According to a United Nations report, in 2020 an autonomous Turkish-built quadcopter decided to attack retreating Libyan fighters without any human intervention.

Militaries around the world are considering a range of applications for autonomous vehicles, from fighting to naval transport to flying in formation with piloted fighter jets. Even when not actively hunting the enemy, autonomous military vehicles could still make any number of deadly mistakes similar to those of self-driving cars.

7 steps to mitigate AI risks throughout the enterprise

For the typical business, the risks won’t be as frightening as killer drones, but even a simple mistake that causes a product failure or exposes you to lawsuits could drive you into the red.

To better mitigate risks as AI spreads throughout your organization, consider these 7 steps: 

Start with early adopters

First, look at the places where AI has already gained a foothold. Find out what is working and build on that foundation. From this, you can develop a basic roll-out template that various departments can follow. However, bear in mind that whatever AI adoption plans and roll-out templates you develop will need to gain buy-in throughout the organization in order to be effective. 

Locate the proper beachhead

Most organizations will want to start small with their AI strategy, piloting the plan in a department or two. The logical place to start is where risk is already a top concern, such as Governance, Risk, and Compliance (GRC) and Regulatory Change Management (RCM).

GRC is essential for understanding the many threats to your business in a hyper-competitive market, and RCM is essential for keeping your organization on the right side of the many laws you must follow across multiple jurisdictions. Both practices also involve manual, labor-intensive, and ever-shifting processes.

With GRC, AI can handle such tricky tasks as starting the process of defining hazy concepts like “risk culture,” or it can be used to gather publicly available data from competitors that will help direct new product development in a way that does not violate copyright laws. 

In RCM, handing off work like tracking regulatory changes and monitoring the daily onslaught of enforcement actions can give your compliance experts as much as a third of their workdays back for higher-value tasks.
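As a rough illustration of what that hand-off can look like, here is a minimal sketch that triages a feed of enforcement notices so only likely-relevant items reach a compliance expert. The feed format, keyword list, and threshold are illustrative assumptions, not a description of any particular vendor’s product.

```python
# Minimal sketch: triage a daily feed of regulatory notices so compliance
# experts only review items likely to be relevant. The Notice structure,
# watched terms, and threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Notice:
    title: str
    body: str

# Hypothetical terms tied to this organization's obligations.
RELEVANT_TERMS = {"data privacy", "anti-money laundering", "consumer lending"}

def relevance_score(notice: Notice) -> int:
    """Count how many watched terms appear in the notice."""
    text = f"{notice.title} {notice.body}".lower()
    return sum(term in text for term in RELEVANT_TERMS)

def triage(feed: list[Notice], threshold: int = 1) -> list[Notice]:
    """Return only the notices that clear the relevance threshold."""
    return [n for n in feed if relevance_score(n) >= threshold]

if __name__ == "__main__":
    feed = [
        Notice("Enforcement action on consumer lending disclosures", "..."),
        Notice("Agency announces new office hours", "..."),
    ]
    for notice in triage(feed):
        print("Review:", notice.title)
```

In practice a trained classifier would replace the keyword match, but even a crude filter like this shows where the time savings come from: experts read the flagged items rather than the whole feed.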

Map processes with experts

AI can only follow processes that you are able to map in detail. If AI will impact a particular role, make sure those stakeholders are involved in the planning stages. Too often, developers plow ahead without enough input from the end users who will either adopt or reject these tools. 

Focus on workflows and processes that hold experts back

Look for processes that are repetitive, manual, error-prone, and probably tedious to the humans performing them. Logistics, sales and marketing, and R&D are all areas that include repetitive chores that can be handed over to AI. AI can improve business outcomes in these areas by improving efficiencies and reducing errors. 

Thoroughly vet your datasets

University of Cambridge researchers recently studied 400 COVID-19-related AI models and found that every one of them had fatal flaws. The flaws fell into two general categories: models trained on data sets too small to be valid, and models with limited information disclosure, which led to various biases.

Small data sets aren’t the only data problem that can throw off models. Public data sets may come from invalid sources. For instance, last year Zillow began using its Zestimate algorithm to make cash offers for homes in a fraction of the time such offers usually take. The algorithm ended up making thousands of above-market offers based on flawed Home Mortgage Disclosure Act data, which eventually prompted Zillow to offer a million-dollar prize for improving the model.
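To make vetting concrete, here is a minimal sketch of the kind of pre-training sanity checks the Cambridge findings point toward: flagging data sets that are too small, heavily missing, duplicated, or badly skewed. The thresholds and the “label” column name are illustrative assumptions, not a standard.

```python
# Minimal sketch of pre-training dataset checks inspired by the failure modes
# above: too-small samples and undocumented skew. Thresholds and the "label"
# column name are illustrative assumptions.
import pandas as pd

def vet_dataset(df: pd.DataFrame, label_col: str = "label",
                min_rows: int = 10_000, max_missing: float = 0.05) -> list[str]:
    """Return a list of warnings; an empty list means the basic checks passed."""
    warnings = []
    if len(df) < min_rows:
        warnings.append(f"Only {len(df)} rows; below the {min_rows} minimum.")
    worst_missing = df.isna().mean().max()
    if worst_missing > max_missing:
        warnings.append(f"A column is {worst_missing:.0%} missing values.")
    dupes = df.duplicated().sum()
    if dupes:
        warnings.append(f"{dupes} duplicate rows found.")
    class_shares = df[label_col].value_counts(normalize=True)
    if class_shares.min() < 0.10:
        warnings.append(f"Rarest class is only {class_shares.min():.0%} of the data.")
    return warnings
```

Checks like these won’t catch every bias, but they force the team to state its assumptions about sample size and balance before a model ever trains.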

Pick the right AI model

As AI models evolve, only a small subset of them are fully autonomous. In most cases, AI models benefit greatly from active human (or better, expert) input. “Supervised AI” relies on humans to guide machine learning, rather than letting the algorithms figure out everything on their own.

For most knowledge work, supervised AI will be required to meet your goals. For complicated, specialized work, however, supervised AI still doesn’t get you as far as most organizations would like to go. To level up and unlock the true value of your data, AI needs not just supervision, but expert input. 

The Expert-in-the-Loop (EITL) model can be used to tackle big problems or those that require specialized human judgment. For instance, EITL AI has been used to discover new polymers, improve aircraft safety, and even to help law enforcement plan for how to cope with autonomous vehicles.
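As a rough sketch of how an EITL workflow might be wired up, the snippet below routes low-confidence predictions to an expert review queue and lets only high-confidence ones through automatically. The 0.9 threshold and the queue structure are assumptions for illustration, not a definitive implementation.

```python
# Minimal sketch of an expert-in-the-loop routing rule: the model handles
# high-confidence cases and defers the rest to a human expert. The threshold
# and queue are illustrative assumptions.
from typing import Optional

def route(prediction: str, confidence: float,
          expert_queue: list, threshold: float = 0.9) -> Optional[str]:
    """Auto-accept confident predictions; queue everything else for review."""
    if confidence >= threshold:
        return prediction          # machine decision stands
    expert_queue.append((prediction, confidence))
    return None                    # awaiting expert judgment

queue: list = []
print(route("approve", 0.97, queue))   # -> "approve"
print(route("approve", 0.62, queue))   # -> None; item goes to the expert queue
```

The design choice is simply that the model never has the final word on cases it is unsure about; the expert does.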

Start small but dream big

Thoroughly test AI-driven processes and continue to vet them over time. Once you have worked out the kinks, you will have a plan for extending AI throughout your organization, based on a template already tested and proven in specific areas such as GRC and RCM.

Kayvan Alikhani is cofounder and chief product officer at Compliance.ai. Kayvan previously led the Identity Strategy team at RSA and was the cofounder and CEO of PassBan (acquired by RSA).



Author: Kayvan Alikhani, Compliance.ai
Source: Venturebeat

