How companies can avoid ethical pitfalls when building AI products

Across industries, businesses are expanding their use of artificial intelligence (AI) systems. AI isn’t just for the tech giants like Meta and Google anymore; logistics firms leverage AI to streamline operations, advertisers use AI to target specific markets and even your online bank uses AI to power its automated customer service experience. For these companies, dealing with ethical risks and operational challenges related to AI is inevitable – but how should they prepare to face them?

Poorly executed AI products can violate individual privacy and, in the extreme, even weaken our social and political systems. In the U.S., an algorithm used to predict the likelihood of future crime was revealed to be biased against Black Americans, reinforcing racially discriminatory practices in the criminal justice system.

To avoid dangerous ethical pitfalls, any company looking to launch its own AI products must integrate its data science teams with business leaders who are trained to think broadly about how those products interact with the larger business and mission. Moving forward, firms must treat AI ethics as a strategic business issue at the core of a project, not as an afterthought.

When assessing the different ethical, logistical and legal challenges around AI, it often helps to break down a product’s lifecycle into three phases: pre-deployment, initial launch, and post-deployment monitoring.

Pre-deployment

In the pre-deployment phase, the most crucial question to ask is: do we need AI to solve this problem? Even in today’s “big-data” world, a non-AI solution can be far more effective and cheaper in the long run.

If an AI solution is the best choice, pre-deployment is the time to think through data acquisition. AI is only as good as the datasets used to train it. How will we get our data? Will data be obtained directly from customers or from a third party? How do we ensure it was obtained ethically?

While it’s tempting to sidestep these questions, the business team must consider whether its data acquisition process allows for informed consent or breaches users’ reasonable expectations of privacy. The team’s decisions can make or break a firm’s reputation. Case in point: when the Ever app was found to be collecting data without properly informing users, the FTC forced the company to delete its algorithms and data.

Informed consent and privacy are also intertwined with a firm’s legal obligations. How should we respond if domestic law enforcement requests access to sensitive user data? What if it’s international law enforcement? Some firms, like Apple and Meta, deliberately design their systems with encryption so the company cannot access a user’s private data or messages. Other firms carefully design their data acquisition process so that they never have sensitive data in the first place.

Beyond informed consent, how will we ensure the acquired data are suitably representative of the target users? Data that underrepresent marginalized populations can yield AI systems that perpetuate systemic bias. For example, facial recognition technology has repeatedly been shown to exhibit bias along race and gender lines, largely because the data used to train it are not suitably diverse.
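
To make this concrete, a team might compare the demographic composition of its training data against the population the product is meant to serve. The sketch below is a minimal illustration in Python; the group names, target shares, and 5% tolerance are assumptions for the example, not figures from any real product:

```python
import pandas as pd

# Hypothetical target-population shares (illustrative numbers only);
# in practice these would come from census or market-research data.
TARGET_SHARES = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

def representation_gaps(df, col, target, tol=0.05):
    """Flag groups whose share of the training data deviates from the
    target population by more than `tol`."""
    observed = df[col].value_counts(normalize=True)
    gaps = {}
    for group, expected in target.items():
        actual = float(observed.get(group, 0.0))
        if abs(actual - expected) > tol:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# Toy dataset that heavily overrepresents group_a:
df = pd.DataFrame(
    {"demographic": ["group_a"] * 80 + ["group_b"] * 15 + ["group_c"] * 5}
)
print(representation_gaps(df, "demographic", TARGET_SHARES))
# Flags all three groups as over- or underrepresented relative to target.
```

A check like this won’t make a dataset ethical by itself, but it turns “is our data representative?” from a vague worry into a question the team can answer and track.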

Initial launch

There are two crucial tasks in the next phase of an AI product’s lifecycle. First, assess whether there’s a gap between what the product is intended to do and what it’s actually doing. If actual performance doesn’t match your expectations, find out why. Whether the initial training data was insufficient or there was a major flaw in implementation, you have an opportunity to identify and solve immediate issues.

Second, assess how the AI system integrates with the larger business. These systems do not exist in a vacuum; deploying a new system can affect the internal workflow of current employees or shift external demand away from certain products or services. Understand how your product impacts your business in the bigger picture and be prepared: if a serious problem is found, it may be necessary to roll back, scale down, or reconfigure the AI product.
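
One way to operationalize the first task is to compare live performance against pre-launch expectations both overall and per user segment, since an aggregate number can hide a badly served subgroup. Here is a minimal sketch assuming scikit-learn is available; the accuracy threshold, column names, and segments are hypothetical:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical pre-launch expectation (illustrative threshold only).
EXPECTED_ACCURACY = 0.90

def performance_gap_report(df, label_col, pred_col, segment_col):
    """Compare live accuracy against the pre-launch expectation,
    overall and within each user segment."""
    scores = {"overall": accuracy_score(df[label_col], df[pred_col])}
    for segment, group in df.groupby(segment_col):
        scores[segment] = accuracy_score(group[label_col], group[pred_col])
    return {
        name: {"accuracy": round(acc, 3), "meets_expectation": acc >= EXPECTED_ACCURACY}
        for name, acc in scores.items()
    }

# Toy example: the model performs well for one segment and fails the other.
df = pd.DataFrame({
    "label":   [1, 1, 0, 0, 1, 0, 1, 0],
    "pred":    [1, 1, 0, 0, 0, 1, 0, 1],
    "segment": ["new_users"] * 4 + ["returning_users"] * 4,
})
print(performance_gap_report(df, "label", "pred", "segment"))
# new_users meets the bar; returning_users does not -> investigate why.
```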

Post-deployment monitoring

Post-deployment monitoring is critical to a product’s success, yet it is often overlooked. In this final phase, a dedicated team must track AI products after deployment. After all, no product, AI or otherwise, works perfectly forevermore without tune-ups. This team might periodically perform a bias audit, reassess data reliability, or simply refresh “stale” data. It can also implement operational changes, such as acquiring more data to account for underrepresented groups or retraining the corresponding models.
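
For instance, a common check for “stale” data is a drift test that compares incoming feature values against the training distribution. Below is a minimal sketch using a two-sample Kolmogorov-Smirnov test; the significance threshold and simulated data are illustrative only:

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative significance threshold; teams tune this to their
# tolerance for false alarms.
P_VALUE_THRESHOLD = 0.01

def detect_drift(train_values, live_values):
    """Return True if the live feature distribution has drifted
    significantly from the training distribution."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < P_VALUE_THRESHOLD

# Toy example: live data has shifted upward relative to training data.
rng = np.random.default_rng(seed=0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)
live = rng.normal(loc=0.5, scale=1.0, size=5000)
print(detect_drift(train, live))  # True -> time to investigate or retrain
```

A drift alarm doesn’t say *why* the data changed; it simply tells the monitoring team when the assumptions baked into the model at launch no longer hold.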

Most importantly, remember: data informs, but it doesn’t always tell the whole story. Quantitative analysis and performance tracking of AI systems won’t capture the emotional aspects of the user experience. Hence, post-deployment teams must also dive into more qualitative, human-centric research. Rather than relying solely on the team’s data scientists, seek out members with diverse expertise to run effective qualitative research. Those with liberal arts and business backgrounds can help uncover the “unknown unknowns” among users and ensure internal accountability.

Finally, consider the end of life for the product’s data. Should we delete old data or repurpose it for alternate projects? If it’s repurposed, do we need to inform users? While the abundance of cheap data warehousing tempts us to simply store all old data and sidestep these questions, holding sensitive data increases the firm’s exposure to a potential security breach or data leak. An additional consideration is whether the countries in which you operate have established a right to be forgotten, as the EU has under the GDPR.
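
To show how such a retention decision can be enforced in practice, here is a minimal sketch of a purge job; the table schema, column names, and 24-month window are all assumptions for illustration, not a recommendation for any specific retention period:

```python
import sqlite3
from datetime import datetime, timedelta

RETENTION_MONTHS = 24  # Illustrative window; set per legal/policy review.

def purge_expired_records(conn):
    """Delete user records whose last activity falls outside the
    retention window; returns the number of rows removed."""
    cutoff = datetime.utcnow() - timedelta(days=30 * RETENTION_MONTHS)
    cursor = conn.execute(
        "DELETE FROM user_data WHERE last_active < ?",
        (cutoff.isoformat(),),
    )
    conn.commit()
    return cursor.rowcount

# Toy example with one stale record and one recent one:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_data (id INTEGER, last_active TEXT)")
conn.execute("INSERT INTO user_data VALUES (1, '2020-01-01T00:00:00')")
conn.execute("INSERT INTO user_data VALUES (2, ?)", (datetime.utcnow().isoformat(),))
print(purge_expired_records(conn))  # 1 -> the stale record is removed
```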

From a strategic business perspective, firms will need to staff their AI product teams with responsible business leaders who can assess the technology’s impact and avoid ethical pitfalls before, during, and after a product’s launch. Regardless of industry, these skilled team members will be the foundation that helps a company navigate the inevitable ethical and logistical challenges of AI.

Vishal Gupta is an associate professor of data sciences and operations at the University of Southern California Marshall School of Business.

