
Building responsible AI: 5 pillars for an ethical future



For as long as there has been technological progress, there have been concerns over its implications. The Manhattan Project, in which scientists grappled with their role in unleashing such innovative yet destructive nuclear power, is a prime example. Lord Solomon “Solly” Zuckerman was a scientific advisor to the Allies during World War II and, afterward, a prominent nuclear nonproliferation advocate. In the 1960s he offered a prescient insight that still rings true today: “Science creates the future without knowing what the future will be.”

Artificial intelligence (AI), now a catch-all term for any machine learning (ML) software designed to perform complex tasks that typically require human intelligence, is destined to play an outsized role in our future society. Its recent proliferation has led to an explosion in interest, as well as increased scrutiny of how AI is being developed and who is doing the developing, casting a light on how bias impacts design and function. The EU is planning new legislation aimed at mitigating the potential harms AI may bring about, under which responsible AI will be required by law.

It’s easy to understand why such guardrails are needed. Humans build AI systems, so they inevitably bring their own views of ethics into the design, oftentimes for the worse. Some troubling examples have already emerged – the Apple Card’s credit algorithm and Amazon’s job-recruiting tool were each investigated for gender bias, and Google had to retool its photo service after it applied racist tags. Each company has since fixed the issues, but the tech is moving fast, underscoring the lesson that building superior technology without accounting for risk is like sprinting blindfolded.

Building responsible AI

Melvin Greer, chief data scientist at Intel, pointed out in VentureBeat that “…experts in the area of responsible AI really want to focus on successfully managing the risks of AI bias, so that we create not only a system that is doing something that is claimed, but doing something in the context of a broader perspective that recognizes societal norms and morals.”


Put another way, those designing AI systems must be accountable for their choices, and essentially “do the right thing” when it comes to implementing software.

If your company or team is setting out to build or incorporate an AI system, here are five pillars that should form your foundation:

1. Accountability

You’d think that humans would factor into AI design from the beginning but, unfortunately, that’s not always the case. Engineers and developers can easily get lost in the code. But the big question that comes up when humans are brought into the loop is often, “How much trust do you put in the ML system to start making decisions?”

The most obvious example is self-driving cars, where we’re “entrusting” the vehicle to “know” the right decision for the human driver. But even in other scenarios, like lending decisions, designers need to consider which metrics of fairness and bias are associated with the ML model. A smart best practice is to create a standing AI ethics committee to help oversee these policy decisions, and to encourage audits and reviews that ensure you’re keeping pace with modern societal standards.
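To make this concrete, below is a minimal sketch in Python of the kind of fairness check such a committee might ask for before a lending model ships. The column names, toy data and 0.1 tolerance are purely illustrative assumptions, not a prescription.

```python
# Minimal sketch: checking demographic parity for a hypothetical lending model.
# Column names, data and the 0.1 tolerance are illustrative assumptions.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Difference in approval rates between the best- and worst-treated groups."""
    approval_rates = df.groupby(group_col)[pred_col].mean()
    return float(approval_rates.max() - approval_rates.min())

# Example: model decisions alongside a protected attribute.
scored = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M"],
    "approved": [1, 0, 1, 1, 0, 1],
})

gap = demographic_parity_gap(scored, group_col="gender", pred_col="approved")
if gap > 0.1:  # tolerance an ethics committee might set
    print(f"Approval-rate gap of {gap:.2f} exceeds tolerance – flag for review")
```

A check like this gives the committee a number to audit on every release rather than a feeling to debate.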

2. Replicability

Most organizations utilize data from a number of sources (data warehouses, cloud storage providers, etc.), but if that data isn’t consistent from one source to the next, it can lead to issues down the road when you’re trying to glean insights to solve problems or update functions. It’s important for companies developing AI systems to standardize their ML pipelines and establish comprehensive data and model catalogues. This will help streamline testing and validation, as well as improve the ability to produce accurate dashboards and visualizations.
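As a rough sketch of what that standardization can look like in practice, the snippet below bundles preprocessing and training into a single scikit-learn pipeline that can be versioned in a model catalogue. The synthetic dataset, model choice and file name are illustrative assumptions.

```python
# Minimal sketch of a reproducible, standardized ML pipeline with scikit-learn.
# The synthetic data, model choice and file name are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import joblib

# One object captures both preprocessing and the model, with fixed seeds.
pipeline = Pipeline(steps=[
    ("scale", StandardScaler()),
    ("model", LogisticRegression(random_state=42)),
])

X_train, y_train = make_classification(n_samples=200, n_features=5, random_state=0)
pipeline.fit(X_train, y_train)

# Persisting the whole pipeline lets any environment reproduce the exact
# preprocessing + model behavior recorded in the catalogue.
joblib.dump(pipeline, "credit_model_v1.joblib")
```

Because the scaler and the model travel together and every seed is pinned, anyone pulling the artifact from the catalogue gets identical behavior.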

3. Transparency

As with most things, transparency is the best policy. When it comes to ML models, transparency equates to interpretability (i.e., ensuring the ML model can be explained). This is especially important in sectors like banking and healthcare, where you need to be able to explain and justify to customers why you’re building a specific model and show that it guards against unwanted bias. In other words, if an engineer can’t justify why a certain ML feature exists for the benefit of the customer, it shouldn’t be there. This is where monitoring and metrics play a big role: it’s critical to keep an eye on statistical performance to ensure the long-term efficacy of the AI system.
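One simple way to put interpretability into practice is permutation importance, which scores how much each feature actually drives predictions. The sketch below uses synthetic data and a stand-in model as assumptions; the same check applies to a real lending or healthcare model, where any feature with influence you can’t explain is a candidate for removal.

```python
# Minimal sketch: permutation importance as a basic interpretability check.
# The synthetic dataset and random-forest model are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much performance drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {importance:.3f}")
```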

4. Security

In the case of AI, security deals more with how a company should protect its ML model, and it usually includes technologies like encrypted computing and adversarial testing – because an AI system can’t be responsible if it’s susceptible to attack. Consider this real-life scenario: a computer vision model designed to detect stop signs was fooled when someone put a small sticker on the sign – a change a human driver would barely notice. Examples like this can have huge safety implications, so you must be constantly vigilant with security to prevent such flaws.
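Adversarial testing is how you probe for exactly this kind of weakness before an attacker does. Below is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch; the toy classifier, random image and epsilon value are illustrative assumptions, not a production test harness.

```python
# Minimal sketch of adversarial testing with the fast gradient sign method (FGSM).
# The tiny classifier, random image and epsilon value are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in "stop sign" image
label = torch.tensor([3])

# Compute the gradient of the loss with respect to the input pixels.
loss = loss_fn(model(image), label)
loss.backward()

# Nudge every pixel slightly in the direction that increases the loss.
epsilon = 0.05  # perturbation budget – small enough to be hard to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("before:", model(image).argmax(dim=1).item(),
      "after:", model(adversarial).argmax(dim=1).item())
```

If the prediction flips under such a tiny perturbation, the model needs hardening before it can be trusted with safety-critical decisions.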

5. Privacy

This final pillar is always a hot-button issue, especially given the ongoing Facebook scandals involving customer data. AI systems collect huge amounts of data, and there need to be very clear guidelines on what that data is being used for. (Think GDPR in Europe.) Governmental regulation aside, each company designing AI needs to make privacy a paramount concern and generalize its data so that individual records are not stored. This is especially important in healthcare or any industry handling sensitive patient data. For more information, check out technologies like federated learning and differential privacy.
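To give a small taste of what differential privacy looks like, the sketch below uses the Laplace mechanism to release a noisy count instead of an exact one, so no individual record can be pinned down. The query, toy records and epsilon value are illustrative assumptions.

```python
# Minimal sketch of the Laplace mechanism, a basic building block of
# differential privacy. The query, records and epsilon are illustrative.
import numpy as np

rng = np.random.default_rng(seed=0)

def private_count(records: np.ndarray, epsilon: float = 1.0) -> float:
    """Differentially private count; the sensitivity of a counting query is 1."""
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return float(records.sum()) + noise

# e.g., how many patients in a cohort have a given condition
has_condition = np.array([1, 0, 1, 1, 0, 0, 1, 0])
print(f"exact: {has_condition.sum()}, private release: {private_count(has_condition):.1f}")
```

Lower epsilon means more noise and stronger privacy; where to set that trade-off is a policy decision, not just an engineering one.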

Responsible AI: The road ahead

Even after taking these five pillars into account, responsibility in AI can feel a lot like a whack-a-mole situation – just when you think the technology is operating ethically, another nuance emerges. This is simply part of the process of integrating an exciting new technology into the world and, much like the internet, we’ll likely never stop debating, tinkering with and improving the functionality of AI.

Make no mistake, though; the implications of AI are huge and will have a lasting impact on multiple industries. A good way to start preparing now is by focusing on building a diverse team within your organization. Bringing on people of different races, genders, backgrounds and cultures will reduce your chances of bias before you even look at the tech. By including more people in the process and practicing continuous monitoring, we’ll ensure AI is more efficient, ethical and responsible.

Dattaraj Rao is chief data scientist at Persistent.




