
Examining the ethics of AI development



This article is contributed by Brian Gilmore, director of IoT Product Management at InfluxData.

Artificial intelligence (AI) is a concept that gets batted around freely these days. The public generally pictures the campy depictions of AI found in popular science fiction. However, even among those working in technology, there's no universally agreed-upon definition of AI. We tend to use AI as a category heading, an umbrella term that covers various technologies of vast complexity. This underlying, often abstracted complexity is why it's important to consider how ethics should play into AI development and deployment.

It can be easy to judge technology in simple binary terms; for example, “does it do what I want or not?” Ultimately, that comes down to a yes or no answer. In an emerging field like AI, technologists first explore these “can it” questions. But as we begin to accept autonomous and AI-driven technology into the mainstream, different questions arise. As a society, we can’t solely rely on technologists to make sustainable decisions every time. We need a paradigm shift that incorporates ethicists focused on those important “should it” questions.

AI technologies will undoubtedly influence and impact our lives, beliefs and culture. The role of AI ethicists should be to ensure that these technologies wield that impact in an equitable and benevolent way. Succeeding here demands consideration of AI technologies beyond the limited scope of their primary and planned use cases. It also requires awareness of these technologies’ potential side effects. The ethical development and delivery of AI technologies must balance risks and rewards for everyone, not just the intended beneficiaries.

Nurture vs. nature 

Artificial intelligence is somewhat different from standard technology in how it is developed and deployed. There is almost a parental aspect to building AI; one can look at it like raising a child. On the one hand, there's a "nurture" component in the way we design and program algorithms, the examples we choose to train the AI on, and the testing we do to validate its output in a controlled environment. As humans lead and teach AI through these development steps, the resulting systems inevitably absorb the biases, intentional or not, of the people building them. Failing to account for this, or to mitigate as many introduced biases as possible, poses a significant risk of unintended consequences.
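
One practical way to surface introduced bias early is to audit the training data itself before any model sees it. The sketch below is a minimal, hypothetical example in Python: it assumes a tabular dataset with a protected-attribute column and a binary label, and it flags groups whose positive-label rate falls below the commonly cited four-fifths threshold relative to the best-off group. The column names and the threshold are illustrative assumptions, not anything prescribed here.

```python
# Minimal, hypothetical training-data audit: compare positive-label rates
# across groups of a protected attribute and flag large disparities.
# The column names ("group", "label") and the 0.8 threshold are assumptions.
from collections import defaultdict

def disparate_impact_report(rows, group_key="group", label_key="label", threshold=0.8):
    """Return groups whose positive-label rate falls below `threshold` times
    the highest group's rate (the informal "four-fifths rule")."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += 1 if row[label_key] else 0

    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and r / best < threshold}

# Toy example: group "B" is labeled positive half as often as group "A".
training_rows = (
    [{"group": "A", "label": 1}] * 80 + [{"group": "A", "label": 0}] * 20 +
    [{"group": "B", "label": 1}] * 40 + [{"group": "B", "label": 0}] * 60
)
print(disparate_impact_report(training_rows))  # {'B': 0.4}
```

A check like this doesn't make a dataset fair, but it turns a vague worry about bias into a number that can be reviewed, documented, and tracked over time.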

We also shouldn’t ignore the “nature” component of AI. The underlying technologies used to build AI, including programming languages, algorithms, architectures, deployment models and physical and digital inputs, can create unexpected results when combined and applied to real-world situations. As a result, it’s possible that even despite good design and ethical intentions, AI technology can generate unethical results or output. This challenge forces a requirement for transparency. For example, a good first step could be a comprehensive documentation of code and developer intention, paired with implementing detailed monitoring and alerting systems that drive compliance with those intentions. 

Applying transparency to the nurture and nature aspects of building AI isn't an easy task. Doing so requires developers and data scientists to balance individual rights and desires with the "greater good." Of course, this dilemma applies beyond AI, or even technology in its broadest sense. These are issues that societies struggle with in many ways, so it comes as little surprise that AI reflects those same problems. Our ability to develop, implement, and practice AI in an ethical manner is therefore part of a much deeper conundrum. Beyond considering whether the technology or its implementation is innately "ethical," we must also ponder whether the decisions AI makes and the actions it takes are "ethical" in their own right. And who should make these decisions?

Defining the path forward for ethical AI and related technology

There’s an inherent challenge in assessing right and wrong with AI. If we rely on AI to drive decisions where someone ultimately wins or loses, how can we guarantee that AI always makes the ethically “correct” decision? Is it even possible to fully encode human empathy, emotion, and intention into AI? We need to decide now how we will react when we disagree with technology in the gray areas of “right” and “wrong.” Ultimately, we need to consider the population the AI truly serves: its creators, operators, or the greater good. Honest answers to these questions aren’t easy, but they’re critical for AI technology’s long-term health and success.

Here are a few predictions for where things might be heading. First, we'll likely see significant formalization in the field of "digital ethics," which logically should include AI ethics. If history holds, we will see multiple standards and regulatory bodies develop, along with a parade of key technology executives signing pledges to signal accountability and forward progress.

We will also see the rise of chief ethics and chief principles officers. These executives will own accountability for the ethical and equitable creation, deployment, and adoption of all technology within an organization. Legal and compliance leaders will likely fill these roles initially; however, that may prove ineffective. Hopefully, we will see leaders emerge in the executive suite from disciplines not typically associated with technology, such as philosophy, theology, and psychology. Bringing these new stakeholders to the table will transform organizations in ways that reach far beyond AI governance.

The bottom line

We must initiate the discourse around ethical AI and technology now, while we're still a long way from that science fiction vision of AI. In reality, most organizations are still focused on transforming their operations into the digital realm and becoming data-driven. A company can't simply "upgrade" to AI.

Getting the right people in place is key. Hire a chief data officer and staff a diverse team of statisticians and analysts. Balance the team with academics and domain experts, and provide them with the tools and infrastructure they need to be effective. Build pipelines for data collection, processing, and storage using technologies like time series, graph, document, and relational databases, taking advantage of both the open-source tools and the commercial platforms your data scientists, analysts, and developers know and love. Connect the team with business stakeholders facing challenging problems, and with the data to go with those problems, and watch the magic happen!
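
As a small, concrete illustration of one piece of such a pipeline, the sketch below writes incoming sensor readings into a time series database using the open-source InfluxDB Python client. The connection details, bucket, and field names are placeholder assumptions; any time series, document, or relational store could fill the same role in the pipeline described above.

```python
# Illustrative sketch: land incoming sensor readings in a time series
# database so analysts and data scientists can query them later.
# Requires the open-source client: pip install influxdb-client
# URL, token, org, bucket, and field names are placeholder assumptions.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="MY_TOKEN", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

reading = {"device": "sensor-42", "temperature": 22.5, "humidity": 0.41}

point = (
    Point("environment")                          # measurement name
    .tag("device", reading["device"])             # indexed metadata
    .field("temperature", reading["temperature"])
    .field("humidity", reading["humidity"])
)
write_api.write(bucket="telemetry", record=point)
client.close()
```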

Brian Gilmore is director of IoT product management at InfluxData, the creators of InfluxDB.



