
Why you need an organizational AI ethics committee to do AI right

Artificial intelligence (AI) may still feel a bit futuristic to many, but the average consumer would be surprised at where AI can be found. It's no longer a science fiction concept confined to Hollywood feature films, or top-secret technology found only in the computer science labs of the Googles and Metas of the world. Quite the contrary: today, AI is behind many of our online shopping and social media recommendations, customer service inquiries and loan approvals, and it is also actively creating music, winning art contests and beating humans at games that have existed for thousands of years.

Because of this growing gap between AI's expansive capabilities and the average consumer's awareness of them, a critical first step for any organization or business that uses or provides AI should be forming an AI ethics committee. This committee would be tasked with two major initiatives: engagement and education.

The ethics committee would not only prevent malpractice and unethical applications of AI as it is used and implemented; it would also work closely with regulators to set realistic parameters and formulate rules that proactively protect individuals from potential pitfalls and biases. Further, it would educate consumers, enabling them to view AI through a neutral lens backed by critical thinking. Users should understand that AI can change how we live and work, but that it can also perpetuate the bias and discriminatory practices that have plagued humanity for centuries.

The case for an AI ethics committee 

Leading institutions working with AI are probably the most aware of its potential to positively change the world, as well as its potential to cause harm. Some may be more seasoned than others in the space, but internal oversight is important for organizations of all sizes and with leadership of varying experience. The Google engineer who became convinced that a natural language processing (NLP) model was actually sentient AI (it wasn't) is a clear example that education and internal ethical parameters must take priority even inside the most sophisticated organizations. Starting AI development on the right foot is paramount for its (and our) future success.

Microsoft, for instance, is constantly innovating with AI, and placing ethical considerations at the forefront. The software giant recently announced the ability to use AI to recap Teams meetings, which could mean less note-taking and more strategic, on-the-spot thinking. That win doesn't mean every AI effort from the company has been flawless, though. Over the summer, Microsoft scrapped its AI facial-analysis tools because of the risk of bias.

Even though development wasn't perfect every time, this shows the importance of having ethical guidelines in place to determine the level of risk. In the case of Microsoft's AI facial analysis, those guidelines determined that the risk outweighed the reward, protecting us all from potentially harmful outcomes, like the difference between being granted an urgently needed monthly support check and being unfairly refused aid.

Choose proactive over passive AI 

Internal AI ethics committees serve as checks and balances on the development and advancement of new technologies. They also enable an organization to stay fully informed and to formulate consistent opinions on how regulators can protect all citizens against harmful AI. While the White House's proposal for an AI Bill of Rights shows that active regulation is just around the corner, industry experts must still offer knowledgeable insights on what's best for citizens and organizations regarding safe AI.

Once an organization has committed to building an AI ethics committee, it's important to practice three proactive, rather than passive, approaches:

1. Build with intention

The first step is to sit down with the committee and finalize the end goal together. Be diligent when researching. Talk to technical leaders, communicators and everyone across the organization who may have something to add about the direction of the committee; diversity of input is critical. It can be easy to lose track of the scope and primary function of the AI ethics committee if goals and objectives are not established early on, and the final product could stray from its original intention. Find solutions, build a timeline and stick to it.

2. Don’t boil the ocean 

Just like the vast blue seas surrounding the world, AI is a complex field that stretches wide and runs deep, with many unexplored trenches. When starting your committee, don't take on too much, or too broad a scope. Be focused and intentional in your AI plans, and know what your use of this technology is setting out to solve or improve.

3. Be open to various perspectives

A background in deep tech is helpful, but a well-rounded committee includes various perspectives and stakeholders. This diversity allows for the expression of valuable opinions on potential ethical AI threats. Include the legal, creative, media and engineering teams; this will give the company and its clients representation in all areas where ethical dilemmas may arise. Create a company-wide "call to action" or prepare a questionnaire to define goals. Remember, the aim here is to broaden your dialogue.

Education and engagement save the day 

AI ethics committees facilitate two aspects of success for an organization using AI: education and engagement. Educating everyone internally, from engineers to Todd and Mary in accounting, about the pitfalls of AI will better equip organizations to inform regulators, consumers and others in the industry, and it will promote a society that is engaged with and educated on matters of artificial intelligence.

CF Su is VP of machine learning at Hyperscience.

Author: Ching-Fong Su
Source: VentureBeat
