Why embedding AI ethics and principles into your organization is critical

As technology progresses, business leaders understand the need to adopt enterprise solutions leveraging artificial intelligence (AI). However, there is understandable hesitancy over the ethical implications of this technology: is AI inherently biased, racist, or sexist? And what impact could that have on my business?

It’s important to remember that AI systems aren’t inherently anything. They’re tools built by humans, and they may maintain or amplify whatever biases exist in the humans who develop them or in those who create the data used to train and evaluate them. In other words, even a technically perfect AI model is nothing more than a reflection of the people behind it. We, as humans, choose the data that is used in AI, and we do so carrying our inherent biases with us.

In the end, we’re all subject to a variety of sociological and cognitive biases. If we’re aware of these biases and continuously put measures in place to help combat them, we’ll continue to make progress in minimizing the damage these biases can do when they are built into our systems.

Examining ethical AI today

Organizational emphasis on AI ethics has two prongs. The first is AI governance, which deals with what is permissible in the field of AI, from development to adoption to usage.

The second touches on AI ethics research, which aims to understand how development practices shape the inherent characteristics of AI models and what risks those characteristics carry. We believe the lessons from this field will continue to become more nuanced. For instance, current research is largely focused on foundation models; in the next few years, it will turn to the smaller downstream tasks that can either mitigate or propagate the downsides of these models.

Universal adoption of AI in all aspects of life will require us to think about its power, its purpose, and its impact. This is done by focusing on AI ethics and demanding that AI be used in an ethical manner. Of course, the first step to achieving this is to find agreement on what it means to use and develop AI ethically.

One step toward optimizing products for fair and inclusive results is to use fair and inclusive training, development, and test datasets. The challenge is that high-quality data selection is a non-trivial task: such datasets can be difficult to obtain, especially for smaller startups, because much of the readily available training data contains bias. It also helps to add debiasing techniques and automated model-evaluation processes to the data-augmentation pipeline, and to adopt thorough data documentation practices from the very beginning, so developers have a clear idea of what they need to do to augment any datasets they decide to use.
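
To make the documentation point concrete, here is a minimal sketch in the spirit of published "datasheets for datasets" proposals. Everything in it is hypothetical: the DatasetSheet class, its fields, and the example values are illustrative, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetSheet:
    """Minimal documentation recorded when a dataset enters a project."""
    name: str
    source: str                     # where the data came from
    collection_method: str          # how it was gathered
    known_gaps: list[str] = field(default_factory=list)  # under-represented groups/contexts
    license_terms: str = "unknown"

# Filled in at the start of a project, before any model is trained, so
# developers know what the dataset covers and where it may need augmenting.
sheet = DatasetSheet(
    name="support_calls_v1",                      # hypothetical dataset
    source="internal CRM export, 2021-2022",
    collection_method="opt-in call transcripts",
    known_gaps=["non-US English dialects", "callers under 18"],
)
print(sheet)
```

Even this small amount of structure forces the "what is missing?" question to be answered up front rather than after a model has shipped.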

The cost of unbiased AI

Red flags exist everywhere, and technology leaders need to be open to seeing them. Given that bias is to some extent unavoidable, it’s important to consider the core use case of a system: decision-making systems that can affect human lives (for example, automated resume screening or predictive policing) have the potential to do untold damage. In other words, the central purpose of an AI model may in itself be a red flag. Technology organizations should openly examine what the purpose of an AI model is to determine whether that purpose is ethical.

Further, it is increasingly common to rely on large and relatively uncurated datasets (such as Common Crawl and ImageNet) to train base systems that are subsequently “tuned” to specific use cases. These large scraped datasets have repeatedly been shown to contain actively discriminatory language, disproportionate skews in the distribution of their categories, or both. Because of this, it is important for AI developers building a new system to examine, in depth, the data they will be using, starting from the genesis of the project.
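
A hedged first pass at that examination might be a simple audit over a sample of the corpus before training, counting how often terms from a vetted lexicon appear. The sketch below is illustrative only: the documents and flagged_terms entries are placeholders, and a real audit would use curated toxicity lexicons and go much deeper than keyword matching.

```python
from collections import Counter

# Placeholders standing in for a sample of a scraped corpus and a vetted
# lexicon of discriminatory or toxic terms.
sample_docs = [
    "example scraped document one ...",
    "another example scraped document ...",
]
flagged_terms = {"flagged_term_a", "flagged_term_b"}  # hypothetical entries

term_hits = Counter()
docs_with_hits = 0
for doc in sample_docs:
    tokens = doc.lower().split()
    hits = [t for t in tokens if t in flagged_terms]
    term_hits.update(hits)
    docs_with_hits += bool(hits)

print(f"{docs_with_hits}/{len(sample_docs)} sampled documents contain flagged terms")
print(term_hits.most_common(10))
```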

Less expensive in the end

As mentioned, the effort and cost invested in these systems can strain the resources of startups and some technology firms. Fully developed ethical AI models can certainly appear more expensive at the outset of design. For example, creating, finding, or purchasing high-quality datasets can be costly in terms of both time and money. Likewise, augmenting datasets that are lacking takes time and resources, as does finding and hiring diverse candidates.

In the long run, however, due diligence will become less expensive. For instance, your models will perform better, you won’t have to deal with large-scale ethical mistakes, and you won’t suffer the consequences of sustained harm to various members of society. You’ll also spend fewer resources scrapping and redesigning large-scale models that have become too biased and unwieldy to fix — resources that are better spent on innovative technologies used for good.

If we are better, AI is better

Inclusive AI requires technology leaders to proactively attempt to limit the human biases that are fed into their models. This requires an emphasis on inclusivity not just in AI, but in technology in general. Organizations should think clearly about AI ethics and promote strategies to limit bias, such as periodic reviews of what data is used and why.

Companies should also choose to live those values fully. Inclusivity training and diversity, equity, and inclusion (DE&I) hiring are great starts, and they must be meaningfully supported by the culture of the workplace. From there, companies should actively encourage and normalize an inclusive dialogue within the AI discussion, as well as in the greater work environment, making us better as employees and, in turn, making AI technologies better.

On the development side, there are three main centers of focus so that AI can better suit end users regardless of differentiating factors: understanding, taking action, and transparency.

In terms of understanding, systematic checks for bias are needed to ensure the model does its best to offer non-discriminatory judgments. One major source of bias in AI models is the data developers start with: if the training data is biased, the model will have that bias baked in. We put a large focus on data-centric AI, meaning that at the outset of model design, namely during the selection of training data, we try our best to create optimal datasets for model development. However, not all datasets are created equal, and real-world data can be skewed in many ways; sometimes we have to work with data that may be biased.
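
A first check in that data-centric spirit can be as simple as comparing group sizes and label rates in the training data before any model is fit. The snippet below is a minimal sketch with made-up rows; the group names, labels, and the idea of auditing on a single attribute are all simplifying assumptions.

```python
from collections import Counter, defaultdict

# Hypothetical training rows as (group, label) pairs; "group" could be a
# demographic attribute or a proxy for one that you are auditing.
rows = [
    ("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_a", 1),
    ("group_b", 0), ("group_b", 0),
]

label_counts = defaultdict(Counter)
for group, label in rows:
    label_counts[group][label] += 1

# Large gaps in sample size or positive-label rate are the skews to investigate.
for group, counts in label_counts.items():
    total = sum(counts.values())
    print(f"{group}: n={total}, positive-label rate={counts[1] / total:.2f}")
```

Here the under-represented group also never receives a positive label, exactly the kind of skew that gets baked into a model trained on this data.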

Representative data

One technique for building better understanding is disaggregated evaluation: measuring performance on subsets of data that represent specific groups of users. Models are good at cheating their way through complex data, and even if variables such as race or sexual orientation were not explicitly included, a model may surprise you by inferring them from proxy features and still discriminating against these groups. Specifically checking for this helps shed light on what the model is actually doing (and what it is not doing).
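
In practice, a disaggregated evaluation can be as simple as grouping held-out predictions by the attribute of interest instead of averaging over everything. The sketch below uses made-up labels and predictions; the point it illustrates is how a healthy-looking aggregate number can hide a large per-group gap.

```python
from collections import defaultdict

# Hypothetical held-out examples as (group, true_label, predicted_label).
test_rows = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0),
]

stats = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, truth, pred in test_rows:
    stats[group][0] += int(truth == pred)
    stats[group][1] += 1

overall = sum(c for c, _ in stats.values()) / sum(t for _, t in stats.values())
print(f"overall accuracy: {overall:.2f}")  # the aggregate looks acceptable
for group, (correct, total) in stats.items():
    print(f"{group}: accuracy {correct / total:.2f} (n={total})")  # the gap shows here
```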

In taking action after garnering a better understanding, we utilize various debiasing techniques. These include positively balancing datasets to represent minorities, augmenting data, and encoding sensitive features in ways that reduce their impact. In other words, we run tests to figure out where our model’s training data is lacking, and then we augment the datasets in those areas so that we are continuously improving when it comes to debiasing.
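
One simple form of positive balancing is to oversample under-represented groups until group sizes match. The sketch below does this with made-up rows; real pipelines would more likely combine resampling with genuine data augmentation and reweighting rather than duplicating examples alone.

```python
import random
from collections import Counter

random.seed(0)  # reproducible sketch

# Hypothetical training rows tagged with a group attribute.
rows = [
    ("group_a", "example_1"), ("group_a", "example_2"),
    ("group_a", "example_3"), ("group_a", "example_4"),
    ("group_b", "example_5"),
]

by_group = {}
for group, example in rows:
    by_group.setdefault(group, []).append((group, example))

# Resample each under-represented group up to the size of the largest one.
target = max(len(examples) for examples in by_group.values())
balanced = []
for examples in by_group.values():
    balanced.extend(examples)
    balanced.extend(random.choices(examples, k=target - len(examples)))

print(Counter(group for group, _ in balanced))  # equal counts per group
```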

Finally, it is important to be transparent in reporting data and model performance. Simply put, if you find that your model discriminates against someone, say it and own it.
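
A lightweight form of that transparency, loosely in the spirit of published "model cards," is to report per-group metrics alongside the aggregate ones. The values below are hypothetical, carried over from the disaggregated-evaluation sketch above.

```python
# Per-group metrics from a disaggregated evaluation (hypothetical values).
per_group_accuracy = {"group_a": 1.00, "group_b": 0.33}

report = ["Model performance by group:"]
for group, accuracy in sorted(per_group_accuracy.items()):
    report.append(f"  {group}: accuracy {accuracy:.2f}")
gap = max(per_group_accuracy.values()) - min(per_group_accuracy.values())
report.append(f"Largest accuracy gap between groups: {gap:.2f}")
print("\n".join(report))
```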

The future of ethical AI applications

Today, businesses are crossing the chasm in AI adoption. In the business-to-business community, we are seeing many organizations adopt AI to solve frequent, repetitive problems and to drive real-time insights from existing datasets. We experience these capabilities in a multitude of areas, from Netflix recommendations in our personal lives to analyzing the sentiment of hundreds of customer conversations in the business world.

Until there are top-down regulations governing the ethical development and use of AI, it is hard to predict where the field will land. Our AI ethics principles at Dialpad are a way to hold ourselves accountable for the AI technology leveraged in our products and services. Many other technology companies have joined us in promoting AI ethics by publishing similar ethical principles, and we applaud those efforts.

However, without external accountability (either through governmental regulations or industry standards and certifications), there will always be actors who either intentionally or negligently develop and utilize AI that is not focused on inclusivity.

No future without (ethical) AI

The dangers are real and practical. As we have said repeatedly, AI permeates everything we do professionally and personally. If you are not proactively prioritizing inclusivity (among the other ethical principles), you are inherently allowing your model to be subject to overt or internal biases. That means that the users of those AI models — often without knowing it — are digesting the biased results, which have practical consequences for everyday life.

There is likely no future without AI as it becomes increasingly prevalent in our society. It has the potential to greatly improve our productivity, our personal choices, our habits, and indeed our happiness. The ethical development and use of AI is not a contentious subject; it is a social responsibility that we should take seriously, and we hope that others do as well.

My organization’s development and use of AI is only a small part of AI’s presence in our world. We have committed to our ethical principles, and we hope that other technology firms do as well.

Dan O’Connell is CSO of Dialpad
