
How companies can practice ethical AI



Artificial intelligence (AI) adoption is accelerating: more than nine out of 10 of the nation’s leading companies have ongoing investments in AI-enabled products and services. As the technology spreads and more businesses adopt it, the responsible use of AI, often referred to as “ethical AI,” is becoming an important factor for businesses and their customers.

What is ethical AI?

AI poses a number of risks to individuals and businesses. At an individual level, this advanced technology can endanger an individual’s safety, security, reputation, liberty and equality; it can also discriminate against specific groups of individuals. At a national level, it can pose security threats such as political instability, economic disparity and military conflict. At the corporate level, it can pose financial, operational, reputational and compliance risks.

Ethical AI can protect individuals and organizations from threats like these and many others that may result from misuse. As an example, TSA scanners at airports were designed to make air travel safer by recognizing objects that normal metal detectors could miss. Then we learned that a few “bad actors” were using this technology to share silhouetted nude images of passengers. The flaw has since been fixed, but it remains a good example of how misuse can break people’s trust.

When such misuse of AI-enabled technology occurs, companies with a responsible AI policy and/or team will be better equipped to mitigate the problem. 


Implementing an ethical AI policy

A responsible AI policy can be a great first step to ensure your business is protected in case of misuse. Before implementing a policy of this kind, employers should conduct an AI risk assessment to answer the following: Where is AI being used throughout the company? Who is using the technology? What types of risks may result from this use, and when might they arise? For example, does your business use AI in a warehouse that third-party partners can access during the holiday season? And how can the business prevent or respond to misuse?
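One lightweight way to capture the answers is a structured AI inventory that the risk assessment reviews and updates. The sketch below is a minimal, hypothetical Python example; the record fields, risk labels and third-party flagging rule are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI risk inventory (fields are illustrative)."""
    name: str                # what the system is
    business_unit: str       # where it is used
    users: list[str]         # who uses it (employees, third parties)
    risks: list[str]         # what risks it may pose
    exposure_window: str     # when risks may arise

# Illustrative inventory based on the warehouse example above
inventory = [
    AISystemRecord(
        name="warehouse camera analytics",
        business_unit="logistics",
        users=["staff", "third-party partners"],
        risks=["privacy", "physical safety"],
        exposure_window="holiday season",
    ),
]

# Flag entries involving third parties for closer review
for record in inventory:
    if any("third-party" in user for user in record.users):
        print(f"Review required: {record.name} ({record.exposure_window})")
```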

Once employers have taken a comprehensive look at AI use throughout their company, they can start to develop a policy that will protect their company as a whole, including employees, customers and partners. To reduce associated risks, companies should factor in certain key considerations. They should ensure that AI systems are designed to enhance cognitive, social and cultural skills; verify that the systems are equitable; incorporate transparency throughout all parts of development; and hold any partners accountable.

In addition, companies should consider the following three key components of an effective responsible AI policy: 

  • Lawful AI: AI systems do not operate in a lawless world. A number of legally binding rules at the national and international levels already apply or are relevant to the development, deployment and use of these systems today. Businesses should ensure the AI-enabled technologies they use abide by any local, national or international laws in their region. 
  • Ethical AI: For responsible use, alignment with ethical norms is necessary. Four ethical principles, rooted in fundamental rights, must be respected to ensure that AI systems are developed, deployed and used responsibly: respect for human autonomy, prevention of harm, fairness and explicability. 
  • Robust AI: AI systems should perform in a safe, secure and reliable manner, and safeguards should be implemented to prevent any unintended adverse impacts. Therefore, the systems need to be robust, both from a technical perspective (ensuring the system’s technical robustness as appropriate in a given context, such as the application domain or life cycle phase), and from a social perspective (in consideration of the context and environment in which the system operates).

It is important to note that different businesses may require different policies based on the AI-enabled technologies they use. However, these guidelines can help from a broader point of view. 
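As one way to make those three components concrete, a review team could encode them as a simple pre-deployment checklist. The sketch below is a hypothetical illustration; the questions and the all-items-must-pass rule are assumptions a real policy would refine.

```python
# Hypothetical checklist covering the three components above; the
# questions are illustrative, and a real policy would be far more detailed.
CHECKLIST = {
    "lawful": [
        "Complies with applicable local, national and international law",
    ],
    "ethical": [
        "Respects human autonomy",
        "Prevents harm",
        "Is fair across user groups",
        "Decisions can be explained",
    ],
    "robust": [
        "Technically robust for its application domain and life cycle phase",
        "Safeguarded against unintended social impacts",
    ],
}

def review(system_name: str, answers: dict[str, list[bool]]) -> bool:
    """Approve only if every checklist item in every component passes."""
    passed = True
    for component, questions in CHECKLIST.items():
        for question, ok in zip(questions, answers[component]):
            if not ok:
                print(f"{system_name} fails {component}: {question}")
                passed = False
    return passed

# Example: one ethical item fails, so the system is not approved
review("warehouse camera analytics", {
    "lawful": [True],
    "ethical": [True, True, False, True],
    "robust": [True, True],
})
```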

Build a responsible AI team

Once a policy is in place and employees, partners and stakeholders have been notified, it is vital to ensure the business has a team that can enforce the policy and hold anyone who misuses AI accountable.

The team can be customized depending on the business’s needs, but here is a general example of a robust team for companies that use AI-enabled technology: 

  • Chief ethics officer: Often called a chief compliance officer, this role is responsible for determining what data should be collected and how it should be used; monitoring for AI misuse throughout the company; determining potential disciplinary action in response to misuse; and ensuring teams are training their employees on the policy.
  • Responsible AI committee: This independent person or team executes risk management by assessing an AI-enabled technology’s performance against different datasets, along with its legal and ethical implications; a minimal sketch of such a dataset review follows this list. Once a reviewer approves the technology, the solution can be implemented or deployed to customers. The committee can include members from ethics, compliance, data protection, legal, innovation, technology and information security. 
  • Procurement department: This role ensures that the policy is being upheld by other teams/departments as they acquire new AI-enabled technologies. 
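Parts of the committee’s dataset review can be automated. The following sketch, a hypothetical example, compares a model’s accuracy across data subgroups and flags large gaps; the group labels and the five-point threshold are illustrative assumptions a real review would calibrate.

```python
def accuracy(predictions: list[int], labels: list[int]) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

def flag_disparities(results_by_group: dict[str, tuple[list[int], list[int]]],
                     max_gap: float = 0.05) -> list[str]:
    """Flag subgroups whose accuracy trails the best group by more than max_gap.

    The 5-point default threshold is an illustrative assumption, not a standard.
    """
    scores = {g: accuracy(p, l) for g, (p, l) in results_by_group.items()}
    best = max(scores.values())
    return [g for g, s in scores.items() if best - s > max_gap]

# Illustrative review: group "B" underperforms and would be flagged
results = {
    "group A": ([1, 0, 1, 1], [1, 0, 1, 1]),  # 100% accuracy
    "group B": ([1, 1, 0, 0], [1, 0, 1, 0]),  # 50% accuracy
}
print(flag_disparities(results))  # ['group B']
```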

Ultimately, an effective responsible AI team can help ensure your business holds accountable anyone who misuses AI throughout the organization. Disciplinary actions can range from HR intervention to suspension. For partners, it may be necessary to stop using their products immediately upon discovering any misuse.

As employers continue to adopt new AI-enabled technologies, they should strongly consider implementing a responsible AI policy and team to effectively mitigate misuse. By using the framework above, you can protect your employees, partners and stakeholders. 

Mike Dunn is CTO at Prosegur Security.



Author: Mike Dunn, Prosegur Security
Source: Venturebeat
