The state of AI ethics: The principles, the tools, the regulations

What do we talk about when we talk about AI ethics? Just as with AI itself, definitions of AI ethics abound. One that seems to have garnered some consensus is that AI ethics is a system of moral principles and techniques intended to inform the development and responsible use of artificial intelligence technologies.

If this definition seems ambiguous to you, you aren’t alone. There is an array of issues that people tend to associate with the term “AI ethics,” ranging from bias in algorithms to the asymmetrical or unlawful use of AI, the environmental impact of AI technology, and the national and international policies that govern it.

For Abhishek Gupta, founder and principal researcher of the Montreal AI Ethics Institute, it’s all that and more. The sheer number of principle sets and guidelines in circulation, each trying to segment or categorize the area into subdomains (sometimes overlapping, sometimes not), presents a challenge.

The Montreal AI Ethics Institute (MAIEI) is an international nonprofit organization democratizing AI ethics literacy. It aims to equip citizens concerned about artificial intelligence to take action, as its founders believe that civic competence is the foundation of change.

The institute’s State of AI Ethics Reports, published semi-annually, condense the top research and reporting around a set of ethical AI subtopics into one document. As the first of those reports for 2022 has just been released, VentureBeat picked some highlights from the almost 300-page document to discuss with Gupta.

AI ethics: Privacy and security, reliability and safety, fairness and inclusiveness, transparency and accountability

The report covers a lot of ground, going both deep and wide. It includes original material, such as op-eds and in-depth interviews with industry experts and educators, as well as analyses of research publications and summaries of related news.

The range of topics the report covers is broadly organized under the areas of analysis of the AI ecosystem, privacy, bias, social media and problematic information, AI design and governance, laws and regulations, trends, outside the boxes, and what we’re thinking.

It was practically impossible to cover all those areas, so a few were given particular focus. To start, however, it felt important to try to pin down what falls under the broad umbrella of AI ethics. Gupta’s mental model uses four broad buckets to classify topics related to AI ethics:

  • Privacy and security
  • Reliability and safety
  • Fairness and inclusiveness
  • Transparency and accountability

Gupta sees the above as the main pillars that constitute the entire domain of AI ethics.

Gupta graduated from McGill University in Montreal with a degree in computer science, and he works as a machine learning engineer at Microsoft on a team called Commercial Software Engineering. He described this as a special division within Microsoft that is called upon to solve the toughest technical challenges for Microsoft’s biggest customers.

Over time, however, Gupta has been upskilling in the social sciences as well, as he believes the right approach to AI ethics is an interdisciplinary one. This belief is also reflected in MAIEI’s core team, which includes people from all walks of life.

Interdisciplinarity and inclusivity are the guiding principles in how MAIEI approaches its other activities as well. In addition to the reports and the AI Ethics Brief weekly newsletter, MAIEI hosts free monthly meetups open to people of all backgrounds and experience levels, and organizes a cohort-based learning community.

When MAIEI started back in 2018, AI ethics discussions were held only in small pockets around the world, and they were quite fragmented, Gupta said. There were many barriers to entering those discussions, and some of them were self-imposed: Some people thought you needed a Ph.D. in AI to comprehend and participate in these discussions, which is antithetical to the view MAIEI takes.

However, Gupta’s own hands-on applied experience with building machine learning systems does come in handy. Many of the issues MAIEI talks about are quite concrete to him. This helps go beyond thinking about these ideas in the abstract, to thinking about how to put these principles into practice.

Too many AI principles, not enough tools

Part of the problem with AI ethics seems to be the scattered proliferation of AI ethics principles. This is the subject of one of the research publications MAIEI’s report covers, and it resonates with Gupta’s own experience.

AI ethics may be grounded upon principles, but the domain has not yet managed to converge around one unifying set of them. This, Gupta believes, probably tells us something. Perhaps it means that searching for a single unifying set is not the right direction for AI ethics.

“If we try to look for the broadest set of unifying principles, it necessitates that we become more and more abstract. This is useful as a point of discussion and framing conversations at the broadest level, and perhaps guiding research. When it comes to practical implementation, we need a little bit more concreteness,” he said.

“Let’s assume I’m working on a three-month project, and we’ve been working on issues of bias in the system. If we have to put in place some practices to mitigate bias, and we’re nearing the end of that project, then if I have only abstract ideas and principles to guide me, project pressure, timelines, and deliverables will make it incredibly difficult to put any of those ideas into practice,” Gupta added.

Perhaps, Gupta suggested, we should consider principles that are more catered to each domain and context, and work toward more concrete manifestations that actually guide the actions of practitioners.
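To make that idea of a “concrete manifestation” a bit more tangible, here is a minimal sketch of what a bias check wired into a project’s release process might look like. This example is not from the report; the metric, names, and threshold are hypothetical, and a real project would pick measures appropriate to its own domain and context.

```python
# Minimal sketch: turning the abstract principle "mitigate bias" into a
# concrete, runnable check. All names and the threshold are hypothetical.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Toy data: binary predictions for two demographic groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

gap = demographic_parity_gap(y_pred, group)
THRESHOLD = 0.2  # hypothetical, domain-specific tolerance

print(f"demographic parity gap: {gap:.2f}")
if gap > THRESHOLD:
    print("bias check failed: investigate before shipping")
```

A check like this can act as a release gate, which is exactly the kind of concreteness Gupta argues abstract principles lack under project pressure and deadlines.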

If we consider principles the abstract end of the AI ethics spectrum, then what lies at the other, more concrete end? Tools. Somewhat surprisingly, tools for AI ethics also exist. This, in fact, is the topic of another research publication covered in MAIEI’s report.

“Putting AI ethics to work: are the tools fit for purpose?” maps the landscape of AI ethics tools. It develops a typology to classify AI ethics tools and analyzes existing ones. The authors conducted a thorough search and identified 169 AI ethics documents. Of those, 39 were found to include concrete AI ethics tools, which the authors classify as impact assessment tools, technical and design tools, and auditing tools.

But this research also identified two gaps. First, key stakeholders, including members of marginalized communities, under-participate in using AI ethics tools and their outputs. Second, there is a lack of tools for external auditing in AI ethics, which is a barrier to the accountability and trustworthiness of organizations that develop AI systems.
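To give a flavor of what the underrepresented “auditing tools” category could look like in code, here is a minimal, hypothetical sketch (not an implementation described in the paper) of an audit trail a deployed model could keep, so that external reviewers have something concrete to inspect:

```python
# Hypothetical sketch of an "auditing tool": log every prediction with
# enough context for an external auditor to reconstruct what happened.
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    timestamp: float
    model_version: str
    input_hash: str  # hash rather than raw input, to limit privacy exposure
    output: str

def audited_predict(model_version, features, predict):
    """Run a prediction and append an audit record to a local log file."""
    output = predict(features)
    record = AuditRecord(
        timestamp=time.time(),
        model_version=model_version,
        input_hash=hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        output=str(output),
    )
    # Append-only log; real auditing would need tamper-evident storage.
    with open("audit_log.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return output

# Usage with a stand-in model:
audited_predict("v1.2.0", {"age": 41, "income": 52000}, lambda x: "approve")
```

Logging a hash of the input rather than the input itself is one way such a tool could balance auditability against the privacy concerns discussed elsewhere in the report.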

Nevertheless, the fact that tools exist is something. What could be a way to bridge the gap from abstract principles to using tools to apply these principles concretely in AI projects?

Bridging the gap via regulation

There may be a bridge, though it’s something equally elusive: privacy. When discussing the fact that privacy, despite being included under the broad umbrella of AI ethics, is probably not strictly an AI issue, Gupta had some thoughts to offer.

Indeed, Gupta noted, many of the issues framed as sub-areas of AI ethics are not strictly such. Harms that arise from the use of AI are not purely due to AI itself; they are also related to the associated changes in the software infrastructure.

AI models aren’t exposed to users in their raw form. There is an associated software infrastructure around them, a product or a service. This includes many design choices in how it has been conceived, maintained, deployed, and used. Framing all of these subdomains as part of the broader AI ethics umbrella may be a limiting factor.

Case in point: privacy. The introduction of AI has had implications that exacerbate privacy problems compared to what was possible before, Gupta said. With AI, it’s now possible to shine a light on every nook and cranny, sometimes even crevices we didn’t think to look for, he said.

But privacy can also serve as a model of going from abstract principles to concrete implementations. What worked in privacy — regulation to define concrete measures organizations need to comply with — may work for AI ethics, too. Before GDPR, the conversation around privacy was abstract, too.

The advent of GDPR created a sense of urgency and forced organizations to take concrete measures to comply, which also meant utilizing tools designed to aid toward this goal. Product teams altered their roadmaps so that privacy became front and center. Mechanisms were put in place to do things such as reporting privacy breaches to a data protection officer.

What GDPR entails is not groundbreaking or novel, Gupta noted. But what it did was put forward a timeline, with concrete fines and concrete actions required. This is why Gupta thinks regulation helps create a forcing function that accelerates the adoption of these ideas and prioritizes compliance.

MAIEI organizes public consultations to include the public’s voice when policymakers solicit responses to proposals. It also supports declarations and movements it believes can make a difference, like the Montreal Declaration for a Responsible Development of AI and European Digital Rights’ call for AI red lines in the European Union’s AI proposal.

MAIEI consults with a number of organizations that work on topics related to regulatory frameworks, such as the Department of Defense in the United States, the Office of the Privacy Commissioner of Canada, and IEEE. It has also contributed to publications on responsible AI, including the Scottish national AI strategy, and has worked with the Prime Minister’s office in New Zealand.

Is AI ethics regulation the end-all?

MAIEI’s report also includes analysis of regulatory efforts from across the world, from the EU to the U.S. and China. Gupta notes that, similar to how the EU led the way in privacy regulation with GDPR, it seems to be leading the way in AI regulation too. However, there is nuance to be noted here.

What we had with GDPR, Gupta said, is the so-called “Brussels effect”: the adoption elsewhere of ideas and regulation shaped by a Eurocentric view of privacy. This may, or may not, translate well to other parts of the world.

What is now manifesting in AI regulation is different sets of rules coming from different parts of the world, imposing different views and requirements. We may well end up with a cacophony of regulatory frameworks, Gupta warns.

This will make it hard for organizations to navigate this landscape and comply with those regulations, which will end up favoring organizations with more resources. In addition, if the GDPR precedent is anything to go by, certain organizations may end up pulling out of certain markets, which will reduce choice for users.

One way to negate this, Gupta said, would be if regulations came accompanied by their own set of open-source compliance tools. That would democratize the ability of many organizations to compete, while ensuring compliance with regulations.

Admittedly, this is not something we have seen much of in existing regulatory efforts. Typically, the thinking seems to be that the job of regulators is to set up a regulatory framework and the market will do the rest. Gupta, however, pointed out an instance in which this approach has been taken.

Gupta referred to a group called Tech Against Terrorism, a UN-backed group that uses resources from large organizations to build open-source technical tools and make them available to smaller organizations. The aim is to combat the spread of terrorism and the coordination of terrorist activities.

Tech Against Terrorism has brought together a coalition of entities, in which the organizations with more resources invest them and create tools that are then disseminated to other, resource-constrained organizations. So there is a precedent, Gupta noted.

This requires an organization that coordinates and steers these activities in a way that benefits the entire ecosystem. However, given that AI is seen as an area of strategic investment and competition among nations, it’s uncertain how that would work in practice.

Author: George Anadiotis
Source: VentureBeat
