5 steps to creating a responsible AI Center of Excellence

To practice trustworthy or responsible AI (AI that is truly fair, explainable, accountable, and robust), a number of organizations are creating in-house centers of excellence. These are groups of trustworthy AI stewards from across the business who can understand, anticipate, and mitigate potential problems. The intent is not necessarily to create subject matter experts but rather a pool of ambassadors who act as point people.

Here, I’ll walk you through a set of best practices for establishing an effective Center of Excellence in your own organization. Any large company should have such a function in place.

1. Deliberately connect groundswells

To form a Center of Excellence, notice groundswells of interest in AI and AI ethics in your organization and bring them together into one space to share information. Consider creating a Slack channel or some other curated online community where the various cross-functional teams can share thoughts, ideas, and research on the subject. These groups could span geographies, disciplines, or both. For example, your organization may have a number of minority groups with a vested interest in AI and ethics who could share their viewpoints with the data scientists configuring tools to help mine for bias. Or perhaps you have a group of designers trying to infuse ethics into design thinking who could work directly with those in the organization who are vetting governance.

2. Flatten hierarchy

This group has more power and influence as a coalition of changemakers. There should be a rotating leadership model within an AI Center of Excellence; everyone’s ideas count — everyone is welcome to share and to co-lead. A rule of engagement is that everyone has each other’s back.

3. Source your force

Begin to source your AI ambassadors from this Center of Excellence — put out a call to arms. Your ambassadors will ultimately help identify tactics for operationalizing your trustworthy AI principles, including but not limited to:

A) Explaining to developers what an AI lifecycle is. The AI lifecycle includes a variety of roles, performed by people with different specialized skills and knowledge who collectively produce an AI service. Each role contributes in a unique way, using different tools. A key requirement for enabling AI governance is the ability to collect model facts throughout the AI lifecycle. This set of facts can be used to create a fact sheet for the model or service. (A fact sheet is a collection of relevant information about the creation and deployment of an AI model or service.) Facts could range from information about the purpose and criticality of the model, to measured characteristics of the dataset, model, or service, to actions taken during its creation and deployment. Consider, for example, a fact sheet for a text sentiment classifier (an AI model that determines which emotions are exhibited in text). Think of a fact sheet as the basis for what could be considered a “nutrition label” for AI. Much as you would pick up a box of cereal in a grocery store to check its sugar content, you might check which AI a loan provider uses to set your interest rate before choosing that provider.
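
To make this concrete, here is a minimal sketch of how a fact sheet might be represented in code. The schema and field names below are illustrative assumptions, not a standard; real fact sheets would be far richer and generated by lifecycle tooling rather than by hand:

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class FactSheet:
        """Hypothetical fact sheet: facts collected across the AI
        lifecycle for one model or service."""
        model_name: str
        purpose: str                 # what the model is for
        criticality: str             # e.g., "high" for credit decisions
        training_data: str           # provenance of the dataset
        dataset_facts: Dict[str, float] = field(default_factory=dict)  # measured dataset characteristics
        performance: Dict[str, float] = field(default_factory=dict)    # measured model characteristics
        fairness_checks: List[str] = field(default_factory=list)       # tests run and their outcomes
        deployment_notes: List[str] = field(default_factory=list)      # actions taken at deployment

    # Example: facts for a text sentiment classifier
    sheet = FactSheet(
        model_name="sentiment-classifier-v2",
        purpose="Classify customer reviews as positive/negative/neutral",
        criticality="medium",
        training_data="Internal review corpus, 2019-2020, English only",
        dataset_facts={"positive_ratio": 0.41},
        performance={"accuracy": 0.87},
        fairness_checks=["Checked label balance across product categories"],
        deployment_notes=["Human review required below 0.6 confidence"],
    )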

B) Introducing ethics into design thinking for data scientists, coders, and AI engineers. If your organization does not currently use design thinking, this is an important foundation to introduce, and these ethics exercises are critical to build into the design process. Questions to be answered in this exercise include:

  • How do we look beyond the primary purpose of our product to forecast its effects?
  • Are there any tertiary effects that are beneficial or should be prevented?
  • How does the product affect single users?
  • How does it affect communities or organizations?
  • What are tangible mechanisms to prevent negative outcomes?
  • How will we prioritize the preventative implementations (mechanisms) in our sprints or roadmap?
  • Can any of our implementations prevent other negative outcomes identified?

C) Teaching the importance of feedback loops and how to construct them.
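
As a minimal sketch of the idea (the function and log location here are hypothetical; a production system would need durable storage, review workflows, and privacy controls), a feedback loop can start by recording each prediction alongside any later human correction, so that disagreements can be audited and fed back into retraining:

    import csv
    from datetime import datetime, timezone

    FEEDBACK_LOG = "prediction_feedback.csv"  # hypothetical log location

    def log_feedback(input_text, prediction, user_correction=None):
        """Record a prediction and, when available, the human correction.
        Disagreements between the two are the raw material for retraining
        and for spotting systematic errors, including bias."""
        with open(FEEDBACK_LOG, "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.now(timezone.utc).isoformat(),
                input_text,
                prediction,
                user_correction or "",
            ])

    # The model predicts; a reviewer later disagrees. Both facts are kept.
    log_feedback("The service was fine, I guess.", "positive", user_correction="neutral")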

D) Advocating for dev teams to source separate “adversarial” teams to poke holes in assumptions made by coders, ultimately to determine unintended consequences of decisions (aka “Red Team vs Blue Team,” as described by Kathy Baxter of Salesforce).

E) Enforcing truly diverse and inclusive teams.

F) Teaching about cognitive and hidden biases and their very real effect on data.

G) Identifying, building, and collaborating with an AI ethics board.

H) Introducing tools and AI engineering practices to help the organization mine for bias in data and promote explainability, accountability, and robustness.
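
As one illustrative starting point for that last tactic (a sketch only, using plain pandas rather than any particular vendor toolkit), ambassadors could show teams how to measure disparate impact: the rate at which an unprivileged group receives favorable outcomes relative to a privileged group. Ratios well below 1.0 (a common rule of thumb flags anything under 0.8) warrant investigation:

    import pandas as pd

    def disparate_impact(df, group_col, outcome_col, privileged, unprivileged):
        """Ratio of favorable-outcome rates: unprivileged / privileged.
        A common rule of thumb flags values below 0.8 for review."""
        rate_unpriv = df.loc[df[group_col] == unprivileged, outcome_col].mean()
        rate_priv = df.loc[df[group_col] == privileged, outcome_col].mean()
        return rate_unpriv / rate_priv

    # Hypothetical loan-approval data: outcome 1 = approved, 0 = denied
    loans = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
        "approved": [ 1,   1,   0,   1,   0,   0,   0,   1 ],
    })
    print(disparate_impact(loans, "group", "approved",
                           privileged="A", unprivileged="B"))
    # ~0.6 here: group B is approved at 60% of group A's rate, worth investigating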

These AI ambassadors should be excellent, compelling storytellers who can help build the narrative as to why people should care about ethical AI practices.

4. Begin trustworthy AI training at scale

This should be a priority. Curate trustworthy AI learning modules for every individual in the workforce, customized in breadth and depth to various role archetypes. One good example I’ve heard of on this front is Alka Patel, head of AI ethics policy at the Joint Artificial Intelligence Center (JAIC). She has been leading an expansive program promoting AI and data literacy and, per a DoD blog post, has incorporated AI ethics training into both the JAIC’s DoD Workforce Education Strategy and a pilot education program for acquisition and product capability managers. Patel has also modified procurement processes to ensure they comply with responsible AI principles and has worked with acquisition partners on responsible AI strategy.

5. Work across uncommon stakeholders

Your AI ambassadors will work across silos to ensure that they bring new stakeholders to the table, including those whose work is dedicated to diversity and inclusivity, HR, data science, and legal counsel. These people may NOT be used to working together! How often are CDIOs invited to work alongside a team of data scientists? But that is exactly the goal here.

Granted, if you are a small shop, your force may be only a handful of people, but there are similar steps you can take to ensure you are a steward of trustworthy AI too. Ensuring that your team is as diverse and inclusive as possible is a great start. Have your design and dev teams incorporate best practices into their day-to-day activities. Publish governance that details what standards your company adheres to with respect to trustworthy AI.

By adopting these best practices, you can help your organization establish a collective mindset that recognizes ethics as an enabler, not an inhibitor. Ethics is not an extra step or hurdle to overcome when adopting and scaling AI; it is a mission-critical requirement for organizations. Along the way, you will also increase trustworthy-AI literacy across the organization.

As Francesca Rossi, IBM’s AI and ethics leader, stated, “Overall, only a multi-dimensional and multi-stakeholder approach can truly address AI bias by defining a values-driven approach, where values such as fairness, transparency, and trust are the center of creation and decision-making around AI.”

Phaedra Boinodiris, FRSA, is an executive consultant on the Trust in AI team at IBM and is currently pursuing her PhD in AI and ethics. She has focused on inclusion in technology since 1999. She is also a member of the Cognitive World Think Tank on enterprise AI.
