
AI Weekly: Here’s how enterprises say they’re deploying AI responsibly

Implementing AI responsibly means different things to different companies. For some, “responsible” implies adopting AI in a manner that’s ethical, transparent, and accountable. For others, it means ensuring that their use of AI remains consistent with laws, regulations, norms, customer expectations, and organizational values. In any case, “responsible AI” promises to guard against the use of biased data or algorithms, providing an assurance that automated decisions are justified and explainable — at least in theory.

But some evidence suggests that organizations are implementing AI less responsibly than they internally believe. According to a recent Boston Consulting Group survey of 1,000 enterprises, fewer than half of those that have achieved AI at scale have fully mature, responsible AI implementations.

The lagging adoption of responsible AI belies the value that these practices can bring to bear. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them — and in turn, punish those that don’t. The study suggests that there’s both reputational risk and a direct impact on the bottom line for companies that don’t approach the issue thoughtfully.

To get a sense of the extent to which brands are thinking about — and practicing — the tenets of responsible AI, VentureBeat surveyed executives at companies that claim to be using AI in a tangible capacity. Their responses reveal that a single definition of “responsible AI” remains elusive. At the same time, they show an awareness of the consequences of opting not to deploy AI thoughtfully.

Companies in enterprise automation

ServiceNow was the only company VentureBeat surveyed to admit that there’s no clear definition of what constitutes responsible AI usage. “Every company really needs to be thinking about how to implement AI and machine learning responsibly,” ServiceNow chief innovation officer Dave Wright told VentureBeat. “[But] every company has to define it for themselves, which unfortunately means there’s a lot of potential for harm to occur.”

According to Wright, ServiceNow’s responsible AI approach encompasses the three pillars of diversity, transparency, and privacy. When building an AI product, the company brings in a variety of perspectives and has them agree on what counts as fair, ethical, and responsible before development begins. ServiceNow also ensures that its algorithms remain explainable in the sense that it’s clear why they arrive at their predictions. Lastly, the company says it limits and obscures the amount of personally identifiable information it collects to train its algorithms. Toward this end, ServiceNow is investigating “synthetic AI” that could allow developers to train algorithms without handling real data and the sensitive information it contains.
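To make the idea concrete, here is a minimal sketch of how training on synthetic data can work in principle: aggregate statistics are fitted to sensitive records, synthetic rows are sampled from those statistics, and only the synthetic rows reach the model. The column semantics, model, and thresholds below are hypothetical, and this is a generic illustration rather than ServiceNow’s implementation.

```python
# Minimal sketch: train on synthetic records instead of real ones.
# Generic illustration of the "train without handling real data" idea;
# column meanings, labels, and model are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend "real" data that would normally contain sensitive fields.
real_X = rng.normal(loc=[40, 60_000], scale=[10, 15_000], size=(1_000, 2))
real_y = (real_X[:, 1] > 60_000).astype(int)

# Fit only aggregate statistics, then sample synthetic records from them,
# so no individual real row is ever handed to the training pipeline.
mean, cov = real_X.mean(axis=0), np.cov(real_X, rowvar=False)
synth_X = rng.multivariate_normal(mean, cov, size=1_000)
synth_y = (synth_X[:, 1] > 60_000).astype(int)

model = LogisticRegression().fit(synth_X, synth_y)
print("accuracy on held-out real data:", model.score(real_X, real_y))
```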

“At the end of the day, responsible AI usage is something that only happens when we pay close attention to how AI is used at all levels of our organization. It has to be an executive-level priority,” Wright said.

Automation Anywhere says it established AI and bot ethical principles to provide guidelines to its employees, customers, and partners. They include monitoring the results of any process automated using AI or machine learning to prevent outputs that reflect racial, gender, or other biases.

“New technologies are a two-edged sword. While they can free humans to realize their potential in entirely new ways, sometimes these technologies can also, unfortunately, entrap humans in bad behavior and otherwise lead to negative outcomes,” Automation Anywhere CTO Prince Kohli told VentureBeat via email. “[W]e have made the responsible use of AI and machine learning one of our top priorities since our founding, and have implemented a variety of initiatives to achieve this.”

Beyond the principles, Automation Anywhere created an AI committee charged with challenging employees to consider ethics in their internal and external actions. For example, engineers must seek to address the threat of job loss raised by AI and machine learning technologies and the concerns of customers from an “all-inclusive” range of different minority groups. The committee also reevaluates Automation Anywhere’s principles on a regular basis so that they evolve with emerging AI technologies.

Splunk SVP and CTO Tim Tully, who anticipates the industry will see a renewed focus on transparent AI practices over the next two years, says that Splunk’s approach to putting “responsible AI” into practice is fourfold. First, the company makes sure that the algorithms it’s developing and operating are in alignment with governance policies. Then, Splunk prioritizes talent to work with its AI and machine learning algorithms to “[drive] continual improvement.” Splunk also takes steps to bake security into its R&D processes while keeping “honesty, transparency, and fairness” top of mind throughout the building lifecycle.

“In the next few years, we’ll see newfound industry focus on transparent AI practices and principles — from more standardized ethical frameworks, to additional ethics training mandates, to more proactively considering the societal implications of our algorithms — as AI and machine learning algorithms increasingly weave themselves into our daily lives,” Tully said. “AI and machine learning was a hot topic before 2020 disrupted everything, and over the course of the pandemic, adoption has only increased.”

Companies in hiring and recruitment

LinkedIn says that it doesn’t look at bias in algorithms in isolation but rather identifies which biases cause harm to users and works to eliminate them. Two years ago, the company launched an initiative called Project Every Member to take a more rigorous approach to reducing and eliminating unintended consequences in the services it builds. By using inequality A/B testing throughout the product design process, LinkedIn says it aims to build trustworthy, robust AI systems and datasets with integrity that comply with laws and “benefit society.”

For example, LinkedIn says it uses differential privacy in its LinkedIn Salary product to allow members to gain insights from others without compromising information. And the company claims its Smart Replies product, which taps machine learning to suggest responses to conversations, was built to prioritize member privacy and avoid gender-specific replies.
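Differential privacy is typically implemented by adding calibrated noise to aggregate statistics before they are released. The sketch below uses the textbook Laplace mechanism on an average salary; the epsilon value and salary bounds are hypothetical, and this is a generic illustration rather than LinkedIn’s production system.

```python
# Minimal sketch of the Laplace mechanism for releasing an aggregate
# (e.g., an average salary) with differential privacy.
import numpy as np

def dp_mean(values, lower, upper, epsilon, rng=None):
    """Return a differentially private mean of `values` clipped to [lower, upper]."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    true_mean = clipped.mean()
    # Sensitivity of the mean of n bounded values is (upper - lower) / n.
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

salaries = [95_000, 120_000, 87_000, 150_000, 110_000]
print(dp_mean(salaries, lower=30_000, upper=300_000, epsilon=1.0))
```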

“Responsible AI is very hard to do without company-wide alignment. ‘Members first’ is a core company value, and it is a guiding principle in our design process,” a spokesperson told VentureBeat via email. “We can positively influence the career decisions of more than 744 million people around the world.”

Mailchimp, which uses AI to, among other things, provide personalized product recommendations for shoppers, tells VentureBeat that it trains each of its data scientists in the fields that they’re modeling. (For example, data scientists at the company working on products related to marketing receive training in marketing.) However, Mailchimp also admits that its systems are trained on data gathered by human-powered processes that can lead to a number of quality-related problems, including errors in the data, data drift, and bias.
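One common guard against the drift problem Mailchimp describes is to compare the distribution of a feature in the training data against fresh production data and flag significant shifts. The sketch below does this with a two-sample Kolmogorov-Smirnov test; the feature, threshold, and data are hypothetical, and this is not Mailchimp’s actual pipeline.

```python
# Minimal sketch of a data-drift check: compare a feature's training-time
# distribution against what production is sending now.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # data the model was trained on
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)   # fresh production data

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic {stat:.3f}); retrain or investigate the data source.")
else:
    print("No significant drift detected.")
```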

“Using AI responsibly takes a lot of work. It takes planning and effort to gather enough data, to validate that data, and to train your data scientists,” Mailchimp chief data science officer David Dewey told VentureBeat. “And it takes diligence and foresight to understand the cost of failure and adapt accordingly.”

For its part, Zendesk says it places an emphasis on a diversity of perspectives where its AI adoption is concerned. The company claims that, broadly, its data scientists examine processes to ensure that its software is beneficial and unbiased, follows strong ethical principles, and secures the data that makes its AI work. “As we continue to leverage AI and machine learning for efficiency and productivity, Zendesk remains committed to continuously examining our processes to ensure transparency, accountability and ethical alignment in our use of these exciting and game-changing technologies, particularly in the world of customer experience,” Zendesk president of products Adrian McDermott told VentureBeat.

Companies in marketing and management

Adobe EVP, general counsel, and corporate secretary Dana Rao points to the company’s ethics principles as an example of its commitment to responsible AI. Last year, Adobe launched an AI ethics committee and review board to help guide its product development teams and review new AI-powered features and products prior to release. At the product development stage, Adobe says its engineers use an AI impact assessment tool created by the committee to capture the potential ethical impact of any AI feature and avoid perpetuating biases.

“The continued advancement of AI puts greater accountability on us to address bias, test for potential misuse, and inform our community about how AI is used,” Rao said. “As the world evolves, it is no longer sufficient to deliver the world’s best technology for creating digital experiences; we want our technology to be used for the good of our customers and society.”

Among the first AI-powered features the committee reviewed was Neural Filters in Adobe Photoshop, which lets users add non-destructive, generative filters to create things that weren’t previously in images (e.g., facial expressions and hairstyles). In accordance with its principles, Adobe added an option within Photoshop to report whether Neural Filters output a biased result. Adobe monitors this data to identify undesirable outcomes, and its product teams address them by updating the AI model in the cloud.

Adobe says that while evaluating Neural Filters, one review board member flagged that the AI didn’t properly model the hairstyle of a particular ethnic group. Based on this feedback, the company’s engineering teams updated the AI dataset before Neural Filters was released.

“This constant feedback loop with our user community helps further mitigate bias and uphold our values as a company — something that the review board helped implement,” Rao said. “Today, we continue to scale this review process for all of the new AI-powered features being generated across our products.”

As for Hootsuite CTO Ryan Donovan, he believes that responsible AI ultimately begins and ends with transparency. Brands should demonstrate where and how they’re using AI — an ideal that Hootsuite strives to achieve, he says.

“As a consumer, for instance, I fully appreciate the implementation of bots to respond to high-level customer service inquiries. However, I hate when brands or organizations masquerade those bots as human, either through a lack of transparent labelling or assigning them human monikers,” Donovan told VentureBeat via email. “At Hootsuite, where we do use AI within our product, we have consciously endeavored to label it distinctly — suggested times to post, suggested replies, and schedule for me being the most obvious.”

ADP SVP of product development Jack Berkowitz says that responsible AI at ADP starts with the ethical use of data. In this context, “ethical use of data” means looking carefully at what the goal of an AI system is and the right way to achieve it.

“When AI is baked into technology, it comes with inherently heightened concerns, because it means an absence of direct human involvement in producing results,” Berkowitz said. “But a computer only considers the information you give it and only the questions you ask, and that’s why we believe human oversight is key.”

ADP retains an AI and data ethics board of experts in tech, privacy, law, and auditing that works with teams across the company to evaluate the way they use data. It also provides guidance to teams developing new uses and follows up to ensure the outputs are desirable. The board reviews ideas and evaluates potential uses to determine whether data is used fairly and in compliance with legal requirements and ADP’s own standards. If an idea falls short of meeting transparency, fairness, accuracy, privacy, and accountability requirements, it doesn’t move forward within the company, Berkowitz says.

Marketing platform HubSpot similarly says its AI projects undergo a peer review for ethical considerations and bias. According to senior machine learning engineer Sadhbh Stapleton Doyle, the company uses proxy data and external datasets to “stress test” its models for fairness. In addition to model cards, HubSpot also maintains a knowledge base of ways to detect and mitigate bias.
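A fairness stress test of the kind HubSpot describes often boils down to comparing a model’s positive-prediction rates across groups in a labeled proxy dataset. The sketch below computes a simple demographic parity gap; the group labels and the 10-point threshold are hypothetical, and this is a generic illustration rather than HubSpot’s actual review process.

```python
# Minimal sketch of a fairness "stress test": flag models whose
# positive-prediction rates diverge too much between groups.
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap, rates = demographic_parity_gap(preds, groups)
print(rates)
if gap > 0.1:  # hypothetical threshold: selection rates differ by more than 10 points
    print(f"Fairness check failed: gap of {gap:.2f} between groups.")
```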

The road ahead

A number of companies declined to tell VentureBeat how they’re deploying AI responsibly in their organizations, highlighting one of the major challenges in the field: transparency. A spokesperson for UiPath said that the robotic process automation startup “wouldn’t be able to weigh in” on responsible AI. Zoom, which recently faced allegations that its face-detection algorithm erased Black faces when applying virtual backgrounds, chose not to comment. And Intuit told VentureBeat that it had nothing to share on the topic.

Of course, transparency isn’t the be-all and end-all when it comes to responsible AI. For example, Google, which loudly trumpets its responsible AI practices, was recently the subject of a boycott by AI researchers over the company’s firing of Timnit Gebru and Margaret Mitchell, coleaders of a team working to make AI systems more ethical. Facebook also purports to be implementing AI responsibly, but to date, the company has failed to present evidence that its algorithms don’t encourage polarization on its platforms.

Returning to the Boston Consulting Group survey, Steven Mills, chief ethics officer and a coauthor, noted that the depth and breadth of most responsible AI efforts fall behind what’s needed to truly ensure responsible AI. Organizations’ responsible AI programs typically neglect the dimensions of fairness and equity, social and environmental impact, and human-AI cooperation because they’re difficult to address.

Greater oversight is a potential remedy. Companies like Google, Amazon, IBM, and Microsoft; entrepreneurs like Sam Altman; and even the Vatican recognize this — they’ve called for clarity around certain forms of AI, like facial recognition. Some governing bodies have begun to take steps in the right direction, like the EU, which earlier this year floated rules focused on transparency and oversight. But it’s clear from developments over the past months that much work remains to be done.

As Salesforce principal architect of ethical AI practice Kathy Baxter told VentureBeat in a recent interview, AI can result in harmful, unintended consequences if algorithms aren’t trained and designed inclusively. Technology alone can’t solve systemic health and social inequities, she asserts. In order to be effective, technology must be built and used responsibly — because no matter how good a tool is, people won’t use it unless they trust it.

“Ultimately, I believe the benefits of AI should be accessible to everyone, but it is not enough to deliver only the technological capabilities of AI,” Baxter said. “Responsible AI is technology developed inclusively, with a consideration towards specific design principles to mitigate, as much as possible, unforeseen consequences of deployment — and it’s our responsibility to ensure that AI is safe and inclusive. At the end of the day, technology alone cannot solve systemic health and social inequities.”

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

AI Staff Writer

VentureBeat



