AI-powered assistants like Siri, Cortana, Alexa, and Google Assistant are pervasive. But to engage users and help them achieve their goals, these assistants need to exhibit appropriate social behavior and provide informative replies. Studies show that users respond better to social language: they become more responsive and likelier to complete tasks. Inspired by this, researchers affiliated with Uber and Carnegie Mellon developed a machine learning model that injects social language into an assistant's responses while preserving their integrity.
The researchers focused on the customer service domain, specifically a use case in which customer service personnel help drivers sign up with a ride-sharing provider like Uber or Lyft. They first conducted a study to suss out the relationship between customer service representatives' use of friendly language and drivers' responsiveness and completion of their first ride-sharing trip. They then developed a machine learning model for an assistant that includes both a social language understanding component and a language generation component.
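To make that first step concrete, here is a minimal sketch of the kind of correlational analysis described, relating a per-conversation politeness score to whether the driver completed a first trip. The column names, data, and choice of a point-biserial correlation are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: correlating politeness scores with trip completion.
# The data and column names are invented for illustration.
import pandas as pd
from scipy.stats import pointbiserialr

conversations = pd.DataFrame({
    "politeness_score": [0.82, 0.41, 0.67, 0.93, 0.35],  # e.g., classifier output in [0, 1]
    "completed_first_trip": [1, 0, 1, 1, 0],             # binary outcome per driver
})

# A point-biserial correlation relates a binary variable to a continuous one.
r, p = pointbiserialr(conversations["completed_first_trip"],
                      conversations["politeness_score"])
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")
```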
In their study, the researchers found that the "politeness level" of customer service representatives' messages correlated with driver responsiveness and completion of their first trip. Building on this, they trained their model on a dataset of more than 233,000 messages from drivers and corresponding responses from customer service representatives. The responses carried labels indicating how generally polite and positive they were, chiefly as judged by human evaluators.
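The article doesn't spell out the model architecture, but one common way to build a label-conditioned generator is to prepend the desired politeness level to the input before decoding. The sketch below uses an off-the-shelf T5 model from Hugging Face's transformers library purely as an illustration; the prompt format and model choice are assumptions, not the researchers' setup.

```python
# Illustrative sketch of label-conditioned response generation: the desired
# politeness level is prepended to the driver's message as a control prefix.
# The model, prompt format, and token budget are assumptions for this example.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def generate_reply(driver_message: str, politeness: str = "high") -> str:
    prompt = f"politeness: {politeness} | message: {driver_message}"
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=60)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(generate_reply("My account hasn't been activated yet."))
```

In a setup like this, the model would be fine-tuned on (label, driver message, agent response) triples so that the control prefix actually steers politeness at generation time.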
Post-training, the researchers used automated and human evaluation techniques to assess the politeness and positivity of their model's messages. They found it could vary the politeness of its responses while preserving their meaning, but that it was less successful at maintaining overall positivity. They attribute this to a potential mismatch between the concept they intended to measure and manipulate (language positivity) and what their measure actually captured (positive sentiment).
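Automated evaluation of "positivity" often boils down to running a sentiment classifier over the generated text, which is exactly the operationalization the researchers flag as a possible mismatch. Here is a hedged sketch of that kind of check using the default sentiment-analysis pipeline from transformers; this is an assumption for illustration, not the paper's evaluation code.

```python
# Illustrative only: sentiment scores as a rough proxy for "positivity".
# As the researchers note, positive sentiment may not capture the intended
# concept of language positivity.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English sentiment model

candidate_replies = [
    "Thanks so much for reaching out! We're happy to help you get on the road.",
    "Your documents are still pending review.",
]
for reply in candidate_replies:
    result = sentiment(reply)[0]
    print(f"{result['label']} ({result['score']:.2f}): {reply}")
```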
“A common explanation for the negative association of positivity with driver responsiveness in … and the lack of an effect of positivity enhancement on generated agent responses … might be a discrepancy between the concept of language positivity and its operationalization as positive sentiment,” the researchers wrote in a paper detailing their work. “[Despite this, we believe] the customer support services can be improved by utilizing the model to provide suggested replies to customer service representatives so that they can (1) respond quicker and (2) adhere to the best practices (e.g. using more polite and positive language) while still achieving the goal that the drivers and the ride-sharing providers share, i.e., getting drivers on the road.”
The work comes as Gartner predicts that by 2020, only 10% of customer-company interactions will be conducted via voice. According to the 2016 Aspect Consumer Experience Index, 71% of consumers want the ability to solve most customer service issues on their own, up seven points from the 2015 index. And according to the same report, 44% said they would prefer to use a chatbot rather than a human for all customer service interactions.
Author: Kyle Wiggers
Source: VentureBeat