Amazon researchers use AI to improve Alexa’s joke selection

What do Google Assistant, Siri, Alexa, and Cortana have in common? They all tell jokes of varying cleverness, most of them the work of writing teams operating behind the scenes. The jokes are entertaining in their own right, and preliminary research suggests they also help make interactions with assistants more engaging.

Of course, there’s always room for improvement. In pursuit of assistants capable of tailoring jokes to individual users’ tastes, Amazon researchers investigated joke selection methods that tap either a basic natural language processing model or a machine learning model. They say that, when tested against production data, both approaches had a “positive” impact on user satisfaction and could improve Alexa’s joke-telling.
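
Neither model’s internals are described in the write-up, but the basic shape of model-driven joke selection is straightforward to sketch: score each candidate joke for the current user and tell the highest-scoring one. The snippet below is a minimal, hypothetical illustration of that idea; the function names, feature choices, and toy scoring rule are stand-ins for illustration, not Amazon’s implementation.

```python
from typing import Callable, Dict, Sequence

Joke = Dict[str, str]
UserContext = Dict[str, set]

def select_joke(candidates: Sequence[Joke],
                user: UserContext,
                score: Callable[[Joke, UserContext], float]) -> Joke:
    """Return the candidate joke the scoring model ranks highest for this user."""
    return max(candidates, key=lambda joke: score(joke, user))

# Toy stand-in for a trained model: favor categories the user has reacted well to before.
def toy_score(joke: Joke, user: UserContext) -> float:
    return 1.0 if joke["category"] in user.get("liked_categories", set()) else 0.1

jokes = [
    {"text": "Why did the robot go back to school? Its skills were getting rusty.", "category": "sci-fi"},
    {"text": "I'd tell you a baseball joke, but it might be a foul.", "category": "sports"},
]
user = {"liked_categories": {"sports"}}
print(select_joke(jokes, user, toy_score)["text"])  # picks the sports joke
```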

Training the models required an extensive labeled data set, which the team compiled by recording how a set of voice assistant users reacted to jokes. Two implicit feedback strategies were employed: the first labeled a joke “positive” (i.e., funny) if the user requested another joke within five minutes of hearing it, while the second marked a joke request as positive if it was followed by another joke request within 1 to 25 hours.
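
Those two timing rules are concrete enough to sketch in code. The snippet below is a hypothetical illustration of how such implicit labels could be derived from interaction timestamps; the function names, record format, and the “negative” default for users who never ask again are assumptions made for illustration, not details from Amazon’s paper.

```python
from datetime import datetime, timedelta
from typing import Optional

def label_within_session(joke_time: datetime, next_request_time: Optional[datetime]) -> str:
    """First strategy: positive if the user asks for another joke within five minutes."""
    if next_request_time is None:
        return "negative"
    return "positive" if next_request_time - joke_time <= timedelta(minutes=5) else "negative"

def label_return_visit(joke_time: datetime, next_request_time: Optional[datetime]) -> str:
    """Second strategy: positive if another joke request arrives 1 to 25 hours later."""
    if next_request_time is None:
        return "negative"
    gap = next_request_time - joke_time
    return "positive" if timedelta(hours=1) <= gap <= timedelta(hours=25) else "negative"

# A joke told at 9:00 with a follow-up request at 9:03 is positive under the
# first strategy but negative under the second.
told = datetime(2020, 1, 6, 9, 0)
followed_up = datetime(2020, 1, 6, 9, 3)
print(label_within_session(told, followed_up))  # positive
print(label_return_visit(told, followed_up))    # negative
```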

To compare the two labeling techniques, the team conducted an A/B test in a production setting, as well as an offline comparison on historical data using one selected labeling strategy. Each model was validated on a joke data set containing thousands of unique jokes spanning categories (e.g., sci-fi and sports) and types (puns, limericks, and more), along with data from approximately 80,000 English-speaking “customers” in total (presumably Alexa users, though the researchers don’t say so explicitly).

The results show that the proposed natural language processing model consistently outperformed a rules-based method under both labeling strategies. The researchers note that the machine learning model also performed well in terms of accuracy, but that its architecture, which is far larger than that of the natural language processing model, would make it difficult to extend to new countries and languages.

The researchers leave for future work the comparison of additional methods and the development of an approach that extends more easily to new languages.

A superior sense of humor could bolster assistant usage among those who haven’t climbed aboard the bandwagon. An estimate from eMarketer in August pegged the number of monthly users of voice assistants at roughly 112 million, up from 102 million in 2018, but a separate survey and report from PricewaterhouseCoopers found that poor understanding of what AI assistants are capable of doing and a general lack of trust could hamper the segment’s growth.


Author: Kyle Wiggers
Source: VentureBeat
