Amazon operates in 14 countries around the world, nine of which are eligible for its Prime yearly subscription service. It goes without saying that the company wants to make its shopping experience available in as many languages as possible, particularly where customers who speak different languages are searching for the same products.
In pursuit of an efficient means of supporting product search in multiple languages, Amazon researchers devised a shopping model of a kind called a multitask model, in which functions overlap across tasks and tend to reinforce one another. They say that their AI, which was trained on data from several different languages at once, delivered better results in every one of those languages.
As Amazon applied scientist Nikhil Rao explained in a blog post, the reason for the improvement is that a corpus in one language can fill gaps in the corpus of another. For instance, phrases easily confused in French might not look much alike in German, so multilingual training could help the model sharpen the distinctions among similar product queries.
The team’s system maps queries about a product and descriptions of that product into the same region of a representational space, regardless of language, principally to help the model generalize what it learns in one language to other languages. For example, the searches “school shoes boys” and “scarpe ragazzo” (Italian for “boy’s shoes”) end up near each other in one region of the space, while the product names “Kickers Kick Lo Vel Kids’ School Shoes – Black” and “Kickers Kick Lo Infants Bambino Scarpe Nero” end up close together in a different region.
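To make the idea concrete, here is a toy illustration of proximity in a shared embedding space. The vectors below are invented for illustration (the trained model would produce real encodings of the two queries); only the cosine-similarity arithmetic is real.

```python
# Toy illustration of a shared representational space: in the trained
# model, the English query "school shoes boys" and the Italian query
# "scarpe ragazzo" would receive nearby encodings. The 4-dimensional
# vectors here are made up for demonstration.
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical encodings of the two queries.
english_query = np.array([0.9, 0.1, 0.3, 0.2])
italian_query = np.array([0.8, 0.2, 0.3, 0.1])

print(cosine_similarity(english_query, italian_query))  # ~0.99, i.e. "nearby"
```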
The system ingests two inputs, a query and a product title, and it outputs a single bit indicating whether the product matches the query or not. The encoder components tap Google’s Transformer architecture, which the researchers say scales better than alternative architectures, while the model’s classifier combines the query and product encodings.
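Amazon has not published the model’s code, but a minimal sketch of the architecture as described (a Transformer encoder applied to queries and titles, plus a classifier over the combined encodings) might look like the following in PyTorch. The vocabulary size, layer dimensions, mean pooling, and the choice to share one encoder between queries and titles are all assumptions, not details from Amazon’s post.

```python
# Minimal sketch of a query/product matcher: encode both inputs with a
# Transformer, combine the encodings, and emit a match probability.
import torch
import torch.nn as nn

class QueryProductMatcher(nn.Module):
    def __init__(self, vocab_size=30000, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        # One encoder shared by queries and product titles, so both are
        # mapped into the same representational space (an assumption).
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Classifier that combines the two encodings into one match score.
        self.classifier = nn.Sequential(
            nn.Linear(2 * d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))

    def encode(self, token_ids):
        # Mean-pool token encodings into a single fixed-size vector.
        return self.encoder(self.embed(token_ids)).mean(dim=1)

    def forward(self, query_ids, title_ids):
        q, t = self.encode(query_ids), self.encode(title_ids)
        return torch.sigmoid(self.classifier(torch.cat([q, t], dim=-1)))

# Usage with dummy token ids: the output is P(product matches query).
model = QueryProductMatcher()
query = torch.randint(0, 30000, (1, 6))   # e.g. tokens of "school shoes boys"
title = torch.randint(0, 30000, (1, 12))  # e.g. tokens of a product title
print(model(query, title))  # tensor of shape (1, 1), a value in (0, 1)
```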
The team trained the system by picking one of its input languages at random and “teaching” it to classify query-product pairs in just that language. Then, they trained it end to end over a series of epochs (complete passes through the data set) on annotated sample queries in each of its input languages. An alignment phase ensured that the outputs tailored to different languages shared a representational space by minimizing the distance between encodings of corresponding product titles and queries.
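One training step under that recipe might be sketched as below, reusing the hypothetical QueryProductMatcher above. The blog post does not specify the alignment objective or how the losses are weighted, so the mean-squared-distance term and the align_weight parameter here are assumptions.

```python
# Hedged sketch of one training step: classify query-product pairs in a
# randomly chosen language, then pull together encodings of queries/titles
# that refer to the same product in different languages.
import random
import torch
import torch.nn.functional as F

def train_step(model, optimizer, batches_by_language, alignment_pairs,
               align_weight=0.1):
    # Per-language phase: pick one input language at random and classify
    # query-product pairs in just that language. Each batch is a tuple of
    # (query token ids, title token ids, float labels in {0., 1.}).
    lang = random.choice(list(batches_by_language))
    query_ids, title_ids, labels = batches_by_language[lang]
    preds = model(query_ids, title_ids).squeeze(-1)
    match_loss = F.binary_cross_entropy(preds, labels)

    # Alignment phase: minimize the distance between encodings of text
    # that describes the same product in different languages.
    align_loss = 0.0
    for ids_a, ids_b in alignment_pairs:
        align_loss = align_loss + F.mse_loss(model.encode(ids_a),
                                             model.encode(ids_b))

    loss = match_loss + align_weight * align_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```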
Amazon says that in experiments involving 10 different bilingual models (five languages, each paired with each of the other four), 10 trilingual models, and one pentalingual model, they achieved “strong results” in as few as 15 or 20 epochs. According to F1 score, a common performance measure in AI that factors in both false-positive and false-negative rates, a multilingual model trained on both French and German outperformed a monolingual French model by 11% and a monolingual German model by 5%. Separately, a model trained on five languages (including French and German) outperformed the French model by 24% and the German model by 19%.
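For readers unfamiliar with the metric: F1 is the harmonic mean of precision and recall, which is why it penalizes both false positives and false negatives. A short worked example follows; the counts are invented for illustration, since Amazon did not publish raw confusion matrices.

```python
# F1 from a (hypothetical) confusion matrix of query-product judgments.
def f1(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)  # fraction of predicted matches that are right
    recall = true_pos / (true_pos + false_neg)     # fraction of real matches that are found
    return 2 * precision * recall / (precision + recall)

print(f1(80, 20, 20))  # precision = recall = 0.8, so F1 = 0.8
```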
“The results suggest that multilingual models should deliver more consistently satisfying shopping results to our customers,” said Rao. “In ongoing work, we are continuing to explore the power of multitask learning to improve our customers’ shopping experiences.”
Author: Kyle Wiggers
Source: VentureBeat