Human interpreters make choices unique to them, consciously or unconsciously, when translating one language into another. They might explicate, normalize, or condense and summarize, creating fingerprints known informally as “translationese.” In machine learning, generating accurate translations has so far been the main objective, but this may come at the expense of translation richness and diversity.
In a new study, researchers at Tilburg University and the University of Maryland attempt to quantify the lexical and grammatical diversity of “machine translationese” — i.e., the fingerprints made by AI translation algorithms. They claim to have found a “quantitatively measurable” difference between the linguistic richness of machine translation systems’ training data and their translations, which could be a product of statistical bias.
The researchers looked at a range of machine learning model architectures, including Transformer and long short-term memory (LSTM) neural machine translation models as well as phrase-based statistical machine translation. In experiments, they tasked each with translating between English, French, and Spanish and compared the original text with the translations using nine different metrics.
The researchers report that in experiments, the original training data — a collection of reference translations — always had a higher lexical diversity than the machine translations regardless of the type of model used. In other words, the reference translations were consistently more diverse in terms of vocabulary and synonym usage than the translations from the models.
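To give a concrete sense of what “lexical diversity” means here, one common, simple measure is the type-token ratio: the number of unique words divided by the total number of words. The sketch below is only an illustration of that general idea — it uses naive whitespace tokenization and made-up example sentences, not the paper’s specific metrics or data.

```python
# Minimal sketch of one common lexical-diversity measure, the type-token
# ratio (TTR): unique words divided by total words. Illustrative only;
# the study itself uses nine different metrics.

def type_token_ratio(text: str) -> float:
    """Return the ratio of unique tokens to total tokens in `text`."""
    tokens = text.lower().split()  # naive whitespace tokenization (assumption)
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# Hypothetical example sentences: a more varied "reference" and a more
# repetitive "machine" output. Repetitive output scores a lower TTR.
reference = "the quick brown fox leaps over a lazy dog as it bounds along"
machine = "the quick brown fox jumps over the lazy dog the quick brown fox"

print(f"reference TTR: {type_token_ratio(reference):.2f}")
print(f"machine TTR:   {type_token_ratio(machine):.2f}")
```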
The coauthors point out that while the loss of lexical diversity could be a desirable side effect of machine translation systems (in terms of simplification or consistency), the loss of morphological richness is problematic because it can prevent systems from making grammatically correct choices. Bias can emerge, too: the systems had a stronger negative impact on diversity and richness for morphologically richer languages like Spanish and French.
“As [machine translation] systems have reached a quality that is (arguably) close to that of human translations and as such are being used widely on a daily basis, we believe it is time to look into the potential effects of [machine translation] algorithms on language itself,” the researchers wrote in a paper describing their work. “All [of our] metrics indicate that the original training data has more lexical and morphological diversity compared to translations produced by the [machine translation] systems … If machine translationese (and other types of ‘NLPese’) is a simplified version of the training data, what does that imply from a sociolinguistic perspective and how could this affect language on a longer term?”
The coauthors propose no solutions to the machine translation problems they claim to have uncovered. However, they believe their metrics could drive future research on the subject.
Author: Kyle Wiggers
Source: VentureBeat