Microsoft’s UniLM AI achieves state-of-the-art performance on summarization and language generation
October 16, 2019
Language model pretraining, a technique that “teaches” machine learning systems contextualized text representations by having them predict words based on their contexts, has advanced the state of the art across a range of natural language processing tasks. However, models like Google’s BERT, which are bidirectional in design (meaning they draw on both left-of-word and right-of-word context to make predictions), are not well suited to natural language generation tasks, where text must be produced one token at a time from left to right.
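To make the pretraining idea concrete, the snippet below is a minimal sketch of masked-word prediction with a bidirectional model. It assumes the Hugging Face transformers library and its pretrained bert-base-uncased checkpoint, neither of which is part of Microsoft’s UniLM release; it simply illustrates how a model fills in a word from the context on both sides.

    from transformers import pipeline

    # Load a bidirectional masked language model (illustrative checkpoint).
    fill = pipeline("fill-mask", model="bert-base-uncased")

    # The model predicts the masked word using context to its left AND right.
    predictions = fill("The capital of France is [MASK].")

    for p in predictions:
        # Each prediction carries the candidate token and its probability score.
        print(p["token_str"], round(p["score"], 3))

Because the model sees both sides of the blank, this setup works well for understanding tasks; generating text left to right, by contrast, only ever has left-side context available, which is the mismatch UniLM is designed to address.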