Google achieves state-of-the-art NLP performance with an enormous language model and data set
October 24, 2019
Transfer learning, a technique in which an AI model is pretrained on a data-rich task before being fine-tuned on another task, has been successfully applied in domains from robotics to object classification. But it holds particular promise in the subfield of natural language processing (NLP), where it has given rise to a diversity of benchmark-besting approaches. To advance it further…
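The pattern the article describes, pretraining on a large dataset and then fine-tuning on a smaller target task, can be sketched in a few lines. The PyTorch toy below is purely illustrative: the `Encoder` module, layer sizes, synthetic data, and the decision to freeze pretrained weights are all assumptions for the sake of the example, not a description of Google's actual model or training setup.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared feature extractor reused across both tasks."""
    def __init__(self, in_dim=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())

    def forward(self, x):
        return self.net(x)

def train(model, x, y, loss_fn, steps=200, lr=1e-3):
    # Only optimize parameters that are not frozen.
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# 1) Pretraining: a large synthetic dataset for a 10-class "source" task.
encoder = Encoder()
pretrain_model = nn.Sequential(encoder, nn.Linear(64, 10))
x_big, y_big = torch.randn(5000, 32), torch.randint(0, 10, (5000,))
train(pretrain_model, x_big, y_big, nn.CrossEntropyLoss())

# 2) Fine-tuning: a small dataset for a 3-class "target" task.
#    The pretrained encoder is reused; only a fresh head is trained.
for p in encoder.parameters():
    p.requires_grad = False  # optionally freeze the pretrained weights
finetune_model = nn.Sequential(encoder, nn.Linear(64, 3))
x_small, y_small = torch.randn(200, 32), torch.randint(0, 3, (200,))
train(finetune_model, x_small, y_small, nn.CrossEntropyLoss())
```

In practice the encoder would be a large pretrained language model and the target task a downstream NLP benchmark, but the reuse-then-fine-tune structure is the same.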