Maker of the popular PyTorch-Transformers model library, Hugging Face today said it's bringing its NLP library to the TensorFlow machine learning framework. The PyTorch version of the library has seen more than 500,000 pip installs since the beginning of the year, Hugging Face CEO Clément Delangue told VentureBeat.
The Transformers library for TensorFlow brings together the most advanced Transformer-based AI models, like Google's BERT and XLNet, Facebook's RoBERTa, and OpenAI's GPT and GPT-2. It also includes Hugging Face's DistilBERT.
Several of these models have exceeded human baseline performance on the GLUE benchmark and, at various points, topped its leaderboard.
In addition to bringing together top-performing AI models, the Transformers library provides an abstraction layer over each model, sparing developers the time needed to integrate a model into their products, or to rebuild that integration when a popular new model emerges.
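That shared interface means every model is loaded and called the same way, so swapping one architecture for another is typically a one-line change. Here is a minimal sketch using the library's AutoModel and AutoTokenizer classes; the model name and sentence are illustrative, and output formats vary slightly across library versions.

import torch
from transformers import AutoModel, AutoTokenizer

# The same loading pattern works for BERT, RoBERTa, GPT-2, XLNet, DistilBERT, ...
model_name = "bert-base-uncased"  # illustrative; swap in "roberta-base", "gpt2", etc.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode a sentence and run it through the model to get contextual embeddings.
input_ids = tokenizer.encode("Hugging Face brings Transformers to TensorFlow.",
                             return_tensors="pt")
with torch.no_grad():
    outputs = model(input_ids)
hidden_states = outputs[0]  # shape: (batch, sequence_length, hidden_size)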
Based in New York, Hugging Face began as a chatbot company, but its main focus today is on the Transformers library and helping developers integrate NLP into their own products or devices.
“If at the beginning you decide to use TensorFlow or PyTorch, you can actually move from one to another and train the same tasks based on what’s best for one part or another,” Delangue said in a phone interview. “We realized that to work on a lot of different tasks efficiently, you basically needed a layer of abstraction on top of the model, and the layer of abstraction on top of the different NLP tasks … this is why we built Transformers, our library.”
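In practice, the framework interchange Delangue describes looks roughly like the sketch below: the TensorFlow model classes mirror the PyTorch ones, and the from_pt flag lets weights saved from PyTorch be reloaded in TensorFlow. The checkpoint name and save path are illustrative, not taken from Hugging Face's examples.

from transformers import AutoModel, TFAutoModel

# Load the same pretrained checkpoint in either framework.
pt_model = AutoModel.from_pretrained("bert-base-uncased")    # PyTorch
tf_model = TFAutoModel.from_pretrained("bert-base-uncased")  # TensorFlow 2.0

# Fine-tune in PyTorch, save, then pick the weights back up in TensorFlow;
# from_pt=True converts the saved PyTorch weights on load.
pt_model.save_pretrained("./my-finetuned-model")  # hypothetical path
tf_model = TFAutoModel.from_pretrained("./my-finetuned-model", from_pt=True)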
PyTorch-Transformers is currently used for NLP tasks by more than 1,000 companies, including Microsoft's Bing, Apple, and Stitch Fix. Hugging Face also offers a web app for trying out the Transformers library with models like XLNet and GPT.
Author: Khari Johnson
Source: VentureBeat