The open-source TensorFlow machine learning library is getting faster, thanks to a collaboration between Google and Intel.
The oneAPI Deep Neural Network Library (oneDNN), developed by Intel, is now enabled by default in TensorFlow, a project led by Google. oneDNN is an open-source, cross-platform performance library of deep learning building blocks, intended for developers of both deep learning applications and frameworks such as TensorFlow.
According to Intel, the promise of oneDNN for enterprises and data scientists is significant acceleration, up to 3x performance for AI operations, in TensorFlow, one of the most widely used open-source machine learning frameworks today.
“Intel has been collaborating with TensorFlow on oneDNN feature integration over the last several years,” AG Ramesh, principal engineer for AI Frameworks at Intel, told VentureBeat.
The oneDNN library first became available as an opt-in preview feature in TensorFlow 2.5, released in May 2021. After a year of testing and positive feedback from the community, Ramesh said, oneDNN was turned on by default in the recent TensorFlow 2.9 update.
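Per the TensorFlow release notes, the new default can be toggled with the `TF_ENABLE_ONEDNN_OPTS` environment variable, which must be set before TensorFlow is imported. A minimal sketch:

```python
import os

# oneDNN optimizations are on by default in TensorFlow 2.9 on Linux x86.
# To disable them (or to opt in on TensorFlow 2.5-2.8), set this variable
# before TensorFlow is imported:
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"  # "1" enables, "0" disables

# import tensorflow as tf  # must come after the variable is set
```

Because oneDNN can change floating-point rounding behavior slightly, disabling it this way is also useful when bit-exact reproducibility against older TensorFlow builds matters.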
The oneDNN library brings AI performance improvements for model execution
Ramesh explained that with oneDNN, data scientists will see improvements in model execution time.
The oneDNN improvements apply to all Linux x86 packages and to CPUs with neural-network-focused hardware features, such as those found on 2nd Gen Intel Xeon Scalable processors and newer CPUs. Intel calls this performance optimization “software AI acceleration” and says it can make a measurable impact in certain cases.
Ramesh added that business users and data scientists will be able to use lower-precision data types – int8 for inference and bfloat16 for both inference and training – to get additional performance benefits from AI accelerators such as Intel Deep Learning Boost.
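To illustrate the idea behind int8 inference, here is a conceptual sketch of symmetric post-training quantization in NumPy. This is an illustration of the general technique only, not Intel's or TensorFlow's implementation, and the function names are hypothetical:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8 with a single per-tensor scale.

    Hypothetical helper illustrating symmetric quantization; real
    frameworks use more sophisticated per-channel calibration.
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 1.27], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# Round-trip error is bounded by half the quantization step (scale / 2),
# which is the accuracy trade-off made in exchange for 4x smaller weights
# and faster integer arithmetic.
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-7
```

The performance win comes from the fact that int8 tensors are a quarter the size of float32 and can be processed with wide integer vector instructions, which is what hardware features like Intel Deep Learning Boost accelerate.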
Accelerating deep learning with oneDNN
According to Slintel, TensorFlow has a market share of 37%. Kaggle’s 2021 State of Data Science and Machine Learning survey pegged TensorFlow’s usage at 53%.
However, while TensorFlow is popular, the oneDNN library and Intel's broader approach to machine learning optimization aren't limited to TensorFlow. Ramesh said that Intel's software optimizations, delivered through oneDNN and other oneAPI libraries, yield measurable performance gains in several popular open-source deep learning frameworks – TensorFlow, PyTorch and Apache MXNet – as well as machine learning frameworks such as scikit-learn and XGBoost.
He added that most of these optimizations have already been upstreamed into the respective frameworks' default distributions.
Intel’s strategy for building out AI optimizations like oneDNN
The oneDNN library is part of a broad strategy at Intel to help enable AI for developers, data scientists, researchers, and data engineers.
Wei Li, vice president and general manager of AI and Analytics, told VentureBeat that Intel's goal is to make it as easy as possible for any type of user to accelerate their end-to-end AI journey, from the edge to the cloud, no matter what software they want to run on Intel hardware. Li said that an open ecosystem across software offerings helps enable innovation. He noted that Intel's work ranges from contributions at the language level with Python, to partnering on and optimizing industry frameworks like PyTorch and TensorFlow, to releasing Intel-developed tools that increase productivity, like the OpenVINO toolkit.

“Intel recently announced Project Apollo at Vision, which brings even more new open source AI reference offerings that will accelerate the adoption of AI everywhere across industries,” Li said.
Author: Sean Michael Kerner