
Google’s Neural Tangents library gives ‘unprecedented’ insights into AI models’ behavior

Google today made available Neural Tangents, an open source software library written in JAX, a system for high-performance machine learning research. It’s intended to help researchers build and train AI models of varying width, including infinitely wide networks, simultaneously, which Google says could allow “unprecedented” insight into the models’ behavior and “help … open the black box” of machine learning.

As Google senior research scientist Samuel S. Schoenholz and research engineer Roman Novak explain in a blog post, one of the key insights enabling recent progress in AI research is that increasing the width of models results in more regular behavior and makes them easier to understand. By way of refresher, all neural networks contain neurons (mathematical functions) arranged in interconnected layers that transmit signals from input data; during training, the synaptic strength (weights) of each connection is gradually adjusted. That’s how networks extract features and learn to make predictions.
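To make “width” concrete, here is a minimal sketch of a finite-width, fully connected network written with Neural Tangents’ stax API, where the hidden width is an explicit knob. The architecture, the width of 512, and the hyperparameters are illustrative assumptions rather than details from the article, and layer signatures may differ across library versions:

```python
import jax
from neural_tangents import stax

# Width is an explicit knob; nt.stax mirrors JAX's stax API, but each
# layer additionally tracks an infinite-width kernel (used later).
width = 512
init_fn, apply_fn, _ = stax.serial(
    stax.Dense(width, W_std=1.5, b_std=0.05),
    stax.Relu(),
    stax.Dense(1, W_std=1.5, b_std=0.05),
)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (8, 16))   # 8 toy inputs of dimension 16
_, params = init_fn(key, x.shape)     # sample finite-width weights
preds = apply_fn(params, x)           # ordinary forward pass: (8, 1)
```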

Machine learning models that are allowed to become infinitely wide tend to converge to another, simpler class of models called Gaussian processes. In this limit, complicated phenomena boil down to simple linear algebra equations, which can be used as a lens to study AI. But deriving the infinite-width limit of a finite model requires mathematical expertise and has to be worked out separately for each architecture. And once the infinite-width model is derived, coming up with an efficient and scalable implementation requires engineering proficiency, which could take months.
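As a sketch of that simplification: Neural Tangents derives the infinite-width (“NNGP”) kernel of an architecture automatically, after which prediction is ordinary Gaussian-process regression, a single linear solve. The toy data, architecture, and regularizer below are invented for illustration:

```python
import jax
import jax.numpy as jnp
from neural_tangents import stax

# kernel_fn is the exact infinite-width kernel of this architecture.
_, _, kernel_fn = stax.serial(
    stax.Dense(512, W_std=1.5, b_std=0.05), stax.Relu(),
    stax.Dense(1, W_std=1.5, b_std=0.05),
)

x_train = jax.random.normal(jax.random.PRNGKey(1), (20, 16))
y_train = jnp.sin(x_train.sum(axis=1, keepdims=True))  # toy targets
x_test = jax.random.normal(jax.random.PRNGKey(2), (5, 16))

# In the infinite-width limit, inference reduces to linear algebra.
k_tt = kernel_fn(x_train, x_train, 'nngp')   # (20, 20) train-train
k_st = kernel_fn(x_test, x_train, 'nngp')    # (5, 20) test-train
reg = 1e-4 * jnp.eye(k_tt.shape[0])          # small diagonal regularizer
mean = k_st @ jnp.linalg.solve(k_tt + reg, y_train)  # posterior mean
```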


Above: A schematic showing how deep neural networks induce simple input/output maps as they become infinitely wide. | Image Credit: Google

That’s where Neural Tangents comes in — it lets data scientists construct and train ensembles of infinite-width networks at once using only five lines of code. The models constructed can be applied to any problem to which a regular model could be applied, according to Schoenholz and Novak.
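The sketch below shows roughly what that workflow looks like: a few lines of stax to define the architecture, then one call to the library’s predict module to “train” the entire infinite-width ensemble in closed form. The data and hyperparameters are made up, and the exact predict-function signature varies across library versions:

```python
import jax
import jax.numpy as jnp
import neural_tangents as nt
from neural_tangents import stax

# Architecture: the returned kernel_fn describes the infinite ensemble.
_, _, kernel_fn = stax.serial(
    stax.Dense(512, W_std=1.5, b_std=0.05), stax.Relu(),
    stax.Dense(1, W_std=1.5, b_std=0.05),
)

x_train = jax.random.normal(jax.random.PRNGKey(1), (20, 16))
y_train = jnp.sin(x_train.sum(axis=1, keepdims=True))
x_test = jax.random.normal(jax.random.PRNGKey(2), (5, 16))

# Closed-form gradient-descent training of the whole ensemble at once:
# no optimizer loop, just kernel computations.
predict_fn = nt.predict.gradient_descent_mse_ensemble(
    kernel_fn, x_train, y_train, diag_reg=1e-4)
mean, cov = predict_fn(x_test=x_test, get='ntk', compute_cov=True)
```

The returned mean and covariance describe the distribution of predictions across the whole ensemble of networks, not the output of any single trained model.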

“We see that, mimicking finite neural networks, infinite-width networks follow a similar hierarchy of performance with fully-connected networks performing worse than convolutional networks, which in turn perform worse than wide residual networks,” wrote the researchers. “However, unlike regular training, the learning dynamics of these models is completely tractable in closed-form, which allows [new] insight into their behavior.”
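Continuing the hypothetical sketch above, tractable closed-form dynamics mean that predictions at any point during training can be evaluated analytically, with no gradient steps actually simulated:

```python
# Continuing from the predict_fn defined above. t is continuous
# training time under gradient descent; t=None means fully trained.
mean_early = predict_fn(t=10.0, x_test=x_test, get='ntk')
mean_final = predict_fn(x_test=x_test, get='ntk')
```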

Neural Tangents is available on GitHub with an accompanying tutorial and Google Colaboratory notebook.

Notably, the release of Neural Tangents, which comes the same week as the TensorFlow Dev Summit, an annual meeting of practitioners of Google’s TensorFlow platform held at Google offices in Silicon Valley, follows on the heels of TensorFlow Quantum, an AI framework for training quantum models. That framework can construct quantum datasets, prototype hybrid quantum-classical machine learning models, support quantum circuit simulators, and train discriminative and generative quantum models.


Author: Kyle Wiggers.
Source: VentureBeat
