
DeepMind’s MEMO AI solves novel reasoning tasks with less compute

Can AI capture the essence of reasoning, that is, the appreciation of distant relationships among elements distributed across multiple facts or memories? Alphabet subsidiary DeepMind sought to find out in a study published on the preprint server arXiv.org, which proposes an architecture, MEMO, with the capacity to reason over long distances. The researchers say its two novel components enable it to solve novel reasoning tasks: the first introduces a separation between the facts and the memories stored in external memory, and the second employs a retrieval system that allows a variable number of “memory hops” before an answer is decided upon.
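
To make the first component concrete, here is a minimal PyTorch-style sketch of keeping facts separated in memory while learning projections used only at retrieval time. The class name, dimensions, and choice of linear layers are illustrative assumptions, not the paper’s exact implementation.

```python
import torch
import torch.nn as nn


class FactMemory(nn.Module):
    """Sketch of a MEMO-style memory: facts are stored as separate rows
    (rather than blended into a single state), and learned projections
    map each fact into the keys and values used when the memory is read.
    Names and dimensions here are illustrative, not the paper's."""

    def __init__(self, fact_dim: int, key_dim: int, value_dim: int):
        super().__init__()
        self.to_key = nn.Linear(fact_dim, key_dim)      # learned projection for addressing
        self.to_value = nn.Linear(fact_dim, value_dim)  # learned projection for content

    def forward(self, facts: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # facts: (num_facts, fact_dim); each row is one separately stored fact
        keys = self.to_key(facts)        # (num_facts, key_dim)
        values = self.to_value(facts)    # (num_facts, value_dim)
        scores = keys @ query            # (num_facts,) similarity of each fact to the query
        weights = torch.softmax(scores, dim=0)
        return weights @ values          # weighted readout over the separated facts
```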

“[The] flexible recombination of single experiences in novel ways to infer unobserved relationships is called inferential reasoning [and is supported by the hippocampus],” wrote the coauthors of the paper. “Interestingly, it has been shown that the hippocampus [stores] memories independently of each other through a process called pattern separation [to] minimize interference between experiences. A recent line of research sheds light on this … by showing that the integration of separated experiences emerges at the point of retrieval through a recurrent mechanism, [which] allows multiple pattern separated codes to interact and therefore support inference.”

DeepMind’s work, then, takes inspiration from this research to investigate and enhance inferential reasoning in machine learning models. Drawing on the neuroscience literature, the researchers devised a procedurally generated task called paired associative inference (PAI) that is meant to capture inferential reasoning by forcing AI systems to learn abstractions in order to solve previously unseen problems. They then designed MEMO, which, given an input query, outputs a sequence of potential answers, with a preference for representations that minimize the necessary computation.
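
As a rough illustration of what a PAI-style episode involves, the toy generator below builds a chain of associated items, stores only the adjacent pairs as separate “memories,” and asks for the indirect link that never appears in any single stored fact. The function name, sizes, and integer encoding are assumptions made for illustration.

```python
import random


def make_pai_episode(num_items: int = 3, vocab: int = 1000):
    """Toy generator in the spirit of paired associative inference (PAI):
    the model sees direct pairs (A, B) and (B, C) as separate memories
    and must answer the indirect query A -> C, an association that never
    appears together in any stored fact."""
    chain = random.sample(range(vocab), num_items)  # e.g. [A, B, C]
    memories = [(chain[i], chain[i + 1]) for i in range(num_items - 1)]
    random.shuffle(memories)                        # presentation order gives no hint
    query, answer = chain[0], chain[-1]             # A -> C requires linking across hops
    return memories, query, answer
```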

The researchers say MEMO retains a set of facts in memory and learns a projection, paired with a mechanism that enables greater flexibility in the use of those memories; it differs from typical AI models in that it adapts the amount of compute time to the complexity of the task. MEMO takes its cue here from REMERGE, a model of human associative memory in which the content retrieved from memory is recirculated as a new query, and the difference between the content retrieved at successive time steps is used to determine whether the model has settled into a fixed point. At each step, MEMO outputs an action indicating whether it wishes to continue computing and querying its memory or whether it is able to answer the given task.
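
A rough sketch of that adaptive loop follows. Here memory stands for any callable mapping a query to a readout, and halt_net plus the fixed threshold are stand-ins for the learned halting decision; the paper trains this policy (for example with reinforcement learning) rather than thresholding a score, so treat this purely as a sketch of the control flow.

```python
import torch


def answer_with_adaptive_hops(memory, query, halt_net, max_hops=10, threshold=0.5):
    """Sketch of MEMO-style adaptive computation: the readout is
    recirculated as the next query, and a small halting network decides
    after each hop whether to keep querying memory or to answer now.
    halt_net and the fixed threshold are illustrative assumptions."""
    state = query
    for hop in range(max_hops):
        retrieved = memory(state)                    # one read from external memory
        p_halt = torch.sigmoid(halt_net(retrieved))  # "can I answer yet?"
        if p_halt.item() > threshold:                # model chooses to stop and answer
            return retrieved, hop + 1
        state = retrieved                            # recirculate readout as the new query
    return state, max_hops                           # fall back after the hop budget
```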

In tests, the DeepMind researchers compared MEMO with two baseline models, as well as the current state-of-the-art model on Facebook AI Research’s bAbI suite (a set of 20 tasks for evaluating text understanding and reasoning). MEMO achieved the highest accuracy on the PAI task, and it was the only architecture that successfully answered the most complex inference queries on longer sequences. Furthermore, MEMO required only three “hops” to solve a task, compared with the best-performing baseline’s ten steps. And in another task that required the models to find the shortest path between two nodes in a graph, MEMO outperformed the baselines on more complex graphs by 20%.


Author: Kyle Wiggers.
Source: VentureBeat

