Researchers at the Allen Institute for AI today launched AllenAct, a platform intended to promote reproducible research in embodied AI with a focus on modularity and flexibility. AllenAct, which is available in beta, supports multiple training environments and algorithms with tutorials, pretrained models, and out-of-the-box real-time visualizations.
Embodied AI, the AI subdomain concerning systems that learn to complete tasks through environmental interactions, has experienced substantial growth. That’s thanks in part to the advent of techniques like deep reinforcement learning and innovations in computer vision, natural language processing, and robotics. The Allen Institute argues that this growth has been mostly beneficial, but it takes issue with the fragmented nature of embodied AI development tools, which it says discourages good science.
In a recent analysis, the Allen Institute found that the number of embodied AI papers now exceeds 160 (up from around 20 in 2018 and 60 in 2019) and that the environments, tasks, modalities, and algorithms they use vary widely. For instance, 10% of papers use six modalities while 60% test against just one, and 10% of papers address four benchmark tasks while 20% cover only two.
“Just as we now expect neural architectures to be evaluated across multiple data sets, we must also start evaluating embodied AI methods across tasks and data sets … It is crucial to understand what components of systems matter most and which do not matter at all,” Allen Institute researchers wrote in a blog post today. “But getting up to speed with embodied AI algorithms takes significantly longer than ramping up to classical tasks … And embodied AI is expensive [because] today’s state-of-the-art reinforcement learning methods are sample-inefficient and training competitive models for embodied tasks can cost tens of thousands of dollars.”
AllenAct aims to address challenges around replicating embodied AI results, ramp-up time, and training costs by decoupling tasks from environments and supporting algorithms that chain together sequences of training routines. It ships with detailed startup guides, along with code and pretrained models for a number of standard embodied AI tasks, and it supports environments including the Allen Institute’s own iTHOR and RoboTHOR as well as so-called grid-worlds like MiniGrid. AllenAct’s real-time visualizations integrate with TensorBoard, the visualization toolkit that accompanies Google’s TensorFlow machine learning framework. And the Allen Institute claims AllenAct is one of the few reinforcement learning frameworks to target Facebook’s PyTorch.
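To make the decoupling idea concrete, here is a minimal Python sketch of the design principle; the class names (GridEnvironment, Task, NavigateToCorner) are hypothetical illustrations, not AllenAct’s actual API. The environment owns only the simulator state, while each task layers a goal, reward, and termination condition on top of it, and episode metrics stream to TensorBoard through PyTorch’s SummaryWriter:

```python
# Illustrative sketch only: these classes are NOT AllenAct's API. They show
# the principle the article describes: the environment holds simulator state,
# while each Task supplies goal, reward, and termination on top of it.
import random
from abc import ABC, abstractmethod
from torch.utils.tensorboard import SummaryWriter  # real-time TensorBoard metrics

class GridEnvironment:
    """A toy grid-world; knows nothing about goals or rewards."""
    def __init__(self, size: int = 5):
        self.size = size
        self.agent = (0, 0)

    def reset(self):
        self.agent = (0, 0)

    def step(self, action: str):
        x, y = self.agent
        dx, dy = {"up": (0, -1), "down": (0, 1),
                  "left": (-1, 0), "right": (1, 0)}[action]
        # Clamp movement to the grid boundaries.
        self.agent = (min(max(x + dx, 0), self.size - 1),
                      min(max(y + dy, 0), self.size - 1))

class Task(ABC):
    """A task wraps an environment and defines goal, reward, termination."""
    def __init__(self, env: GridEnvironment):
        self.env = env

    @abstractmethod
    def reward(self) -> float: ...

    @abstractmethod
    def done(self) -> bool: ...

class NavigateToCorner(Task):
    """One of many tasks that could share the same environment."""
    def reward(self) -> float:
        return 1.0 if self.done() else -0.01  # small per-step penalty

    def done(self) -> bool:
        return self.env.agent == (self.env.size - 1, self.env.size - 1)

writer = SummaryWriter()  # logs land under ./runs for TensorBoard to read
task = NavigateToCorner(GridEnvironment(size=5))
for episode in range(100):
    task.env.reset()
    total, steps = 0.0, 0
    while not task.done() and steps < 200:
        # A random policy stands in for a learned agent here.
        task.env.step(random.choice(["up", "down", "left", "right"]))
        total += task.reward()
        steps += 1
    writer.add_scalar("train/episode_reward", total, episode)
writer.close()
```

The payoff of this separation is the kind of cross-task evaluation the researchers advocate above: testing a method on a new task means swapping in a different Task subclass rather than rewriting the environment.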
“Just as the early deep learning libraries like Caffe and Theano, and numerous online tutorials, lowered entry barriers and ushered in a new wave of researchers towards deep learning, embodied AI can benefit from modularized coding frameworks, comprehensive tutorials, and ample startup code,” the researchers wrote. “We welcome and encourage contributions to AllenAct’s core functionalities as well as the addition of new environments, tasks, models, and pre-trained model weights. Our goal in releasing AllenAct is to make embodied AI more accessible and encourage thorough, reproducible research.”
AllenAct is open source and freely available under the MIT License.
The release of AllenAct comes after the pandemic threw up roadblocks for the Allen Institute’s embodied AI research. The nonprofit had planned to launch the RoboTHOR challenge earlier this year, which would have involved deploying navigation algorithms on a robot, the LocoBot, and running it through a physical environment at its labs. But with all Allen Institute employees working from home, experiments on the LocoBot were off the table for the foreseeable future, and the team decided to pare the challenge down to simulated scenes only.
Author: Kyle Wiggers
Source: VentureBeat