
Uber details Fiber, a framework for distributed AI model training

A preprint paper coauthored by Uber AI scientists and Jeff Clune, a research team leader at San Francisco startup OpenAI, describes Fiber, an AI development and distributed training platform for methods including reinforcement learning (which spurs AI agents to complete goals via rewards) and population-based learning. The team says that Fiber expands the accessibility of large-scale parallel computation without the need for specialized hardware or equipment, enabling non-experts to reap the benefits of genetic algorithms in which populations of agents evolve rather than individual members.

Fiber — which was developed to power large-scale parallel scientific computation projects like POET — is available in open source on GitHub as of this week. It supports Linux systems running Python 3.6 and up, as well as Kubernetes clusters on public clouds like Google Cloud, and the research team says it can scale to hundreds or even thousands of machines.
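
For readers who want to kick the tires, a quick sanity check of a local environment looks something like the sketch below. The `fiber` module name and the install command mentioned in the comment are assumptions based on the open source release described above, so defer to the repository's README for the authoritative instructions.

```python
# Minimal environment check before trying Fiber. The "fiber" package and module
# name are assumptions based on the project's open source release; consult the
# GitHub README for the authoritative install steps (e.g., `pip install fiber`).
import sys

if sys.version_info < (3, 6):
    raise RuntimeError("Fiber is documented as supporting Python 3.6 and up")

try:
    import fiber
    print("Fiber imported:", getattr(fiber, "__version__", "unknown version"))
except ImportError:
    print("Fiber is not installed in this environment yet")
```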

As the researchers point out, increasing computation underlies many recent advances in machine learning, with more and more algorithms relying on distributed training to process enormous amounts of data. (OpenAI Five, OpenAI’s Dota 2-playing bot, was trained on 256 graphics cards and 128,000 processor cores on Google Cloud.) But reinforcement learning and population-based methods pose challenges for reliability, efficiency, and flexibility that some frameworks fall short of satisfying.

Fiber addresses these challenges with a lightweight strategy to handle task scheduling. It leverages cluster management software for job scheduling and tracking, doesn’t require preallocating resources, and can dynamically scale up and down on the fly, allowing users to migrate from one machine to multiple machines seamlessly.
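
To make the "one machine to many machines" point concrete, the sketch below treats Fiber's pool as a drop-in for Python's standard multiprocessing pool; the building blocks listed later in this article suggest as much, but treat it as an assumption rather than a confirmed API. The `rollout` function is likewise an invented stand-in for a real reinforcement learning workload.

```python
import random

from fiber import Pool  # assumed drop-in for multiprocessing.Pool; swap the
                        # import for `from multiprocessing import Pool` to run
                        # the same script locally without Fiber installed

# Stand-in for an expensive simulation or policy rollout; a real reinforcement
# learning job would step an environment and return an episode score instead.
def rollout(seed: int) -> float:
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(1_000))

if __name__ == "__main__":
    # The calling code stays the same whether the pool's workers are local
    # processes or jobs scheduled across a cluster by Fiber's backend.
    with Pool(processes=4) as pool:
        scores = pool.map(rollout, range(16))
    print("mean rollout score:", sum(scores) / len(scores))
```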

Fiber comprises an API layer, backend layer, and cluster layer. The first layer provides basic building blocks for processes, queues, pools, and managers, while the backend handles tasks like creating and terminating jobs on different cluster managers. As for the cluster layer, it taps different cluster managers to help manage resources and keep tabs on different jobs, reducing the number of items Fiber needs to track.
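
A minimal sketch of the process-and-queue building blocks might look like the following, assuming the names mirror Python's multiprocessing module; `fiber.Process` and `fiber.Queue` here are assumed identifiers rather than confirmed API.

```python
import fiber  # assumed to expose multiprocessing-style Process and Queue

# A worker that pulls numbers off one queue and pushes their squares to another.
def worker(task_queue, result_queue):
    while True:
        task = task_queue.get()
        if task is None:  # sentinel meaning "no more work"
            break
        result_queue.put(task * task)

if __name__ == "__main__":
    tasks = fiber.Queue()
    results = fiber.Queue()

    # Each process is written as if it were local; Fiber's backend layer decides
    # where the corresponding job actually runs.
    procs = [fiber.Process(target=worker, args=(tasks, results)) for _ in range(2)]
    for p in procs:
        p.start()

    for i in range(10):
        tasks.put(i)
    for _ in procs:
        tasks.put(None)  # one sentinel per worker

    print(sorted(results.get() for _ in range(10)))
    for p in procs:
        p.join()
```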

Fiber introduces the concept of job-backed processes, where processes can run remotely on different machines or locally on the same machine, and it uses containers to encapsulate the running environment of current processes (e.g., required files, input data, and dependent packages) to ensure everything is self-contained. The framework has built-in error handling when running a pool of workers, so crashed workers can recover quickly. Helpfully, Fiber does all this while interacting directly with computer cluster managers, so running a Fiber application is akin to running a normal app on a cluster.
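
That fault tolerance matters most for population-based training, where a single crashed worker shouldn't sink a whole generation. The toy evolutionary loop below is a hedged sketch: `fitness` and `mutate` are invented stand-ins for a real evaluation and perturbation step, and `fiber.Pool` is again assumed to behave like `multiprocessing.Pool`.

```python
import random

from fiber import Pool  # assumed multiprocessing-style pool

# Toy fitness function standing in for an RL rollout: score a parameter vector.
def fitness(params):
    return -sum((p - 0.5) ** 2 for p in params)

# Gaussian perturbation of a parent's parameters.
def mutate(params, rng, scale=0.05):
    return [p + rng.gauss(0.0, scale) for p in params]

if __name__ == "__main__":
    rng = random.Random(0)
    population = [[rng.random() for _ in range(8)] for _ in range(32)]

    with Pool(processes=8) as pool:
        for generation in range(10):
            # Each generation is an embarrassingly parallel map; per the article,
            # Fiber's pool is designed to recover a crashed worker rather than
            # fail the whole map.
            scores = pool.map(fitness, population)
            ranked = [p for _, p in sorted(zip(scores, population),
                                           key=lambda pair: pair[0], reverse=True)]
            parents = ranked[: len(ranked) // 4]
            population = [mutate(rng.choice(parents), rng)
                          for _ in range(len(population))]
            print(f"generation {generation}: best fitness {max(scores):.4f}")
```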

In experiments, Fiber had a response time of a couple of milliseconds. With a population size of 2,048 workers (e.g., processor cores), it scaled better than two baseline techniques, with run time decreasing steadily as the number of workers grew (in other words, a run with the full 2,048 workers finished faster than one with only 32). With 512 workers, finishing 50 iterations of a training workload took 50 seconds, compared with the popular IPyParallel framework’s 1,400 seconds.
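
Those numbers can't be checked without a cluster, but the shape of the benchmark (repeatedly mapping very short tasks, so that framework overhead rather than compute dominates) is easy to sketch. The worker count, batch size, and task below are illustrative choices rather than the paper's settings, and `fiber.Pool` is again an assumed multiprocessing-style interface.

```python
import time

from fiber import Pool  # assumed; substitute multiprocessing.Pool for a purely local baseline

# A deliberately tiny task, so the measurement reflects scheduling overhead.
def short_task(x: int) -> int:
    return x * x

if __name__ == "__main__":
    workers = 8      # the reported run used 512 workers on a cluster
    batch = 1_000    # tasks dispatched per iteration (illustrative value)

    with Pool(processes=workers) as pool:
        start = time.perf_counter()
        for _ in range(50):
            pool.map(short_task, range(batch))
        elapsed = time.perf_counter() - start

    print(f"50 iterations of {batch} tasks took {elapsed:.1f}s with {workers} workers")
```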

“[Our work shows] that Fiber achieves many goals, including efficiently leveraging a large amount of heterogeneous computing hardware, dynamically scaling algorithms to improve resource usage efficiency, reducing the engineering burden required to make [reinforcement learning] and population-based algorithms work on computer clusters, and quickly adapting to different computing environments to improve research efficiency,” wrote the coauthors. “We expect it will further enable progress in solving hard [reinforcement learning] problems with [reinforcement learning] algorithms and population-based methods by making it easier to develop these methods and train them at the scales necessary to truly see them shine.”

Fiber’s reveal comes days after Google released SEED RL, a framework that scales AI model training to thousands of machines. Google said that SEED RL could facilitate training at millions of frames per second on a single machine while reducing costs by up to 80%, potentially leveling the playing field for startups that couldn’t previously compete with large AI labs.


Author: Kyle Wiggers.
Source: VentureBeat
