
AI researchers challenge a robot to ride a skateboard in simulation

AI researchers say they’ve created a framework for controlling four-legged robots that promises better energy efficiency and adaptability than more traditional model-based gait control of robotic legs. To demonstrate the robustness of the framework, which adjusts to conditions in real time, the researchers made the system slip on frictionless surfaces meant to mimic a banana peel, ride a skateboard, and climb onto a bridge while walking on a treadmill. An Nvidia spokesperson told VentureBeat that only the frictionless surface test was conducted in real life, because of limits placed on office staff size due to COVID-19; the spokesperson said all other challenges took place in simulation. (Simulations are often used to train robotics systems before those systems are deployed in the real world.)

“Our framework learns a controller that can adapt to challenging environmental changes on the fly, including novel scenarios not seen during training. The learned controller is up to 85% more energy-efficient and is more robust compared to baseline methods,” the paper reads. “At inference time, the high-level controller needs only evaluate a small multi-layer neural network, avoiding the use of an expensive model predictive control (MPC) strategy that might otherwise be required to optimize for long-term performance.”
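The efficiency claim rests on replacing per-step online optimization with a single forward pass through a small network. As a rough sketch of what that means in practice (not the authors’ code; the observation size, hidden width, and gait vocabulary below are assumptions), evaluating such a high-level policy can be as cheap as this:

```python
import numpy as np

# Hypothetical sizes for a compact MLP like the "small multi-layer
# neural network" the paper describes. The dimensions below are
# illustrative assumptions, not values from the paper.
OBS_DIM = 30     # e.g., body state + joint states + last contact pattern
HIDDEN = 64
NUM_GAITS = 8    # assumed discrete vocabulary of contact sequences

rng = np.random.default_rng(0)
W1 = rng.standard_normal((OBS_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, NUM_GAITS)) * 0.1
b2 = np.zeros(NUM_GAITS)

def select_contact_sequence(obs: np.ndarray) -> int:
    """One cheap forward pass picks the next contact sequence,
    in place of solving an MPC problem at every decision step."""
    h = np.tanh(obs @ W1 + b1)
    logits = h @ W2 + b2
    return int(np.argmax(logits))

print(select_contact_sequence(np.zeros(OBS_DIM)))
```

A network this size amounts to a few thousand multiply-adds per decision, which is why avoiding a full MPC solve at inference time translates into the energy savings the authors cite.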

The quadruped model is trained in simulation on a split-belt treadmill with two tracks that can change speed independently; that training is then transferred to a Laikago robot in the real world. Nvidia released video of the simulations and laboratory work Monday, when it also unveiled Maxine, an AI-powered videoconferencing service, and a beta of Omniverse, a simulated environment for engineers.
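To give a sense of what that training setup implies, here is a minimal sketch of randomizing the two belt speeds per episode. The SplitBeltTreadmillEnv class and its interface are hypothetical stand-ins, not Nvidia’s actual simulation tooling:

```python
import random

class SplitBeltTreadmillEnv:
    """Hypothetical stand-in for a simulated split-belt treadmill."""

    def __init__(self):
        self.left_speed = 0.0
        self.right_speed = 0.0

    def reset(self, left_speed: float, right_speed: float):
        # Each track changes speed independently, forcing the
        # controller to adapt its contact sequence on the fly.
        self.left_speed = left_speed
        self.right_speed = right_speed

def sample_episode_speeds(max_speed: float = 1.0):
    """Draw independent belt speeds (m/s, assumed range) per episode."""
    return random.uniform(0.0, max_speed), random.uniform(0.0, max_speed)

env = SplitBeltTreadmillEnv()
for episode in range(3):
    left, right = sample_episode_speeds()
    env.reset(left, right)
    print(f"episode {episode}: left={left:.2f} m/s, right={right:.2f} m/s")
```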

A paper detailing the framework for controlling quadruped legs was published a week ago on the preprint repository arXiv. AI researchers from Nvidia, Caltech, the University of Texas at Austin, and the Vector Institute at the University of Toronto contributed to the paper. The framework combines a high-level controller trained with reinforcement learning and a model-based low-level controller.
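To make that division of labor concrete, the sketch below shows how such a two-level loop is typically wired: a learned policy re-plans contacts at a coarse rate while a model-based controller runs at the fast rate. Every function here is an illustrative placeholder, not the paper’s implementation:

```python
import random

HIGH_LEVEL_EVERY = 25   # assumed ratio of low-level to high-level steps

def high_level_policy(state):
    """Learned (RL) component: maps robot state to a contact plan.
    In the paper this is the small MLP; here it is a stub."""
    return "trot" if state["speed"] > 0.5 else "walk"

def low_level_controller(state, contact_plan):
    """Model-based component: given the chosen contacts, computes
    joint commands (e.g., via force optimization in a real system)."""
    return [0.0] * 12   # one torque per joint of a quadruped

def control_loop(get_state, apply_torques, steps=100):
    contact_plan = "walk"
    for t in range(steps):
        state = get_state()
        if t % HIGH_LEVEL_EVERY == 0:
            contact_plan = high_level_policy(state)  # cheap forward pass
        apply_torques(low_level_controller(state, contact_plan))

def get_state():
    return {"speed": random.uniform(0.0, 1.0)}

control_loop(get_state, lambda torques: None, steps=50)
```

The appeal of this split is that the slow, discrete decision (which feet should touch the ground, and when) is handled by the learned component, while the well-understood physics of tracking that plan stays with the model-based one.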

“By leveraging the advantages of both paradigms, we obtain a contact-adaptive controller that is more robust and energy-efficient than those employing a fixed contact sequence,” the paper reads.

The researchers argue that many existing controllers for robotic legs use fixed contact sequences and are therefore unable to adapt to new circumstances, while controllers that adapt their contact sequences online, such as those based on model predictive control, are computationally expensive. They also say locomotion systems built purely with reinforcement learning are often less robust than model-based approaches, require large numbers of training samples, or rely on complicated reward designs.

Earlier this year at the International Conference on Robotics and Automation (ICRA), AI researchers from ETH Zurich detailed DeepGait, a system trained with reinforcement learning to do things like bridge unusually long gaps and walk on uneven terrain.


Author: Khari Johnson
Source: VentureBeat
