Google’s AI teaches robots to grasp occluded objects and adapt to new situations

In a pair of papers published on the preprint server arXiv.org this week, Google and University of California, Berkeley researchers describe new AI and machine learning techniques that enable robots to adapt to never-before-seen tasks and grasp occluded objects. The first study details X-Ray, an algorithm that, when deployed on a robot, can search through heaps of objects to grasp a target object, while the second lays out a policy adaptation technique that “teaches” robots new skills without requiring from-scratch model training.

Robot grasping is a surprisingly difficult challenge. For example, robots struggle to perform what’s called “mechanical search,” which is when they have to identify and pick up an object from within a pile of other objects. Most robots aren’t especially adaptable, and there’s a lack of sufficiently capable AI models for guiding robot hands in mechanical search.

X-Ray and the policy adaptation technique could form the foundation of a product-packaging system that spots, picks up, and drops a range of objects without human oversight.

X-Ray

The coauthors of the X-Ray study note that mechanical search (finding a target object within a heap of other objects) remains challenging due to a lack of appropriate models. X-Ray tackles the problem with a combination of occlusion inference and hypothesis predictions, which it uses to estimate an occupancy distribution for the bounding box (the coordinates of a rectangular border around an object) most similar to the target object, while accounting for various translations and rotations.
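
That occupancy estimate can be pictured with a small sketch. The snippet below is a hypothetical illustration, not the paper’s learned model; it simply sums translated and rotated bounding-box hypotheses, each weighted by a plausibility score, where bbox_mask and score_fn are invented names.

```python
import numpy as np

def bbox_mask(shape, center, size, angle_deg):
    """Binary mask of a rotated rectangle, i.e., one bounding-box hypothesis."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - center[0], xs - center[1]
    theta = np.deg2rad(angle_deg)
    u = dx * np.cos(theta) + dy * np.sin(theta)    # coordinates in the box frame
    v = -dx * np.sin(theta) + dy * np.cos(theta)
    return (np.abs(u) <= size[1] / 2) & (np.abs(v) <= size[0] / 2)

def occupancy_distribution(depth, box_size, centers, angles, score_fn):
    """Accumulate how likely the (possibly occluded) target box is to occupy each
    pixel, summed over translated and rotated hypotheses, each weighted by an
    arbitrary user-supplied plausibility score."""
    heatmap = np.zeros(depth.shape, dtype=np.float32)
    for center in centers:
        for angle in angles:
            mask = bbox_mask(depth.shape, center, box_size, angle)
            heatmap += score_fn(depth, mask) * mask
    return heatmap / max(float(heatmap.max()), 1e-8)  # normalize to [0, 1]
```

In the actual system a trained network produces this distribution directly; the sketch only conveys the idea of aggregating bounding-box hypotheses.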

X-Ray assumes that there’s at least one target object fully or partially occluded by unknown objects in a heap, and that a maximum of one object is grasped per timestep. Taking RGB images and target objects as inputs, it predicts the occupancy distribution and segmentation masks for the scene and computes several potential grasping actions, executing the one with the highest probability of succeeding.
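
Put as pseudocode, the per-timestep loop the paragraph describes might look like the sketch below; the predictor, planner, and robot interfaces are placeholders for illustration, not the authors’ actual API.

```python
# Hypothetical interfaces (predictor, planner, robot) stand in for the real system.
def mechanical_search_step(image, target, predictor, planner, robot):
    # Predict the target's occupancy distribution and per-object segmentation masks.
    occupancy, masks = predictor.predict(image, target)
    # Propose candidate suction and parallel-jaw grasps on the visible objects.
    candidates = planner.plan_grasps(image, masks, occupancy)
    # Execute the single grasp with the highest predicted probability of success
    # (at most one object is grasped per timestep).
    best = max(candidates, key=lambda grasp: grasp.success_probability)
    robot.execute(best)
    return best
```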

To train and validate X-Ray, the researchers produced a corpus of augmented depth images labeled with object occupancy distributions for a rectangular box target object. Sampling from an open source data set of 1,296 3D CAD models on Thingiverse, they selected 10 box targets of various dimensions with equal volume but small thickness, so that they were more likely to be occluded, and generated 10,000 labeled images for each. This netted them a total of 100,000 images.
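
A rough sketch of that corpus-generation recipe follows; the simulator interface (select_box_targets, sample_heap, render_depth, occupancy_label) is entirely hypothetical and stands in for whatever pipeline the researchers actually used.

```python
NUM_TARGETS = 10            # flat box targets of equal volume but small thickness
IMAGES_PER_TARGET = 10_000  # 10 targets x 10,000 images = 100,000 labeled images

def generate_corpus(cad_models, simulator):
    """Generate augmented depth images labeled with occupancy distributions."""
    corpus = []
    targets = simulator.select_box_targets(cad_models, count=NUM_TARGETS)
    for target in targets:
        for _ in range(IMAGES_PER_TARGET):
            heap = simulator.sample_heap(cad_models, target)   # pile distractors on the target
            depth = simulator.render_depth(heap)               # augmented depth image
            label = simulator.occupancy_label(heap, target)    # ground-truth occupancy distribution
            corpus.append((depth, label))
    return corpus
```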

Above: A diagram illustrating the X-Ray technique.

Image Credit: Google

About 80% of those images were reserved for training, and the rest were set aside for testing. One thousand additional images containing simulated objects (a lid, a domino, and a flute) were used to evaluate X-Ray’s generalization to unseen shapes, objects, aspect ratios, and scales.

In physical experiments involving a real-world ABB YuMi robot with a suction cup and a parallel-jaw gripper, the researchers formed heaps by filling a bin with objects and dumping it on top of the target object, then tasked X-Ray with extracting the target. In heaps initially containing 25 objects, the system extracted the target object in a median of five actions over 20 trials, with a 100% success rate.

The coauthors leave to future work increasing X-Ray’s training efficiency and analyzing the effect of data set size and the number of translations and rotations used to generate training distributions. They also plan to explore reinforcement learning policies based on the reward of target object visibility.

Policy adaptation

In the more recent of the two papers, the coauthors sought to develop a system that continuously adapts to new real-world environments, objects, and conditions. That’s in contrast to most robots, which are trained once and deployed without much in the way of adaptation capabilities.

The researchers pretrained a machine learning model to grasp a range of objects on a corpus of 608,000 grasp attempts, then tasked it with grasping objects using a gripper offset 10 centimeters to the right of where it started. The system practiced gripping over the course of 800 attempts, which were logged into a new target data set; to fine-tune the model, training examples were then drawn from this target data set 50% of the time and from the original data set the rest of the time.
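
As a minimal sketch of that 50/50 mixing recipe (assuming in-memory data sets and a hypothetical model.fine_tune_step method, not the paper’s actual training code):

```python
import random

def adapt(model, pretrain_data, target_data, steps, batch_size, mix_ratio=0.5):
    """Fine-tune a pretrained grasping model, drawing each training example from
    the newly collected target data with probability mix_ratio (50% here) and
    from the original pretraining corpus otherwise."""
    for _ in range(steps):
        batch = [
            random.choice(target_data if random.random() < mix_ratio else pretrain_data)
            for _ in range(batch_size)
        ]
        model.fine_tune_step(batch)
    return model
```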

Above: The model adaptation training process, in schematic form.

Image Credit: Google

These steps (pretraining, attempting a new task, and fine-tuning) were repeated for five different scenarios. In one, harsh lighting impeded the robot’s cameras; in another, a checkerboard-patterned background made it difficult for the model to identify objects. In other tests, the experimenters had the robot grasp transparent bottles not seen during training (transparent objects are notoriously hard for robots to grasp because they can confuse depth sensors) and pick up objects sitting on a highly reflective sheet-metal surface.

The researchers report that in experiments, the model successfully grasped objects 63% of the time in harsh lighting, 74% of the time with transparent bottles, 86% of the time with a checkerboard backing, 88% of the time with an extended gripper, and 91% of the time with an offset gripper. Moreover, they say it took only 1 to 4 hours of practice for the robot to adapt to each new situation (compared with roughly 6,000 hours spent learning how to grasp in the first place), and that performance didn’t degrade the more the model adapted.

In the future, the team plans to investigate whether the process can be made automatic.


Author: Kyle Wiggers.
Source: VentureBeat
