Researchers develop AI framework that predicts object motion from image and tactile data

Recent AI research has pointed out the synergy between touch and vision: touch enables the measurement of 3D surface and inertial properties, while vision provides a holistic view of an object's projected appearance. Building on this work, researchers at Samsung, McGill University, and York University investigated whether an AI system could predict the motion of an object from visual and tactile measurements of its initial state.

“Previous research has shown that it is challenging to predict the trajectory of objects in motion, due to the unknown frictional and geometric properties and indeterminate pressure distributions at the interacting surface,” the researchers wrote in a paper describing their work. “To alleviate these difficulties, we focus on learning a predictor trained to capture the most informative and stable elements of a motion trajectory.”

The researchers developed a sensor, called See-Through-your-Skin, that they claim can capture images while providing detailed tactile measurements. Alongside it, they created a framework called Generative Multimodal Perception that exploits visual and tactile data, when available, to learn a representation encoding information about object pose, shape, and force, and to make predictions about object dynamics. To anticipate the resting state of an object during physical interactions, they used what they call resting-state predictions along with a visuotactile dataset of motions in dynamic scenes, including objects falling freely onto a flat surface, sliding down an inclined plane, and being perturbed from their resting pose.
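
The article doesn't spell out the framework's architecture, so the following is a minimal PyTorch-style sketch of how a visuotactile resting-state predictor along these lines could be structured, assuming one encoder per modality, a shared latent code, and decoders that output the predicted resting-state measurements. The class name, layer sizes, and input dimensions (VisuotactilePredictor, visual_dim, tactile_dim, latent_dim) are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a visuotactile resting-state predictor: each modality
# is encoded separately, fused into a shared latent code, and decoded back into
# predicted resting-state visual and tactile measurements. All names and sizes
# are illustrative, not taken from the paper.
import torch
import torch.nn as nn

class VisuotactilePredictor(nn.Module):
    def __init__(self, visual_dim=1024, tactile_dim=256, latent_dim=64):
        super().__init__()
        # Per-modality encoders map raw measurements to feature vectors.
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, 256), nn.ReLU(),
                                        nn.Linear(256, latent_dim))
        self.tactile_enc = nn.Sequential(nn.Linear(tactile_dim, 128), nn.ReLU(),
                                         nn.Linear(128, latent_dim))
        # Fusion layer combines both modalities into one shared latent code.
        self.fuse = nn.Linear(2 * latent_dim, latent_dim)
        # Decoders predict the resting-state visual and tactile measurements.
        self.visual_dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                        nn.Linear(256, visual_dim))
        self.tactile_dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                         nn.Linear(128, tactile_dim))

    def forward(self, visual, tactile):
        z = self.fuse(torch.cat([self.visual_enc(visual),
                                 self.tactile_enc(tactile)], dim=-1))
        return self.visual_dec(z), self.tactile_dec(z)
```

In a real system the visual encoder and decoder would typically be convolutional, and the model would likely be trained generatively over the shared latent; plain linear layers are used here only to keep the sketch short.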


In experiments, the researchers say their approach was able to predict the raw visual and tactile measurements of an object's resting configuration with high accuracy, with the predictions closely matching the ground-truth labels. Moreover, they claim their framework learned a mapping between the visual, tactile, and 3D pose modalities that allowed it to handle missing modalities, such as when tactile information was unavailable in the input, and to predict instances where an object had fallen off the sensor's surface, resulting in empty output images.
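
One simple way to tolerate a missing modality, sketched below, is to substitute a learned placeholder embedding for the absent tactile features before fusion, so the same network can still be queried from vision alone. This is an illustrative assumption about how such masking could be implemented, not the mechanism described in the paper; the module and parameter names are hypothetical.

```python
# Illustrative missing-modality handling: when the tactile stream is absent,
# a learned placeholder embedding stands in for the tactile features before
# fusion with the visual features.
import torch
import torch.nn as nn

class MaskableFusion(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Learned stand-in used in place of the tactile embedding when absent.
        self.tactile_placeholder = nn.Parameter(torch.zeros(latent_dim))
        self.fuse = nn.Linear(2 * latent_dim, latent_dim)

    def forward(self, visual_feat, tactile_feat=None):
        if tactile_feat is None:
            # Broadcast the placeholder across the batch dimension.
            tactile_feat = self.tactile_placeholder.expand(visual_feat.shape[0], -1)
        return self.fuse(torch.cat([visual_feat, tactile_feat], dim=-1))
```

A module like this could slot into the fusion step of the earlier sketch, with the downstream decoders still producing resting-state predictions, including the empty output images the researchers describe for objects that leave the sensor's surface.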


“If a previously unseen object is dropped into a human’s hand, we are able to infer the object’s category and guess at some of its physical properties, but the most immediate inference is whether it will come to rest safely in our palm, or if we need to adjust our grasp on the object to maintain contact,” the coauthors wrote. “[In our work,] we find that predicting object motions in physical scenarios benefits from exploiting both modalities: visual information captures object properties such as 3D shape and location, while tactile information provides critical cues about interaction forces and resulting object motion and contacts.”

Author: Kyle Wiggers
Source: VentureBeat
