
Stanford and Meta inch towards AI that acts human with new ‘CHOIS’ interaction model

Researchers from Stanford University and Meta’s Facebook AI Research (FAIR) lab have developed a breakthrough AI system that can generate natural, synchronized motions between virtual humans and objects from text descriptions.

The new system, dubbed CHOIS (Controllable Human-Object Interaction Synthesis), uses conditional diffusion model techniques to produce seamless, precise interactions such as “lift the table above your head, walk, and put the table down.”

The work, published in a paper on arXiv, provides a glimpse into a future where virtual beings can understand and respond to language commands as fluidly as humans.

[Demo credit: lijiaman.github.io]

“Generating continuous human-object interactions from language descriptions within 3D scenes poses several challenges,” the researchers noted in the research paper.


They had to ensure that the generated motions were realistic and synchronized, that human hands maintained appropriate contact with objects, and that the objects’ motion had a causal relationship to the humans’ actions.

How it works

The CHOIS system stands out for how it synthesizes human-object interactions in a 3D environment. At its core is a conditional diffusion model, a type of generative model that can produce detailed sequences of motion.

When given an initial state of human and object positions, along with a language description of the desired task, CHOIS generates a sequence of motions that culminate in the task’s completion.

For example, if the instruction is to move a lamp closer to a sofa, CHOIS understands this directive and creates a realistic animation of a human avatar picking up the lamp and placing it near the sofa.
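
The article describes CHOIS at a conceptual level rather than in code, but the generation step it outlines corresponds to a standard conditional diffusion sampler. The sketch below is a minimal, hypothetical illustration of that idea, not the authors’ implementation: a denoising network conditioned on a text embedding and the initial human/object state iteratively turns random noise into a motion sequence. All names, dimensions, and hyperparameters (MotionDenoiser, D_POSE, SEQ_LEN, T_STEPS) are illustrative assumptions.

```python
import torch
import torch.nn as nn

T_STEPS = 50          # few diffusion steps so the sketch runs quickly (real models use ~1000)
SEQ_LEN = 120         # number of motion frames to synthesize (assumed)
D_POSE = 24 * 3 + 9   # illustrative: joint positions + object translation/rotation

class MotionDenoiser(nn.Module):
    """Predicts the noise in a motion sequence, conditioned on text and the initial state."""
    def __init__(self, d_pose=D_POSE, d_text=512):
        super().__init__()
        # Conditioning = text embedding + initial human/object state + timestep fraction.
        self.cond_proj = nn.Linear(d_text + d_pose + 1, 256)
        self.net = nn.GRU(d_pose + 256, 256, batch_first=True)
        self.out = nn.Linear(256, d_pose)

    def forward(self, x_t, t, text_emb, init_state):
        t_feat = torch.full((x_t.size(0), 1), t / T_STEPS)
        cond = self.cond_proj(torch.cat([text_emb, init_state, t_feat], dim=-1))
        cond = cond.unsqueeze(1).expand(-1, x_t.size(1), -1)   # broadcast to every frame
        h, _ = self.net(torch.cat([x_t, cond], dim=-1))
        return self.out(h)

@torch.no_grad()
def sample_motion(model, text_emb, init_state, betas):
    """Standard DDPM reverse process: start from noise, denoise step by step."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(1, SEQ_LEN, D_POSE)                        # pure-noise motion
    for t in reversed(range(T_STEPS)):
        eps = model(x, t, text_emb, init_state)                # predicted noise
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x                                                   # denoised motion sequence

# Dummy usage: random stand-ins for a sentence embedding and the initial scene state.
model = MotionDenoiser()
betas = torch.linspace(1e-4, 0.02, T_STEPS)
text_emb = torch.randn(1, 512)        # e.g., embedding of "move the lamp closer to the sofa"
init_state = torch.randn(1, D_POSE)   # initial human pose + object pose
motion = sample_motion(model, text_emb, init_state, betas)
print(motion.shape)                   # torch.Size([1, 120, 81])
```

A production model would use a stronger sequence backbone (such as a transformer) and richer conditioning, but the generate-by-iterative-denoising loop shown here is the core mechanism a conditional diffusion model relies on.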

What sets CHOIS apart is its use of sparse object waypoints alongside language descriptions to guide these animations. The waypoints act as markers for key points in the object’s trajectory, ensuring that the motion is not only physically plausible but also aligned with the high-level goal specified by the language input.
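
To make the waypoint idea concrete, here is one simple way such sparse constraints could be encoded as per-frame conditioning features. This is an assumed encoding for illustration, not the paper’s exact scheme: frames that carry a waypoint store the target object position plus a validity flag, and all other frames are left empty.

```python
import torch

def encode_waypoints(seq_len, waypoints):
    """waypoints: list of (frame_index, (x, y, z)) pairs marking where the object should be."""
    feat = torch.zeros(seq_len, 4)            # per-frame [x, y, z, valid_flag]
    for frame_idx, xyz in waypoints:
        feat[frame_idx, :3] = torch.tensor(xyz)
        feat[frame_idx, 3] = 1.0              # this frame carries a waypoint constraint
    return feat

# e.g., "move the lamp closer to the sofa": lamp start, a mid-point, and its spot by the sofa
wp = encode_waypoints(120, [(0, (2.0, 0.0, 0.4)), (60, (1.2, 0.6, 0.9)), (119, (0.3, 1.1, 0.4))])
print(wp.shape)   # torch.Size([120, 4]); concatenated with the noisy motion as extra channels
```

The validity flag lets the model distinguish “no constraint on this frame” from “the object should be at the origin,” which matters because most frames carry no waypoint at all.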

CHOIS is also notable for how tightly it couples language understanding with physical simulation. Traditional models often struggle to correlate language with spatial and physical actions, especially over longer interaction horizons, where many factors must be accounted for to maintain realism.

CHOIS bridges this gap by interpreting the intent and style behind language descriptions, then translating them into a sequence of physical movements that respect the constraints of both the human body and the object involved.

The system ensures that contact points, such as hands touching an object, are accurately represented, and that the object’s motion is consistent with the forces exerted by the human avatar. To enforce these physical constraints, the model incorporates specialized loss functions and guidance terms during its training and generation phases, a significant step toward AI that can understand and interact with the physical world in a human-like way.
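
To illustrate what a guidance term can look like at generation time, the sketch below applies a hand-object contact penalty and nudges the partially denoised motion along the penalty’s negative gradient. The formulation, column layout, and function names are assumptions made for this example; the actual losses and guidance terms used by CHOIS differ in detail.

```python
import torch

def contact_guidance_step(x, hand_cols, obj_cols, contact_frames, step_size=0.01):
    """x: (frames, dims) motion; hand_cols/obj_cols: column slices holding hand and object xyz."""
    x = x.detach().clone().requires_grad_(True)
    hand = x[contact_frames, hand_cols]        # hand positions on frames that should be in contact
    obj = x[contact_frames, obj_cols]          # object positions on those frames
    loss = ((hand - obj) ** 2).sum()           # squared hand-object distance penalty
    loss.backward()
    with torch.no_grad():
        return x - step_size * x.grad          # step toward satisfying the contact constraint

# Dummy layout: columns 0:3 = right-hand xyz, 3:6 = object xyz (purely illustrative)
motion = torch.randn(120, 6)
corrected = contact_guidance_step(motion, slice(0, 3), slice(3, 6),
                                  contact_frames=list(range(30, 90)))
print(corrected.shape)   # torch.Size([120, 6])
```

In practice such corrections are applied at selected denoising steps, so the final motion both follows the language prompt and keeps the hand plausibly attached to the object while it moves.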

Implications for computer graphics, AI, and robotics

The implications of the CHOIS system for computer graphics are profound, particularly in animation and virtual reality. By enabling AI to interpret natural language instructions and generate realistic human-object interactions, CHOIS could drastically reduce the time and effort required to animate complex scenes.

Animators could use this technology to create sequences that would traditionally require painstaking, labor-intensive keyframe animation. In virtual reality environments, CHOIS could enable more immersive and interactive experiences: users could command virtual characters through natural language and watch them execute tasks with lifelike precision. This heightened level of interaction could transform VR experiences from rigid, scripted events into dynamic environments that respond realistically to user input.

In the fields of AI and robotics, CHOIS represents a giant step towards more autonomous and context-aware systems. Robots, often limited by pre-programmed routines, could use a system like CHOIS to better understand the real world and execute tasks described in human language.

This could be particularly transformative for service robots in healthcare, hospitality, or domestic environments, where the ability to understand and perform a wide array of tasks in a physical space is crucial.

For AI, the ability to process language and visual information simultaneously to perform tasks is a step closer to achieving a level of situational and contextual understanding that has been, until now, a predominantly human attribute. This could lead to AI systems that are more helpful assistants in complex tasks, able to understand not just the “what,” but the “how” of human instructions, adapting to new challenges with a level of flexibility previously unseen.

Promising results and future outlook

Overall, the Stanford and Meta researchers have made key progress on an extremely challenging problem at the intersection of computer vision, natural language processing (NLP), and robotics.

The research team believes that their work is a significant step towards creating advanced AI systems that simulate continuous human behaviors in diverse 3D environments. It also opens the door to further research into the synthesis of human-object interactions from 3D scenes and language input, potentially leading to more sophisticated AI systems in the future.



Author: Michael Nuñez
Source: VentureBeat
Reviewed By: Editorial Team
