
How DeepMotion uses AI to create believable characters

This article is made possible by Intel’s GameDev BOOST program, dedicated to helping indie game developers everywhere achieve their dreams.


Throughout Kevin He’s career in the tech and gaming industries, something has always bothered him: while rendering and other technologies have evolved tremendously, animation has woefully lagged behind. As an engineer, He wanted to find an efficient way to create more lifelike animations, and he was convinced that advanced physics simulation and AI could solve that problem.

So in 2014, He struck out on his own to start DeepMotion. The goal of the San Mateo, California-based company is to provide developers with powerful software development kits (SDKs) that allow them to create realistic animations for games and other applications. DeepMotion is trying to offer a better alternative to what He called the “tedious and labor-intensive” method of traditional keyframe animation.

Initially, the company applied physics simulation to automate animation. It then progressed to machine learning and deep learning, explained He, who is DeepMotion’s CEO.

“I think animation systems need a lot of love to get to a level where everyone feels engaged and immersed,” said He.

Some of DeepMotion’s products include Motion Brain, Body Tracking, and Virtual Reality Tracking. Motion Brain uses machine learning algorithms to animate characters that can interact with users in believable ways. Body Tracking captures movements in the real world via a camera and reconstructs them in the digital world (as seen with the emoji avatars in Samsung’s Galaxy S10 smartphones). And Virtual Reality Tracking helps create full-body animations for VR.
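
Body tracking of this kind generally rests on camera-based pose estimation. The sketch below illustrates the broad idea using Google’s open source MediaPipe library as a stand-in; DeepMotion’s SDKs are proprietary, so none of this code reflects the company’s actual API.

```python
# Minimal sketch of camera-based body tracking using MediaPipe as a stand-in.
# Pipeline: camera frame in -> estimated 3D joint positions out, which a
# downstream system could retarget onto a character rig.
import cv2               # pip install opencv-python
import mediapipe as mp   # pip install mediapipe

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures frames in BGR order.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # Each landmark is a normalized (x, y, z) joint position.
        nose = results.pose_landmarks.landmark[mp.solutions.pose.PoseLandmark.NOSE]
        print(f"nose at ({nose.x:.2f}, {nose.y:.2f}, {nose.z:.2f})")

cap.release()
pose.close()
```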

Instead of individually animating characters or avatars, the company leverages AI to do some of the heavy lifting. Kevin He said the team’s long-term vision is to treat the real world “as a resource that we can use to train AI. Like in the TV series Westworld, the AI can observe how humans do everyday things, and the AI will learn how to do the movements itself.”

It’s still early days for DeepMotion’s technology — while it has a number of partners, only a few have been announced. One is New Zealand developer Dry Cactus, which is known for the popular bridge-building simulator Poly Bridge. The studio is working with DeepMotion’s Avatar Physics Engine for its next game.

He said DeepMotion is also working on a cloud version of its Body Tracking solution, which it calls the Cloud Animation Service. This lets developers convert reference clips in various formats (like MP4 and AVI) into FBX animations. It’s available now in an invite-only alpha, and developers can request access by contacting the company.
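
DeepMotion has not published the Cloud Animation Service API, so the following is a purely hypothetical sketch of what submitting a clip-to-FBX conversion job to such a service might look like; the endpoint, field names, and credential below are all invented for illustration.

```python
import requests  # pip install requests

# Hypothetical sketch only: DeepMotion's Cloud Animation Service is in
# invite-only alpha and its API is not public. The endpoint, fields, and
# key below are invented for illustration.
API_BASE = "https://cloud.example.com/v1"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                   # placeholder credential

with open("reference_clip.mp4", "rb") as clip:
    resp = requests.post(
        f"{API_BASE}/animations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"video": clip},          # MP4 or AVI reference footage
        data={"output_format": "fbx"},  # request an FBX animation back
        timeout=60,
    )
resp.raise_for_status()
job = resp.json()
print(f"Conversion job {job['id']} submitted; download the FBX when it completes.")
```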

Building the future of AI-driven animation

DeepMotion is currently hard at work improving its suite of SDKs. It recently earned a MegaGrant from Epic Games to continue supporting VR Body Tracking in Epic’s Unreal Engine. The company has also been working closely with Intel to further advance the capabilities of its Motion Brain technology; the next step is called Generative Motion Brain.

Intel, for its part, was excited about DeepMotion’s vision: it helped the company with hardware and provided support on performance optimization.

DeepMotion is using Intel’s 192-core SDP server to train its Generative Motion Brain models so they can produce a virtually unlimited range of AI-crafted motions that look like those of real humans. The additional computing power from Intel’s hardware means the company can cut the costs and development time it would otherwise spend on training, and thus offer new products sooner.

“In short, the Intel multi-core servers allow us to compress multiple weeks’ worth of work into multiple days. That makes our iteration much faster,” said He.

Another benefit of the Intel partnership is that it frees up DeepMotion’s resources to focus on other endeavors. For example, the speed and efficiency of the company’s products make them useful for creating immersive virtual worlds filled with naturally behaving characters. And that goes beyond just video games.

“If you look at what all these big tech companies are doing — Google, Facebook, Apple — everyone believes that the future of the digital world will be three-dimensional. It will be potentially AR and VR enhanced,” said He. “So it will go beyond a flat medium like a web page or email.

“If that assumption, that dream, comes true, you can imagine that you will have a huge, virtual 3D world that would need to be populated with content. … We believe that’s the future, and that we need to create powerful tools for content creators so they can use our products and services to populate a futuristic 3D world with a massive amount of realistic and interactive content.”


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact sales@venturebeat.com.


Author: VB Staff.
Source: VentureBeat
