
Latent Technology raises $2.1M to blend AI and game animations



London-based startup Latent Technology has raised $2.1 million to build next-generation animation technology for virtual worlds and game characters.

The startup combines AI-based animation technology with real-world physics so that virtual world characters can move in a physically accurate way in real time, in contrast to today’s tech, where characters are loaded with thousands of hand-crafted animations.

Spark Capital and Root Ventures led the investment round, and game venture capital fund Bitkraft Ventures also participated. The funding will help Latent scale its team and develop the first version of its product, said Jorge del Val, CEO of Latent Technology, in an interview with GamesBeat.

“My cofounder and I were working in the AI space in research in the video game industry,” he said. “After seeing how interesting this space could be, I had this idea in my head to take the technology and take the next step in animation. We are building the next-generation animation technology for virtual worlds, which will allow characters in virtual worlds to make their own decisions about the movements they make, and allow them to interact physically with their environment, instead of loading them with thousands of animations.”


Latent’s mission

Founded last November by AI gaming veterans del Val and CTO Jack Harmer, who previously worked at Electronic Arts and Embark Studios, Latent Technology has developed a technology dubbed Generative Physics Animation.

The company has a better way to animate virtual worlds and characters, said del Val: its technology lets virtual world characters move about using physics-based natural movements in real time. By contrast, today’s game characters are loaded down with thousands of handcrafted animations covering every single movement.

“It isn’t hard to see that videogames haven’t fundamentally changed in a long time, and the magic we used to feel early on is long gone,” del Val said in a statement. “Meanwhile, technologies such as artificial intelligence have advanced dramatically. There is a huge potential to leverage the latest technology to empower players and creators in ways they never imagined. We aim at nothing less than to reinvent how virtual worlds are experienced and created so that we can bring magic back to the fingertips of players and game creators.”

Leveraging the latest advancements in reinforcement learning and generative modeling, the resulting characters interact physically with the environment in an emergent manner, increasing immersion while dramatically reducing development time, del Val said.

“Traditional animation is limited, unrealistic and bound to game design,” del Val said. “The industry solution typically implies scaling up the team and the complexity of the outcome. We believe giving the characters the autonomy to decide how to move in real time while interacting physically with their environment has the potential to radically change how we approach this problem.”

Del Val acknowledges that there are built-in tradeoffs. Sometimes, game developers don’t want characters to have realistic movements. They want them to have superhuman movements, like having Call of Duty characters run around at 40 miles per hour all of the time.

“It’s not a problem for the technology because you can train them with different physics,” he said. “Then you can transport that into the game. What we want to do in the company is keep developing different physics in different conditions that solve different tasks.”

Del Val thinks that his company can still deal with such circumstances by modifying the physics of what’s possible in a game.
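To illustrate what “training with different physics” might look like in practice, here is a minimal, hypothetical sketch. All names and values below are illustrative assumptions, not Latent’s actual API: the idea is simply that the same learned controller can be trained under different simulation parameters, then shipped with the resulting movement style.

```python
from dataclasses import dataclass

# Hypothetical physics profiles for training a character controller.
# None of these names come from Latent Technology; they only illustrate
# "different physics in different conditions."
@dataclass
class PhysicsProfile:
    gravity: float        # m/s^2
    max_run_speed: float  # m/s

# Earth-realistic movement vs. the exaggerated Call of Duty example
# (40 mph is roughly 18 m/s).
realistic = PhysicsProfile(gravity=-9.81, max_run_speed=8.0)
superhuman = PhysicsProfile(gravity=-9.81, max_run_speed=18.0)

# A controller trained under the `superhuman` profile would learn motion
# that looks plausible for those conditions, which could then be
# transported into the game as del Val describes.
```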

“There is an inherent tradeoff between creative control and emergent interactions. We are used to the leftmost extreme of this trade-off: millimetrically controlling the outcome. However, this usually comes at the price of a limited experience and a big team scale,” del Val said. “The other side of the trade-off implies delegating details to the computer and embracing the results. Due to this, experiences can be more general and much more immersive, while needing only a fraction of the time to produce them. This part of the spectrum hasn’t really been explored that much, while it could have profound implications in the industry.”

The founders believe a physics-based approach yields more realistic movement. No matter how much motion capture data game artists use, there is often something artificial about how character animations look, del Val said.

“It takes a lot of scale for the team, and the result is still not interactive,” he said. “Take the simple problem of throwing a rock at a character. If the animator hasn’t thought about how the character will react beforehand, then the character won’t be able to react. That’s a pretty fundamental problem. That’s exactly what we’re trying to solve. If we manage to solve this problem, then most of the animations in characters that are made for reactions to the environment will become natural. They will be emergent.”

The reason this is hard to do is that making a physics-based character move accurately is a challenge that can only be solved with machine learning technology, del Val said.

“We train this character in a physical simulation, and allow them to learn how to move by themselves, giving them a reference of real human data, to learn to solve a task, not just by itself, but also like a human would do it,” del Val said. “We want to create a product that would be very easy to integrate and use by any game studio.”
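Del Val’s description, training a physics-simulated character to solve a task while staying close to reference human motion data, matches the general shape of imitation-based reinforcement learning for simulated characters, in the spirit of published work such as DeepMimic. The sketch below is a guess at that general reward structure, not Latent’s implementation; every function name and weight is an illustrative assumption.

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose):
    # Reward the simulated character for staying close to a reference
    # pose taken from real human motion data ("like a human would do it").
    pose_error = np.sum((sim_pose - ref_pose) ** 2)
    return np.exp(-2.0 * pose_error)

def task_reward(char_velocity, target_velocity):
    # Reward progress on the task itself, e.g. moving at a target speed.
    vel_error = np.sum((char_velocity - target_velocity) ** 2)
    return np.exp(-0.5 * vel_error)

def total_reward(sim_pose, ref_pose, char_velocity, target_velocity,
                 w_imitate=0.7, w_task=0.3):
    # Blend "move like a human" with "solve the task" -- the two
    # objectives del Val describes. Weights are illustrative.
    return (w_imitate * imitation_reward(sim_pose, ref_pose)
            + w_task * task_reward(char_velocity, target_velocity))

# Example: one reward evaluation for a toy 3-joint character.
sim_pose = np.array([0.1, 0.5, -0.2])
ref_pose = np.array([0.0, 0.5, -0.1])
print(total_reward(sim_pose, ref_pose,
                   char_velocity=np.array([1.2, 0.0, 0.0]),
                   target_velocity=np.array([1.0, 0.0, 0.0])))
```

A reinforcement learning agent trained to maximize a reward of this shape learns movements that both accomplish the task and resemble the human reference, which is what allows reactions, such as responding to a thrown rock, to emerge without being hand-animated.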

This kind of efficiency improvement is enabled by AI, and it is the kind of thing that will be needed to create applications for something as huge as the metaverse, del Val said. But that doesn’t mean artists won’t be necessary. Instead, it means artists will operate at a high level of creative control.

The company has two people now and is hiring. Once the tech is ready, it could work with various game engines; Unity is the startup’s first target platform, and the team can add support for other engines such as Unreal over time, del Val said.



Author: Dean Takahashi
Source: VentureBeat

