AI is really, really good at automating repetitive tasks. Laika, the stop-motion animation studio best known for its feature films Coraline, ParaNorman, The Boxtrolls, Kubo and the Two Strings, and Missing Link, captures and edits tens of thousands of frames for each of its movies. Needless to say, the company has plenty of repetitive tasks that could benefit from acceleration. So, Laika teamed up with Intel to create software tools able to save time and keep its animators focused on their craft.
The studio is training neural networks to detect and help correct lines and other artifacts on the faces of its puppets during post-production. These artifacts are inherent in the way Laika builds its characters, so the AI has to be trained to identify the offending lines, trace them through a process called rotoscoping, and track them. At the same time, Laika wants to preserve the characteristic imperfections of its handmade artwork, leaving artists in charge of what goes and what stays.
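In spirit, the detection step is about flagging pixels where an artifact breaks an otherwise smooth surface. As a toy illustration only (not Laika's actual pipeline, which uses a trained neural network over whole frames), a seam between two snap-in face parts shows up as a sharp brightness change that even a simple gradient scan can flag:

```python
def find_seam_candidates(row, threshold=30):
    """Return pixel indices where brightness jumps sharply --
    a crude stand-in for the artifact detection a trained
    network performs across entire frames."""
    return [i for i in range(1, len(row))
            if abs(row[i] - row[i - 1]) > threshold]

# A toy scanline: smooth skin tones with a dark seam around index 4.
scanline = [200, 201, 199, 200, 120, 198, 200, 202]
print(find_seam_candidates(scanline))  # -> [4, 5]
```

A real system must also distinguish seams from intentional handmade texture, which is why Laika keeps artists in the loop rather than trusting the detector outright.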
If all goes according to plan, you will enjoy the product of Laika’s efforts in the studio’s next film.
Key points
- Creating a modern stop-motion feature film is more complex than ever; Laika’s latest project, Missing Link, involved more than 106,000 different 3D-printed faces.
- Cleaning up unwanted lines is currently a manual and repetitive process, making it an ideal target for machine learning.
- Laika teamed with Intel to build tools based on the oneAPI programming model. These leverage the studio’s existing Xeon CPU-based infrastructure to accelerate digital paint and rotoscoping tasks.
A classic film-making technique meets modern technology
In stop-motion animation, objects are photographed, moved slightly, and photographed again, over and over. Flipping through the still images quickly enough creates the illusion of movement. It’s a technique with origins in the 19th century.
Many of us were assigned stop-motion art projects as early as elementary school. If we revisited those amateur efforts today, we’d undoubtedly notice some pretty severe technical imperfections, such as seams in the clay we used to shape our puppets and alignment issues with our cameras. Plus, there’s only so much artistic expression to extract from items lying around the house.
Professional stop-motion animators must overcome those same challenges, while taking storytelling and artistry to the next level. In a recent webinar, Steve Emerson, Laika’s visual effects supervisor, stepped through the studio’s labor-intensive process.
“What we do here at Laika is we take puppets—fully articulated puppets—and we create miniature sets to put those puppets within, with real-world light, and we capture them one frame at a time. So, for every frame across one second of film, we pose a puppet, we take an exposure, we move the puppet or something else within that environment in the smallest of increments, and we take another exposure. Then, after we’ve done 24 of those, we have a one-second performance. So, we do this again and again and again and again until we have a 90, 95-minute, 2-hour movie.”
The math is almost overwhelming. Laika’s most recent project, Missing Link, had a runtime of 93 minutes. At 24 frames per second, that means the studio’s animators had to painstakingly capture nearly 134,000 frames. According to Emerson, each animator can capture somewhere between three and a half and four seconds per week. “It is a time-consuming, intense, insane process at times, that is rife with issues. Often those issues are corrected in post-production.”
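The numbers Emerson cites check out with a few lines of arithmetic (runtime and capture rates are from the article; the animator-week range is an illustrative back-of-the-envelope estimate):

```python
runtime_min = 93
fps = 24

# Total frames in the finished film.
total_frames = runtime_min * 60 * fps
print(total_frames)  # 133920 -- "nearly 134,000 frames"

# At 3.5-4 seconds of finished footage per animator per week:
total_seconds = runtime_min * 60
weeks_fast = total_seconds / 4.0   # optimistic pace
weeks_slow = total_seconds / 3.5   # slower pace
print(round(weeks_fast), round(weeks_slow))  # roughly 1395 to 1594 animator-weeks
```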
Creating a more engaging stop-motion experience
Laika’s projects are successful in part because they push the boundaries of what audiences expect from stop-motion animation. On one hand, they exemplify the technique’s visceral aesthetic. On the other, they transcend the palette of emotions typically available from a puppet. “…we get there using technology,” says Laika’s Emerson.
Before puppets ever make it to the set, Laika’s facial animation team uses software to determine how a shot is going to look, and then 3D prints the faces in advance. For Missing Link, Laika created more than 106,000 different faces composed of overlapping textured parts, such as skin, eyebrows, teeth, and tongues. Each face was split through the center, giving animators an opportunity to snap in features on a per-frame basis and achieve a wider range of expressions using fewer facial components. But it also meant that the characters in Laika’s films all share an obvious seam that has to be cleaned up in visual effects.
“What we’ve been doing for the past five films is we’ve been using rotoscoping, which is a high-tech kind of tracing that you do with a computer,” says Emerson. “…you’re essentially drawing lines and shapes to tell the computer this particular area I want to fix in some way.”
Imagine you’re painting a room. Before starting on the walls, you tape off the ceiling and floorboards to protect them. Although the step adds significant time to the process, it’s necessary for a clean finished product. The same goes for rotoscoping. And as 3D printing technology evolves, allowing Laika to create even more shapes for increasingly nuanced performances, rotoscoping them all gets even harder.
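At its core, a rotoshape is a closed outline that software converts into a per-pixel mask marking the region an artist wants fixed. A minimal sketch in plain Python (illustrative only; production rotoscoping tools work with animated splines and per-frame tracking, not raw polygons):

```python
def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test: is the point (x, y) inside the outline?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize_rotoshape(poly, width, height):
    """Turn a drawn outline into a binary mask: 1 inside the shape
    (the area to fix), 0 everywhere else."""
    return [[1 if point_in_polygon(x + 0.5, y + 0.5, poly) else 0
             for x in range(width)]
            for y in range(height)]

# A small diamond-shaped rotoshape on an 8x8 frame.
outline = [(4, 1), (7, 4), (4, 7), (1, 4)]
mask = rasterize_rotoshape(outline, 8, 8)
```

Like the painter's tape in the analogy above, the mask confines every subsequent paint fix to the outlined region; the expensive part is redrawing and tracking that outline frame after frame, which is exactly the step Laika wants machine learning to take over.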
Harnessing AI for rotoscoping (without affecting the art)
Laika’s decision to clean up the lines on its puppets’ faces wasn’t made lightly. The team knew it’d be a lot of work to get rid of the artifacts in every frame of the film. Ultimately, though, creating a stronger emotional connection with more realistic-looking characters took precedence.
To help accelerate the process of rotoscoping a growing library of face shapes, the team started exploring machine learning solutions using Xeon CPUs with the hope of a 50% time savings. Along the way, it made a couple of observations that ran counter to common tenets about AI.
First, the applied machine learning group at Intel helped Laika figure out that feeding its neural network lots of general training data about puppets wasn’t as beneficial as creating a toolset specific to a handful of characters.
“It turned out that, when you focus in on this particular task of creating a rotoshape, which is an area you want to mask or track, from tracking points on the face, a good five or six shots of well-designed ground truth data was enough to train the system,” says Jeff Stringer, director of production technology at Laika.
Laika’s team was also interested in tempering the software’s quest for perfection. They wanted a solution that could isolate the lines an artist would want to clean up while preserving the blemishes created by human hands, which add authenticity. So, when it came to machine learning, the company wanted to make sure its animators got the last word.
“The machine would take a crack at it,” says Emerson. “It’d give you back your tracking information, your rotoshapes. But then, at the same time, there was an opportunity there for an artist to assess what had been done and have the tools to go back in and augment that in whatever way he or she needed to.”
Laika went the Xeon route for its first foray into machine learning partly because Intel’s processors were already powering the studio’s workstations and render farm. “If the applied machine learning group at Intel was going to build something, we knew they would be able to optimize it for those CPUs,” says Stringer. And because the toolset is built using oneAPI components, its code won’t need to be rewritten to support future architectures.
The marriage of art and technology fosters a roadmap for innovation
Thanks to advances in 3D printing and machine learning, Laika’s artists are creating more emotive characters that feel like action figures come to life. The studio continues evolving a century-old craft to tell stories in ways that wouldn’t have been possible in years past. Perhaps that’s why all five of its films were Academy Award nominees for Best Animated Feature.
Moving forward, Emerson wants to incorporate bigger environments and larger crowds into Laika’s projects. But once again, there are technical limitations to overcome. “You would never see big crowds in stop-motion films prior to Laika because there just isn’t the ability to create that many puppets. So, if you’re going to create an enormous crowd of characters, unless you’re going to go out and build a thousand puppets to put back there, you’re going to need to infuse some digital technology.”
Laika already has experience adding digital extras using mattes, alpha channels, and rotoscoping. This combination of effects, which requires multiple exposures to composite a crowd behind the puppet, is, again, time-consuming and expensive.
“What would be incredible with this is if we could do full character rotomation, to separate characters from animation plates without having to put green screens behind them. And what that would enable us to do would be to skip that second exposure. And then stop motion animators wouldn’t have to stop to put the green screen in. They could just keep moving forward with the performances, because a lot of what they are doing is all about rhythm.”
Technology plays a supporting role to Laika’s art. But it’s clear the team wants new capabilities to help streamline the repetitive, costly tasks that keep animators from capturing their ideas on camera. There’s already plenty of tech involved in Laika’s latest features. Soon, machine learning will help speed up the rotoscoping process. And beyond that, the studio’s technical team has plenty more on its wish list. Fortunately, software tools built using a unified programming model make it easy to support whatever hardware Laika needs to accelerate the future of stop-motion animation.
Go Deeper: Laika Studios & Intel Join Forces to Expand What’s Possible in Stop Motion Filmmaking
Author: Chris Angelini
Source: Venturebeat