
How Laika Uses Intel ML to Hide the Seams in Stop Motion Animation 

Image courtesy of Laika.

The Oscar-nominated animators at Laika are practitioners of a grueling art. The 15-year-old movie studio – which has so far released Coraline, ParaNorman, The Boxtrolls, Kubo and the Two Strings, and Missing Link – works in stop motion animation, a technique that requires physical puppetry and real-world lighting, painstakingly adjusted frame by frame. Even for these animators, however, higher-tech solutions are sometimes necessary – and now, Laika is highlighting how an Intel-provided machine learning solution is helping the studio streamline its workflow.

“For every frame across one second of film, we pose a puppet, we take an exposure, we move the puppet or something else within that environment in the smallest of increments, we take another exposure,” explained Steve Emerson, the visual effects supervisor at Laika. “And then after we’ve done 24 of those, we have a one-second performance. So we do this again and again and again and again until we have a 90-minute, 95-minute, two-hour movie.”

This punishing animation procedure means that Laika can capture maybe a second’s worth of footage per day – on a good day – and that a given film takes several years to complete (roughly 18 months each for shooting and post-production).
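The arithmetic behind that schedule is stark. The sketch below, in Python, works it out from the figures quoted above; the runtime and one-second-per-day pace come from the article, while the rest is rounded for illustration:

```python
# Back-of-the-envelope stop motion arithmetic.
# Runtime and shooting pace are the figures quoted above; this is
# an illustration, not Laika's actual production schedule.

FPS = 24               # exposures per second of finished film
runtime_minutes = 95   # a typical feature-length runtime
seconds_per_day = 1    # ~1 second of footage captured on a good day

total_frames = runtime_minutes * 60 * FPS
animator_days = runtime_minutes * 60 / seconds_per_day

print(f"{total_frames:,} individual exposures")  # 136,800 exposures
print(f"~{animator_days:,.0f} animator-days")    # ~5,700 animator-days
```

That the shoot nonetheless fits into roughly 18 months owes largely to parallelism, with many sets and animators working at once; it also explains why automating any repetitive per-frame task pays off thousands of times over.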

Laika is no stranger to using advanced technology to accelerate these processes. To adjust characters’ facial expressions, Laika swaps out magnetic faceplates on the puppets – and to make these facial hotswaps feasible within an already lengthy workflow, the studio designs the faces digitally, then uses 3D printers to produce them, delivered on “cookie sheets” to the puppeteers. Then the studio took it a step further.

“One thing that was decided early on was that in order to give them a greater range of expression, they split the face down the center – right between the eyes,” Emerson said. “And that would not only give the opportunity to the animator to change his or her mind on a given frame on which mouth or brow might be snapped in at that moment, but it also allowed us to achieve a greater range of expression for the puppet performances with fewer facial components.”
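The payoff of that split is combinatorial: separate brow and mouth pieces can be mixed freely, so the number of printed parts grows additively while the number of achievable expressions grows multiplicatively. A quick sketch, using hypothetical component counts rather than Laika’s actual inventory:

```python
# Illustrative combinatorics of split faceplates.
# The component counts are hypothetical, not Laika's actual inventory.

brows = 50    # distinct upper-face (brow/eye) pieces -- assumed
mouths = 200  # distinct lower-face (mouth) pieces -- assumed

# One-piece faces: every expression needs its own printed faceplate.
one_piece_parts = brows * mouths    # 10,000 printed faces

# Split faces: parts add, while expressions still multiply.
split_parts = brows + mouths        # 250 printed half-faces
expressions = brows * mouths        # the same 10,000 combinations

print(one_piece_parts, split_parts, expressions)
```

The same range of performance, in other words, for a tiny fraction of the printing – at the cost of a seam running right between the eyes.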

Examples of facial seam lines: uncorrected, rotoscoped, and erased. Image courtesy of Laika and Intel.

One unfortunate side effect, however, was a visible seam bisecting the eyes of every Laika puppet. For a studio that prides itself on visual immersion, this posed a problem requiring a delicate balance between digital correction and preserving the realism of the puppetry. Until recently, Laika had solved the problem with a digital tracing process called rotoscoping. That process, unfortunately, required a human in the loop to set boundaries for the rotoscoping on each frame of each face – across thousands of “performances” per film, each several seconds long at 24 frames per second.

“It’s time-consuming,” Emerson said. “And honestly, a lot of artists don’t like to do it.”
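Laika has not published the details of its cleanup pipeline, but the general shape of the task (trace the seam’s boundary on a frame, then fill those pixels in from their surroundings) can be sketched with off-the-shelf tools. In the illustration below, using OpenCV’s inpainting, the hypothetical `seam_mask.png` stands in for the boundary an artist would traditionally rotoscope by hand:

```python
# Minimal sketch of erasing a facial seam on one frame via inpainting.
# Illustrates the general technique only -- not Laika's actual pipeline.
# frame.png and seam_mask.png are hypothetical inputs.
import cv2

frame = cv2.imread("frame.png")  # one exposure of the puppet
mask = cv2.imread("seam_mask.png", cv2.IMREAD_GRAYSCALE)  # white = seam

# Fill the masked pixels from their surroundings. The mask is the part
# an artist would otherwise trace by hand, frame after frame.
cleaned = cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("frame_cleaned.png", cleaned)
```

The erasing itself is cheap; the expensive part is producing that mask for every frame of every performance. That per-frame tracing is the work artists dislike, and the step worth automating.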

Intel sends researchers to help solve the problem

Fortunately, other parties were interested. Intel, which had already provided the technology for Laika’s workstations and render farm, had a team looking for an applied, real-world image segmentation problem to solve. Intel sent its researchers to visit Laika, where they observed the animation process and began working on a machine learning-based solution for the rotoscoping process.

Intel’s solution, powered by Xeon CPUs, surprised Laika. “Instead of trying to make a very generalized tool that just recognized the puppet faces, they made a tool that was very specific to this task,” explained Jeff Stringer, director of production technology for Laika. “And it turns out that when you do that, the data you need is just as specific, instead of generalized.” While Laika had attempted to produce training data for the model through meticulous photographs of its puppet faces from various angles, the team discovered that “five or six shots of well-designed ground truth data” was enough to train the system effectively for automated rotoscoping of the facial seams.
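Intel has not published the model’s internals, but the workflow Stringer describes – train a seam-segmentation network on a handful of labeled frames, then run it across the rest of the shot – can be sketched in PyTorch. Everything below (the tiny network, the dummy tensors standing in for labeled frames, the hyperparameters) is an illustrative assumption:

```python
# Hedged sketch of few-shot seam segmentation in PyTorch.
# The architecture, data, and hyperparameters are illustrative guesses;
# Intel has not published the details of its actual tool.
import torch
import torch.nn as nn

class TinySeamNet(nn.Module):
    """Small conv net mapping an RGB frame to per-pixel seam logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # logits: seam vs. not-seam per pixel
        )

    def forward(self, x):
        return self.net(x)

model = TinySeamNet()  # small enough to train and run on CPUs
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# "Five or six shots of well-designed ground truth": a few frames, each
# paired with a hand-rotoscoped seam mask. Random tensors stand in here.
frames = torch.rand(6, 3, 128, 128)                  # labeled frames
masks = (torch.rand(6, 1, 128, 128) > 0.95).float()  # hand-drawn masks

for step in range(200):  # the dataset is tiny, so simply fit it closely
    optimizer.zero_grad()
    loss = loss_fn(model(frames), masks)
    loss.backward()
    optimizer.step()

# Inference: predict a seam mask for each remaining frame in the shot;
# the masks then feed the same erase step that manual rotoscoping fed.
with torch.no_grad():
    predicted = torch.sigmoid(model(frames[:1])) > 0.5
```

The striking part of Laika’s account is the data efficiency: because the tool targets one narrow, visually consistent task rather than puppet faces in general, a handful of well-designed labeled frames is enough.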

“It is a bit of an odd process,” Stringer said of stop motion animation, “and Intel’s willingness to go there with us and then build a tool that was really tailored specifically to this task – that was unusual.”

The line-erasing process still requires some hands-on work: after all, the animators want to make sure the ML doesn’t go overboard and interfere with the facial performances. Still, the studio is excited about the possibilities of new, more ambitious ML applications for stop motion animation in the future. One such example is compositing crowds out of individually shot puppets – an arduous task when done by hand.
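Laika has not said how such a crowd tool would work, but the underlying compositing step, layering separately shot puppet plates over an empty set, is standard. A rough OpenCV/NumPy sketch, with hypothetical file names and a straight-alpha RGBA convention assumed:

```python
# Rough sketch of compositing a crowd from separately shot puppet plates.
# File names and the straight-alpha RGBA convention are assumptions;
# this illustrates the general technique, not Laika's pipeline.
import cv2
import numpy as np

def over(fg, bg):
    """Standard 'over' operator for straight-alpha RGBA float images."""
    a = fg[..., 3:4]
    out = np.empty_like(bg)
    out[..., :3] = fg[..., :3] * a + bg[..., :3] * (1.0 - a)
    out[..., 3:4] = a + bg[..., 3:4] * (1.0 - a)
    return out

def load_rgba(path):
    """Read an image with its alpha channel intact, as floats in [0, 1]."""
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)  # keep the alpha channel
    return img.astype(np.float32) / 255.0

# Each plate: the same puppet shot in a different crowd position, with
# an alpha matte that is produced by hand today.
plates = [load_rgba(f"puppet_plate_{i:02d}.png") for i in range(12)]

crowd = load_rgba("empty_set.png")  # the set photographed with no puppets
for plate in plates:
    crowd = over(plate, crowd)

cv2.imwrite("crowd_composite.png", (crowd * 255).astype(np.uint8))
```

The hand-done part today is the alpha matte for each plate – exactly the kind of repetitive, consistent artist process Stringer describes as the target for future tools.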

“Going forward, we could do [this] again and again,” Stringer said. “Just look for artist processes that are repetitive and consistent and then try and build these kinds of tools around those systems and accelerate what they’re doing – but not replace what they’re doing.”

EnterpriseAI