
Voice Of The Crew - Since 2002

Los Angeles, California


PP-Supervisor series-Polar Express-Jerome Chen


By Mary Ann Skweres

For director Robert Zemeckis (Forrest Gump, Cast Away), reading Chris Van Allsburg’s The Polar Express to his son became an annual holiday tradition. Now he has brought the enchanting holiday story to the screen with cutting-edge filmmaking that defies categorization. The film is a completely synthetic creation, “shot” without film except for going to negative at the end for distribution.

The task of supervising the team of over 600 artists who created the film’s visual effects fell to senior visual effects supervisors Ken Ralston, a five-time Academy Award winner, and Jerome Chen, honored in 2000 for Stuart Little. The film took motion-capture technology to a new level. Rather than directing motion-capture stand-ins and animators, Zemeckis directed multiple live actors. The actors were equipped with sensors that transmitted their movements into computers for the subsequent animation, turning the process into a “performance capture” technique in which every digital performance was driven by a live-action one. Tom Hanks played a variety of characters: the conductor, the boy and his father, a hobo and Santa Claus himself. Since Hanks never actually appears on screen, all of his characters are CG animations.

The whole preproduction process was run as it would be for a live-action film, and Zemeckis hired mainly his regular collaborators. “In his mind he directed a live-action film. It’s not an animated feature,” says Chen. “The preproduction was pretty normal.”

The visual effects team collaborated with production designers Rick Carter and Doug Chiang on the film’s look. With the aid of visual effects and a budget of more than $200 million, the design could be as grand as they wanted.

Although there was a first-unit film crew, no cameras were actually rolling during the “live-action” shoot. Instead, there were motion performance capture sensors.
And instead of recording the performance onto film from the camera’s point of view, the performances and action were captured as 360-degree data onto hard drives. Also unlike a traditional live-action shoot, none of the composition, lighting or camera moves were decided during this initial stage of production. The sole focus of the first unit was on performance; the camerawork and lighting were done later, in another phase. It took over two years to make the movie, with the line between production and postproduction truly blurred.

Chen describes the approach to character animation as twofold: “The way that a person moves and the way that the person is lit or shaded in the computer.”

To capture movement, Hanks (and the other actors) wore a performance-capture suit with 32 special markers on it. These motion-capture sensors, glued onto the actors by a team of 15 makeup artists, contained jewels (152 on the face alone) that recorded every movement and twitch. Markers were also placed on the sets and props. Anything the actors touched, even the tiny paper circles punched from the tickets by the conductor, had markers. The only things the computer sees are these marker dots and how they move. The recorded movement was then applied to the digital characters. All the movement, all the smiles, all the acting came from the actors. This is how Hanks was able to play multiple characters in the film, and not just vocally.

Live action was filmed on three stages: two for capturing the body performances of groups of actors, and a small stage for capturing the close-up body and facial performances of up to four actors at a time. To give the actors a sense of place and allow them to interact with set pieces, sets and props were constructed, but they were made of chicken wire so the motion-sensor markers would not be blocked. The sets were built in two sizes.
If adult actors were playing the children, the sets were scaled up to make the actors look child-sized. To adjust the adult performances for those of children, the team used a process called retargeting, in which the movement is subjectively mapped onto a child’s proportions. Motion-capture supervisor Damian Gordon surrounded the set with special Vicon motion-capture cameras to pick up the markers. As many as 12 cameramen filmed the performances. Eyelines and screen direction were critical factors in the process. Cinematographer Don Burgess was on the set taking notes to use later.

To help the actors, the film was shot in continuity. Scenes were blocked and run as a whole with all the actors. Scenes with Hanks and the children were motion-captured twice, with the children as stand-ins and with the adult actors. Because the train car was the main set, tape marks were left on the floor so that the chicken-wire benches could be moved into place as needed. When it came time to focus on a particular moment, Hanks and whoever he was working with would go in and do their scene, as if they were going in to do their close-up. At the end of each shooting day, Zemeckis chose performances from the video reference of the motion capture. He could choose a body performance and a separate facial performance.

Although the process sounds complicated, the flow worked so well that the live-action part of production took only 40 days for the first unit and about 12 days for a second-unit shoot, which involved background children without the principal actors. A traditional shoot with child actors could have taken 120 days.

Although the acting was done by adults, Zemeckis cast actual children with the likenesses he desired for each character. Digital laser scans were made of all the human actors, including the children. All were cyber-scanned in costume and makeup, including Hanks in the makeup for the various characters he played.
These were used as visual templates and mapped onto the virtual characters.

The face is one of the hardest things to animate. For each digital character, a virtual muscle system was created with all the muscles a real person would have. The data from the recorded motion of the selected takes then drove those muscles, so all the movement in the face (the cheeks, the forehead, the mouth) is the adult actor’s performance applied to the animated character’s muscle system. The animators, headed by animation supervisor David Schaub, fine-tuned the animation for maximum realism. For instance, a younger face has less elasticity than an older one, so an adult performance on a child character may need an adjustment to what the computer is producing. Sony Imageworks’ procedural facial system was used to map the facial data. Alias MotionBuilder was used to attach the body data and to place the digital characters into a digital set that matched the chicken-wire sets and props.

The eyes are the most expressive part of the face. These were animated by studying video reference of the actors’ performances and matching what the eyes do. “We don’t get motion capture data from the eyes because you’d have to put a marker on the eyes,” says Chen. “You could put in a reflective contact lens, but a contact lens on the eye still moves around. When it comes time for the computer to shade and give texture and reflection, then these elements are artistically added in until they feel right to us. The eyes were definitely the hardest part.” To make the characters lifelike, more information was needed in the eyes so that the audience could relate to them as characters.
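The muscle-driving idea described above can be sketched in a few lines of code. This is purely a hypothetical illustration, not the actual Imageworks facial system: the function, marker and muscle names are invented for the example. The gist is that each facial marker’s displacement from its rest pose becomes an activation value for a virtual muscle, and animators can then scale the result, for instance damping an adult performance for a child’s less elastic face.

```python
# Hypothetical sketch of data-driven facial muscles; names are illustrative,
# not from the actual production pipeline.

def solve_muscles(rest, captured, influences, tune=1.0):
    """rest, captured: marker name -> (x, y, z) position.
    influences: muscle name -> (marker name, axis index) it reads from.
    tune: animator adjustment applied on top of the raw performance."""
    activations = {}
    for muscle, (marker, axis) in influences.items():
        # Displacement of the marker from its rest pose drives the muscle.
        displacement = captured[marker][axis] - rest[marker][axis]
        activations[muscle] = displacement * tune
    return activations

rest = {"mouth_corner_l": (1.0, 0.0, 0.0)}
smile = {"mouth_corner_l": (1.5, 0.2, 0.0)}            # a captured smile frame
influences = {"zygomaticus_l": ("mouth_corner_l", 0)}  # smile muscle reads x

adult = solve_muscles(rest, smile, influences)             # raw adult performance
child = solve_muscles(rest, smile, influences, tune=0.8)   # animator-damped child
print(adult, child)
```

The `tune` parameter stands in for the kind of fine-tuning Schaub’s animators did: the captured performance stays the source of truth, and the adjustment is a thin layer on top.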
The animators also created a few characters using traditional keyframe animation, in scenes involving animals, birds, and the cartoony engineer and his sidekick, but primarily the animators’ job was to preserve the motion-captured performances.

To achieve the cinematic look of a live-action film, cinematographers Don Burgess and Rob Presley operated the virtual cameras in post during the layout process. Special technology was developed so that the feel of operating the camera was the same as in live action. Burgess was given a remote head like the one on a Technocrane to create the camera moves. The system, dubbed Wheels, could tilt and pan the virtual camera; dolly moves were keyframed. The filmmakers could put the virtual camera anywhere they wanted in the 3D environment. They could “shoot” a close-up, an over-the-shoulder, a master, anything they normally would shoot on set, only this time the shots were of digital characters on digital sets, inside a computer. Shots could be done that would have been extremely difficult, if not impossible, in live action. Chen notes, “It feels like a Bob movie. It has the trademark of his camerawork.”

Besides creating the characters, the team had to create the whole environment of the film. All the scenes were modeled, painted and sometimes animated using Maya, Houdini, RenderMan, Imageworks’ proprietary lighting system and its new proprietary program SPLAT, used to render smoke. Although most of the action takes place on the train, the train moves through more than 20 environments. It plunges like a roller coaster into steep ravines, encounters herds of caribou and packs of wolves, and traverses a frozen sea until it finally arrives at the North Pole, a city inhabited by 30,000 singing and dancing elves, including a cameo by Aerosmith’s Steven Tyler as an elfin rock-and-roller.

The movie is not meant to be photoreal. It captures the dreamlike imagery of the book’s impressionistic pastel drawings. As in the book, the lighting is very dramatic and the composition very stylized. The images are powerful. “We had to work hard to keep the spirit of the drawings,” says Chen. “Even though it was very, very hard, the most exhilarating part was that we knew we were doing something that had never been done before. The crew was all aboard.”
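The Wheels setup described in the layout section lends itself to a small sketch. To be clear, this is an invented illustration of the concept, not the system built for the film: wheel rotations from an operator accumulate into live pan and tilt angles on a CG camera, while dolly positions are keyframed and interpolated separately.

```python
# Illustrative sketch of a "Wheels"-style virtual camera; class and method
# names are my own, not the production tool's API.

class VirtualCamera:
    def __init__(self):
        self.pan = 0.0    # degrees, driven live by the pan wheel
        self.tilt = 0.0   # degrees, driven live by the tilt wheel
        self.keys = {}    # frame -> (x, y, z) keyframed dolly positions

    def turn_wheels(self, pan_delta, tilt_delta):
        # Wheel input accumulates, like a geared remote head.
        self.pan += pan_delta
        self.tilt += tilt_delta

    def set_dolly_key(self, frame, pos):
        self.keys[frame] = pos

    def dolly_at(self, frame):
        # Linear interpolation between the surrounding dolly keyframes.
        before = max(f for f in self.keys if f <= frame)
        after = min(f for f in self.keys if f >= frame)
        if before == after:
            return self.keys[before]
        t = (frame - before) / (after - before)
        return tuple(a + (b - a) * t
                     for a, b in zip(self.keys[before], self.keys[after]))

cam = VirtualCamera()
cam.turn_wheels(10.0, -2.5)            # operator pans right, tilts down
cam.set_dolly_key(0, (0.0, 1.5, 0.0))  # keyframed dolly move over 48 frames
cam.set_dolly_key(48, (4.0, 1.5, 0.0))
print(cam.pan, cam.tilt, cam.dolly_at(24))
```

The split mirrors what the article describes: the operated axes keep the human feel of live camerawork, while the dolly path is authored as animation data.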

Written by Mary Ann Skweres


