
Joe Letteri on the Visual Effects of Avatar

January 22, 2010
Avatar's visual effects supervisor Joe Letteri

After supervising visual effects on massive projects such as Peter Jackson’s final two films in his Lord of the Rings trilogy, Stephen Sommers’ Van Helsing, and Jackson’s King Kong, Joe Letteri might have faced an even greater challenge with James Cameron’s Avatar. For this newest project, which is winning troves of awards and breaking both worldwide and domestic box office records, Letteri supervised effects for a movie that was a predominantly virtual production, with fully realized computer-generated characters and worlds, on top of being shot in 3D with new camera processes.

For Letteri, the origin of this project dates back five years.  “It started as we were finishing Kong,” he said.  “We talked to Jim about 3D because Peter was interested in shooting Kong in 3D.  We saw the camera system he had built. Jim knew what we were up to with Rings and Kong. He thought it was getting close to being possible to do Avatar.”

Certainly, one of the major hurdles with the last two Lord of the Rings films and King Kong was the creation of a photorealistic character based on motion-capture and facial-capture technologies.  “With Gollum, we had a character with full-on dialogue with the other actors,” Letteri explained of The Two Towers and Return of the King.

“Jim came down for a test, and we took it from there – we could do on Avatar what we had done on Rings and Kong. We would try to do it all here [at Weta Digital in New Zealand] and that encompassed a lot.  Once we set up our team to do facial capture, we realized that if the data was good, it could go into the movie. Suddenly, we were already into it, and why throw it away? From tests to performance capture, it was only several months.”


Though Weta craftspeople were familiar with the technology that Cameron required, there were definitely new directions in which the director wished to create his world in Avatar. Rather than create the computer-generated character material largely in postproduction, where CGI artists typically place digital characters into live-action plates, Cameron wanted to see his camera moves on the day he shot his motion-capture and facial-capture scenes. “So, the virtual stage was a live-action stage,” Letteri explained.  “The biggest difference is that Gollum was being put into a live-action plate whereas Zoe Saldana and Sam Worthington in Avatar were performing in a virtual world, and we were putting everything in digitally around them.”

Certainly, neither motion capture nor facial capture is in and of itself a new technology for cinema, but the way in which the Avatar team integrated the material into a virtual environment took the entire project to a new level. “We came up with the idea of facial motion-capture like we did for Kong,” said Letteri.  “We tracked facial motion-capture with video cameras on a head unit. The idea is that you could move easily from the virtual to the real world. Other films have been done in that way. The difference is that we created this so that the stage was a real-time stage.”

On other films with motion-capture technology at their cores, performances are captured by cameras and the material is processed later with backgrounds and other actors.  Not so on Avatar. “On the day that motion-capture was shot, Jim could point the virtual camera and see the actors the same way that you would in a live-action film,” said Letteri,  “but he’s seeing the virtual characters in the camera.  Blocking the actors occurred during the performance-capture stage. These camera moves throughout the movie are like what Jim would have done [in live-action photography]. He would do them on the day on the capture stage, but you can play back the characters like a pre-recorded track. You can pick the take that you like and play it back, and you can do different camera moves. You can get all of your coverage after the fact.”

With Cameron on the capture stage looking through his virtual camera, he would see his digital actors and backgrounds, according to Letteri, “looking like a video game ten years ago,” but it generated enough material for Cameron to resculpt the terrain of the alien planet, Pandora, and move the camera around his characters.

“Then, Jim was able to go back in and do his camera coverage,” said Letteri.  “Once he had that worked out, we had a template to cut with. He would send that to us [in New Zealand], and we would get the capture data and the reference video from the hi-def cameras in the capture stage. One hi-def camera was always on the actors in roughly the camera position Jim wanted. We had the body capture data, the facial capture data, and the HD camera footage for reference.”

With all of the data intact, Letteri’s considerable team at Weta would then animate the characters, principally, the Na’vi creatures and the native animal-esque inhabitants of Pandora. Then, their final animation would be implemented onto the hi-res models of all CG creatures. “We would go a couple of rounds of notes then render it, and put the characters into the environment and put the lights on the scene,” said Letteri. “We would start rendering everything — weeks later we would have a finished shot.”

To realize this blend of virtual cinematography and computer-generated animation, Letteri worked closely with a team in the US, traveling from New Zealand monthly to review the performance-capture process with key collaborators.  “Jim designed the virtual camera with Rob Legato and set up the virtual stage in Playa Vista,” Letteri stated. “When I first got there, Rob had the prototype already working, but we had done a lot of motion-capture with Gollum and Kong. On Lord of the Rings, we used Giant Studios’ software before they had their own stage. Giant set up the real-time performance capture to realize the idea of looking at Gollum’s motion-capture in real-time. We needed that to get Gollum registered in the plates. It was a good fit because if Giant did the capture [for Avatar], we were familiar with their data. We wrote the software that would do the facial capture and analysis. Stephen Rosenbaum was our on-set supervisor, and Dejan Momcilovic was facial capture supervisor. Glenn Derry is an engineer who did the electronics on the virtual stage.”

Using 102 capture cameras with performance-capture suits and head-rigs fitted to the actors, Cameron started shooting in April 2007 for six months of work with the actors who played Na’vi aliens. Simultaneously, Letteri would receive the data in New Zealand and start translating it onto Weta’s digital models of the characters, adding other creatures into the shots.  “You can talk to Jim on the day about any particular virtual sets, changing art direction and lighting,” said Letteri.  “We talked with Jim about what he wanted, and we got Sam’s capture data and sent the digital Sully back to Jim. He recorded with Sam in the viewfinder, but he could frame for creatures as well as Sam. He used pre-recorded animation, and we got final animation back and put it together in the final shot. Once you put in the hi-resolution detail, we were pretty far along at that point with animation.”

Even creatures were performance-captured on Avatar, including horses and puppets of flying beasts. “They had a banshee puppet whose head Jim puppeteered himself,” Letteri said of work on the Playa Vista stage.  “As much as Jim could, if it could be done on the day, he would do it.”  But not everything was performance-captured.  “If there was no practical way to do it, we would do it digitally and send it to him.”

“We had to key-frame a kiss, and these are the moments where the animators really had to do the work.”

As the film grew in scale, several visual effects vendors got involved. For instance, wide shots in the battle scenes were created by Industrial Light & Magic. Weta realized all of the featured characters’ dialogue and all Na’vi performances.  Additionally, the company digitally created the entire planet of Pandora, including all of its exotic foliage.

“What was kind of amazing about it is that Jim set this up as this whole immersive experience — inspired by his dives — the bioluminescence and these strange creatures,” Letteri said.  “We really tried to get that on Pandora.  In 3D, it looks really spectacular.  We think it looks great just seeing it as it is, but it really opens up in 3D.”

After five years of work on Avatar, what became most apparent to Letteri, who also supervised visual effects for Peter Jackson’s new film The Lovely Bones, is that much of what he used to consider postproduction, he now has to do before any shooting commences. “Avatar gives you that direct feedback,” he said.  “It’s taking the last century of filmmaking and applying it to a virtual world. When you are doing that, you want to retain as much as what you know about making good films as possible.”

Now that Cameron and company have set up a system for creating a film with largely virtual elements, other productions are picking up on it. Peter Jackson’s own Tintin — with Steven Spielberg directing the first installment — was shot using the same technology that Cameron set up.

“What we came up with was a pretty good way of working,” said Letteri.  “We try to take the live-action world and apply it to the digital world.  It becomes pretty intuitive.  Once you see it, you get it right away.”
