It’s not every film that’s big enough to warrant three separate visual effects supervisors, but Tim Burton‘s Alice in Wonderland, Disney‘s revisiting of material it last brought to the screen in its 1951 animated feature, apparently nibbled the cake that said “Eat Me,” and became that big.
The original supe, now senior supervisor, was Ken Ralston, of Star Wars, Star Trek and Forrest Gump fame. “Ken initially had the job,” reports Sean Phillips, “but realized such a large show in such a short amount of time” would be nearly impossible to pull off.
Visual effects technology may have improved since those Star Wars/Trek days, but films like Alice are also created almost entirely in post now, so Ralston brought in reinforcements. Thus Phillips – who had known Ralston from two previous Bob Zemeckis outings, Beowulf and The Polar Express – found himself brought in to oversee much of the considerable rendering, along with Carey Villegas, who supervised on Hancock and oversaw visual effects plate work for films like I Am Legend and Spider-Man 3.
The plate credits were important, since Phillips notes, Villegas was brought in to oversee “more of the live-action compositing.”
Villegas said he’d never “had that experience before,” referring not to compositing but to being part of a multi-supervisor visual effects team, as opposed to working with in-house supervisors at different vendors.
Ralston, he says, was the visionary, overseeing the whole, especially “in terms of getting shots,” which was important, since “95% of the movie” was done in a “full greenscreen environment.”
And while there were 40 days of actual shooting versus just over a year of postproduction, some of the rendering needed to be done in advance, in a previs, rough-draft kind of way, since Burton wielded two monitors on set – one showing the live greenscreen footage as it was shot, the other showing that character composited into the Wonderland environment that would eventually surround him.
Of course, sometimes the character was the thing in need of a good composite. Villegas recounts that the director wanted to bridge the gap between live-action and CG characters, making animated characters more lifelike and amping up the “caricature level of live actors.” For example, Helena Bonham Carter‘s turn as the ultra-brachycephalic Red Queen, or perhaps both Tweedledum and Tweedledee.
For the latter characters, Phillips reports that “animators worked with Tweedle’s head, blending the nose, eyes (and other features) of the actor’s face.” The actor in question is Matt Lucas, looking a bit like Uncle Fester in The Addams Family, or at least a young Fester – who’s been cloned.
Phillips adds that Lucas had a double he was playing off of in a lot of scenes, but later, elements from his own face were digitized into whichever Tweedle was “the other.”
But Phillips mostly focused not on eyes and noses but on full-scale created environments, citing the “rabbit hole sequence,” the “Underland Garden,” and pretty much “all CG until the tea party.”
In keeping with Burton’s edict to meld the digital and the fleshly, Phillips also had a lot of “CG cloth work,” saying that “anytime Alice shrunk or grew, he didn’t want her clothing to come along with (her)” – “he” being Burton. In other words, in the name of what might be called “fantastic realism,” Burton dispensed with the old convention of having inanimate objects grow or shrink with their biological counterparts. Get this man a job directing a Hulk installment!
In addition to creating Wonderland above and Underland below, there was the matter of delivering all that digital eye candy in full three dimensions, as intended by producer and director alike, long before Avatar was a twinkle in the Academy’s – or box office’s – eye.
The difference here was that the decision had always been to go from two to three dimensions in post.
Burton “didn’t shoot with stereo cameras,” Phillips confirms. They used a 4K Dalsa Evolution for the Red Queen scenes, among others, but it was flying solo, the idea being to “dimensionalize after the fact.”
Among the things that helped it work was a magnet tool within Maya that “projects an image onto geometry” extrapolated from the original 2D image.
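For the curious, the underlying idea – projecting a flat frame onto depth geometry and re-rendering it from a second eye’s viewpoint – is known generically as depth-image-based rendering. The sketch below is not the studio’s actual tool or pipeline; it’s a minimal, invented illustration of the principle, in which each pixel of a 2D frame is shifted horizontally by a disparity proportional to its depth, with nearer pixels winning occlusions and revealed gaps crudely filled from the left:

```python
import numpy as np

def synthesize_right_eye(frame, depth, max_disparity=8):
    """Naive depth-image-based rendering sketch (illustrative only).

    frame: (H, W) grayscale image.
    depth: (H, W) values in [0, 1], where 1.0 is nearest the camera.
    Nearer pixels get larger horizontal shifts, simulating parallax.
    """
    h, w = frame.shape
    right = np.zeros_like(frame)
    # Z-buffer so that, where shifted pixels collide, the nearer one wins.
    zbuf = np.full((h, w), -1.0)
    disparity = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]  # shift left for the right-eye view
            if 0 <= nx < w and depth[y, x] > zbuf[y, nx]:
                right[y, nx] = frame[y, x]
                zbuf[y, nx] = depth[y, x]
    # Crude hole filling: copy the nearest written pixel from the left,
    # standing in for the paint/cleanup work a real pipeline would do.
    for y in range(h):
        for x in range(1, w):
            if zbuf[y, x] < 0:
                right[y, x] = right[y, x - 1]
    return right
```

A one-row toy example: a “near” object (depth 1.0) in the middle of a flat background shifts left in the synthesized view, and the gap it leaves behind is filled from its neighbor – the same reveal-and-fill problem that makes post-conversion labor-intensive at film scale.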
As Phillips looks back on the process – “the most creative I’ve ever had a chance to work on” – he says that “in practice, it worked pretty well.”
Villegas notes that to make the process work, they had to “capture as much depth information on set as possible.” So in addition to the aforementioned Dalsa setup, and a Panavision Genesis for the main “filming,” he oversaw an array of smaller Sony HD cameras on set capturing that information.
They’d then project that image onto the geometry of the characters, the challenges of eye-line matching made even tougher by the literal 3D mapping that came in post.
In post, Villegas describes two separate pipelines, doing “compositing on its pass,” then “3D on its pass,” using both to double check and fine tune the sequences and shots.
Since they successfully pulled it off, creating a 3D film without a “huge technical process” that could get in the way of what he describes as a spontaneous feel on the set, Villegas also allows that the finished product, with its extra dimension, “adds another storytelling dimension. It’s here to stay.”
As, apparently, is Alice’s journey, nearly 150 years after first being chronicled by Lewis Carroll.