I hate to be one of those people who immediately leap to discuss stereoscopic 3D first, especially as I'm not personally a huge fan of the technique, simply because it always makes my head hurt. That said, if NAB 2012 has a defining characteristic, it's the fact that tools for stereoscopic work both in production and post are starting to, well, not mature necessarily, but at least proliferate, which is probably a prerequisite for eventual maturity.
There is necessarily a dichotomy between people who'd like to shoot new material in 3D, and those who'd like to convert existing 2D material to 3D. It's inevitable that the latter will happen, because there are a lot of studios out there with libraries of material they'd like to sell again, so the question really is whether new productions will shoot 2D for 3D, or shoot in stereo to begin with.
The surprise for me is that outfits like 3ality Technica don't seem to object as strongly as you'd have thought to the idea of post conversion. I spoke to Ted Kenney, 3ality's director of production, who's worked on all kinds of amusing things all over the world. He seemed happy with the idea that both techniques are in the arsenal, but was cautious about the potential for re-releasing everything that's ever been made, on the basis that the technique is currently so absurdly labor intensive. That labor, combined with the limited box-office appeal of a 25-year-old movie like Top Gun, came up as an example of a situation that might not be worth pursuing until conversion becomes faster, and therefore cheaper, to do. Not every movie can be Titanic, and not every movie will have the degree of care and attention that Titanic received; it's obvious – but, reassuringly, mentioned by Kenney – that the quality of the work that's done is what really controls how well this works.
Perhaps the single most directly relevant NAB event (and certainly the best attended) was one at which James Cameron and Vince Pace talked about 3D. I'm almost hesitant to mention this so early on, as it just sounds like name-dropping, but in reality it wasn't particularly exciting – really just an infomercial for their joint efforts in the direction of live broadcast 3D, particularly sports. The idea that 3D can be simple and ordinary from a production standpoint was pushed quite forcefully, as Cameron has done in the past, but I find that quite difficult to take at face value. The situation is similar to the one that arises when discussing film versus video acquisition with someone who's used film a lot – the average ASC member is so insulated from the realities of loading mags, processing, and of course footing the bill, that his or her opinion is probably based on precious little recent experience of the downsides.
Such, I suspect, may be the case with Cameron. He has a reputation as a hands-on guy with a technical background, but it's an unfortunate fact that the people seen as most senior and experienced in the field are often precisely those most insulated from its technical realities. Or, to put it another way, Avatar is not really a model for every other production, because every other production does not have a nine-figure budget.
For the sake of completeness, I must admit that Rob Powers from NewTek objected to that idea quite strongly. The virtual-set previsualisation technology used on Avatar, with which he was quite heavily involved, has since been considerably simplified and streamlined, and was employed on the upcoming Battlestar Galactica prequel, which relied on virtual sets throughout. All of these people have an axe to grind – NewTek is clearly quite interested in selling Lightwave 11, with its cute new shatter routines – but we probably shouldn't be too surprised to discover that this technology does eventually begin to filter down to more everyday circumstances.
I’m not going to apologise for my failure to secure an interview with someone from Cameron/Pace. They’ve never been the easiest outfit to contact, and when this article was being prepared, Cameron was at the bottom of the Marianas Trench, probably with most of the rest of the company bobbing about in a boat just above it, staring at monitors. Dedicated as I like to think I am, my interest in the dissemination of this sort of info stops some way short of donning SCUBA gear.
So, that’s the politics – what about the technology?
There is inevitably a somewhat combative relationship between conversion to stereo and acquisition in stereo, although the overwhelming amount of legacy material that could potentially be re-sold in 3D means that development of the conversion technology will continue regardless of what happens with acquisition.
It’s no secret that manual techniques for 2D-3D conversion are almost unthinkably labor-intensive. One NAB session, with producer Jon Landau, concerned the conversion of Titanic to 3D, and was subtitled simply “279,360 Frames”, every one of which will have been touched by an artist at some point during the process. Again, we have something which will probably be cited as a benchmark for the process, and again it’s impossible to avoid the reality that few if any productions will get as much attention as a film like Titanic and a director like Cameron.
Therefore, it's probably a very good thing that outfits like JVC are working on at least a degree of automation for the process. They've had a fully-automated 2D to 3D conversion device for some time, although it very obviously doesn't produce results good enough for even quite permissive customers. New at the show this year was their intermediate solution: a system with some automation, but also some ability for the artist to guide the machine's work. In particular, it can do an intelligent infill of background areas revealed by parallax shifts in the new frame, which would otherwise lack real picture information.
While it seemed to work, there's still a lot of rotoscoping work to do, and even then things can look like a stack of cardboard cutouts. OK, so that's actually a problem that bedevils 3D acquisition as well, especially on longer lenses, but there's certainly a problem with edges looking a bit too clean. The system seems – from what I saw of it – to support the reimposition of a degree of motion blur on roto'd edges, which takes the curse off it quite effectively. In short, you could do better by hand, but I suspect we'll end up seeing an awful lot of this sort of thing in the rush to re-sell the back catalogue in yet another new format. JVC are (perhaps uncharacteristically) doing this work as a service provider to Fox, who clearly have a huge interest in that sort of thing; a converted scene from I, Robot was on show.
Whether you think the results are reasonable or not is somewhat subjective, but I suspect at this level of quality it will not be the state of the art for long.
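To make the "intelligent infill" problem concrete, here is a deliberately naive sketch – my own illustration, not JVC's method; the function name and depth convention are assumptions. Parallax shifting opens holes with no picture data, and the safe default is to extend the background side of each hole, since the gap was, by definition, revealed from behind the nearer object.

```python
def fill_holes(row, depth):
    """row: a scanline with None holes opened by the parallax shift;
    depth: per-pixel depth for the non-hole pixels (larger = nearer).
    Fill each hole from whichever neighbouring pixel is farther away,
    i.e. the background. Assumes at least one real pixel in the row."""
    out = list(row)
    for x in range(len(row)):
        if row[x] is not None:
            continue
        # find the nearest original (non-hole) neighbour on each side
        l = x - 1
        while l >= 0 and row[l] is None:
            l -= 1
        r = x + 1
        while r < len(row) and row[r] is None:
            r += 1
        if l < 0:
            out[x] = row[r]
        elif r >= len(row):
            out[x] = row[l]
        else:
            # the smaller depth value is the farther (background) pixel
            out[x] = row[l] if depth[l] <= depth[r] else row[r]
    return out
```

A real system does far more (texture synthesis, temporal consistency), which is why the artist-guided middle ground JVC showed still involves so much handwork.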
Slightly more interesting – and certainly speaking to better quality – is the work of the people at the Fraunhofer Institute, who are always extremely entertaining simply because of the breadth of their work. They developed the MP3 compression algorithm, among other things, and were influential in the development of the H.264 video codec. Fraunhofer is essentially science for hire, science for a purpose, and it's good to see them at NAB.
The Institute's display included several pieces of stereoscopy-related technology, perhaps most interestingly a combined 2D/3D rig involving a single Alexa and two Indie Cams in a horizontal array, with the Alexa in the middle. The idea here is that the Alexa captures the image, and the two auxiliary cameras provide stereoscopic depth information, allowing for automatic reconstruction of a complete stereo image later on. The beauty of this is that the convergence and interaxial distances of the output image are not directly coupled to the geometry of the two stereo cameras, which can be left alone from shot to shot, with these decisions deferred until later in postproduction. Purists (and convergence pullers) might scoff at this approach, but I generally find it hard to argue with anything that defers complex, subjective and far-reaching decisions from expensive set time to far cheaper post time, and based on a very brief glance the results are pretty creditable.
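To see why those decisions can be deferred, consider a toy depth-image-based rendering step – my own illustration, not Fraunhofer's actual reconstruction pipeline. With per-pixel depth in hand, any virtual eye position can be rendered by shifting pixels in proportion to depth, scaled by whatever interaxial you settle on in post:

```python
# Toy depth-image-based rendering of one scanline; an illustrative
# sketch under my own assumptions, not Fraunhofer's algorithm.

def synthesize_view(row, depth, baseline):
    """Render one scanline of a virtual camera from image data plus
    per-pixel depth. `depth` is disparity per unit baseline (larger =
    nearer); `baseline` is the interaxial chosen in post. Disoccluded
    gaps come out as None -- the holes an infill stage must paint in."""
    out = [None] * len(row)
    # paint far-to-near, so nearer pixels win where shifts collide
    for x in sorted(range(len(row)), key=lambda i: depth[i]):
        nx = x + round(baseline * depth[x])
        if 0 <= nx < len(row):
            out[nx] = row[x]
    return out
```

Because the baseline is just a multiplier here, the same capture can yield a conservative or an aggressive stereo pair without ever touching the rig – exactly the decision this approach pushes into postproduction.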
If you really must shoot true two-camera 3D, the Institute can help you again with a stereo analysis and processing device that not only displays information about the stereoscopy in the shot but also allows for a degree of automated correction. Offset the two cameras vertically, and it'll slide one of the images to match. Mismatch the focal lengths, and it'll scale one of the images to compensate, all the while providing information on exactly what's wrong, and by how much, so that the appropriate optical compensation can be made.
What’s more, it’ll actually do a degree of convergence pulling, inasmuch as that’s possible in software. There’s a rather innovative little depth-histogram display with red bars indicating objects which are outside a reasonable depth range for the scene, as well as warning overlays on the image itself, indicating which object is causing the problem. When these appear, the device will slide the images horizontally to get as much of the image as possible inside the allowable depth window.
As a research institute, Fraunhofer are refreshingly honest about the fact that this device is no substitute for human intervention and propose it mainly for use on static cameras and other material where the motion won’t be too kinetic or the stereoscopy too extreme; in that situation it seems competent. It can also run on a small portable PC; Fraunhofer showed one using a Blackmagic board for the video I/O.
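The corrections described above are straightforward to express in code. The sketch below is my own guess at the general shape of such a tool, not Fraunhofer's implementation – every name, threshold and convention here is an assumption. Given matched feature points between the two eyes, it estimates a vertical alignment shift and a convergence-style horizontal shift:

```python
# Hypothetical sketch of software stereo correction; not Fraunhofer's
# code, and the depth-window numbers are invented for illustration.

from statistics import median

def correct_stereo(matches, depth_window=(-20, 10)):
    """matches: matched feature points [(xl, yl, xr, yr), ...] between
    the left and right images. Returns (vertical_shift, horizontal_shift)
    in pixels to apply to the right image:
      - vertical_shift: correctly aligned stereo has zero vertical
        disparity, so the median observed offset is the alignment error
      - horizontal_shift: slides the image so the bulk of the horizontal
        disparities (i.e. scene depth) sits inside the comfortable
        window -- the software equivalent of a convergence pull
    """
    vertical_shift = median(yl - yr for xl, yl, xr, yr in matches)

    disparities = [xl - xr for xl, yl, xr, yr in matches]
    target = (depth_window[0] + depth_window[1]) / 2
    horizontal_shift = target - median(disparities)
    return vertical_shift, horizontal_shift
```

Note that sliding the whole image can only recentre the depth budget, not compress it – which is why the device flags out-of-range objects for a human rather than pretending to fix everything.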
At the end of the day, 3D is something that even its staunchest supporters admit is not for everything, a concession which is highly unusual in a field that runs on slightly bright-eyed positivity in the face of even the direst inadequacy. That being the case, one would hope that completely inappropriate conversions of classic old films will be avoided by the studios; one would also hope that upcoming 3D-originated films will use the technique with a light touch.
Hope, that is, not necessarily expect.
The one ray of light in the darkness was the Christie demo, which oddly enough gives James Cameron a third chance to appear in this article: Christie showed his demonstration of high-frame-rate 3D. Anyone who's critically observed 3D material being shown will have realised that the effect is defined best around the edges of objects, and that the edges of objects are also where the chatter inherent to 24fps imaging is most obvious. It's intuitively plausible, therefore, that stereo material presented at higher than usual frame rates will be more comfortable to watch, and so it proves to be: not by a lot, but it is better. As Cameron points out, the 24fps rate was chosen solely because that film speed was the least that was adequate for reasonable optical sound quality. It certainly has nothing to do with producing a convincing illusion of movement in the human visual system, which, he claims, starts to happen at about 60fps – a notion with which I'd subjectively agree. The demo showed both 60fps and 48fps-originated material, the latter of which is of course far easier to convert back to 24fps for conventional exhibition. The difference between the two is otherwise hard (though not impossible) to spot, so I suspect 48fps may be the favorite. The claim is that almost all of the currently-installed base of digital cinema projectors are either capable, or a software update away from being capable, of handling higher frame rate material.
Great as all this new stuff is, none of it really solves the basic problems of 3D, which have been the same for decades, even a century. The mismatch between focus distance and convergence distance. The need for the audience to converge on the right part of the image. The need for the audience to keep their heads level. The appalling tendency for producers, on the basis of five minutes' viewing, to insist on far more extreme effects than are comfortable for hours. There is no immediate prospect of solving these issues with technology, and none of the basic problems were addressed by anything at NAB 2012. About the best we can take from it is the hope that the 3D material produced in the next year or so will be closer to the currently-achievable ideal than was previously possible – or, at least, easy.