I went to NAB 2013 with no great expectations for new equipment, and therefore wasn’t disappointed. ARRI is still ascendant with Alexa. Sony has just released the F5 and, to the horror of F65 owners, the F55. Canon seems confident in its position, having accidentally built an excellent indie filmmakers’ camera in the 5D Mk. II, and having capitalized on it with the C300 and C500. So, NAB 2013 was never going to be about cameras, even given Blackmagic’s announcement of its new 4K Production Camera.
That announcement does hint at what the show was really about – the rise of 4K. Blackmagic itself announced a collection of 4K-capable broadcast hardware using the imminent 6-gigabit SDI standard. Sony announced consumer TFT-based televisions at 4K resolution, in 55- and 65-inch varieties for $5,000 and $7,000 respectively, presumably intended to facilitate NHK’s intention to broadcast 4K at some point in the next few years. Speaking of NHK, it was nice to see some new 8K “Super Hi-Vision” footage in their demo, as well as proof that it can be broadcast in such a way that it’s still 8K when it reaches the consumer.
While 4K is obviously an industry darling, analysis is tricky. 4K is, undoubtedly, a higher resolution than 2K. It’s also higher resolution than the 35mm exhibition chain, and frankly higher than most camera negative. But is it useful? Well, producing in 4K for 2K distribution is a great idea, inasmuch as it’ll allow us to make and correct mistakes without visible loss. Later in this article, I’m going to talk about a speculative technology, which might make 4K camera sensors very relevant indeed. But in distribution?
I’m less than convinced, at least unless we somehow foist very different viewing habits upon the public, most of whom sit so far from the TV that HD is barely relevant, let alone 4K. And while I don’t want to whine about 4K simply because it’s difficult, it does make the job of everyone from set decorators to focus pullers considerably harder. Aesthetically, people are already fighting excessive sharpness with old-style lenses and diffusion filters, and the degree of filtration required to drop 4K down to an effective 2K resolution is absolutely microscopic.
It’s therefore arguable that 4K distribution is neither necessary nor desirable. But unnecessary, undesirable things are sometimes saleable, and in either case there’s nothing quite like shooting two-times oversampled to end up with nice sharp quiet pictures. I’d advise readers to stock up on F65s, but the F55 is both broadly comparable and cheaper, to the venomous chagrin of many of Sony’s recent F65 clients.
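To illustrate the oversampling point, here’s a minimal NumPy sketch – the frame dimensions and noise level are arbitrary assumptions for demonstration, not measurements from any real camera. Averaging each 2×2 block of a noisy 4K frame into a single 2K pixel combines four samples per output pixel, which cuts uncorrelated noise by roughly half:

```python
import numpy as np

rng = np.random.default_rng(1)

# A flat grey 4K frame with additive, uncorrelated sensor noise
frame_4k = 0.5 + rng.normal(0.0, 0.02, size=(2160, 3840))

# Naive 2x-oversampled delivery: average each 2x2 block down to 2K
frame_2k = frame_4k.reshape(1080, 2, 1920, 2).mean(axis=(1, 3))

# Averaging four independent samples halves the noise standard deviation
noise_reduction = frame_4k.std() / frame_2k.std()  # approximately 2.0
```

The same logic is why an F65 or F55 recording at 4K can deliver unusually clean 2K: the downsample is, in effect, free noise reduction.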
So, while it’s a bit strong to call 3D “vanquished,” I managed to trawl the show without encountering much 3D gear beyond the compact camera exhibited by Fraunhofer. That camera is interesting for several reasons: it leverages the institute’s expertise in stereoscopy, previously demonstrated as software designed to detect and, purely in post, fix problems with stereoscopic images. Applying that know-how in a camera system capable of making proper opto-mechanical adjustments to the stereoscopic characteristics of the image ought to provide very useful results. Fraunhofer has enlisted the aid of P&S Technik to build the hardware, and even though there’s a limit to the performance of the two tiny cameras, I can see a role for the system in live TV, particularly sports, where its diminutive size and automated features will be very welcome.
The other 3D reference came from Dr. Barry Sandrew, founder, chief creative officer and CTO of Legend3D, a company that converts conventional 2D photography to 3D. Conversion is a slightly controversial approach on both technical and philosophical levels. I found the talk interesting enough, with a few basic dos and don’ts of conversion intended to help attendees more readily identify sloppy work, as well as a brief discussion of Legend’s proprietary software. The elephant in the room is the simple need to define the edges of objects – effectively rotoscoping – an almost absurdly labour-intensive process that Legend makes practical by outsourcing it overseas. Despite the controversy, it came off as well-thought-out and reasonable, taking advantage of the fact that single-camera acquisition for later 3D conversion suffers none of the mismatching problems which can occur with two-lens stereoscopy.
I’m someone who doesn’t often enjoy stereoscopic 3D, for the simple reason that it absolutely always – including during Legend’s demo – gives me a headache. My eyes therefore remain dry at the prospect of 3D being somewhat marginalized this year, although it may grow slowly over the long term. I just hope they find out why it gives me a headache first.
The great 3D-4K war aside, what else is new? There are a few more serious manufacturers of LED lighting, although only a couple of them – such as Brother, Brother and Sons, whom we discussed after IBC – appeared to be doing it with any genuine innovation. Vague claims of a special relationship with LED manufacturers aside, not much can be done to alter the performance of LED lights short of significant advances in LED manufacturing, which are controlled by much larger forces than the film industry.
The increasing availability of cameras beyond HD resolution, the comparative excellence of even low-cost cameras, and the fact that 3D is at the very least accelerating less quickly than before, all make me wonder whether technology might plateau. It’s impractical to chase resolution indefinitely in order to sell more TVs and cameras; the NHK Super Hi-Vision demo made it clear that anything beyond 4K is barely useful at the point of exhibition. So, NAB this year makes me look for things that might take advantage of the technology in less easily anticipated ways.
One such thing brings us back to Fraunhofer and their exhibit of a high-dynamic-range camera. This involves applying ND filtration in a random pattern to 50 percent of the photosites on a sensor, then interpolating the missing pixels within each of the filtered and unfiltered populations to create full-resolution underexposed and overexposed versions of each frame. These can then be combined to produce a true HDR frame, with the random patterning ensuring that visible artifacts are kept to a minimum and that the effective resolution loss is far smaller than the 50 percent the photosite count might suggest. The current test bed operates only in monochrome and at one frame per second, but this is exactly why we should be pushing for 4K imagers – so we can maintain output resolution while using this sort of technique – not because a 4K image is, in itself, that useful.
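To make the principle concrete – and this is emphatically a toy sketch of the general idea, not Fraunhofer’s actual algorithm – here is a NumPy simulation of a single-exposure HDR capture. The ND strength, the clip threshold, and the simple neighbour-averaging interpolation are all my assumptions; a real implementation would use far more sophisticated reconstruction:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy monochrome scene whose highlights exceed the sensor's clip level
scene = np.concatenate([np.full((8, 16), 0.05),   # shadows
                        np.full((8, 16), 6.0)],   # highlights
                       axis=0)

ND_FACTOR = 0.125   # three stops of neutral density (assumed)
SATURATION = 1.0    # photosites clip at this level

# Random pattern: roughly half the photosites sit behind ND filtration
nd_mask = rng.random(scene.shape) < 0.5

# One exposure captures both: attenuated values at ND sites, clipping elsewhere
capture = np.clip(np.where(nd_mask, scene * ND_FACTOR, scene), 0.0, SATURATION)

def interpolate(img, known):
    """Fill unknown pixels from the mean of known 4-neighbours, iteratively."""
    out = np.where(known, img, 0.0)
    known = known.copy()
    while not known.all():
        num = np.zeros_like(out)
        den = np.zeros_like(out)
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
            num += np.roll(np.where(known, out, 0.0), shift, axis=axis)
            den += np.roll(known.astype(float), shift, axis=axis)
        newly = ~known & (den > 0)
        out[newly] = (num / np.maximum(den, 1.0))[newly]
        known |= newly
    return out

low = interpolate(capture, nd_mask)     # full "underexposed" frame from ND sites
high = interpolate(capture, ~nd_mask)   # full "overexposed" frame from clear sites

# Merge: keep the clear exposure where it survived; rescale ND data where it clipped
hdr = np.where(high >= SATURATION * 0.99, low / ND_FACTOR, high)
```

Note how the merged frame recovers highlight values well above the sensor’s clip level, at the cost of interpolated detail around the randomly filtered sites – exactly the trade that extra photosites on a 4K imager would buy back.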
So, much as hyperbolic commentators have described NAB 2013 as “the death of 3D,” I’d like to think of it as the death of the obvious, the death of the now increasingly pointless numbers race in which camera and distribution equipment manufacturers have recently indulged. We’re hitting the point at which it isn’t useful anymore, and I hope that unusual ideas like Fraunhofer’s will define the future debate.
But with Sony talking about $5,000 4K TVs, I’m not holding my breath.