The recent photokina show certainly had its share of new products and tech advances, and with them came a new crop of lingo. Perhaps the best way to get across what these changes mean to customers is to describe their benefits and how they might solve an issue encountered in the past. Or, even better, how they might open up new picture-taking possibilities.
Yet, techno-lingo can be intimidating and even scare some customers away. Innovation has become a major competitive issue and a sort of tit-for-tat game played between vying companies. But there’s no denying that we are in a tech-driven market, and that responsive and diverse features can be a draw, as long as their benefits are clear. Once those benefits are defined, customers can decide for themselves if they really need what’s on offer. The old adage that you lose more customers in the long run by overselling them than by underselling them (or, ideally, finding the perfect match) still holds true.
With that in mind, I’d like to offer a quick rundown that I hope will be helpful in explaining some of the advances that were on display at photokina and are sure to be seen at CES. In all, there were major advances in focusing and viewing systems; faster, more powerful in-camera processors; and new “handshakes” between lenses and cameras that, for me, were among the tech highlights of the show.
Image Latency
I have always struggled to appreciate electronic viewfinders (EVFs). Early versions were subject to blooming (flashes of whiteout when you moved around the scene and hit a highlight) and, worst of all, the confounding smearing that occurred when you had the audacity to change the framing of the scene. The reason, of course, was that the EVF showed a “signal” of the image and not a reflection (à la DSLRs). At heart, the problem was that the resolution was too low or, more crucially, that the processor in the camera couldn’t keep up with the stream of information it was being asked to handle.
Well, we now have a measurable term for this condition: “image latency.” Camera makers have finally owned up to this problem and have started to cure it with higher resolution finders and next-generation image processors. The first stat (measurable and defined lag) I saw was with Samsung’s rather remarkable NX1, which is said by the company to yield a 5-millisecond “latency.” The benefit for all is that EVFs promise to be more tolerable as a way to view and compose an image. A very welcome change.
Electronic Rangefinder Focusing
In days of old, rangefinder focusing presented two images superimposed onto one another in the viewfinder. When you wanted to focus, you looked into the finder and turned the focusing ring to make them one. Admittedly, it took a bit of getting used to, but generations of great photographers used it and got very sharp shots in very spontaneous fashion.
The new Fujifilm X100T offers a version of this through the camera’s large eyepiece finder, but it works differently from rangefinders of the past. Fujifilm has caught the attention of many photographers with its “hybrid” finders, and this new twist should be equally appealing. There are a few variations, but my favorite is when it splits the image electronically and moves a portion into the lower right-hand area of the frame. You then manually focus the lens to get that portion sharp. For me, it beats most manual focus setups in almost all mirrorless cameras hands down.
Lens-Camera “Handshakes”
The electronic connections that carry signals back and forth between the camera processor and the lens are how users can change the aperture without touching the lens, have all the wonders of autofocus at hand, and more. The latest developments include some remarkable new handshakes between the lens and the image processor, opening up more options in camera use and control.
The phraseology around all this varies, but it adds up to a rather profound advance in AF speed and accuracy. For example, Sony’s 4D Focus system is said to provide “constant focus throughout space and time,” while Samsung’s 3D AF in the new NX1 is said to offer predictive phase-detection AF throughout the entire imaging area. The NX1’s sensor has 205 phase-detection points (153 of them cross-type) plus 209 contrast-detection areas. This might seem like overkill, but it makes focusing with the NX1 superfast and responsive.
Acquisition time refers to the speed with which the system can activate and verify autofocus, and it has gotten even faster. Fujifilm claims a 0.06-second acquisition time for its X30, while Samsung, with the NX1 again, pins its at 0.05 second. To show off its predictive focusing, Samsung added an “auto shot” feature it says can track a pitched ball and, at the same time, the swing of a bat, so the camera grabs the exact moment of impact. The company even had a mockup display of this scenario in its booth to prove it.
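To make “predictive” focusing concrete, here is a toy sketch of the underlying idea: the camera extrapolates where a moving subject will be when the shutter actually fires, rather than focusing where the subject was at the last AF measurement. This is purely illustrative (my simplification), not Samsung’s or Sony’s actual algorithm, and the sample timings are invented.

```python
# Toy illustration of predictive focusing: estimate the subject's
# distance at the moment of exposure by linearly extrapolating
# from the last two autofocus samples. Illustrative only -- not
# any manufacturer's actual algorithm.
def predict_position(p_prev, p_curr, dt_sample, dt_lag):
    """Velocity from the last two AF samples, projected forward
    by the remaining lag before the shutter fires."""
    velocity = (p_curr - p_prev) / dt_sample
    return p_curr + velocity * dt_lag

# A subject closing from 5.0 m to 4.8 m over a 50 ms AF cycle;
# with 5 ms of lag remaining, focus is set slightly ahead of the
# last measured distance.
print(predict_position(5.0, 4.8, 0.050, 0.005))  # roughly 4.78 m
```

Real systems track many such points across the frame and fold in phase-detection depth data, but the "focus where the subject will be" principle is the same.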
Lens Profiling
Lens “profiling” refers to identifying and then curing, via image processing, possible “defects.” These include chromatic aberration (color fringing caused by different wavelengths of light not focusing on the same plane) and vignetting (slight darkening of the corners at certain apertures and focal-length settings). Others have offered such corrections as downloads, but Samsung built what it calls OIC (optical inverse correction) right into the NX1’s in-camera processor, which recognizes the mounted lens and triggers the matching profile so the image “auto corrects” in-camera. That would also seem to eliminate the need to do this in post, as is often done now.
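The vignetting half of a lens profile is simple to sketch: the profile stores (or models) how much each lens darkens the corners, and the processor multiplies the image by the inverse of that falloff. The following is a minimal sketch of that idea with an assumed quadratic radial model; the function names and the `strength` parameter are mine, not Samsung’s OIC.

```python
import numpy as np

def vignetting_gain(height, width, strength=0.4):
    """Radial gain map: 1.0 at the image center, rising toward the
    corners to undo the lens's corner falloff. The quadratic model
    and 'strength' knob are illustrative assumptions."""
    ys, xs = np.mgrid[0:height, 0:width]
    cy, cx = (height - 1) / 2, (width - 1) / 2
    r = np.hypot(ys - cy, xs - cx)
    r_norm = r / r.max()                  # 0 at center, 1 at corners
    return 1.0 + strength * r_norm ** 2

def correct_vignetting(image, strength=0.4):
    """Apply the inverse-falloff gain to a float RGB image in [0, 1]."""
    gain = vignetting_gain(*image.shape[:2], strength=strength)
    return np.clip(image * gain[..., None], 0.0, 1.0)
```

Chromatic-aberration correction works on the same store-and-invert principle, but warps the red and blue channels slightly instead of adjusting brightness.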
The 4K Photo
In the early, early days of digital imaging, every shot was essentially a frame grab. Now it’s back to the future with the newly touted “4K photo,” wherein you can, as Panasonic tells it, pause at the perfect moment during 4K video playback and make a still frame grab. Users are advised to shoot at 1/8,000 second so they don’t miss a millisecond of the action, then take the time to search the playback for their personal decisive moment.
The resultant still image is 8.3 megapixels (24MB for printing fans), but wait . . . there’s more. When 8K video shows up (around 2020, says Panasonic) you’ll be able to grab a 33MP (100MB) frame. With this setup it seems all the user need do is point the camera and let ’er rip, then sort it all out later. I’m pretty sure this will not be a big selling feature right now (or ever), but it may well point the way to what’s in store down the road.
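The megabyte figures quoted above line up with simple arithmetic if you assume (my assumption, since the article doesn’t say) an uncompressed 8-bit RGB frame at 3 bytes per pixel, so megapixels × 3 ≈ megabytes:

```python
# Rough sanity check of the quoted frame-grab sizes, assuming
# uncompressed 8-bit RGB: 3 bytes per pixel, so MP x 3 ~= MB.
def uncompressed_rgb_mb(megapixels, bytes_per_pixel=3):
    """Approximate megabytes for an uncompressed RGB still."""
    return megapixels * bytes_per_pixel

print(uncompressed_rgb_mb(8.3))  # ~24.9, in line with the quoted 24MB
print(uncompressed_rgb_mb(33))   # 99, in line with the quoted 100MB
```

An actual JPEG grab would of course compress to a fraction of that; the uncompressed figure matters mostly for “printing fans” working in raw or TIFF.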
And More . . .
Of course there are also superfast shutter speeds, advanced and selective-area noise reduction, superhigh ISOs, superfast framing rates and burst capacities, high-zoom-ratio lenses, and fast, constant-aperture lenses at half the size and weight of past manifestations (Olympus). Plus, there are more tablet and phone remote camera control capabilities, new “apodization” filter lenses for a different take on bokeh, and firmware upgrades that actually add to camera functionality rather than just patch problems.
Space precludes exploring the techno-lingo in all the above, but you get the idea. photokina started the ball rolling, but you can bet that it was just a moment to catch our collective breath and consider where we go from here. Can’t wait for Vegas.