The Digital Ecosystem—On-Screen Viewing: “Soft Imaging” Shows Things Hard Copy Can’t

When a supermodel like Tyra Banks promotes a new imaging trend, can you afford to ignore it?

Of course you can: models, even ones who’ve been very successful in business, are not exactly technology experts—and business acumen aside, no one can perfectly predict how we’ll enjoy photography in the future. Nonetheless, the Flixel software Banks is working with is interesting (if kitschy) tech—and it’s one part of what we’ll look at in this issue.

Last time we discussed printing and modeling technologies for “hard copy” physical output from photos. But today most pictures are viewed on a screen—whether on the phone or camera on which the picture was captured, or later on a tablet, LCD picture frame, computer or even big-screen TV.

Images on-screen can be a lot more than just unmoving pixels, and that’s where we’ll focus: not movies, not video, not even slideshows—but other ways in which pictures are more than 2D stills.

Minimal Movement
Is it a still photo or a video? How about both? In early 2011, a photographer and a web designer debuted a style of animated GIF they called “cinemagraphs,” in which the image is primarily still content but one eye-catching element moves. “We wanted to tell more of a story than a single still frame,” they said, but without making a video.

It wasn’t exactly a mass-market technique. The process took a year of experimenting to develop, and each image took a day of editing to complete.
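
The mechanics, at least, are simple enough to sketch in code, even if the artistry is not: composite every frame of a short clip against the first frame, letting motion survive only inside a hand-drawn mask. Here is a minimal Python sketch using the Pillow library, with hypothetical file names standing in for a real clip and mask:

```python
# Minimal cinemagraph sketch: every pixel outside a mask is frozen to the
# first frame; pixels inside the mask play through; the result loops as a
# GIF. Frame file names and the mask are illustrative assumptions.
from PIL import Image

frames = [Image.open(f"frame_{i:02d}.png").convert("RGB") for i in range(24)]
base = frames[0]

# Hypothetical hand-drawn mask: white where motion should survive
# (hair, fire), black where the image must hold perfectly still.
mask = Image.open("motion_mask.png").convert("L")

stilled = [Image.composite(frame, base, mask) for frame in frames]

stilled[0].save(
    "cinemagraph.gif",
    save_all=True,
    append_images=stilled[1:],
    duration=80,  # milliseconds per frame
    loop=0,       # 0 = loop forever
)
```

The day of editing the pair described goes into the part the code can’t do: choosing the element and drawing a mask whose motion loops seamlessly.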

Technology progresses quickly, and there were soon myriad competing versions of the same look: Adobe offered its own cinemagraphs, and Microsoft made “cliplets” last year. Nokia even built the effect into its Lumia 620 phone.

Now Banks’s Fierce Capital is investing in Flixel Photos’ mobile app, which lets you create cinemagraphs by “simply rubbing a finger over the object” on-screen. “Whether it’s hair blowing in the wind or a flickering fire in the background of an otherwise still image, the effect can be spellbinding,” the company says. Banks says the images have “a ghostlike quality,” and she has used such shots on America’s Next Top Model.

It’s not just fashion or Flixels: The NY Times reported on photographer Tim Burke creating looping shots from football games and other sports. He’s working with animated GIFs, in which multiple shots or a few seconds of video are compressed into a small file viewed in a web browser. Such GIFs are big on Twitter and social imaging sites like Tumblr. But key to Burke’s success is what’s called a “sixth sense for identifying the exact moments” to capture and loop.
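
To make the small-file point concrete, here is a minimal Pillow sketch of the squeeze itself; the burst file names, scale factor and palette size are assumptions for illustration:

```python
# Sketch: squeeze a burst of frames into a small looping GIF of the kind
# that circulates on Twitter and Tumblr. File names, the scale factor and
# the palette size are illustrative assumptions.
from PIL import Image

burst = [Image.open(f"play_{i:02d}.jpg") for i in range(16)]

# Downscale and quantize to a 64-color palette to keep the file tiny.
small = [
    im.resize((im.width // 4, im.height // 4))
      .convert("P", palette=Image.ADAPTIVE, colors=64)
    for im in burst
]

small[0].save("play_loop.gif", save_all=True, append_images=small[1:],
              duration=100, loop=0)
```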

Short videos, animated GIFs and now looping Vines on Twitter. . . . At some point the distinction becomes semantic. Some argue an image must be one or the other: a still, or a moving image, which is a video. Others argue that the “long still” is something else: video is about time, while a long still can be about a particular moment or sequence. And some cinemagraphs direct your attention to the moving element, while others make you focus on what remains still.

All of these are useful ways in which on-screen image viewing differentiates itself from prints, and (apart from some work in multi-frame lenticular output) they can’t be duplicated in hard copy.

Moving within Images
Of course digital images can do more than emulate still or moving analog pictures. There’s a book’s worth of innovation in this area, but let’s look at just a few examples.

Microsoft developed DeepZoomPix, a web viewer that it said provided a “tactile” way of exploring and interacting with photographic images, and a seamless experience for navigating large numbers of images. You could dynamically zoom in on photos without having to load higher-resolution versions by hand. It’s a lot like Google Maps, where you can zoom in from space to a sidewalk and the app loads only the high-res imagery you need rather than the entire picture. DZP also let you move through a huge collection of your images the same way. The technology later came to market as part of Photosynth, the tool for capturing a 360º immersive image, and it now ships in Silverlight.
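
The seamlessness in both cases comes from a precomputed tile pyramid. As a rough illustration, here is a Python/Pillow sketch of the idea, with hypothetical paths and a 256-pixel tile size:

```python
# Sketch of the tile pyramid behind Deep Zoom and Google Maps: cut the
# image into fixed-size tiles at successively halved resolutions, so a
# viewer fetches only the tiles covering the visible region at the
# current zoom. Paths and tile size are assumptions; real Deep Zoom
# numbers its levels from the smallest image up, not the largest down.
import os
from PIL import Image

TILE = 256
img = Image.open("big_photo.jpg")
level = 0

while img.width >= TILE or img.height >= TILE:
    os.makedirs(f"pyramid/{level}", exist_ok=True)
    for y in range(0, img.height, TILE):
        for x in range(0, img.width, TILE):
            box = (x, y, min(x + TILE, img.width), min(y + TILE, img.height))
            img.crop(box).save(f"pyramid/{level}/{x // TILE}_{y // TILE}.jpg")
    # Halve the resolution and cut the next, coarser level of tiles.
    img = img.resize((max(1, img.width // 2), max(1, img.height // 2)))
    level += 1
```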

A more recent on-screen-only imaging technique is the post-capture focus offered by Lytro. Its camera captures a full “light field” with each exposure, recording the angle as well as the intensity of incoming light, from which it can derive distance data. That data lets you alter the focal plane of the captured image in a way that emulates a primary attribute of human vision: rack focus.

When we look at something close, everything in the distance is blurry; when we look at something far away, that which is near goes out of focus. Lytro sounds cool, but in practice it’s a cute gimmick with limited appeal: Click on one part of the picture to make another part blurry . . . once you’ve done it a few times, you’re done. (By the way, Nokia now offers a similar function called ReFocus in its latest PureView cameraphones.)
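
For the technically curious, the standard way such refocusing is computed is “shift-and-sum”: treat the light field as a grid of sub-aperture views, shift each view in proportion to its angular offset, and average them. A NumPy sketch of the idea follows; the array layout and the alpha parameter are assumptions rather than Lytro’s actual pipeline:

```python
# Sketch of "shift-and-sum" light-field refocusing: shift each
# sub-aperture view in proportion to its angular offset, then average.
# The 4D array layout and the alpha parameter are assumptions, not
# Lytro's actual pipeline.
import numpy as np

def refocus(lightfield: np.ndarray, alpha: float) -> np.ndarray:
    """lightfield: (U, V, H, W) stack of grayscale sub-aperture views.
    alpha selects the synthetic focal plane (0.0 = no shift)."""
    U, V, H, W = lightfield.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = round(alpha * (u - cu))
            dv = round(alpha * (v - cv))
            # Slide this view toward the chosen plane and accumulate.
            out += np.roll(lightfield[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

# Clicking a region of the picture amounts to picking the alpha that
# brings that region into focus.
```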

More useful is the 3D viewing Lytro promised when it debuted but delivered only in November 2013. It can synthesize an on-screen 3D image that you can pivot a little—but it doesn’t show you much beyond the original single capture position, of course, as that’d just be magic. (Or computational imaging, which can kind of do that, but is years away from providing a realistic view. We’ll talk about that next time.)

Potentially more useful are upcoming full-3D environment-capture devices. These don’t work in a single exposure, of course; you walk about the room taking many shots that are stitched together. But the result lets the viewer virtually wander through the scene. These laptop-like camera devices are in development now and should be out next year.
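
The stitching half is familiar territory. As a rough sketch, OpenCV’s high-level Stitcher can already merge overlapping room shots into a flat panorama; the file names below are assumptions, and the upcoming devices add depth capture on top of this sort of matching and blending:

```python
# Sketch of the stitching half of that pipeline, using OpenCV's
# high-level Stitcher for a flat panorama. File names are assumptions.
import cv2

images = [cv2.imread(f"room_{i:02d}.jpg") for i in range(12)]
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, pano = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("room_pano.jpg", pano)
else:
    print("Stitching failed with status", status)
```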

Future Imaging
These types of changes call into question what photography is. Photography has been about controlling the composition, angle, lighting and focal plane. Lytro changed part of that, letting the viewer alter the focus; tomorrow’s 3D photographic environments will do so even more, delivering to viewers not the photographer’s intention and artistic message, but instead the equivalent of “raw data” they can look at and use as best suits them.

Think of real estate photography: today the seller hires a photographer to provide shots that are only the most flattering, all but illusions, while the potential buyer wants a utilitarian, unfiltered view from multiple vantage points that lets them tour the home without driving there first.

Photography can be both artistic and objective, and perhaps future image files can deliver both: one click lets you interactively examine the raw data from any angle, the other shows the shooter’s impression of what looks best. In either case, the ever-growing number of screens on which we now view images promises more from imaging than prints ever could.

Next up, we’ll look at two types of imaging intelligence: smart photos with embedded metadata, and computational photography that works with multiple inputs to derive an otherwise impossible shot.
