Artificial Intelligence: Cameras Are Getting Smarter All the Time

MRMC Polycam Player

Digital cameras are increasingly becoming more sophisticated. They now offer capabilities like face detection, high dynamic range, extended ISO and a host of preset program modes to capture different shooting situations. As a result, they’re more dependent on processing power—and the software programs designed to control them. Moreover, digital cameras, and photography in general, are on the verge of a whole new revolution, as artificial intelligence (AI) is integrated into the photographic process.

AI is impacting almost every aspect of computing. According to the research firm International Data Corp., worldwide spending on AI will jump to $46 billion by 2020 and $77.6 billion by 2022. That’s up from just over $7 billion in 2016.

Artificial Intelligence and Machine Learning

An important aspect of AI is machine learning, where computers get smarter as they accumulate information. Machine learning involves an accumulation of data to increase functionality as well as a systematic analysis of the data that is input and processed.

What’s more, with AI, computers develop logical connections in the data sets, similar to the connections that neurons make in the brain, to come up with conclusions that go beyond the actual information contained in those sets. The more data sets available, the more analysis becomes possible; the more machine learning takes place, the more advanced the capabilities of the system become. However, AI requires significant computing capabilities. And until relatively recently, those capabilities were well beyond what digital cameras could handle.

Most of the major camera manufacturers are looking at the different ways that AI will impact their products, as well as how to implement artificial intelligence into their offerings. AI is going to have a particularly strong impact on their medical and scientific imaging divisions, but it will also play an important part in general photography.

Camera Makers Moving to AI

Canon has published articles and white papers on AI research relating not only to imaging but also to other professional specialties. Its Imaging Systems Research Division is tasked, in part, with “researching and developing systems and algorithms in computational imaging and big data analytics.” It addresses the challenge of how to leverage large amounts of data to make cameras smarter.

Nikon is moving into AI aggressively. It’s a major component of the high-end Polycam Player system developed through its Mark Roberts Motion Control (MRMC) unit. The Polycam Player is an automated sports tracking system that, when it was released a couple of years back, was touted as changing the way sporting events are covered.

MRMC’s Polycam Player

According to MRMC, it will eliminate, or significantly reduce, the need to have multiple camera operators for sporting competitions; it automatically tracks individual players. I saw it in action at the NAB show in Las Vegas when it was introduced; its capabilities and tracking speed are quite remarkable.

On the DSLR side, last summer, Nikon made a $7.5 million investment in the Canadian firm wrnch, a company developing machine learning tools. Its technology makes it possible for computers to see and understand human movement; in effect, it teaches cameras to read body language.

In addition, Sony recently established Sony AI. The organization, with offices in Japan, Europe and the U.S., will advance fundamental R&D of artificial intelligence. Sony’s goal is to “fill the world with emotion, through the power of creativity and technology.” Recognizing AI will play a vital role in the fulfillment of this purpose, Sony AI’s mission is to “unleash human imagination and creativity with AI.”

Sony AI combines its R&D with Sony’s technical assets. This includes its expertise in imaging and sensing solutions; robotics; and entertainment (games, music and movies).

AI Facial Recognition Capabilities in Cameras

While there are cameras with built-in AI capabilities, at this point, most AI photographic implementations are still primarily system rather than camera based. Facial recognition (FR) is a good example. It’s probably the best known photographic AI application (see sidebar below).

Some of the earliest camera-based facial recognition was implemented on smartphones; since they are basically powerful minicomputers, they have considerably more processing power than digital cameras.

Apple added an AI chip (actually more of a chip set, sometimes called a neural engine) to its newest phones. It makes it possible to utilize FR as a security feature to access the device. FR is also integrated into some of the new smartphone-based payment systems.

However, there are some digital cameras with built-in facial recognition. A few years back, Google, a leader in AI development, introduced the Clips. The small digital camera can learn who and what to shoot. Using facial recognition, it automatically learns the most important individuals it’s supposed to take pictures of; it does this through images shot by the camera itself or through those in a Google Photos cloud account.

Google Clips uses AI to learn who to shoot.

Furthermore, the Clips goes beyond face recognition to determine what to shoot. Utilizing what Google calls “Moment IQ” (an AI learning tool) and a visual processing unit, the camera learns what to take pictures of as well. It can automatically capture a series of meaningful individual images (compiled into 7-sec GIF files) by recognizing the right expressions, activities, lighting and framing. That’s pretty sophisticated for a $250 camera.
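Google hasn’t published the internals of Moment IQ, but the underlying idea, scoring candidate frames and keeping only the best, can be sketched in a few lines. Everything here (the weights, the crude sharpness proxy, the assumption that a face detector has already run) is invented for illustration; a real system would use a trained model:

```python
import numpy as np

def frame_score(img, face_weight=0.6, sharp_weight=0.4, has_face=False):
    """Toy 'interestingness' score: favor sharp frames that contain a known face.
    The weights and the sharpness proxy are invented for illustration."""
    # Crude sharpness proxy: variance of horizontal pixel differences.
    sharpness = np.diff(img, axis=1).var()
    return sharp_weight * sharpness + (face_weight if has_face else 0.0)

def pick_moments(frames, faces, keep=2):
    """Return indices of the top-scoring frames, e.g. to compile into a short clip."""
    scores = [frame_score(f, has_face=hf) for f, hf in zip(frames, faces)]
    return sorted(np.argsort(scores)[-keep:].tolist())

rng = np.random.default_rng(0)
blurry = np.full((8, 8), 0.5)      # flat frame: no detail, low sharpness
sharp = rng.random((8, 8))         # textured frame: high sharpness
frames = [blurry, sharp, sharp, blurry]
faces = [False, False, True, True]  # pretend a face detector already ran
best = pick_moments(frames, faces)  # keeps the sharp-with-face and blurry-with-face frames
```

The point of the sketch is the ranking step: the camera doesn’t need a human to pick the “moment,” it just needs a score it can compute for every frame.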

AI in Program and Scene Modes

Digital cameras have provided program and scene modes that automatically set apertures, speeds and ISOs to meet specific shooting requirements for quite some time. However, without built-in intelligence to assess the quality of the results, the same settings applied to the same scene always produce the same results.

AI will make it possible for digital cameras to learn to take better pictures by making automatic program mode adjustments. With AI, the camera will analyze an image it captures and determine what changes to make to the exposure and focusing variables to come up with the best results.

Coming up with better pictures through AI goes beyond just analyzing pictures the camera takes and then making adjustments. AI empowered cameras will learn from previously captured images or from source image databases. A camera will take what it learns about exposure and focusing from those images and apply that knowledge to make the best settings for the current shot.

Say you want to take pictures of a person standing in front of fireworks at night. AI will search all the images in the databases it can access, see what the settings were for the best example and apply those settings.
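That lookup step can be sketched as a nearest-neighbor search: describe the current scene with a few numbers, then return the settings of the closest matching reference shot. The descriptors, the tiny “database” and the settings values below are all invented for illustration; a real system would match against millions of images:

```python
# Hypothetical reference "database": scene descriptors (overall brightness,
# amount of motion) paired with the settings that produced the best result.
reference = [
    ({"brightness": 0.9, "motion": 0.1}, {"iso": 100, "shutter": 1 / 250, "aperture": 8.0}),
    ({"brightness": 0.05, "motion": 0.8}, {"iso": 3200, "shutter": 1 / 500, "aperture": 2.8}),
    ({"brightness": 0.1, "motion": 0.2}, {"iso": 1600, "shutter": 1 / 60, "aperture": 2.0}),
]

def suggest_settings(scene):
    """Return the settings of the closest matching reference scene
    (nearest neighbor on the descriptor values)."""
    def dist(desc):
        return sum((desc[k] - scene[k]) ** 2 for k in scene)
    return min(reference, key=lambda entry: dist(entry[0]))[1]

# A person in front of fireworks: dark overall, modest motion.
settings = suggest_settings({"brightness": 0.08, "motion": 0.3})
```

The camera doesn’t need to have seen this exact scene before; it only needs enough described examples to find one that’s close.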

Focus Stacking and HDR

Two other areas where AI is having an impact on improving image quality are focus stacking and HDR. With focus stacking, the camera calculates the number of exposures required for everything in the frame to appear sharp. It takes a series of images at different focus positions and merges them into one sharp, crisp image.
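The merge step of focus stacking can be sketched with plain arrays: estimate how sharp each frame is at every pixel, then keep the value from whichever frame is sharpest there. This is a minimal numpy-only illustration, not any camera maker’s actual algorithm:

```python
import numpy as np

def sharpness_map(img):
    """Approximate local sharpness with a discrete Laplacian magnitude."""
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.abs(lap)

def focus_stack(frames):
    """Merge frames focused at different depths: for each pixel, keep the
    value from the frame that is sharpest at that location."""
    stack = np.stack(frames)                            # shape (n, h, w)
    sharp = np.stack([sharpness_map(f) for f in frames])
    best = np.argmax(sharp, axis=0)                     # sharpest frame per pixel
    rows, cols = np.indices(frames[0].shape)
    return stack[best, rows, cols]

# Two synthetic frames: each is sharp (striped) in one half, defocused (flat)
# in the other, standing in for near-focus and far-focus exposures.
stripes = np.tile(np.array([0.0, 1.0]), 8)
flat = np.full(16, 0.5)
frame_a = np.vstack([np.tile(stripes, (8, 1)), np.tile(flat, (8, 1))])
frame_b = np.vstack([np.tile(flat, (8, 1)), np.tile(stripes, (8, 1))])
merged = focus_stack([frame_a, frame_b])  # sharp halves from both frames
```

Real implementations also align the frames first and blend seams smoothly, but the per-pixel “take the sharpest source” decision is the core of the technique.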

HDR (high dynamic range) is another example. One of the fundamental problems with a single exposure is that if the shadows are exposed correctly, the highlights are blown out; or if the highlights are exposed correctly, the shadows turn black.

Because of the limited dynamic range of digital sensors, the only way to capture detail in both the highlights and the shadows effectively is through HDR photography. With HDR, multiple bracketed frames are shot in rapid succession and merged into one overall correctly exposed image.
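The merge itself can be sketched as exposure fusion: weight each pixel of each bracketed frame by how well exposed it is (close to mid-gray), so blown highlights and crushed shadows count less in the blend. This numpy-only sketch is a simplified take on that idea, not any specific product’s engine:

```python
import numpy as np

def exposure_fusion(frames, sigma=0.2):
    """Blend bracketed frames (pixel values in 0..1): each pixel is weighted
    by how close it is to mid-gray, so clipped highlights and crushed
    shadows contribute less to the result."""
    stack = np.stack(frames)                                  # (n, h, w)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0)                            # normalize per pixel
    return (weights * stack).sum(axis=0)

# Synthetic three-shot bracket: under-, normally and overexposed versions
# of the same gradient scene, clipped the way a real sensor would clip.
scene = np.linspace(0.0, 1.0, 64).reshape(8, 8)
bracket = [np.clip(scene * gain, 0.0, 1.0) for gain in (0.5, 1.0, 2.0)]
fused = exposure_fusion(bracket)
```

Because the output is a per-pixel weighted average, every fused value stays between the darkest and brightest bracketed value at that pixel; production tools add tone mapping on top of this step.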

Merging HDR images is possible on a computer in post processing, as well as in the camera directly. There are various programs that handle HDR on the computer.

In post processing, it’s possible to improve final image quality with programs like Aurora. Skylum, the company that developed Aurora, added a quantum HDR engine to its newest release, Aurora HDR 2019. It uses a form of AI tone mapping to analyze the images and create a more realistic final shot. The engine analyzes each individual frame before merging the sequence, resulting in more realistic colors, lower noise and fewer contrast variations.

Hasselblad X1D’s built-in intelligence automatically handles bracketing and exposure sequencing.

An increasing number of digital cameras have HDR built right in. The Hasselblad X1D, for one, lets the photographer set the exposure variables or the camera can do it. The X1D’s built-in intelligence handles bracketing and exposure sequencing automatically, depending upon shooting situations, to create merged images with an extremely broad exposure range and extremely high quality.

Voice Recognition

Another form of AI implementation in digital cameras is voice recognition. With voice recognition, theoretically there’s no more having to turn dials, page through menus or access commands on touch screens. In reality, cameras with voice recognition still have most of their settings controlled by menus. However, they recognize a series of simple commands that handle some common functions.

GoPro’s newer Hero action cameras respond to a dozen or so voice commands. This makes it possible to start recording video, capture individual images and begin taking a series of time-lapse images, among other things, with a spoken command.

GoPro Hero6’s voice command menu

What’s more, voice recognition will become increasingly important for camera control as AI becomes integrated more effectively into digital cameras.

All these factors only touch on a small sample of how AI will impact photography. The impact will range all the way from taking the picture and optimizing it to categorizing it; organizing it; and distributing it. But most importantly, it frees photographers from worrying about the technical aspects of their craft so they can concentrate on creativity. ♦


    The Future of Facial Recognition

Panasonic’s FacePRO

I was walking the massive halls at CES a year or two back when I came across a giant screen with hundreds of portraits on it, including mine. Under it, it said: “75 to 80 Year Old Male, Angry.” Below it showed the number of times that day I had walked past that point.

Utilizing AI, the system took the initial image it had captured of me and made a faceprint. It matched it up with images of me it subsequently took. The system then applied what it learned about sex, age and disposition characteristics from the countless facial images the software had studied to create a file on me. It learned what I looked like and was able to track and characterize me without any human intervention.
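The matching step behind a faceprint can be sketched simply: a trained network turns each face image into a numeric vector (an embedding), and two images of the same person produce vectors that point in nearly the same direction. This toy sketch uses invented 4-dimensional vectors and an invented similarity threshold; real systems use embeddings of 128 or more dimensions:

```python
import numpy as np

def cosine_similarity(a, b):
    """How closely two faceprint vectors point in the same direction (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, gallery, threshold=0.8):
    """Return the best-matching identity for a probe faceprint,
    or None if nothing in the gallery is similar enough."""
    name, best = None, threshold
    for person, faceprint in gallery.items():
        sim = cosine_similarity(probe, faceprint)
        if sim >= best:
            name, best = person, sim
    return name

# Toy 4-dim "faceprints"; real embeddings come from a trained network.
gallery = {
    "visitor_017": np.array([0.9, 0.1, 0.3, 0.2]),
    "visitor_552": np.array([0.1, 0.8, 0.2, 0.7]),
}
probe = np.array([0.85, 0.15, 0.25, 0.22])  # a new capture of the same face
match = identify(probe, gallery)            # matches visitor_017
```

Once a first capture has been turned into a faceprint, every later capture is just another vector comparison, which is why such a system can track someone all day without human intervention.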

It was off on my age by some 5 to 10 years, and I wasn’t angry, just tired. But basically, it recognized me out of the hundreds, if not thousands, of showgoers who went by there that day. However, it wasn’t the camera that recognized me and analyzed my disposition; the computer tethered to the camera did that.

Facial Recognition: It’s All in the Computing Power

That’s the case with most facial recognition systems (as well as most other types of AI implementations). Products like Panasonic’s FacePRO security system rely on considerable centralized computing power, beyond the capabilities of the cameras attached to them.

Even when AI becomes an integrated part of digital cameras, and cameras have the internal processing power to handle large data sets, for many applications, such as face recognition, a system component will still exist.

When a camera tries to identify a person that was just photographed, the massive image databases required to match individuals up effectively will exist somewhere in the cloud, not in the camera.

The camera will hunt for the image online through connectivity options such as Wi-Fi or Bluetooth-paired smartphones. It will learn who you’re taking pictures of and automatically tag the images with that information. Next time, the camera will know who that is without having to search again.

Facebook: Case in Point

Facebook is already anticipating the proliferation of facial recognition by building a database of user faceprints. As a result, it will have the capability to tag uploaded images without the need to have them identified by the photographer.

Eventually, Facebook will need only a picture of a person to almost instantly bring up that person’s name and all relevant biographical data on screen. Consequently, by searching through social media and profile databases with facial recognition, a camera will be able to tell you everything you’ve ever wanted to know about a person, even if you didn’t know his name.

Going even further, the camera might go beyond identifying the people in the pictures. Through geotagging and feature recognition, it will determine where they are; what they’re doing; and even how they feel about it all. Any or all of that information could be integrated into a camera’s metadata or entered into an online database without user input.

That’s a very interesting possibility, but also a little scary. It may sound farfetched, but it’s already happening. A recent PBS Frontline documentary noted that AI and facial recognition are finally letting China develop the Big Brother surveillance state that it’s been trying to establish.

Developments in AI continue to progress. And so do their implications, not only for photography but also for our daily lives. Stay tuned!—Ron Eggers