Reestablishing Trust in Visuals: Untangling the Potential Solutions

People’s trust in visuals (are they real, are they manipulated, are we being tricked?) is at an all-time low. Moreover, it’s only getting worse. How will this impact our photo and video industry, or society at large? How can our industry help to reestablish trust in visuals? And can/should governments have an impact on the issue? To untangle the potential technological solutions for reestablishing trust in visuals, my partner Alexis Gerard and I had the great privilege to dive into these questions with four experts in the field of image authenticity.

The Spotlight event gave us ample opportunity to delve into the primary approaches, which range from image content credentials and digital certificates to forensic watermarks, deepfake detection, government initiatives and beyond. Many thanks to our panelists for sharing their insights!

Regaining trust in digital media requires attacking the problem from all angles. Supporting a content credential standard like C2PA (Coalition for Content Provenance and Authenticity) is nontrivial, and Truepic is leading the charge to help companies and governments integrate provenance into their processes using C2PA. (See Jeffrey McGregor’s slides shared at our Spotlight.)

However, C2PA alone is not enough; its credentials can be circumvented simply by stripping them out. That is where Steg AI comes in, using cutting-edge forensic watermarks to link digital assets back to remote C2PA content credentials even if they were stripped from the media. (See Joseph DeGol’s slides shared at our Spotlight.)

However, deepfake assets can also be created without any C2PA content credentials or watermarks. That’s why we also need state-of-the-art detection approaches like those of Clarity AI. (See Michael Matias’s slides shared at our Spotlight.)

Perhaps you are a company or government looking to solve this problem but don’t know where to start. Melcher System specializes in providing consulting services to help. (See Paul Melcher’s slides shared at our Spotlight.)

Rebuilding Trust in Visuals: A Must

Here are our takeaways:

Manipulated images that no longer mirror reality date back to the earliest days of photography. Image manipulation received a historic boost from tools like Photoshop and has become so mainstream that apparently even British royalty can’t resist retouching their photos. Recently, the arrival of sophisticated generative AI (artificial intelligence) imaging tools has made it easier than ever to generate fake images as well as manipulate existing ones, at unprecedented speed and scale.

As a result, “deepfakes” and the “trustworthiness” of photos and videos are mainstream topics of public discourse that go far beyond our industry. In fact, in the history of our photo and video industry, we have never witnessed a visuals-related subject so broadly discussed in mainstream society: among consumers and businesses, in mass media as well as among politicians.

Feeding these discussions is a widely held belief that a loss of trust in visuals would have a devastating impact on society. An exaggeration? We asked our panelists for a few examples of what could happen if we can’t find ways to reestablish trust in visual and audio content:

How about a robocall from our president urging people not to vote; a banker paying out $25M after receiving a video call featuring a deepfake “chief financial officer”; fake nude versions of children’s photos; or brands imitated to urge consumers to buy rip-off products?

In short, visuals play a key role in how our democracy, the financial system, commerce and we as humans communicate. Rebuilding trust in visuals is a must: societies rely on visuals and can’t function without trusting them.

Some Tech Solutions

Tech solutions to rebuild trust in visuals fall into two categories. 1) Upstream solutions focus on creators, enabling them to attach content credential data to their visuals as assurances for their viewers and users. 2) Downstream solutions analyze visuals of unknown or doubtful provenance to determine whether they were tampered with or are fake. More about each category below.

Upstream Methods to Rebuild Trust in Visuals

C2PA and CAI (Content Authenticity Initiative) are two related initiatives aimed at rebuilding trust in visuals (https://contentauthenticity.org/). What’s the difference?

CAI is a community of media, tech and other companies, spearheaded by Adobe, aimed at promoting the adoption of an open industry standard for content authenticity and provenance. This would enable viewers of visuals to determine the authentic intent behind an image and therefore to trust it. CAI started at a time when traditional guardrails were starting to fall apart; increasing numbers of visuals distributed through social media channels no longer offered the level of trust that traditional reputable media provided. Since these photos lacked the context supplied by reputable publishers, the idea behind CAI was to capture that context with the image and have it travel along with the image, regardless of how the image was shared.

C2PA is a working group aimed at developing technical standards for certifying the source and history (or provenance) of media content. By defining these content metadata standards, C2PA aims to provide an environment of trust for responsible digital media creation, publication and sharing. If questionable or illegitimate visuals are distributed with missing or invalid C2PA content credentials, the idea is that viewers will at minimum question the authenticity of that content.
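
To make this concrete, here is a minimal sketch (in Python) of the kind of information such a content credential conceptually carries: who produced the asset, with which tool, which actions were performed, and a hash that binds the claim to the exact image bytes. The field names and structure below are simplified illustrations only; the actual C2PA specification stores claims in JUMBF containers and signs them with COSE/X.509 certificates.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_manifest(image_bytes: bytes, producer: str, tool: str, actions: list[str]) -> dict:
    """Assemble a simplified, illustrative provenance manifest.

    The real C2PA spec is far more involved; this only shows the kind of
    information such a credential carries.
    """
    return {
        "claim_generator": tool,                      # software that created the claim
        "producer": producer,                         # who created or edited the asset
        "created": datetime.now(timezone.utc).isoformat(),
        "actions": actions,                           # e.g. ["c2pa.created", "c2pa.color_adjustments"]
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),  # binds the claim to these exact bytes
    }

if __name__ == "__main__":
    fake_image = b"\x89PNG...image bytes go here..."  # placeholder stand-in for real image data
    manifest = build_manifest(fake_image, producer="Jane Photographer",
                              tool="ExampleCam 1.0", actions=["c2pa.created"])
    print(json.dumps(manifest, indent=2))
```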

This year C2PA has gained a lot of press coverage and momentum, spurred by high-profile vendors joining or announcing their adoption of the C2PA standard. These include OpenAI, Google, Meta, Adobe Firefly, Microsoft as well as various digital camera vendors.

C2PA Members Are Not Necessarily Also CAI Members

Companies can decide to implement C2PA without joining C2PA as a member. For instance, Meta and OpenAI have implemented or committed to implementing the C2PA standard without formally being C2PA members.

Conversely, not all C2PA members have implemented C2PA (yet) in their products. Implementing C2PA is more involved than one might think. Not only are the specifications comprehensive and complex, but developing and managing digital certificates that work across different platforms is also far from trivial. (Note: Truepic offers integrated turnkey solutions that make it easy for companies to implement C2PA.)
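
To illustrate one reason the certificate side is nontrivial, the sketch below signs a manifest digest with an Ed25519 key using the widely available `cryptography` package and then verifies it. This is only a conceptual stand-in for the real workflow: actual C2PA signing relies on X.509 certificate chains, trusted timestamps and COSE structures, and the signing keys must be provisioned, rotated and protected on every platform an application runs on.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In a real deployment this key would live inside a certificate issued by a
# trusted CA and be protected by secure hardware; generating it inline is
# purely for illustration.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

manifest = {"producer": "Jane Photographer", "actions": ["c2pa.created"],
            "content_hash": "..."}  # see the manifest sketch above
payload = hashlib.sha256(json.dumps(manifest, sort_keys=True).encode()).digest()

signature = private_key.sign(payload)  # this signature travels alongside the manifest

# A viewer application later recomputes the payload and checks the signature.
try:
    public_key.verify(signature, payload)
    print("credential signature verifies")
except InvalidSignature:
    print("credential has been tampered with or lacks a valid signature")
```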

Furthermore, to ensure the authenticity metadata isn’t circumvented (for instance, by removing it from the image, bypassing it by taking a screen capture or editing it in applications that don’t support C2PA), forensic, imperceptible watermarks should be an integral part of authenticity metadata solutions. (Steg offers forensic watermarking solutions.)

In addition, forensic watermarks can provide an unbreakable, unremovable link to a remote watermark manifest site. The site contains the original image metadata, along with information detailing when and where changes to the visuals were made. In this way, the content credential data is still viewable and retrievable even after removal from the image, and illegitimately distributed images remain traceable: which version was leaked, when, where and by whom.
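
The toy sketch below illustrates the principle only: it hides a short asset ID in the least significant bits of pixel values so that the credentials can still be looked up in a remote manifest service even after the visible metadata is stripped. Production forensic watermarks, such as Steg AI’s, use far more robust and imperceptible techniques that survive compression, screenshots and re-encoding; the lookup URL and helper functions here are hypothetical.

```python
def embed_id(pixels: list[int], asset_id: int, bits: int = 32) -> list[int]:
    """Hide a numeric asset ID in the least significant bit of each pixel value.

    Toy illustration only: real forensic watermarks spread the payload
    redundantly so it survives compression, resizing and screenshots.
    """
    out = list(pixels)
    for i in range(bits):
        bit = (asset_id >> i) & 1
        out[i] = (out[i] & ~1) | bit   # overwrite the lowest bit with one payload bit
    return out

def extract_id(pixels: list[int], bits: int = 32) -> int:
    """Recover the asset ID from the watermarked pixels."""
    asset_id = 0
    for i in range(bits):
        asset_id |= (pixels[i] & 1) << i
    return asset_id

if __name__ == "__main__":
    original = [120, 53, 200, 18] * 16          # stand-in for real pixel data
    marked = embed_id(original, asset_id=987654321)
    recovered = extract_id(marked)
    # A verifier would now fetch the credentials from a (hypothetical) manifest service:
    print(f"look up https://manifests.example.com/{recovered} for the original credentials")
```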

Downstream Methods to Rebuild Trust in Visuals

In addition to upstream methods that transparently identify and disclose the origin of visual media, deepfake detection methods are essential to flag misleading visuals of unknown origin. Detection is particularly crucial for the massive number of images shared online with little or no context, including those lacking reputable sources or verified metadata.

Clarity offers deepfake detection solutions. Its detection tools typically rely on a constantly evolving mix of heterogeneous techniques: reverse engineering of machine learning tech; knowledge of the background of the identities portrayed in the visuals or audio; analysis of the platforms on which the media have been distributed; analysis of how the audio signals align with the video footage; and understanding of the content of the message shared in the visuals, as well as how actors might verbally communicate that message in a video.
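
To illustrate how such heterogeneous signals might be combined, here is a minimal, hypothetical sketch that fuses several per-signal suspicion scores into a single weighted confidence value. The signal names and weights are invented for illustration and do not describe Clarity’s actual models.

```python
# Hypothetical weights for heterogeneous detection signals (illustrative only).
SIGNAL_WEIGHTS = {
    "model_fingerprint": 0.35,      # artifacts characteristic of known generative models
    "audio_video_sync": 0.25,       # how well lip movement aligns with the audio track
    "identity_context": 0.20,       # plausibility given what is known about the person portrayed
    "distribution_platform": 0.10,  # where and how the media first appeared
    "message_semantics": 0.10,      # whether the spoken message fits the claimed speaker
}

def fused_suspicion(scores: dict[str, float]) -> float:
    """Combine per-signal suspicion scores (each 0.0-1.0) into a weighted aggregate."""
    total_weight = sum(SIGNAL_WEIGHTS[name] for name in scores if name in SIGNAL_WEIGHTS)
    if total_weight == 0:
        raise ValueError("no recognized signals supplied")
    weighted = sum(SIGNAL_WEIGHTS[name] * value
                   for name, value in scores.items() if name in SIGNAL_WEIGHTS)
    return weighted / total_weight

if __name__ == "__main__":
    example = {"model_fingerprint": 0.9, "audio_video_sync": 0.7, "identity_context": 0.4}
    print(f"fused suspicion score: {fused_suspicion(example):.2f}")  # closer to 1.0 = more likely fake
```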

Rebuilding Trust in Visuals: Where Are We Heading?

Governments are also entering the fray. As mentioned earlier, the societal impact of deepfakes is increasingly a topic of discussion in mass media and among politicians. With one legislative initiative after another announced in the U.S. and overseas, what is—or could be—the contribution of governments? Here are a few observations expressed in our Spotlight.

  • First and foremost, impending legislation puts indirect pressure on imaging as well as AI tech companies to develop and start implementing image authenticity solutions.
  • Much of the news involving government initiatives to battle the negative impact of deepfakes centers around proposed legislation to force companies to comply with standards or practices that minimize downside risks (the “stick” approach).
  • Interestingly, we hear relatively little in terms of “carrot” approaches, which governments across the globe have historically used to support the world of cyber security and critical infrastructure. For example, why not bolster funding of science research projects to battle deepfakes and reestablish trust in visuals? Or ramp up initiatives along the lines of the FTC awarding prizes to four organizations for developing technologies to distinguish between authentic human speech and audio generated by AI, even if, in that instance, the prizes were largely symbolic.
  • Walk the talk. Governments are in a unique position to take a lead in educating both consumers and distributors of visual content about how to identify trustworthy visuals. They can implement their own verifiable content credentials when publishing photos, videos as well as audio files.

Lessons from Cyber Security

The world of cyber security has taught us a thing or two.

  • For one, we need to realize that a 100% bulletproof solution is not possible. Malicious actors will always try to figure out new ways to create or distribute malicious visual content, just as the bad actors behind computer viruses and other malware always have.
  • With that in mind, it is imperative to start with a zero-trust mindset before embarking on the question of how to best build more and more layers of trust. Indeed, rebuilding trust in visuals is not a matter of black or white.
  • The harder it becomes for bad actors to generate and distribute effective deepfakes, the more expensive, and therefore less attractive, it is for them. We have already seen this happen in the world of cyber security. This will at minimum weed out “casual” offenders and lead to fewer problematic images in the public space. Numbers matter: what if we could reasonably trust more than 75% of the visuals we encounter, as opposed to less than 25%?

The Wrap-Up

What’s more, the various upstream and downstream methods to rebuild trust in visuals should not be developed in isolation. They should mutually reinforce each other. For instance, if a deepfake detection solution also understands the visual’s content credentials and forensic watermark (or lack thereof), it could conceivably fine-tune its analysis of whether a visual is a deepfake.

Conversely, upstream authentication initiatives could conceivably also adjust their standards and solutions from the learnings developed in the downstream world of deepfake analysis and detection.

Ultimately, for most businesses, implementing solutions to reestablish trust in visuals is yet another item on their task lists, joining the likes of cyber security protocols and privacy policies.

Finally, the more these various trust-building solutions are integrated and provided in an easy-to-implement way, the more adoption we’ll see among businesses that create and/or distribute photos or videos.

Hans Hartman is chair of Visual 1st. The annual conference promotes innovation and partnerships in the photo and video ecosystem. Visual 1st 2024 will be held October 16–17 at Fort Mason in San Francisco.
