Combating Deep Fakes and Establishing Authenticity
At this year’s Adobe MAX, the software giant gave an update on their Content Authenticity Initiative (CAI). CAI’s goal is to counter deep fake images* by creating standards and a process by which the authenticity of an image can be documented. By showing an image’s provenance, including a revision history, CAI will also help visual artists combat piracy and plagiarism.
CAI was announced at the 2019 Adobe MAX as a partnership between Adobe, the New York Times, and Twitter. Since that announcement, additional partners have signed on, all of them companies active in media or technology: Truepic, Qualcomm, Witness, CBC Radio-Canada, and the BBC. In August, the working group published a white paper, Setting the Standard for Content Attribution, which outlines their proposal for an “industry standard content attribution solution.” While the white paper addresses solutions for photographs and images, Adobe expects their solution to eventually encompass all forms of content.
For graphic artists, embedding a record of an image’s provenance would be a valuable tool in combating infringement. Adobe products already permit users to embed metadata in their files. That metadata can be read by Google Images, which will then flag licensable works. However, the metadata created in the CAI process is far more robust than the existing metadata fields. More significantly, the integrity of the CAI metadata would be verified via trust certificates.
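To make the contrast concrete, here is a minimal sketch of reading the kind of embedded metadata that already exists today, using the third-party Pillow imaging library. The file name artwork.jpg is a placeholder, and these are ordinary EXIF fields, not CAI data:

```python
# Read the existing (non-CAI) embedded metadata from an image file.
# Requires the third-party Pillow package (pip install Pillow).
# "artwork.jpg" is a placeholder file name.
from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("artwork.jpg") as img:
    exif = img.getexif()
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
        print(f"{name}: {value}")        # e.g. Artist, Copyright, Software
```

Fields like these carry no integrity protection: anyone can edit or strip them, which is precisely the gap CAI’s signed metadata is meant to close.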
Creating a Chain of Assertions
The system that Adobe envisions recognizes that methods for detecting fake media will be continuously outwitted by bad actors (something Adobe likens to an arms race). Instead of relying on detection, their authentication process builds a chain of assertions that reveal the provenance of the image: for example, where and by whom the image originated, what edits were made to it, and what works were incorporated into it. Viewers of the image would be able to access this information to see whether and how it was altered.
The CAI metadata consists of “assertions” (information about the creator, edits, and so on) bundled into units called “claims.” The claim itself is digitally signed using a proprietary set of trust certificates. (The certificate holder wouldn’t be the image creator, but rather the hardware or software used to create the claim. This preserves the anonymity of the creator, if desired, and reinforces the authenticity of the data.) Each time the image reaches a new milestone (such as being published), a new set of assertions and claims is created.
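As a rough illustration of this structure, here is a minimal sketch in Python. It assumes nothing about the actual CAI data format: the Claim class, the assertion dictionaries, and the Ed25519 key below are stand-ins for the white paper’s certificate-based signing, and the example requires the third-party cryptography package.

```python
# Illustrative model of assertions bundled into a digitally signed claim.
# This is NOT the actual CAI format; names and fields here are hypothetical.
# Requires the third-party "cryptography" package (pip install cryptography).
import json
from dataclasses import dataclass

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


@dataclass
class Claim:
    """A bundle of assertions, signed as a single unit."""
    assertions: list  # e.g. [{"type": "creator", "value": "..."}, ...]
    signature: bytes = b""

    def payload(self) -> bytes:
        # Canonical serialization of the assertions: the bytes that get signed.
        return json.dumps(self.assertions, sort_keys=True).encode()


# Per the white paper, the signing key belongs to the capture or editing
# tool, not to the creator, which preserves the creator's anonymity.
tool_key = Ed25519PrivateKey.generate()

claim = Claim(assertions=[
    {"type": "creator", "value": "anonymous"},
    {"type": "edit", "value": "healing brush applied"},
])
claim.signature = tool_key.sign(claim.payload())

# Anyone holding the tool's public key (distributed via its trust
# certificate) can confirm the assertions were not altered after signing;
# verify() raises InvalidSignature if they were.
tool_key.public_key().verify(claim.signature, claim.payload())
print("claim verified")
```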
A key feature of the proposed process is that the CAI workflow is opt-in. The creator must actively enable the CAI functionality in their image editing software. This ensures that the privacy of the creator is protected. For example, creators may opt not to utilize the CAI process to protect their identities if they are working on politically sensitive material. (Adobe cites photojournalists documenting human rights abuses as an example.) Creators can also elect to document only some file information. For example, an illustrator may wish not to document their edit process, since doing so would essentially reveal the steps they take to create unique images.
The whitepaper outlines how the CAI process could work for a creative professional such as a graphic artist:
- The creator opens their software and selects the CAI settings they need before starting to work. For example, if they want to keep their work process secret, they won’t include a detailed edit history or capture progress thumbnails.
- The creator then either opens a new document to create an entirely new work or composites existing works. CAI creates a specific assertion that references the assertions and claims for each work included in a composite. The creator can capture “before” and “after” thumbnails while they work.
- The creator saves the final work and distributes it, for example, by publishing it to a platform or website or delivering it to a publisher. The embedded information remains intact with the work and can be viewed in systems and platforms that are CAI compliant.
- People viewing the work click a CAI icon to view attribution information, such as the creator, date, thumbnails, and a link for more information.
- The “more information” link takes the viewer to a CAI-enabled website where the full attribution information can be read. (A sketch of how such a site might verify the chain of claims follows this list.)
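As a hypothetical sketch of that verification step, the snippet below links each claim to its predecessor by hash and checks that the chain is intact. The field names are illustrative, not the actual CAI schema, and a real implementation would also check each claim’s certificate signature, as in the earlier sketch.

```python
# Hypothetical sketch of walking an image's provenance chain. Each claim
# records the hash of the claim before it, so altering any point in the
# history breaks every later link. Field names are illustrative only.
import hashlib
import json


def claim_hash(claim: dict) -> str:
    """Hash a claim's full contents, including its link to the previous claim."""
    return hashlib.sha256(json.dumps(claim, sort_keys=True).encode()).hexdigest()


def verify_chain(claims: list) -> bool:
    """Check that every claim correctly references its predecessor."""
    return all(
        current["prev_hash"] == claim_hash(prev)
        for prev, current in zip(claims, claims[1:])
    )


# Milestones in the image's life, oldest first.
capture = {"assertions": [{"type": "captured_by", "value": "Camera X"}],
           "prev_hash": None}
edit = {"assertions": [{"type": "edit", "value": "crop"}],
        "prev_hash": claim_hash(capture)}
publish = {"assertions": [{"type": "published_to", "value": "example.com"}],
           "prev_hash": claim_hash(edit)}

chain = [capture, edit, publish]
print(verify_chain(chain))  # True: history is intact

edit["assertions"][0]["value"] = "face swap"  # tamper with the edit record
print(verify_chain(chain))  # False: the publish claim no longer matches
```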
What It Will Take
In proposing this system, Adobe is assuming that CAI standards will be widely adopted. They also acknowledge that CAI compliance may not be possible at every step of a workflow. (For example, a photojournalist may submit their photos to a newsroom using legacy hardware or software that is not CAI compliant.)
For the CAI system to become standard, many entities will need to buy into it: platforms, software developers, and hardware companies. The metadata Adobe already permits users to embed provides a cautionary example. That metadata is stripped out by social media and other platforms, rendering it ineffective for transmitting copyright management information along with images posted to those sites.
However, Adobe is betting that demand for verifiable images and content will drive companies and organizations to invest in CAI compliance, an outcome Adobe believes will benefit creators, photojournalists, publishers, and the general public:
“With widespread adoption of CAI’s attribution specifications, we hope to significantly increase transparency in online media, provide consumers with a way to decide who and what to trust and create an ecosystem that rewards impactful, creative work.”
*Ironically, within the same keynote presentation in which the CAI update was given, Adobe also touted how their new Photoshop Neural Filters could be used to create deep fakes. The filters permit users to change facial age, alter expression and gaze, and even turn a subject’s head.
Top image: A diagram showing the claims and assertions embedded in an image file.
Image and video © Adobe