The Integrity of the Image tl;dr

In response to the increasing ambiguity over acceptable levels of manipulation in photojournalism contests, World Press Photo commissioned a report entitled “The Integrity of the Image: Current practices and accepted standards relating to the manipulation of still images in photojournalism and documentary photography.” It’s 20 pages long, so here’s the tl;dr:

  1. Using the darkroom as an analogy and starting point for the discussion of digital manipulation is outmoded.
  2. There are no concrete, widely accepted guidelines. Interviewees assess images on a case-by-case basis.
  3. The industry should try to develop a verifiable “digital audit trail.”

Darkrooms as an analogy

This is perhaps the most instructive finding. Even though digital capture is necessarily computational in nature, we still use darkroom analogs to explain manipulation. Photoshop's icons are even skeuomorphic in that regard (e.g. dodge and burn). We now do things in Photoshop much faster than their darkroom equivalents (e.g. masking), but we also do things that were impossible in the darkroom (e.g. lens correction, channel mixing, magic wand selection). When photographers and organizations describe acceptable manipulation as "the stuff we did in the darkroom," they are not accurately accounting for the state of digital photography.

There are no concrete guidelines

The report notes that a number of organizations in the US have published guidelines or codes of ethics, but such statements rarely exist in the rest of the world. Combined with a generally more liberal attitude toward manipulation in Europe, it's easy to see how a contest like World Press Photo ends up with such variance in the understanding and acceptability of manipulation.

The industry should develop a digital audit trail

Revision control and centralized repositories are a cornerstone of open-source software, and even consumer cloud software (e.g. Google Docs) provides a history of changes. The process and mechanisms for creating an audit trail are well understood, but storing a full edit history for every image is data-intensive. Instead, contests should simply rely upon the "original" image and compare it to the final submission, as in the sketch below. If we agree that there will be no consensus on exact guidelines, then the only way to gauge whether something is within bounds is to use before/after comparisons.
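The report itself doesn't prescribe a mechanism, so here is a minimal sketch of what such a before/after check could look like, assuming Python with the Pillow imaging library. The file names, the SHA-256 "fingerprint" standing in for an audit-trail anchor, and the drift threshold are all illustrative assumptions, not anything World Press Photo or the report specifies.

```python
# A minimal sketch, not a real contest workflow: hash the "original" file
# for provenance, then measure how far the final submission drifts from it
# pixel-wise. File names and the threshold below are illustrative only.
import hashlib

from PIL import Image, ImageChops, ImageStat  # pip install Pillow


def fingerprint(path: str) -> str:
    """Return a SHA-256 digest of the file, a stand-in for an audit-trail anchor."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def mean_pixel_drift(original_path: str, final_path: str) -> float:
    """Average per-channel difference (0-255) between the original and the final edit."""
    original = Image.open(original_path).convert("RGB")
    final = Image.open(final_path).convert("RGB").resize(original.size)
    diff = ImageChops.difference(original, final)  # absolute per-pixel difference
    return sum(ImageStat.Stat(diff).mean) / 3.0    # average over R, G, B


if __name__ == "__main__":
    print("original digest:", fingerprint("original.jpg"))
    drift = mean_pixel_drift("original.jpg", "submission.jpg")
    # Heavy toning shows up as a large average drift; 25/255 is an arbitrary cutoff.
    print(f"mean drift: {drift:.1f}/255", "(flag for review)" if drift > 25 else "(ok)")
```

A real workflow would need far more nuance (a raw file isn't directly comparable to a toned JPEG, and crops would need aligning), but even a crude drift number like this makes the "how much is too much" conversation concrete.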

All of this aside, I still refer back to my piece entitled "Why Do Photo Contest Winners Look Like Movie Posters?" and show you the World Press Photo winners from 2012 and 2003.

World Press Photo of the Year 2012. Photo by Paul Hansen.

World Press Photo of the Year 2003. Photo by Jean-Marc Bouju.

What have we really gained by the overcooking of the more recent image? Is it more accurate? More truthful? The Bouju image is as powerful as the Hansen image, and it doesn’t rely on heavy toning. Both images have incredible composition, overall scene exposure, and capture a peak moment. But with the Hansen image, I feel like I have been visually clickbaited.

Visual tastes may change, but the way the real world looks doesn’t. That to me is the biggest takeaway in the manipulation fracas.


Allen Murabayashi is the co-founder of PhotoShelter.

Comments

  1. David Campbell at 7:31 pm

    Allen, I have to say I’m dismayed by your attempt to summarise the report based on my research for World Press Photo. Point 1 is correct and important. Point 2 utterly misses the central findings of the research, which are almost the exact opposite of your summary – please re-read the Executive Summary point 3, and Sections 6 and 7, which run for less than four pages so should be easily comprehensible. You will see that we found a de facto global consensus on manipulation and how to approach processing, and that your highlighting of the case-by-case approach is not a reasonable summary of those findings. Point 3 is also in part accurate, but misses the much larger context of verification practices designed to support the integrity of the image of which it is a part.

    • Allen Murabayashi (Author) at 11:35 pm

      David,

      Thanks for the report and thanks for responding. I would agree that there is consensus towards general guidelines for manipulations — many of which have to do as much with the pre-photo as they do with the post-processing. But the post-processing guidelines are very subjective. Yes, the interviewees agree that dodging and burning are ok, but there's no concrete agreement on how much is too much — and therein lies the problem with many of the questionable/disqualified images in the past. For example, the Wei Zheng image from 2012 is indicative of an image that would straddle both sides of the line for many people. Compared to similar images from the same venue, it is highly saturated and dodged, and the defocus effect is curious.

      But you are correct. I unfairly applied my own bias in explaining the key points of the report.

      • David Campbell at 10:32 am

        Thanks for the response Allen. It’s important to move this debate forward on the basis of what we found, recognising also that we or someone else could do more research to go deeper (which would have been possible if more people had responded to the survey when approached).

        I just want to be clear about what we found from our 45 respondents from 15 countries.

        There is a de facto global consensus on manipulation and how to approach processing. The material addition or subtraction of elements in an image is deemed totally unacceptable. That is manipulation. That then leaves the issue you are most concerned about.

        Processing is a better term than post-processing, because every digital image requires processing to be an image in the first place. Thus there is no original that exists prior to processing – so it makes no sense to talk of “post”. That is an important starting point because it means everyone has to process to create an image from the data recorded.

        The second element of the de facto global consensus is that “minor” processing was acceptable, “excessive” was not, at least in the context of news, documentary, sports and nature images (portraits and fashion were a different matter altogether). People do think about this in terms of “dodging” and “burning” but given that the darkroom analogy really is totally outmoded, it’s time to deal with this in terms of adjustments to the data file that can become an image.

        And you are quite right to say that leaves a lot open to interpretation – where is the line between minor and excessive? That is where the respondents said they approached the matter on a case-by-case basis. And given that I have yet to see someone successfully propose a clear, universally applicable line, perhaps – perhaps – it will remain a case-by-case question.

        So this is where, to move the debate forward, I think we have to take a slightly different tack. First, we should think about what is acceptable for an image in terms of what we want that image to do, rather than what we think it is. That means different considerations for images we want to function as documents, evidence or record, in contrast to those we want to be entertainment or illustrations. Second, the wider context of verification – which includes, but is not limited to, the issue of the digital audit trail – becomes very important if we wish to secure the integrity of the image.
