How World Press Photo Catches Image Manipulators

June 09, 2015

By Greg Scoblete


© HANY FARID

Hany Farid (cowering) and Kevin Conner (brandishing fork) co-founded Fourandsix Technologies to create software to detect image manipulation. This image was not manipulated using software—only perspective.

When World Press Photo (WPP) disqualified 8 percent of its finalists’ photos in 2014 for image manipulation, emotions ran high. Gary Knight, the jury chair at the time, told The New York Times he felt “real horror and considerable pain” that so many images had to be rejected. This year, the number of disqualified images soared to 22 percent—more than doubling last year’s percentage—prompting WPP managing director Lars Boering to express not horror, but stunned disbelief.

“We were shocked by the 22 percent,” Boering admits. “Industry veterans I spoke to, the jury chair, everyone, just shocked. We thought it would be lower than the year before.”

The WPP contest rules state that the “content of an image must not be altered. Only retouching that conforms to currently accepted standards in the industry is allowed.” According to Boering, “currently accepted standards” encompass basic processing for color, tone, etc. Disqualifying manipulations are edits that materially change the image’s contents—such as excessive toning, and especially adding or removing objects from the frame. It was the latter action that implicated most of the rejected photos. “People have been focusing on the excessive toning [criterion], but that was only a small percentage of what we threw out,” Boering says. (WPP’s charges cannot be verified because it has not made the disqualified entries—or the names of the photographers who shot them—available to the public.)

While photo-editing technology grows more sophisticated with each passing year, the method employed by WPP to sniff out manipulations was surprisingly low-tech. It was facilitated by a major rule change from the 2014 contest—namely, that any contestant whose image was being considered in the penultimate round had to submit the original RAW image file. If they shot film, contestants were required to send an unedited scan of the entire negative, including borders. If the images were originally shot as JPEGs, which was more common in the sports news category, photographers were asked to send in the series of photos that the competition image was a part of, WPP forensic expert Eduard de Kam relays to us via email.

Armed with these originals, it was “a very visual workflow,” de Kam says. The contest JPEGs were compared side by side with the originals in Adobe Photoshop and Lightroom on a monitor. Two forensic specialists reviewed the images independently, and a photo was only ruled in violation when both agreed. “We only go for removal when we are absolutely certain,” Boering says. In most of these cases, he adds, “it wasn’t that difficult” to see that images had been altered.
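The core of that workflow, lining up the entry against a neutral rendering of the original and flagging content-level differences, can be sketched in a few lines. The Python below is illustrative only: the file names are hypothetical, it assumes both files have already been rendered at identical dimensions, and a real check would first normalize global tone so that legitimate color and contrast adjustments don't swamp the difference map.

```python
# Illustrative sketch only: flag pixels where a contest JPEG diverges
# sharply from a neutral rendering of the original. File names are
# hypothetical; both images must share the same dimensions.
import numpy as np
from PIL import Image

def difference_map(entry_path, original_path, threshold=40):
    """Boolean mask of pixels that differ strongly between the two files."""
    entry = np.asarray(Image.open(entry_path).convert("L"), dtype=np.int16)
    original = np.asarray(Image.open(original_path).convert("L"), dtype=np.int16)
    if entry.shape != original.shape:
        raise ValueError("render both images at the same size first")
    return np.abs(entry - original) > threshold

mask = difference_map("contest_entry.jpg", "raw_render.jpg")
print(f"{mask.mean():.1%} of pixels differ beyond the threshold")
```

Even with such a map, a human reviewer still has to judge whether a flagged region reflects acceptable toning or an object that was added or removed, which is exactly the call WPP's specialists made by eye.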

Boering insists this simple compare-and-contrast workflow doesn’t yield false positives since—particularly in the case of RAW files—it’s extremely difficult to disguise editing. “You cannot cover your tracks in a RAW file,” Boering says. Hany Farid, a professor of computer science at Dartmouth College and a leading expert on imaging forensics, concurs. “Because of the proprietary nature of RAW formats, it would be very difficult to open, edit and repackage a RAW image,” Farid tells us.

In fact, Boering says that only two of the disqualified photographers wrote in to question WPP’s decision. According to de Kam, most of the photographers who were disqualified “admitted they had made a mistake.”

Without access to original images, an organization would have to work a lot harder to detect manipulation, calling upon a range of techniques that are often unreliable, even when combined, says Jessica Fridrich, a professor at the Thomas J. Watson School of Engineering and Applied Science at Binghamton University.

Nonetheless, throwing multiple forensic methods at an image makes it more difficult for manipulations to go undetected, Farid says. The basic approach to discovering whether an image has been edited is to mathematically model the “entire imaging pipeline,” following light as it journeys from its source to a digital file, he explains.

At each point along the way, there are expectations for how light should behave—expectations that are shaped by the geometry of the scene and the light source (for example: are shadows where they should be?) all the way down to the behavior of the camera’s sensor and compression algorithms. All across this pipeline, Farid says, there are models that predict, with varying degrees of specificity, what an image from a given camera and sensor should look like at a very granular level. Deviations raise red flags.
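To make the shadow example concrete: in an unaltered photo, the 2D lines connecting points on a shadow to the object points that cast them should all pass through a single point, the projection of the light source. The sketch below, using hypothetical hand-picked point pairs, finds the best common intersection and reports how far the lines stray from it. It is a simplified take on that idea, not Farid's published method.

```python
# Simplified shadow-consistency check (not Farid's published code):
# lines from shadow points through the object points casting them
# should meet at one point. A large residual suggests inconsistency.
import numpy as np

def light_source_residual(pairs):
    """pairs: list of (object_xy, shadow_xy) in image coordinates.
    Returns the best common intersection of the shadow lines and the
    RMS distance of the lines from that point."""
    A, b, lines = np.zeros((2, 2)), np.zeros(2), []
    for obj, sh in pairs:
        obj, sh = np.asarray(obj, float), np.asarray(sh, float)
        d = (obj - sh) / np.linalg.norm(obj - sh)   # line direction
        P = np.eye(2) - np.outer(d, d)              # projects onto the line's normal
        A += P
        b += P @ sh
        lines.append((sh, P))
    x = np.linalg.solve(A, b)                       # least-squares intersection
    dists = [np.sqrt((x - p) @ P @ (x - p)) for p, P in lines]
    return x, float(np.sqrt(np.mean(np.square(dists))))

# Hypothetical analyst-clicked (object, shadow) point pairs:
pairs = [((120, 80), (150, 300)), ((400, 95), (380, 310)), ((620, 70), (560, 290))]
source, rms = light_source_residual(pairs)
print(f"best light-source projection {source}, RMS line distance {rms:.1f}px")
```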

One such technique draws on a modeling of the color filter arrays used by image sensors to interpolate (or artificially generate) colors, Farid says. Interpolation algorithms used by camera companies yield a consistent pattern of color reproduction across an image, a pattern that is broken the moment an editor starts airbrushing or cutting-and-pasting objects in a scene. Farid also developed software to spot localized cloning by scouring an image for duplicated pixel regions.
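Farid's actual clone detector is more sophisticated, but the underlying idea can be illustrated with a toy version that only catches exact duplication: hash every small block of the image and report blocks whose contents occur at more than one location. Real copy-move detectors match near-duplicates, since recompression and retouching perturb pixels, and they discard trivial matches in flat areas like sky.

```python
# Toy copy-move (clone) detector: only catches byte-identical blocks.
# Real tools match near-duplicates and filter flat-region false positives.
import numpy as np
from collections import defaultdict
from PIL import Image

def find_exact_clones(path, block=16, stride=8):
    img = np.asarray(Image.open(path).convert("L"))
    seen = defaultdict(list)
    h, w = img.shape
    for y in range(0, h - block + 1, stride):
        for x in range(0, w - block + 1, stride):
            # Key on the block's raw bytes; identical blocks collide.
            seen[img[y:y + block, x:x + block].tobytes()].append((x, y))
    return [locs for locs in seen.values() if len(locs) > 1]

for group in find_exact_clones("suspect.jpg"):   # hypothetical file name
    print("identical block found at:", group)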

Another approach is to study pixel-level image characteristics, like in-camera lens corrections, resampling artifacts and chromatic aberration, to ensure they are consistent across the image, Fridrich says. Systematic patterns of noise levels (called fixed pattern noise) can also yield clues to an image’s origins and whether it’s been altered, she adds.
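Fixed pattern noise analysis can be sketched in the same spirit. The toy version below extracts each image's high-frequency noise residual with a simple blur-and-subtract denoiser, averages residuals from known-genuine frames into a camera "fingerprint," and correlates a questioned image's residual against it. Production forensics of this kind uses far more careful denoising and statistical testing than this.

```python
# Simplified fixed-pattern-noise check: a questioned image whose noise
# residual correlates poorly with the camera's averaged fingerprint
# (overall, or in one region) warrants a closer look.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img):
    """High-frequency residual: the image minus a smoothed copy of itself."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma=2)

def camera_fingerprint(reference_images):
    """Average the residuals of known-genuine frames from the same camera."""
    return np.mean([noise_residual(im) for im in reference_images], axis=0)

def correlation(residual, fingerprint):
    """Normalized cross-correlation between a residual and the fingerprint."""
    r = residual - residual.mean()
    f = fingerprint - fingerprint.mean()
    return (r * f).sum() / np.sqrt((r ** 2).sum() * (f ** 2).sum())
```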

Many of these image-analysis techniques are out of reach for most news organizations to use routinely. Farid co-founded a company, Izitru, that offers a free authentication app for consumers and businesses, but it doesn’t use all the modeling techniques employed in a forensic search like the one conducted for WPP. Fridrich says the modeling she describes requires a human analyst and specialized code, and that such analyses are usually only requested by law enforcement and government agencies.

Regardless of how a manipulation is discovered, Farid says that the competitive pressures of photojournalism, combined with a lack of clear standards across the industry for exactly what is permissible during the editing process, will keep the issue of photo manipulation, and photo forensics, alive and well for the foreseeable future. For his part, Boering thinks WPP contestants understand the rules related to manipulation. “The focus should be on ethics. There is a generation out there—and it’s not an age thing—that has a different opinion about ethics in photojournalism and we need to find out why that is,” he says. “In photojournalism, ‘journalism’ should still be the main part. That’s not something we should change.”