5 Myths of Digital Photography

Myths and misconceptions persist because they’re either compelling or no one bothers to correct them. For photographers, many myths of digital photography arose from imperfect analogies to film photography. Given the complex physics behind digital imaging, it’s not totally surprising that some of these myths endure, but here are a few you might want to be aware of.

ISO changes sensitivity

Unlike film, digital sensors have a single sensitivity. Changing the ISO on a digital camera doesn’t make the sensor more sensitive (i.e. capture more photons). Instead, the camera amplifies a weak signal (applying gain) and, with it, the accompanying noise. It’s kind of like turning up the volume on a low-quality audio recording. You can hear it, but it still sounds lousy.
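
Here’s a toy simulation of that idea; the electron counts and noise figure are invented for illustration, not measurements from any real sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weak signal: 50 electrons per pixel with 10 electrons of noise.
readings = 50.0 + rng.normal(0, 10, size=100_000)

for gain in (1, 4, 16):  # higher ISO ~ higher gain
    amplified = readings * gain
    print(f"gain {gain:>2}x  mean {amplified.mean():7.1f}  "
          f"SNR {amplified.mean() / amplified.std():.2f}")

# The SNR is the same at every gain: amplification raises the
# signal and the noise together, just like the volume-knob analogy.
```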

[Image comparison: ISO 12800 (https://blog.photoshelter.com/wp-content/uploads/2016/09/DSC_6844-low.jpg) vs. ISO 100 (https://blog.photoshelter.com/wp-content/uploads/2016/09/DSC_6844.jpg)]

An easy way to illustrate this is to take a low and a high ISO photo on a bright day. Even though there are plenty of photons to go around, using the high ISO makes the sensor capture less light by “fooling” each pixel into reporting that it’s full when it’s not. As ISO increases, dynamic range decreases, and noise becomes more apparent, leading to poorer image quality.

Some recent cameras have been dubbed “ISO invariant,” meaning the sensor’s read noise is constant irrespective of ISO. This allows photographers to preserve highlights in wide dynamic range scenes and boost the shadows in post-processing, even though the initial image might look severely underexposed.

Bottom line: High ISO yields a noisier image because physics. New technologies like ISO invariance give photographers more options to preserve dynamic range.

Higher bit-depth means better quality images

Bit-depth is related to the resolution of the analog-to-digital converter in your camera. The higher the bit-depth, the more the info from a pixel can be chopped into increasingly smaller units, leading to smoother tonal transitions. If current cameras have 14-bit A/D converters that can address 16,384 levels, why not build 16-bit or 24-bit converters for smoother gradations? Aside from the huge files that would result from more data, there is a point of diminishing returns with such a fine level of quantization because of noise.
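
The level counts are just powers of two:

```python
# Number of tonal levels an N-bit analog-to-digital converter can address.
for bits in (12, 14, 16, 24):
    print(f"{bits}-bit: {2 ** bits:>12,} levels")
# 12-bit:        4,096 levels
# 14-bit:       16,384 levels
# 16-bit:       65,536 levels
# 24-bit:   16,777,216 levels
```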

Gah, noise! It’s everywhere. It starts with noise in the light that we’re trying to record when taking a photo (shot noise). It extends to noise introduced by circuitry at different points in the signal processing chain (read noise, dark noise, etc.). If you try to slice your signal into smaller units (with more bit-depth) and those units are smaller than your noise, you’re not gaining any fidelity.
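
A rough sketch of the trade-off, assuming an invented 60,000-electron full well and 5 electrons of read noise (a uniform quantization step contributes step/√12 of RMS noise):

```python
import math

full_well = 60_000   # assumed full-well capacity, electrons (illustrative)
read_noise = 5.0     # assumed read noise, electrons (illustrative)

for bits in (12, 14, 16):
    step = full_well / 2 ** bits      # electrons per ADC level
    q_noise = step / math.sqrt(12)    # RMS noise from quantizing in steps of `step`
    print(f"{bits}-bit: step {step:5.2f} e-, quantization noise {q_noise:5.2f} e- "
          f"vs read noise {read_noise} e-")

# By 14 bits the quantization noise (~1 e-) is already buried under the
# sensor's own noise; extra bits just digitize the noise more finely.
```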

An imperfect analogy is why Olympic swimmers are only timed to 1/100th of a second. In a 50m freestyle, 1/1000th of a second equates to about 2.39mm of travel. But Olympic pool regulations allow 3cm of variation per lane. So although timing devices can record more fidelity, you can’t guarantee that the silver medalist hadn’t traveled farther than the gold medalist. The variation in pool length is like noise. There’s no point in recording finer detail if you can’t get around the noise problem.
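
The arithmetic behind the analogy, using the roughly 20.91-second world record for the 50m freestyle:

```python
# Distance covered in 1/1000 s by a world-class 50m freestyle swimmer.
record_s = 20.91                     # approximate 50m freestyle world record
speed_mm_per_s = 50_000 / record_s   # mm per second
travel = speed_mm_per_s / 1000       # mm travelled in one millisecond
print(f"{travel:.2f} mm per 1/1000 s vs 30 mm of allowed lane-length variation")
# ~2.39 mm of timing resolution is meaningless next to 30 mm of 'noise'.
```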

Bottom line: Wanting more bit-depth is like wanting more megapixels. If image quality is your ultimate goal, then there are more factors at play than a single variable.

There is a perfect exposure for a given photo

No, but there is an optimal signal-to-noise ratio (SNR).

Are you trying to expose a backlit face or throw it into silhouette? The “perfect” exposure is subjective, but from an electronics standpoint, you want the best SNR. It sounds really nerdy, but a strong SNR gives you the most latitude to post-process the image. This is especially true for photographers who ETTR (expose to the right).

Your light meter typically selects an exposure based on an 18% gray, which won’t necessarily correspond to ETTR.

DPReview’s Richard Butler writes, “once captured, the signal-to-noise ratio of any tone can’t be improved upon. It can get worse as electronic noise is added, but if you try boosting or pushing the signal, you end up boosting the noise by the same amount and the ratio stays the same. This is why your initial exposure is so important.”

Even though an ETTR image might look too bright, it’s actually better to record an optimal signal and then reduce the brightness (or apply curve adjustments) in post.
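
A minimal sketch of why, assuming a shot-noise-limited capture where SNR grows with the square root of the photon count:

```python
import math

base = 1_000            # photons at the metered exposure (illustrative)
ettr = base * 4         # ETTR: two stops more light at capture

print(f"metered:       SNR {math.sqrt(base):.1f}")
print(f"ETTR +2 stops: SNR {math.sqrt(ettr):.1f}")

# Pushing the metered file two stops in post multiplies signal and noise
# by the same factor, so its SNR stays ~31.6; the brighter ETTR capture
# genuinely doubles it, and you simply darken it afterward.
```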

Bottom line: Want the best quality image from your equipment? Shoot RAW at your camera’s base ISO and ETTR.

The “equivalent” focal length lens on two different sensor sizes is not equivalent

With all the various sensor sizes, photographers seem obsessed with “equivalence”: how does this camera and lens compare to traditional 35mm? Most photographers know that if a sensor has a 2x crop factor (Micro 4/3), you need to multiply the lens focal length by that factor to get the full-frame equivalent. Lesser known is that the aperture also needs to be multiplied to get an equivalent depth of field (DOF). Tony Northrup explains:

Sensor size affects DOF, with larger sensors yielding shallower depth of field. So to get the same DOF as a 200mm f/5.6 on a full-frame camera, you’d need a 100mm f/2.8 on a Micro 4/3 camera.
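
A tiny sketch of the conversion (the helper function and its name are mine, not from the video):

```python
def full_frame_equivalent(focal_mm: float, f_stop: float, crop: float):
    """Multiply focal length AND f-stop by the crop factor to get an
    equivalent field of view and depth of field on full frame."""
    return focal_mm * crop, f_stop * crop

# Micro 4/3 has a 2x crop factor:
focal, fstop = full_frame_equivalent(100, 2.8, 2.0)
print(f"100mm f/2.8 on Micro 4/3 ~ {focal:.0f}mm f/{fstop:.1f} on full frame")

# Sanity check: the physical aperture diameter is identical in both cases.
print(f"{100 / 2.8:.1f}mm vs {focal / fstop:.1f}mm entrance pupil")
```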

Bottom line: If shallow depth-of-field is your ultimate goal, choose a larger sensor with fast glass.

Larger pixels yield better image quality

In low light, it’s true that larger pixels typically have a higher SNR because they can capture more light. But larger pixels trade resolution (that is, the number of pixels on the subject) for more light per pixel. Interestingly, it turns out that in brightly lit scenes, smaller pixels have a higher SNR and better resolving power.

Although the sensors used in many types of astrophotography have larger pixels (the Kodak KAI-11002 used in the Atik 11000 has 9µm pixels), most current full-frame DSLRs have settled on pixel sizes of about 5–6.5µm. By contrast, microscopy systems can have pixel sizes of 24µm, and Phase One’s 100MP back has 4.6µm pixels. Camera manufacturers select pixel size for specific applications, and there are always trade-offs to be made.

Phase One’s 100MP back has modestly sized pixels but a gigantic sensor, which captures much more light, yielding higher image quality.

Camera                           Pixel size (µm)
Apple iPhone 6                   1.5
Samsung Galaxy S7 Edge           1.7
Phase One 100MP back             4.6
Various current DSLRs            4.51–7.4
Canon 5D                         8.2
Atik 11000 (Kodak KAI-11002)     9
Microscopy systems               24
As you can see in the above table, pixel size is not a good determinant of image quality except in low-light applications.

Sensor size and aperture (and the resulting etendue) are better predictors of image quality. Simply stated, at a given focal length and f-stop, a camera with a larger sensor gathers much more light than one with a smaller sensor. More light, more signal. Better SNR, better image quality.
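
To put rough numbers on it, here’s a sketch comparing total light gathered at the same f-stop and shutter speed, using nominal sensor dimensions:

```python
import math

# Nominal sensor dimensions in mm.
sensors = {
    "Full frame": (36.0, 24.0),
    "APS-C":      (23.6, 15.7),
    "Micro 4/3":  (17.3, 13.0),
}

ff_area = 36.0 * 24.0
for name, (w, h) in sensors.items():
    area = w * h
    stops = math.log2(ff_area / area)
    print(f"{name:10s} {area:6.0f} mm^2  ({stops:.1f} stops less total light)")
# Full frame    864 mm^2  (0.0 stops less total light)
# APS-C         371 mm^2  (1.2 stops less total light)
# Micro 4/3     225 mm^2  (1.9 stops less total light)
```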

Many medium format shooters claim that larger pixels, greater bit depth, and higher resolution yield better quality images. More likely, the larger sensor combined with larger entrance pupils (and glass designed for those sensors) captures much more light than 35mm at the same focal length and exposure.

Bottom line: Don’t worry about the size of your pixels. It’s what you do with them that matters.

Allen Murabayashi is the co-founder of PhotoShelter.

There are 10 comments for this article
  1. Andrew Molitor at 6:41 pm

    I am convinced that the current fad for ISO invariance is an error. There is no way that shooting at base and amplifying in post doesn’t introduce some quantization noise. It may be very low, since we are getting into counting photons territory, but it’s present.

    It’s not easy to see when it’s present, as the effects are very subtle, but there is no particular benefit to trading analog-domain amplification for digital, and there is a slight cost.

    The main ‘benefit’ as far as I can see is a snobbish ‘I know better’ appeal.

    • Bill Ferris at 11:44 pm

      I thought the same thing until I tested it for myself. I used my Nikon D610 to photograph the same subject in constant light at the same f-stop and shutter speed. Those settings kept exposure constant between all photographs. I made a series of exposures at ISOs 100, 200, 400, 800, 1600 and 3200. The bottom line is that the ISO 100 exposure, brightened 4.5 stops in Lightroom, looked identical to the ISO 3200 exposure. I could achieve the same results pushing from the ISO 200 and 400 exposures, as well.

      It makes sense when you think about it. ISO has zero effect on the volume of light collected by the sensor during an exposure. The digital manipulation that ISO applies in-camera is essentially the same image brightening process that occurs in LR (or your image processing app of choice) when you apply “exposure compensation.” Again, no light from the subject is added to the image. Exposure comp in post is simply a digital enhancement of image brightness.

  2. Diane Huntress at 12:44 pm

    Thank you. You are right about these myths and the first paragraph really speaks to me. I admit, as a career photography professional, I didn’t have the knowledge behind these myths. I also appreciate Tony Northrup’s video.

    My formal education is in fine arts and not physics.

    As digital photography became the norm and I looked for books to learn best practices for the transition from film, the books I found only covered photography at a basic level. Can you please recommend a great published resource?

    Continued good work,

    Diane

  3. Pingback: Sony Tidbits… | sonyalpharumors
  4. John at 2:59 pm

    “…, it turns out in brightly lit scenes, smaller pixels have a higher SNR and better resolving power. ”

    Pretty sure both those points are wrong. All other things being equal (except pixel size), smaller pixels have a higher S/N ratio at the same exposures across all brightness levels. Good explanation here:

    https://www.lensrentals.com/blog/2012/02/sensor-size-matters-part-2/

    And, resolution (independent of noise) is about the total number of pixels, so two sensors each with 20MP, but different size pixels, have the same resolution if they are recording the same scene. Smaller pixels only give you more resolution if you are cramming more of them onto the sensor, say comparing a 20MP full-frame sensor with 8.4 micron pixels to a 36MP full-frame sensor with 5.5 micron pixels.

    • Kate at 3:39 am

      Want to add that APS-C sensors are currently maxed out at 24MP because they are at the 3.8–4 micron limit (the sweet spot, according to Stanford researchers). Anything more than that, and noise and dynamic range suffer.

    • JA Horsfall at 5:36 am

      That’s correct, and your last sentence describes what is usually the reality. Compare FF and MFT shooters both using 300mm lenses and standing equidistant from a lion on safari. The reproduction ratio will be identical (i.e. at the sensor plane), but the MFT sensor can display a much higher resolution. Referring to actual cameras, the new OMD can capture 150 line-pairs/mm while the FF Nikon D4 will capture 69 line-pairs/mm, so, as long as the light is not terrible, you have about twice the image resolution with the OLY (…plus you have cropped in camera rather than in post!). The only way for the FF shooter to compensate for that lower resolution is to increase the reproduction ratio with a 600mm lens, and pay the extra £8000 over the OLY 300mm … and then stop down to f/8 to get equivalent DoF.

  5. Bill Ferris at 11:58 pm

    Ugh, aperture does not need to be multiplied by crop factor to get an equivalent depth of field when comparing images made using different cameras.

    First, a clarification is in order: for a given photo, a lens has a focal length, an aperture and a focal ratio or f-stop. Most photographers recognize the focal length and f-stop. Many don’t know that f-stop is the ratio of focal length to aperture. A 100mm, f/2.0 lens has a focal length of 100mm and an aperture of (100/2=50) 50mm.

    Also, the depth of field captured in a photograph is largely determined by two factors: distance to subject and lens aperture.

    As you watch the Northrup video, pay close attention to the photos shown during the stretch from 1:00 to 1:30. With the 70-200mm f/2.8 lens at 100mm on a tripod and Chelsea standing in the same spot, the photos made with the various cameras display the same degree of background blur (depth of field). This is because all the photos in that section of the video were made with a 36mm aperture (100mm/2.8 = 35.7mm) at the same distance from Chelsea.

    The angles of view are all different but the same aperture at the same distance to subject delivers the same depth of field.

    Next, look closely at the images beginning at about 2:30 into the video. When Tony zooms the lens to 200mm, he continues to use the same f-stop. As a result, the lens aperture changes: at 200mm f/2.8, the lens has about a 72mm aperture. Keeping subject distance constant while increasing lens aperture produces a shallower depth of field, and this is demonstrated in the photo made at those settings.

    When Tony changes the f-stop to 5.6, he says that action multiplies the “aperture” by the crop factor of the micro 4/3s camera. But at 200mm f/5.6, the lens has an aperture of (200/5.6=35.7) 36mm. In fact, when he increased the focal length by the crop factor without changing the f-stop, this doubled the lens aperture and produced a more shallow depth of field. When he then increased the f-stop to 5.6, the lens aperture was returned to the original 36mm. As illustrated by the resulting photo, the depth of field again matched that of the original photos.

    Regardless of sensor size, images made at the same distance to subject with the same lens aperture will show the same amount of background blur or depth of field. If you change the lens focal length on one camera to match the angle of view captured by another camera, you also need to adjust the f-stop so the lens uses the same aperture as that used by the other camera’s lens.

    When attempting to create equivalent images of the same subject from the same spot using different format cameras, it is f-stop that needs to be adjusted to keep aperture constant between the cameras.
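
    A quick check of the entrance pupil arithmetic above (aperture diameter = focal length ÷ f-stop):

    ```python
    # Entrance pupil diameter = focal length / f-stop.
    for focal, f_stop in [(100, 2.8), (200, 2.8), (200, 5.6)]:
        print(f"{focal}mm f/{f_stop}: {focal / f_stop:.1f}mm aperture")
    # 100mm f/2.8: 35.7mm | 200mm f/2.8: 71.4mm | 200mm f/5.6: 35.7mm
    ```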

  6. siegfried manietta at 2:40 am

    “Unlike film, digital sensors have a single sensitivity.”
    This is also a MYTH! People who “pushed” film believing they increased its speed merely under-exposed and tried to make up the difference by increasing gamma (contrast)! I have never measured more than a 1/3 stop speed increase based on a film’s fundamental speed point. The SAME physics applies to film as to digital. Silver halide sensitivity is limited by the “4-photon” limit, i.e. it takes a minimum of 4 photo-electrons to create a stable (and therefore developable) sensitivity speck. Only larger or flatter halide crystals (e.g. Kodak T grains) could increase sensitivity. Even Tmax 3200 only delivered 800 ISO (when developed to the ISO standard). Same deal now: bigger sensor elements catch more light. It seems photography has always been electronic and quantised. (digital!)
