Why Dedicated Cameras Will Always Be (Optically) Better than Smartphones

It’s September, which means another generation of Apple iPhones. This year, the iPhone XS (pronounced “ten ess”) adds a slightly larger sensor plus significantly more computing power via the A12 Bionic chip to enhance the phone’s image signal processing.

Apple's A12 chip uses neural networks for better facial detection

But despite the ability to perform 5 trillion operations per second, the iPhone still can’t do something your dedicated interchangeable lens camera can, namely, collect as much light. And when it comes to photography, true exposure (i.e. the total number of photons captured) is directly related to image quality.
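
At a given f-number and shutter speed, the light gathered scales roughly with sensor area, so a quick back-of-the-envelope comparison makes the gap concrete. The phone sensor dimensions below are an assumption for a ~1/2.55"-class sensor, not an official Apple spec:

```python
# Rough light-gathering comparison: full-frame sensor vs. a phone-class sensor.
# The phone dimensions below are assumed, not an official Apple figure.
full_frame_mm = (36.0, 24.0)   # standard full-frame dimensions
phone_mm = (7.0, 5.8)          # assumed ~1/2.55"-class main sensor

full_frame_area = full_frame_mm[0] * full_frame_mm[1]   # 864 mm^2
phone_area = phone_mm[0] * phone_mm[1]                   # ~40.6 mm^2

# At the same f-number, shutter speed, and scene brightness, the photons
# captured scale roughly with sensor area.
print(f"~{full_frame_area / phone_area:.0f}x more light-gathering area")  # ~21x
```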

Of course, that much computing power does enable some powerful workarounds.

Portrait Mode is just a (really good) mask

One of the iPhone’s new features is the ability to alter the depth of field after image capture (the first-generation Portrait Mode offered only one predetermined setting). The user interface employs an f-stop dial, but this is just a skeuomorphic control that uses full-frame aperture values as a rough proxy for the degree of background blur.

An aperture dial allows users to adjust the depth of field after capture

The vast majority of smartphones have fixed apertures, and the small sensor relative to the focal length of the lens means nearly everything in the scene tends to be in focus. So smartphones create depth maps using either two cameras (e.g. the iPhone) or a dual-pixel sensor design (e.g. the Pixel 2), combined with a neural network, to determine foreground and background elements.
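
To make the two-camera approach concrete, here’s a minimal sketch of turning a pair of slightly offset views into a coarse depth (disparity) map with OpenCV’s basic block matcher. The file names are placeholders, and this is only an illustration: the actual iPhone and Pixel pipelines add learned segmentation on top of (or in place of) raw disparity.

```python
import cv2

# Sketch: estimate a coarse disparity map from two slightly offset views.
# Nearer objects shift more between the views, so larger disparity = closer.
left = cv2.imread("camera_a.jpg", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("camera_b.jpg", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)

# Normalize to 0-255 so the map can be saved and inspected as an image.
depth_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("depth_map.png", depth_vis)
```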

Even though the iPhone XS allows you to alter the depth of field in post, it’s not a light field (aka plenoptic) camera, which captures both light intensity and light direction. The iPhone’s fancy processor does allow it to more intelligently mask foreground and background elements while being contextually aware of things like facial features, hair, etc. The aperture dial combined with the processor means it can increase the blur radius of elements outside the mask in real time.
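
As a rough illustration of that masking idea, the sketch below blurs everything outside a foreground mask, with the blur radius driven by a simulated f-stop. The file names, threshold, and f-stop-to-radius mapping are all invented for the example; Apple’s implementation layers learned segmentation of faces and hair on top of its depth data.

```python
import cv2
import numpy as np

def fake_portrait(image, depth, simulated_f_stop, threshold=128):
    # Smaller f-number -> wider simulated aperture -> larger blur radius.
    # The 16/f mapping is arbitrary, chosen only for illustration.
    radius = max(1, int(round(16.0 / simulated_f_stop)))
    kernel = 2 * radius + 1                     # Gaussian kernel size must be odd
    blurred = cv2.GaussianBlur(image, (kernel, kernel), 0)

    # Treat "near" pixels (bright in the depth map) as foreground to keep sharp.
    foreground = (depth > threshold).astype(np.float32)[..., None]
    return (image * foreground + blurred * (1 - foreground)).astype(np.uint8)

img = cv2.imread("photo.jpg")
depth = cv2.imread("depth_map.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("f2.png", fake_portrait(img, depth, 2.0))    # strong background blur
cv2.imwrite("f16.png", fake_portrait(img, depth, 16.0))  # nearly everything sharp
```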

The software will undoubtedly continue to improve, but you might still run into errors that you would never get with optical capture.

Computational photography helps make excellent photos, but…

Computational photography overcomes true exposure limitations through a number of tricks, including image stacking and AI. Image stacking (most commonly used for HDR) not only preserves highlights and enhances shadow detail, but it can also be used as a noise reduction strategy via median filtering.
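
As a sketch of the stacking idea (burst file names assumed, and this is not any vendor’s actual pipeline), a per-pixel median across aligned frames suppresses random noise and rejects outliers such as hot pixels or someone walking through the frame:

```python
import cv2
import numpy as np

# Load a burst of frames of the same (static, already aligned) scene.
frames = [cv2.imread(f"burst_{i}.jpg").astype(np.float32) for i in range(8)]
stack = np.stack(frames, axis=0)             # shape: (num_frames, H, W, 3)

# The per-pixel median keeps the value most frames agree on, discarding
# the noisy outliers that a plain average would smear into the result.
denoised = np.median(stack, axis=0).astype(np.uint8)
cv2.imwrite("stacked_denoised.png", denoised)
```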

AI brings a whole other toolbox of options to computational photography. Although the use of AI in computational photography is still nascent, we’ve already seen stunning proofs of concept like NVIDIA’s noise reduction and watermark removal.

Computational photography could, in theory, combine multiple photos together to get the perfect group shot where everyone is smiling (the iPhone uses neural networks for enhanced facial detection). It could also theoretically take a single image and reconstruct parts of the image to make it appear as if everyone is smiling. This type of technology presents an ethical consideration (especially for photojournalists), but it also dramatically expands the creative possibilities.

When will traditional camera manufacturers embrace a software-centric approach to photography?

Companies like Nikon, Canon, Sony, Fuji and Hasselblad have continued to develop incredible camera systems, but for the most part they continue to focus on single-exposure RAW capture. In this paradigm, toning adjustments and other image manipulation occur outside the camera. Image stacks and panoramas have to be captured as individual exposures with manual triggering of the shutter.

But for many photographers, speed is of the essence, and the ability to load something akin to a Lightroom preset into the camera or allow for in-camera image stacking (with instantaneous noise reduction for example) would be pretty revolutionary. Standalone cameras continue to be dedicated hardware devices rather than more flexible computing machines with light capture capabilities.

It’s hard to imagine Apple making an interchangeable lens system, and equally difficult to imagine Nikon/Canon/Sony developing computational photography software and chips with onboard neural network capabilities. But just imagine the possibilities…

This article was written by Allen Murabayashi, co-founder of PhotoShelter.

There are 4 comments for this article
  1. Pingback: Why Dedicated Cameras Will Always Be (Optically) Better than Smartphones – PhotoShelter Blog – The Click
  2. Keith L. at 8:35 pm

    Interesting articles. I used to freelance for a Gannett paper until I moved. At the time I moved, they were cutting photographers left and right until they had one, and they bought iPhones for their reporters because “a reporter is going to be there to write the story and an iPhone is as capable now as a DSLR.” UGH…when will upper management realize that with printing presses today churning out color daily, the need for properly exposed, quality images will always supersede a smartphone pic?

  3. Tom Casey at 4:35 pm

    I do admit that I sometimes use my iPhone with the Lightroom camera app to get a raw image when I don’t have one of my cameras handy. But I think that one good thing to come from smartphone photo capability is that at least all those millions of “snapshots” will look pretty good. I remember the old Kodak tiny-negative era and the mush everyone was creating.

  4. lawrence at 11:50 am

    Very good points Allen,

    so much so that i’m compelled to reply.

    I’d like to imagine a digital camera where a “viewport” is opened up to additional computational power, sort of like an external graphics card, which would then allow intelligence to be applied to the photograph at the point of capture. And applications which can speak “viewport” lingo could then apply their smarts right there.

    technically i think it’s a battle of wills — would Nikon open up their expeed to external tweaking or interface? or Bionz? or DIGIC?

    personally, i think Canon has a much higher chance, since it’s already opened up some smarts which make it possible to apply a 3rd party firmware (magic lantern / chdk). but work is slow on this front.

    so what we may need is a “hackable” Nikon, which may have a hardware interface (PCIe?) that an external processor can tap into.

    maybe ricoh is onto something here…
