Pro Photographers Should Pay Attention to the Google Pixel 3

Google has officially launched its incredible “Night Sight” feature in the Pixel 3 camera app. Computational photography pioneer and Google Distinguished Engineer Marc Levoy co-wrote a blog post describing all the considerations that went into developing the jaw-dropping technology that allows the Pixel to see in the dark. It’s worth a read.

Left: iPhone XS. Right: Pixel 3 Night Sight. Photos: Google

If you haven’t been following along, the camera phone has become ground zero for computational photography, a fundamentally different approach to making pictures. Historically, photography has followed a single-lens, single-exposure paradigm, and the first two decades of digital photography with DSLRs and mirrorless cameras have followed the same course.

But miniaturization in smartphones meant smaller sensors and lens assemblies. Playing by the historical approach meant image quality would noticeably lag behind dedicated cameras.

Modern smartphones are incredibly powerful computers loaded with sensors (multiple cameras, barometers, inertial MEMS and more) and networking components (LTE, WiFi, Bluetooth). Combined with an application-centric approach (your DSLR cannot run custom programs, Magic Lantern notwithstanding), smartphones are the perfect vehicle for pushing a computational approach to photography.

So what’s the big deal with Night Sight? A few key concepts and features to understand:

The goal was to see in the dark

Google published this handy lux range chart to give you a sense of illuminance values in our daily lives. The sensitivity goal was to “improve picture-taking in the regime between 3 lux and 0.3 lux, using a smartphone, a single shutter press, and no LED flash.”

According to Google, 3 lux is roughly equivalent to a sidewalk lit by a street lamp. 0.3 lux is “can’t find my keys on the floor” territory. Light sensitivity also varies dramatically with age: the average 60-year-old’s retina receives only about a third as much light as a 20-year-old’s. This explains why older people use phone flashlights to read restaurant menus, and why taking a picture with Night Sight might be more effective.

The camera takes multiple pictures when you hit the shutter

This is different from the burst feature, which yields a series of user-viewable images.

Under “normal” lighting conditions, the Pixel takes photos before and after the user hits the shutter to reduce shutter lag and enable HDR. The longest shutter speed in this mode is 66ms. By combining under and overexposed images of the same scene, software can preserve highlights and boost shadows.
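
Google’s HDR+ pipeline merges raw bursts with its own robust algorithm, but the basic idea of blending under- and overexposed frames can be sketched with OpenCV’s Mertens exposure fusion — a different, off-the-shelf technique used here purely for illustration, with a hypothetical helper name and filenames:

```python
import cv2

def fuse_exposures(paths):
    """Blend a bracketed set of exposures into a single image.

    This is Mertens exposure fusion, not Google's HDR+ merge --
    just an off-the-shelf way to combine under- and overexposed
    frames so highlights are kept and shadows are lifted.
    """
    frames = [cv2.imread(p) for p in paths]            # 8-bit BGR frames
    fused = cv2.createMergeMertens().process(frames)   # float result, roughly 0..1
    return (fused.clip(0, 1) * 255).astype("uint8")

# Usage with hypothetical bracketed shots:
# result = fuse_exposures(["under.jpg", "normal.jpg", "over.jpg"])
# cv2.imwrite("fused.jpg", result)
```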

Night Sight waits until the user hits the shutter, then captures a sequence of longer exposures. Stacking multiple exposures with median blending yields lower noise.
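
As a toy illustration of why stacking helps (Google’s actual merge works on raw bursts and is considerably more sophisticated), a per-pixel median of already-aligned frames suppresses the random noise that differs from frame to frame; `median_stack` below is an invented helper, not anything in the Pixel:

```python
import numpy as np

def median_stack(aligned_frames):
    """Per-pixel median of already-aligned frames.

    Random sensor noise differs from frame to frame, so a median (or
    mean) across the stack converges toward the true scene value:
    averaging N frames cuts noise roughly by the square root of N,
    and a median additionally rejects outliers like passing headlights.
    """
    stack = np.stack(aligned_frames).astype(np.float32)
    return np.median(stack, axis=0).astype(np.uint8)

# night = median_stack(aligned_frames)  # e.g. 15 aligned short exposures
```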

The camera detects motion and adjusts shutter speed accordingly

Slow shutter speeds make motion more visible even with optical image stabilization (OIS), and even if the camera is on a tripod. The Pixel pre-analyzes the scene to determine if something is moving, then shortens the shutter speed to minimize it.

Left: 15-frame burst captured by one of two side-by-side Pixel 3 phones. Center: Night Sight shot with motion metering disabled, causing this phone to use 73ms exposures. The dog’s head is motion blurred in this crop. Right: Night Sight shot with motion metering enabled, causing this phone to notice the motion and use shorter 48ms exposures. This shot has less motion blur. (Mike Milne)

In low light, there are practical limits to how short the shutter speed can be set, but it’s still a “smart” solution built on computation.
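
A crude stand-in for the idea — score how much the scene changes between viewfinder frames and shorten the per-frame exposure when the score is high — might look like the sketch below. Google’s motion metering uses optical flow and is far more nuanced; the function name and threshold here are invented, and the 73ms/48ms values simply echo the caption above:

```python
import numpy as np

def choose_exposure_ms(prev_frame, curr_frame,
                       slow_ms=73.0, fast_ms=48.0, threshold=6.0):
    """Pick a per-frame exposure based on how much the scene is moving.

    prev_frame / curr_frame: grayscale viewfinder frames as float arrays.
    The mean absolute difference is a crude motion score; the threshold
    and exposure values are purely illustrative.
    """
    motion_score = float(np.mean(np.abs(curr_frame - prev_frame)))
    return fast_ms if motion_score > threshold else slow_ms
```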

Aligning and merging images is a computational wonder

Let’s step back for a moment and consider how time-consuming it used to be to assemble images into a panorama. Image alignment algorithms date back to the advent of computer vision in the 1980s. Improvements in algorithms, along with Moore’s Law, now let us stitch panoramas and stacks on our phones in near real time.

Since the whole approach to Night Sight is predicated on image stacking, image alignment is incredibly important. And although the problem might seem simple at first glance, image alignment and stitching are difficult because of camera and subject motion.
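
To get a feel for what “aligning” means in practice, here is a minimal sketch using ORB feature matching and a RANSAC homography in OpenCV. It corrects handheld camera shake between two frames, but says nothing about moving subjects, which real burst pipelines must handle locally — so treat it as a stand-in, not Google’s method:

```python
import cv2
import numpy as np

def align_to_reference(reference, frame, max_features=500):
    """Warp `frame` into the coordinate system of `reference`.

    ORB keypoints are matched between the two frames and a homography
    is fit with RANSAC. This compensates for global camera motion
    between handheld exposures; subject motion needs separate handling.
    """
    ref_gray = cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY)
    cur_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(max_features)
    kp_ref, des_ref = orb.detectAndCompute(ref_gray, None)
    kp_cur, des_cur = orb.detectAndCompute(cur_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_cur, des_ref), key=lambda m: m.distance)

    src = np.float32([kp_cur[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = reference.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```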

The camera has a hyperfocal setting when autofocus won’t work

Any photographer working in low light knows the frustrations of a hunting autofocus system. The Pixel “solves” this problem with a “near” and “far” button. The near option pre-focuses at 4 feet, and the far option focuses at the hyperfocal distance of 12 feet. At this distance, everything from 6 feet to infinity should be in focus. It’s a smart optical solution in a highly computational system.

Any photographer can easily determine the hyperfocal distance of a lens. But do you? And more importantly, do you regularly use it as a focusing technique?
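
For the record, the standard formula fits in a few lines; the 28mm f/8 example values below are mine, not anything tied to the Pixel:

```python
def hyperfocal_distance_m(focal_length_mm, aperture, coc_mm=0.03):
    """Hyperfocal distance in meters.

    H = f^2 / (N * c) + f, where f is focal length, N the f-number and
    c the circle of confusion (0.03 mm is a common full-frame value).
    Focusing at H keeps everything from H/2 to infinity acceptably sharp.
    """
    h_mm = focal_length_mm ** 2 / (aperture * coc_mm) + focal_length_mm
    return h_mm / 1000.0

# Example: a 28mm lens at f/8 on full frame (illustrative values).
# hyperfocal_distance_m(28, 8) -> ~3.3 m; focusing there keeps roughly
# 1.65 m to infinity in acceptable focus.
```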

The images are damn good

A full-frame sensor has a surface area roughly 34 times larger than the Pixel 3’s 1/2.55″ Sony IMX363 sensor. The image quality won’t challenge a Nikon D5 with fast glass, but it is staggeringly good compared to any phone and many dedicated cameras. More importantly, Night Sight represents a consumer-ready, easy-to-use solution to low-light photography.

Comparison of sensor sizes between a full-frame camera and the Pixel 3’s 1/2.55″ sensor. Full frame is 34x larger than the Pixel.
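
A back-of-the-envelope check, using approximate active-area dimensions for the IMX363 (the exact figures vary by source, which is why quoted ratios hover in the mid-30s):

```python
# Rough sensor-area comparison. Full frame is 36 x 24 mm; a 1/2.55"-type
# sensor such as the IMX363 has an active area of roughly 5.6 x 4.2 mm
# (approximate figures -- the precise ratio depends on the dimensions used).
full_frame_mm2 = 36.0 * 24.0        # 864 mm^2
pixel3_mm2 = 5.6 * 4.2              # ~23.5 mm^2
print(full_frame_mm2 / pixel3_mm2)  # lands in the mid-30s, consistent with ~34x
```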

Dedicated cameras have OIS, pixel shifting and some multiple-exposure capability, but no camera from a major manufacturer uses computational methods as the cornerstone of improving image quality. Dedicated cameras might be optically superior, but that doesn’t necessarily mean they will produce better images.

Pay attention, pros!

It seems antiquated and somewhat naive for the “pro” market to be so focused on form factor (i.e. mirrorless) as the cause célèbre in camera innovation while much of the technology sector is talking about A.I. And while it’s true that professional and consumer needs are different (you probably won’t find an Instant Pot in a Michelin-starred restaurant), I can’t help but feel that the pro market is missing the point by celebrating something like Lightroom being built into the Zeiss ZX1.

It’s not that the Pixel 3 will replace your 5D or your D850. It’s that the rate of annual improvement within the smartphone ecosystem is lapping the innovation of the big camera manufacturers. It’s not that the Pixel 4 will catch up with the image quality of your camera. It’s that it will be able to do things your dedicated camera cannot. And when that happens, which camera will you use to get the shot?


Allen Murabayashi is the co-founder of PhotoShelter.
