…and replace them with lossy DNG proxies? Would I ever see a visual difference?
So, a little background:
- Lightroom & the free DNG Converter added the ability to apply lossy compression when creating DNG files.
- When you apply this compression, your raw data get mapped from a higher bit depth (10-14 bits per channel) down to 8 bits per channel.
- That sounds horrible (“what about my highlight & shadow data?!”), but the mapping (quantization) is done cleverly, after a perceptual curve has been applied. (See the nerdy footnote, and the little sketch after this list, if interested.)
- You retain the same white balance flexibility you always had.
- You save a lot of disk space—between 40% & 70% in my experience. (You can also elect to save at a reduced resolution, in which case you’ll obviously save a lot more.)
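If you want a feel for why quantizing in a perceptual space is so much gentler on shadows, here's a minimal numpy sketch. The sensor values are made up, and a plain 2.2 gamma stands in for whatever tone curve lossy DNG actually uses; it illustrates the idea, not the codec's real math.

```python
import numpy as np

# Hypothetical 14-bit linear sensor values (0..16383), from deep shadow to clipping.
linear = np.array([4, 16, 64, 256, 1024, 4096, 16383], dtype=np.float64)
norm = linear / 16383.0

# Naive approach: quantize the linear data straight to 8 bits.
q_linear = np.round(norm * 255.0) / 255.0

# Perceptual approach (what the footnote describes): apply a perceptual curve,
# quantize to 8 bits, then undo the curve when decoding. A plain 2.2 gamma is
# used here as a stand-in for whatever curve lossy DNG actually applies.
gamma = 2.2
q_perceptual = (np.round((norm ** (1.0 / gamma)) * 255.0) / 255.0) ** gamma

# Relative error in the deep shadows is far smaller on the perceptual path.
for x, a, b in zip(norm, q_linear, q_perceptual):
    print(f"value {x:.6f}   linear-8-bit err {abs(a - x) / x:6.1%}   "
          f"perceptual-8-bit err {abs(b - x) / x:6.1%}")
```

On these made-up numbers, the deepest shadow value rounds straight to zero on the linear path, but survives within a few percent on the perceptual one.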
What I’ve always wondered—but somehow never got around to testing—is whether I’d be able to see any visual differences between original & proxy images. In short, no.
Here’s how I tested:
- I started with a typical photo taken by my wife—one with really under- and over-lit areas.
- I imported the original file into Lightroom, then exported a copy as DNG with lossy compression, then imported the copy back into LR (so that the original & proxy would sit side-by-side).
- Just to stress-test, I cranked up the Shadows to +100 and cranked down Highlights to -100.
- Then to stress things further, I used a brush to open up the shadows by another full stop.
- I copied & pasted settings from the original to the proxy.
- Failing to notice any visual differences at all, I finally opened the original & proxy versions as layers in Photoshop and set the blending mode of the top layer to Difference in order to highlight any variation between the two versions.
- Having failed to see any difference even then (the result of the Difference blend appeared to be pure black, i.e. identical pixels), I applied an Exposure adjustment layer and, just for yuks, cranked it up 14 stops. (There's a sketch after this list if you'd like to run the same check in code.)
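If you'd rather not fire up Photoshop, the Difference-plus-Exposure check is easy to approximate in a few lines of Python. The filenames below are hypothetical (identically processed exports of the original and the proxy), and the +14-stop boost is approximated as a multiply by 2^14; it's a sketch of the check, not Photoshop's exact math.

```python
import numpy as np
from PIL import Image

# Hypothetical filenames: identically processed exports of the original raw
# and of the lossy-DNG proxy, rendered from Lightroom with the same settings.
a = np.asarray(Image.open("original.tif"), dtype=np.float64)
b = np.asarray(Image.open("proxy.tif"), dtype=np.float64)

# Photoshop's Difference blend mode is a per-pixel absolute difference.
diff = np.abs(a - b)

# A +14-stop Exposure boost is roughly a multiply by 2**14; anything still
# invisible after that is, for practical purposes, identical.
boosted = np.minimum(diff * 2**14, 255)

print("max difference before boost:", diff.max())
print("pixels that differ at all:  ", np.count_nonzero(diff), "of", diff.size)
Image.fromarray(boosted.astype(np.uint8)).save("difference_x14_stops.png")
```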
You can download the layered TIFF if you really want to compare things for yourself.
I repeated the experiment with other images, including some with subtle gradients (e.g. a moonrise at sunset). The results were the same: unless I was being pretty pathological, I couldn’t detect any visual differences at all.
I did find one case where I could see a difference between the lossy & lossless versions: My colleague Ronald Wotzlaw shot a picture of the moon, and if I opened up the exposure by more than 4 stops, I could see a difference (screenshot). For +4 stops or less, I couldn’t see any difference. Here’s the original NEF & the DNG copies (lossless, lossy) if you’d like to try the experiment yourself.
No doubt a lot of photographers will tune out these findings: “Raw is raw, lossless is lossless, the end.” Fine, though I’m bugged by some photogs’ fetishistic, gear-porn qualities (the kind of guys who insist on getting a giant lens & an offsetting full-frame camera) & old-wives’-tale mentalities (“You can’t reformat your memory card with your computer: this one time, in 2003, my buddy tried it and it made his house burn down…”).
So, to each his or her own. As for me, I’m really, really encouraged by these findings, and I plan to start batch-converting my DNGs to be “lossy” (a great misnomer, it seems).
——
Nerdy footnote: Zalman Stern spent many years building Camera Raw & now works with me on Google Photos. He’s added a bit more detail about how things work:
“Downsampling” is reducing the number of pixels. Reducing the bit-depth is “quantizing.” The quantization is done in a perceptual space, which results in less visible loss than doing quantization in a linear space. Raw data off the sensor is linear, whereas the data going into a JPEG has a perceptual curve applied. (“Gamma” and sRGB tone curves are examples of the general idea behind perceptual curves.)
Dynamic range should be preserved and some small amount of quantization error is introduced. (Spatial compression artifacts, as in normal JPEG, are a different form of quantization error. That happens with proxies too.) Quantization error is interesting in that if it is done without patterning, it takes a very large amount of it to be visible.
The places you’d look for errors with lossy raw technology are things like noise in the shadows and patterning via color casts in highlights after a correction. That is, the quantization error gets magnified and somehow ends up happening differently for different colors.
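Zalman’s point about patterning is easy to demonstrate with a toy example: quantize a smooth synthetic gradient coarsely, once with and once without a little noise (“dithering”) added first. This is not what the DNG codec does internally; it’s just an illustration of why unpatterned error is so much harder to see than banding.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth synthetic gradient, like a sky or a moonrise glow, in 0..1.
gradient = np.tile(np.linspace(0.2, 0.3, 2048), (64, 1))

# Coarse quantization with no dithering: the error is small but strongly
# patterned, so it shows up as visible banding.
levels = 32
banded = np.round(gradient * levels) / levels

# The same quantization with a little noise added first ("dithering"): the
# error is about the same size, but with no pattern it reads as faint grain.
noise = rng.uniform(-0.5, 0.5, gradient.shape) / levels
dithered = np.round((gradient + noise) * levels) / levels

print("max error, banded:   ", np.abs(banded - gradient).max())
print("max error, dithered: ", np.abs(dithered - gradient).max())
print("distinct levels in one row, banded:   ", np.unique(banded[0]).size)
print("distinct levels in one row, dithered: ", np.unique(dithered[0]).size)
```

The banded version collapses the gradient into a handful of distinct steps, while the dithered version spreads the same-sized error into unpatterned grain, which is exactly the kind of error that takes a very large amount to become visible.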