When I restarted my interest in photography a little less than two years ago, my friend Scott Loftesness was already experimenting with HDR (high dynamic range) imaging. Scott was in turn following the groundbreaking work of Trey Ratcliff, and I jumped onto Trey’s bandwagon as well. But like anyone else who has ventured into the world of HDR, I’ve struggled to perfect a workflow that yields pictures with the benefits of HDR (the ability to render a wide range of luminosity) without the over-the-top color artifacts we’ve all seen from Photomatix and other HDR processors. My quest for a more realistic look recently took a new turn when I added a phase to my workflow: color correction in the Lab color space in Photoshop, based on what I learned from Dan Margulis’s online tutorials.
Getting deep into Lab color isn’t for the faint of heart. I have a decent background in this technology, and I’m still struggling with the concepts. (I studied cinematography at the NYU Graduate Institute of Film and TV with Beda Batka, then learned the fundamentals of color correction and wrote software for motion picture film processing at DuArt Film Labs in NYC in the early ’70s.) In a nutshell, the primary advantage of working in the Lab color space (instead of RGB or CMYK) is that luminosity (the ‘L’ channel) is entirely separate from color (the ‘a’ and ‘b’ channels). Furthermore, modifying the ‘a’ and ‘b’ curves, combined with Photoshop’s ‘Blend If’ feature on adjustment layers, allows you to control the saturation of very specific portions of the color palette.
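For the programmatically inclined, the separation Lab provides is easy to see in a few lines of Python. This is a rough sketch of the standard sRGB-to-CIELAB conversion (D65 white point), purely for illustration, not anything from my Photoshop workflow. Notice how a neutral color lands at roughly a = b = 0 with all its information in ‘L’, while pure yellow lands strongly in the positive ‘b’ channel.

```python
# Sketch of the standard sRGB -> CIELAB conversion (D65 white point).
# Illustrative only -- Photoshop does this for you when you switch modes.

def rgb2lab(r, g, b):
    """Convert sRGB values in 0..1 to (L, a, b)."""
    # 1. Undo the sRGB gamma curve to get linear light.
    def lin(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)

    # 2. Linear RGB -> CIE XYZ (D65 primaries).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    # 3. XYZ -> Lab, relative to the D65 white point.
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / 0.95047), f(y / 1.00000), f(z / 1.08883)
    L = 116 * fy - 16       # luminosity only
    a = 500 * (fx - fy)     # green (-) to magenta (+)
    b2 = 200 * (fy - fz)    # blue (-) to yellow (+)
    return L, a, b2

print(rgb2lab(1.0, 1.0, 1.0))  # white: L = 100, a and b near 0
print(rgb2lab(1.0, 1.0, 0.0))  # yellow: b strongly positive (~ +94)
```

Because ‘L’ is computed only from luminance, you can reshape contrast there without shifting hue, and vice versa; that is the whole appeal of the mode.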
This weekend I took advantage of a special offer from BorrowLenses.com and rented a Nikon D3S body just to see what a $5,200 camera was all about. I also wanted to check out its ISO 12,800 sensor — yes, it’s amazing — and its ability to bracket for a wide dynamic range. I went looking for challenging locations and settled on Muir Woods, only 20 minutes from home.
Muir Woods is a beautiful place, but a tremendous challenge to photographers. The dynamic range of light is phenomenal: from brilliant sunlight to deep, deep shadows in redwood trees that are already quite dark on their own. Only HDR gives you the opportunity to simultaneously capture blue sky and the details of tree trunks in shadows. The above photo is a merge of seven separate exposures, each one f-stop apart. (Nikon D3S in DX mode at ISO 200, Sigma 10-20mm, f/4-5.6 at 10mm f/5.6) If you’re new to HDR, notice that the sky is blue, not an overexposed white, while you can still see detail in the darkest part of the tree trunks. If I didn’t tell you, would you know this was an HDR image? Does it have those weird artifacts you typically associate with HDR? Note that no masks were used. Only global Lab changes were applied. Below are the two extreme originals.
My workflow (as of today) for images like this is as follows:
- Import RAW images into Lightroom 3.
- Apply camera calibration. (I use ColorChecker Passport and create a new profile for each location.)
- Merge to HDR Pro in Photoshop.
- Use the ‘Flat’ or ‘Photorealistic Low Contrast’ preset. (It won’t look good yet!)
- Change the mode to Lab Color.
- Apply Dan Margulis’ ‘no brainer’ curve in an adjustment layer.
- Increase contrast in the ‘L’ channel.
- Make final color adjustments to the ‘a’ and ‘b’ channels. (The leaves are actually yellow, falling almost entirely in the positive values of the ‘b’ channel.)
- Save back to Lightroom 3.
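The Lab steps above have a simple numeric reading. Here is a sketch of my own (the helper name and numbers are hypothetical, and this is not Margulis’s actual curve): steepening the ‘a’ and ‘b’ curves around their zero point is, to a first approximation, multiplying those channels by a constant, which boosts saturation without touching ‘L’, while a ‘Blend If’ restriction amounts to applying that adjustment only where a channel falls in a chosen range.

```python
# Illustrative sketch: a steeper a/b curve acts like a multiplier, and a
# 'Blend If'-style gate limits where it applies. Hypothetical helper name;
# Lab values are (L, a, b), with a and b roughly in -128..127.

def steepen_ab(lab, k=1.3, blend_if=None):
    """Boost saturation by scaling a and b; L is left untouched.

    blend_if, if given, is a (lo, hi) range on the b channel: pixels
    whose b falls outside it are returned unchanged -- a crude stand-in
    for Photoshop's Blend If sliders, without the feathered transition.
    """
    L, a, b = lab
    if blend_if is not None:
        lo, hi = blend_if
        if not (lo <= b <= hi):
            return lab
    clamp = lambda v: max(-128.0, min(127.0, v))
    return (L, clamp(k * a), clamp(k * b))

# A muted yellow-green leaf pixel: color intensifies, luminosity doesn't.
print(steepen_ab((62.0, -8.0, 30.0), k=1.5))  # -> (62.0, -12.0, 45.0)
# Gate the boost to yellows (positive b): a bluish pixel passes through.
print(steepen_ab((40.0, 5.0, -20.0), k=1.5, blend_if=(10, 127)))
```

In RGB, by contrast, any multiplier touches brightness and color at once, which is exactly the coupling this workflow is trying to escape.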
I’ve developed (and am still developing) this workflow empirically, and I’m reverse engineering it to try to understand what’s really going on. The idea is to merge the originals into a rather flat (color-wise) image, then work in Lab to recover the colors. Lab is particularly good when starting with these flat, unsaturated images. This approach seems to avoid many of the artifacts that Photomatix and Photoshop HDR Pro create if you use them alone to render your final composite image. So far, so good.