Not every outing results in a usable image, and today’s pre-dawn trip to the Golden Gate Bridge proves that point. Before deleting everything on my CF card, I opened the files in Lightroom to see what I had come home with. The ugly image below is actually one of the best of today’s shots. It’s shown here straight out of camera with no adjustments other than scaling for the blog. [The original is a 36.3-megapixel (7360×4912) RAW file from my Nikon D800E, shot with a 28-300mm zoom at 145mm, ISO 100, 1.3 seconds, f/11.]
Aside from all the other problems with this image, I thought I saw an artifact as shown below. See that thin line above the diagonal bridge cables? It looked to me like I had some vertical camera shake or something else that caused a slight double exposure. I haven’t been able to reproduce it convincingly here, but trust me — that’s what it looked like. It was a little windy this morning, but I thought I’d used my best techniques: solid tripod, mirror-up mode, remote release, etc.
Next I zoomed in to 100%, and look what I discovered: there’s a whole additional set of cables or wires draped from the tops of the bridge towers, paralleling the main suspension cables. In all my years of shooting the Golden Gate Bridge, I’d never noticed them.
Below is another version at 100% after some color adjustment and sharpening in Photoshop.
The above may not give the impression that I’ve captured much detail until you check the image below. The yellow rectangle shows the area from which the above 100% crop was made. It represents only 0.5% of the full-image pixels. This is the advantage of a 36.3 megapixel sensor. And it would have been even more dramatic if I’d used the sharper 70-200mm f/2.8 lens.
I’ve lately been talking about photo post-processing workflows on various podcasts, Google+ hangouts, classes and workshops, which has generated a lot of questions and requests for more info. It’s my workflow du jour because I’ve never had a single workflow that’s lasted for more than a few weeks. By the time you read this, it will already be out of date. For that matter, I don’t really have a formal workflow since every image is different. But I’ve developed a default sequence as the starting point for most of the images I shoot. At least for this week.
I spent some time debating whether I would publish this blog post or not. I’m actually a bit nervous about telling everyone about some parts of the workflow — not because I want to keep them secret, but because I fear you will go directly to the shortcuts I recommend without taking the time to understand the concepts behind them. You can get pretty good results without studying the underlying theories, but you’ll be shortchanging yourself. Obviously, I’ve decided to go ahead with this post in the hope the tools not only simplify your post processing but also encourage you to dig deeper into how they work and how they can be further manipulated.
I should also point out that there’s nothing of my own creation in this workflow. Everything I’m using I’ve learned from others who know a lot more about post processing than I do. Towards the end of this article I’ve linked to the original work of my personal workflow gurus so you can learn directly from the sources.
What Are the Prerequisites?
Although much of my workflow uses automated Photoshop actions, some of the steps must be done manually. This is not for Photoshop newbies. Before utilizing any of the tips and tools in this article, you should first learn the following aspects of Photoshop:
layers and layer masks
the Image->Apply Image… menu feature
What Are the Shortcuts?
My workflow du jour uses two Photoshop plugins. The first is Dan Margulis’ free Picture Postcard Workflow Panel shown below on the left. (The name is cute but misleading. It’s not just for a picture-postcard look.) The panel automates many of the steps in Dan’s Picture Postcard Workflow, which if done by hand not only make for a complex process but can’t be performed by mere mortals without pages of notes. You can really get yourself into trouble with this tool, but you can also achieve amazing results in no time at all once you get the hang of it.
The second Photoshop plugin, shown on the right, is the Channels Power Tool (€20). This is a tremendous timesaver for identifying and swapping channels and applying them as layers and masks such as in Lee Varis’ 10-Channel Workflow.
What About Lab Color?
Much of my current workflow is based on the Lab colorspace, which I’ve been using on and off for about 2.5 years. I first wrote about Lab color in 2010, specifically in conjunction with HDR. I stopped using Lab color for a while because I didn’t fully understand it. Although I’m still learning and experimenting, I now believe I’ve achieved a deeper comprehension of the true potential of working in this colorspace.
You’re probably familiar with the RGB colorspaces in which there are three channels: red, green and blue. Your digital camera records images in RGB and your browser displays images using RGB. When an image is printed using offset press inks, it’s converted to the CMYK colorspace with cyan, magenta, yellow and black channels.
Consider how you change the brightness or luminosity of an image in RGB or CMYK. In RGB you increase the levels of all three channels. 100% red, green and blue gives you white. 0% results in black. In CMYK, you reduce the amount of ink in all channels in order to get to white, the color of the underlying paper. But the problem with both RGB and CMYK is that there’s no way to change the luminosity of an image without also changing the hue and/or saturation of the colors.
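As a toy illustration of that coupling (using Python’s standard `colorsys` module, nothing to do with Photoshop): lifting every RGB channel by the same amount brightens a pixel, but it also drains the color’s saturation.

```python
import colorsys

# A fairly saturated red, as 0-1 floats.
r, g, b = 0.8, 0.2, 0.2
_, s_before, v_before = colorsys.rgb_to_hsv(r, g, b)

# "Brighten" it with a simple levels-style lift: add 0.2 to every channel.
r2, g2, b2 = r + 0.2, g + 0.2, b + 0.2
_, s_after, v_after = colorsys.rgb_to_hsv(r2, g2, b2)

print(s_before, v_before)  # 0.75 0.8  -> saturated, medium-bright
print(s_after, v_after)    # 0.60 1.0  -> brighter, but noticeably less saturated
```

The brightness (V) went up, but the saturation dropped from 0.75 to 0.60 as a side effect, which is exactly the entanglement Lab avoids.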
When is Lab Better Than RGB or CMYK?
Lab is just another colorspace like RGB and CMYK, but it separates luminosity from color. There are three channels: The “L” channel controls luminosity but has nothing to do with color. Conversely, the “a” and “b” channels control only color and don’t affect brightness. The range or gamut of the Lab colorspace is huge. Not only can it represent every color of RGB and CMYK, it can also represent colors that are beyond reality. (Try to imagine a yellow that is simultaneously very saturated and as dark as pure black.) Lab is also an extremely accurate and standardized colorspace. Whereas we all need to calibrate our RGB monitors and adjust for our printers, papers and inks, the colors in Lab are precise. For example, when an automobile body shop wants to exactly match the color of your car’s paint, it uses the Lab color specified by the car’s manufacturer.
Unlike RGB and CMYK, the Lab colorspace is designed to approximate human vision, which has a powerful ability to segregate colors. For instance, we can perceive variations in green leaves where there may be very little luminosity difference. The channel structure of Lab allows us to manage both luminosity contrast, with which we’re all familiar, and (separately!) color contrast. The latter concept may be new to you. In RGB we have little opportunity to adjust the contrast of various colors, but in Lab color mode we can do just that. Photographs can often be improved by increasing the color contrast in order to enhance those differences.
Lab’s a and b channels specify opposing-color axes that you can visualize as a color wheel. The a channel specifies the green/magenta axis while the b channel represents the blue/yellow axis. (The image below is a bit misleading because it doesn’t show the variations from the center to the edges. Unlike RGB and CMYK, these are variations of pure saturation, not combined with luminosity.)
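If you’re curious how the numbers actually work, here’s a minimal sRGB-to-Lab conversion in plain Python using the standard published formulas (D65 white point). This is just the underlying math, not anything specific to Photoshop. Notice how any neutral gray lands at a ≈ b ≈ 0, with only L varying:

```python
def srgb_to_lab(r, g, b):
    """Convert sRGB (0-1 floats) to CIE Lab, relative to the D65 white point."""
    # 1. Undo the sRGB gamma curve to get linear light.
    def linearize(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = linearize(r), linearize(g), linearize(b)

    # 2. Linear RGB -> CIE XYZ (sRGB primaries, D65).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    # 3. XYZ -> Lab, scaled by the D65 reference white.
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

print(srgb_to_lab(0.5, 0.5, 0.5))  # mid gray: L about 53, a and b near 0
print(srgb_to_lab(1.0, 0.0, 0.0))  # pure red: large positive a (magenta side of the a axis)
```

Pure red comes out around L = 53, a = 80, b = 67: bright, strongly on the magenta side of the a axis and the yellow side of the b axis, which matches the color-wheel picture above.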
When you add a curves adjustment layer in Lab color you can do things you’d never be able to accomplish in RGB. For example, you can warm up the saturated blue portions of your image without affecting the other less-intense blue areas, and do so without ever creating a layer mask. You can increase the contrast in the greens without shifting the overall cast of the image. It takes a while, but once you get the hang of it, working in the Lab colorspace is extraordinarily powerful.
Why Should I Use Lab Color in My Workflow?
Let’s start with an example. Below are three versions of a single image.
The above image is pretty much straight out of the camera. Note a few problems:
There’s a blue cast in the shadows.
There’s relatively little contrast either in luminosity or color.
The lack of contrast makes the image look flat and lacking in depth and detail.
The second version above is typical of what you can achieve with basic Photoshop skills. The color and contrast are somewhat better, but there’s still a blue cast to the shadows.
The final image above is the result of spending no more than five minutes tweaking in the Lab color space. I was able to remove the blue cast from the shadows as well as increase both the luminance and color contrasts.
Did I correct this image using only plugins and clicking on buttons? No. In this case I had to resort to creating curves in Lab color mode and to changing the opacity of various layers. Although I made all the changes to this image in less than five minutes, I did rely on some of these more fundamental and manual operations. I mention this as an example of why the shortcuts alone are often insufficient and why you need to learn the underlying concepts.
So What’s Your Workflow du Jour?
As of today, here are the steps I follow for many of my images, particularly landscapes, cityscapes and abstracts. Product shots (in which colors need to be accurate), portraits or other photographs that feature faces require a rather different approach and aren’t addressed here.
Default Sharpening (minimal; just the equivalent of what the in-camera JPEG processor might do)
Chromatic Aberration removal
White Balance (overall correction)
HDR Merge: If it’s a multi-exposure HDR image, this is the point at which I use Photoshop’s Merge to HDR Pro and bring a 32-bit merged HDR image back into Lightroom. I make no adjustments while in Photoshop at this stage. [video]
Tonality: The goal here (still in Lightroom) is just to squeeze everything into a narrow dynamic range. RAW files and 32-bit HDR images typically have too wide a dynamic range and will need to be tonally compressed or tonemapped down to 16-bit RGB. I adjust the exposure, contrast, highlights, shadows, whites and blacks to create a low-contrast image in which everything (including highlights and shadows) is away from the edges of the histogram. My objective is a technical one, not to make the image look good yet.
Stop There: I do not use clarity, vibrance, saturation or anything else in Lightroom if I’m going to use the rest of this workflow.
In Photoshop, I start with a 16-bit RGB image from the above steps.
I perform any retouching other than color corrections.
Save this version as a .psd file with all layers in case I need to revert to this stage.
Dan Margulis’ Picture Postcard Workflow (PPW)
Using the Photoshop PPW Panel, I start at the top and work my way down, typically (but not always) using the actions listed below. For each action, I experiment with the visibility and opacity of the layers until I get the look I want. Occasionally I add a layer mask if the effects need to be localized. This is where it’s really helpful to understand what the PPW actions are doing behind the curtain.
Bigger Hammer for tonal contrast.
Switch from RGB to Lab color mode.
Color Boost and/or the Modern Man from Mars for color enhancement and contrast.
If I can’t get the results I want using Dan’s PPW, I return to the pre-PPW saved .psd. I haven’t spent a lot of time in PPW so far, so I don’t worry about throwing away my work.
I use the Generate Preview button on the Channels Power Tool panel to look at all ten channels available from the RGB, CMYK and Lab colorspaces.
I use one or more of the channels to replace others, or as a luminance mask to bring out detail or color.
Once I’ve done the best I can using individual channels, I take the image through Dan’s PPW.
After all of the above (particularly sharpening), all my sensor’s dust spots magically appear, so I often make another cleanup pass at the end.
Where Can I Learn How to Do This?
You can download Dan’s PPW Panel and start using it immediately, but as I wrote at the beginning of this article, you’ll be far more effective if you invest the time to understand how the various actions work and Lab color in particular. The PPW Panel includes extensive Help files, but they’re not the best place to start. The Help file for Dan’s 2012 Sharpen action alone is 23 pages long.
Unfortunately, there’s a huge gap and a steep learning curve separating the use of the shortcuts and truly understanding how they work. Here are my recommendations (in order) of the resources you might use to learn about channels, Lab color and the rest of these workflow concepts:
Mark Lindsay teaches and writes about Lab color and related techniques. Mark has posted some of the best introductory articles, but he hasn’t yet gone very deep into his workflow. His articles include:
The Classic Move — This is the very first thing you should learn how to do in Lab color mode.
Multiply and Layer Mask Technique for Lab Color Mode — Absolutely the second manual Lab mode technique you should learn even if you eventually decide to use Dan’s PPW panel instead. (When you figure out why blurring a layer mask sharpens your image, please explain it to the rest of us!)
Free videos covering Lee’s 10-Channel Workflow and many other topics. At this point you’ll be into swapping channels, using channels for masks and blending modes. You may never make another selection by hand again.
The False Profile Panel (free) is another plugin you’ll occasionally find useful once you get this far.
Download, install and use the latest Picture Postcard Workflow Panel for Photoshop. This automates most of what you’ve learned so far. Click on the Help button and read all of the associated PDF files. You’ll be busy.
Take a break from Dan’s extensive Help files, sit back and watch his videos on Kelby Training. If you’re not already a member, I suggest you sign up for one month (US$24.95) during which you can watch all these courses. It will be some of the best $$ you spend on photography.
For extra credit, now that you know all about channels, take your knowledge back to RGB and explore Tony Kuyper’s treatise on Luminosity Masks. (Note that the first steps in Margulis’ PPW are done in RGB, so Tony’s concepts work well here.)
You’ve got to be kidding! It will take you months to get through the resources I’ve outlined above. And then you’ll want to go over them again to truly nail down the concepts. In any case, this is the point to which my knowledge of Lab color has progressed as of late November 2012. I’m in the process of my second pass through everything. It’s geeky, but it works for me. I hope it works for you as well.
As you have questions or learn of additional resources, please leave a comment and I’ll use it to update this page.
11/25/12: Scott Loftesness and I have been collaborating on this Lab-based workflow stuff for the past few weeks. Each of us has tried one technique or another, then had the other adopt and improve on it. It’s been a valuable, mostly remote collaboration. Because these techniques can be so confusing at first and the learning curve so steep, I recommend you also find someone you can study and learn with. More than once, Scott and I have each needed the other either to explain a technique or at least to remind us of what we’d already learned. Scott has just posted an example of his own workflow du jour, and it contains some very specific steps. I’m still digesting them myself and will see if I can (a) understand them and (b) merge them into my own processes.
11/25/12: Since posting the original article only two days ago, I’ve added the free Advanced Local Contrast Enhancer (ALCE) Photoshop plugin by Davide Barranca to my workflow. It’s like Lightroom’s Clarity feature, but on steroids. Make sure to check out the excellent video tutorials by Marco Olivotto.
Okay, I admit it. I’m a geek. There’s an abundance of evidence for this, not the least of which is that I fine-tune the autofocus for nearly all of my body/lens combinations.
What is autofocus fine-tuning (a.k.a. focus micro adjust)? Pretty much what it sounds like. It’s a feature built into most high-end DSLRs and other interchangeable-lens cameras that allows you to correct for small inaccuracies in autofocusing on a lens-by-lens basis.
Before diving into this, let’s consider the obvious question: Do you need to do this for your camera and lenses? Generally not unless you typically shoot with your lenses at wide-open apertures. Once you stop your lenses down by two stops or more, the depth of field usually increases to the point at which the default autofocus adjustment is fine. But if you do shoot wide open and you find things aren’t as sharp as you expect, then perhaps it’s worth going through this exercise. For me, it began with portraits I shot at f/2 using a 135mm prime lens. Even after carefully aiming my center focus point on the subject’s nearest eye, I could see the sharpest point was actually at least an inch or so behind the front of the eye and the eye itself was slightly soft. An autofocus fine-tuning adjustment was part of the solution. (I say “part” because that Nikon 135mm f/2 DC lens continues to be an untamed animal in my zoo of lenses. The DC “Defocus Control” feature is both a blessing and a curse.)
I first mentioned autofocus adjustments and the LensAlign (shown above) from Michael Tapes Design back in January. This is the tool I’ve used for measuring front- and back-focusing for the past two years. The concept is simple: You focus on a vertical pattern that’s aligned with a sloping scale, make an exposure, then examine the image to see what area on the sloping scale is actually in focus. You don’t really need to spend US$80 for this. You can use almost any high-contrast pattern and just shoot it at an angle. There are even patterns in PDF format that you can download for free. (Bernard Knaepen has posted an interesting and free technique.)
But I find using the LensAlign and other similar systems to be an inaccurate process. It’s usually a challenge to interpret the results, which makes the whole exercise fairly subjective. For the past two months I’ve been working with Nikon to try to understand the autofocus problems of my Nikon D800E, and I’ve been using flat test patterns on the wall. I haven’t been concerned about fine-tuning, since the issue is a variation from one autofocus point to another rather than an overall adjustment.
While I was in the middle of this D800E issue, Michael Tapes released a new tool: a desktop application (Mac and Windows) called Focus Tune. Through December 31, 2012 it costs US$29.95, or just US$19.95 if you already own a LensAlign, and it’s well worth the price. In fact, it’s so useful I suggest you get Focus Tune instead of (not in addition to) a LensAlign.
Compared to LensAlign, Focus Tune works in reverse, sort of like the game of Jeopardy. Rather than a tool that suggests what the autofocus fine-tuning value should be, you start by shooting tests at a range of values and the software tells you which is best. Take a look at the chart below, which was generated by Focus Tune for tests using a Nikon 28-300mm f/3.5-5.6 lens at 100mm f/5.3 (wide open) on my Nikon D600.
The data were collected in two steps. First I shot four high-res JPEGs using each of nine autofocus fine tune settings in steps of 5: -20, -15, -10, etc., through +20. I then ran the images through Focus Tune, looked at the results and saw that the peak was somewhere between -10 and 0. So I went back and shot four images for each of the remaining eight settings in that range. From this and the associated tabular data, I determined that the optimal setting for this lens on this body is -4.
How does Focus Tune do this? You tell the application what area of your test images to analyze and it then comes up with a “sharpness value” for each image. By taking multiple exposures at each setting, Focus Tune can deliver a very consistent result and ignore the outliers, which appear as red dots on the chart. (Note this lens/body combination was already working well. Although I did set the autofocus fine-tune value to -4, I could have shot forever and never noticed this small offset.)
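I don’t know exactly how Focus Tune computes its numbers internally, but the selection logic is easy to picture. Here’s a hypothetical sketch in Python (my own illustration, not Focus Tune’s actual algorithm): given a sharpness score for each test frame, grouped by fine-tune setting, discard the outliers, average what’s left, and report the winning setting.

```python
from statistics import mean, median

def best_fine_tune(scores):
    """scores: {af_fine_tune_setting: [sharpness score per frame, ...]}.
    Returns the setting with the highest outlier-trimmed average sharpness.
    (A hypothetical sketch -- not Focus Tune's actual algorithm.)"""
    averages = {}
    for setting, frames in scores.items():
        med = median(frames)
        # Discard any frame whose score strays more than 25% from the median.
        kept = [s for s in frames if abs(s - med) <= 0.25 * med] or frames
        averages[setting] = mean(kept)
    return max(averages, key=averages.get)

# Four frames per setting; the 200 at setting 0 is a fluke "outlier"
# (the kind of red dot the chart above would show).
samples = {-10: [60, 62, 61, 59],
            -5: [70, 72, 71, 69],
             0: [65, 66, 64, 200],
             5: [55, 56, 54, 53]}
print(best_fine_tune(samples))  # -5
```

Averaging several frames per setting is what makes the result trustworthy: any single exposure can be thrown off by vibration or AF hunting, but a fluke frame can’t drag a trimmed average very far.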
Here’s an example showing a combination that requires a larger adjustment: a Nikon 14-24mm f/2.8 wide open at 18mm on my D800E. When I added tests in the 15-20 range, Focus Tune reported that +18 was the setting that yielded the sharpest results.
Focus Tune is able to extract a great deal of information from the EXIF data in your JPEGs. For example, it knows which autofocus point you used to capture the image. This means Focus Tune can generate other reports including one that compares the accuracy of different autofocus points. It’s been a great help in diagnosing the D800E autofocus problem. Here’s a chart from test images using the 28-300mm f/3.5-5.6 at 100mm and f/5.3. C-C is the center point, while C-L5 is the far-left one.
From the above chart you can see that the focus points become less accurate the farther left you go, and that the far-left one is significantly worse than the others. Note that the centermost C-C and C-L1 are the more accurate “cross-type” sensors, whereas the other four are line-type. Also note this is not a test of lens sharpness: in all cases Focus Tune is evaluating sharpness at the center of the image regardless of which autofocus point was used to set the focus.
Focus Tune is in its first release and it’s still a bit rough around the edges. There are a few user-interface peculiarities, but it doesn’t take long to understand how to use it. If you’re curious, start by taking a look at the introductory video on Michael Tapes’ website. Although Michael demonstrates Focus Tune in combination with a LensAlign, I think you can get just as good results by using a good flat autofocus chart so long as you make sure the sensor plane of your camera is parallel to the target.
I’ll be using Focus Tune as part of my toolkit as I continue to track down the D800/D800E autofocus problems.