In a to-be-published edition of All About the Gear (#9, Canon 70D) I explain the differences between all the autofocus technologies: contrast, phase-detect and cross-type, and Canon’s new Dual Pixel system. It’s a complicated and important enough topic that I’ve decided it’s worth a blog post all its own.
[There are some autofocus technologies I won’t be covering such as infrared or sonic systems since these aren’t commonly used on today’s DSLRs and mirrorless cameras.]
Contrast Autofocus (Live View and Video)
The simplest autofocus technology — at least the simplest to understand — is contrast autofocus. In a video or mirrorless camera, or a DSLR with its mirror up and out of the way, the image is projected through the lens onto the image sensor. You or the camera select an autofocus region that might be on the order of one hundred thousand pixels. The processor evaluates the color and brightness variations between adjacent pixels to determine the contrast. When a region is at its sharpest focus, the adjacent pixels (eg, on either side of a line or edge in the image) will be at maximum contrast.
The only way the camera can determine whether the image is in focus in the desired region is to send a signal to the autofocus motor, telling it to shift the focus of the lens, and then see if the contrast increases or decreases. If it increases, the camera tells the motor to continue changing the focus in the same direction. If the contrast decreases, it shifts the focus in the other direction. The camera and motor keep going in that direction until the contrast starts to decrease, meaning they’ve gone too far. At that point the camera tells the motor to focus in the opposite direction. This hunting can go on for some time until (hopefully) the camera settles down and decides the image is in focus and ready for exposure.
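The hunt described above is essentially a hill climb on a contrast metric. Here’s a minimal sketch of the idea in Python. Everything in it is a simplifying assumption — a one-dimensional “AF region” containing a single edge, and a box blur standing in for defocus — not how any camera’s firmware actually works.

```python
# Toy model of contrast-detect autofocus (illustrative assumptions throughout:
# a 1-D "AF region" containing one edge, and a box blur standing in for defocus).

def contrast(region):
    """Focus metric: sum of squared differences between adjacent pixels."""
    return sum((a - b) ** 2 for a, b in zip(region, region[1:]))

def simulate_lens(focus_position, true_focus=50, size=120):
    """Render a step edge, blurred more the farther focus is from true_focus."""
    k = abs(focus_position - true_focus) + 1           # blur radius
    edge = [0.0] * (size // 2) + [1.0] * (size // 2)   # one sharp edge
    return [sum(edge[max(0, i - k): i + k + 1]) /
            (min(size, i + k + 1) - max(0, i - k)) for i in range(size)]

def contrast_autofocus(start=0, step=4):
    """Hunt for peak contrast: step, compare, reverse and narrow on overshoot."""
    pos, direction = start, 1
    best = contrast(simulate_lens(pos))
    while step >= 1:
        candidate = contrast(simulate_lens(pos + direction * step))
        if candidate > best:            # sharper: keep moving the same way
            pos += direction * step
            best = candidate
        else:                           # went too far: reverse and refine
            direction, step = -direction, step // 2
    return pos

print(contrast_autofocus())  # → 50, the simulated point of sharpest focus
```

Notice that the loop only ever learns “better” or “worse” after each move — it never knows the direction or distance to true focus in advance, which is exactly why contrast autofocus hunts.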
Because contrast autofocus can’t determine the direction (front or back) in which an object is out of focus, it cannot reasonably be used for tracking focus on objects that move towards or away from the camera without more hunting. But while contrast autofocus may be slow, it can also be extremely accurate because it’s using data collected directly from the image plane. Contrast autofocus also works reasonably well in low light. So long as the camera can detect luminosity or color contrast, it can focus.
DSLRs: It’s All Done With Mirrors
We tend to think of DSLRs as having only one sensor: the image sensor that has all those megapixels. There’s a mirror in front of the image sensor to reflect light up, through the pentaprism and into the viewfinder while we focus and compose the image. But what you may not be aware of is that the mirror is slightly translucent to let some light pass through to a second mirror that in turn reflects light down into a second sensor at the bottom of the camera’s mirror box. It’s this one that is used for autofocus when you’re not in Live View or video mode. Both of these mirrors need to get up and out of the way before the exposure can be made. No wonder there’s such a loud slap every time this happens.
The autofocus sensor is composed of discrete autofocus points, typically ten to 100 of them. Their positions are approximately represented by those small squares you see superimposed in your DSLR’s viewfinder.
The dedicated autofocus sensors in your DSLR use phase detection (PD) to determine focus. PD sensors compare the light from two different angles coming out of the rear element of your lens. This comparison allows your camera to determine (a) in which direction it has to shift the focus of the lens, and (b) exactly how far to shift it. When you focus a good lens using phase detection, you can almost hear it snap into focus. There’s usually no visible hunting unless you’re in continuous/servo mode. It just focuses. Fast.
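That one-shot determination of direction and distance is what makes phase detection fast. The sketch below shows the core idea with made-up one-dimensional signals: slide one view against the other until they align, and the signed offset tells you which way and how far to drive the lens. Real AF modules do this optically on dedicated line sensors, not in code like this.

```python
# Sketch of the phase-detect idea (hypothetical toy signals; real PD modules
# compare light split to dedicated sensor strips).

def best_shift(left, right, max_shift=5):
    """Find the signed offset that best aligns the two views.
    The sign gives the direction to drive focus; the magnitude, how far."""
    def mismatch(shift):
        pairs = [(left[i], right[i + shift])
                 for i in range(len(left)) if 0 <= i + shift < len(right)]
        return sum((a - b) ** 2 for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=mismatch)

# Two views of the same edge, displaced relative to each other by defocus:
left  = [0, 0, 0, 1, 2, 3, 3, 3, 3, 3, 3, 3]
right = [0, 0, 0, 0, 0, 1, 2, 3, 3, 3, 3, 3]

print(best_shift(left, right))  # → 2: both direction and distance in one pass
```

Unlike the contrast hill climb, there is no trial-and-error here: a single comparison yields the correction.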
Cross-Point PD Sensors
Standard PD sensor elements are tiny strips that cover an area just a few pixels high and less than 100 pixels wide. The light they compare comes from the left and right. For this reason standard PD elements are good at focusing on vertical edges and lines, but not on horizontal ones.
More advanced cross-type PD sensor elements have two strips at 90 degrees to one another. One strip senses left/right variations, while the other picks up top/bottom differences. These cross-type elements can therefore focus on edges and lines in either orientation.
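A toy calculation makes the limitation of a single strip concrete. The function below measures only left/right variation, the way a horizontal-only PD strip does, on two invented 4×4 patches; the numbers and patches are illustrative assumptions, nothing more.

```python
# Why a one-dimensional PD strip misses edges in the "wrong" orientation:
# it measures variation along one axis only (toy 4x4 image patches).

def horizontal_contrast(image):
    """What a left/right-comparing strip 'sees': variation along each row."""
    return sum((row[i] - row[i + 1]) ** 2
               for row in image for i in range(len(row) - 1))

vertical_edge   = [[0, 0, 1, 1]] * 4                 # line running up/down
horizontal_edge = [[0] * 4, [0] * 4, [1] * 4, [1] * 4]  # line running left/right

print(horizontal_contrast(vertical_edge))    # → 4: plenty of signal
print(horizontal_contrast(horizontal_edge))  # → 0: invisible to this strip
```

A cross-type element pairs this measurement with its 90-degree twin, so at least one strip always sees the edge.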
The more expensive your DSLR, the more of its PD autofocus elements are likely to be of the cross-type. You’ll often see cameras with specifications that say, “X autofocus points, Y of which are of the cross-point type.” For example, my Nikon D3s has 51 focus points of which 15 (three columns of five rows) are cross-type sensors. The new Canon 70D has 19 PD autofocus points, all of which are cross-type.
To determine if one of your focus points is cross-type, first use it to focus on a vertical line. It should work. Then rotate your camera 90 degrees. If it’s not a cross-type sensor element, the camera will have a hard time focusing. Here’s a handy trick to remember. If you’re having trouble getting your camera to focus on horizontal lines using a one-dimensional PD point, turn it 90 degrees, focus, then return it to the proper orientation and take your shot.
Although PD autofocus offers some advantages over contrast autofocus (eg, faster focusing), PD has its weaknesses, too. One is with repeating patterns such as found in architecture, some fabrics, screen doors, etc. It’s fairly common for PD-type sensors to confuse one element in the pattern with another and therefore think they’re in phase when in fact they’re not. Contrast autofocus isn’t as easily fooled.
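You can see why repeating patterns cause trouble with a few lines of code. With a periodic subject, several different shifts align the two views equally well, so the measured phase offset is ambiguous. The signals below are invented for illustration.

```python
# Why repeating patterns fool phase detection: with a periodic subject,
# several shifts align the two views equally well (toy signal below).

def mismatch(left, right, shift):
    pairs = [(left[i], right[i + shift])
             for i in range(len(left)) if 0 <= i + shift < len(right)]
    return sum((a - b) ** 2 for a, b in pairs)

pattern = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]   # screen-door-like repetition

# Every shift that lands on the pattern's period matches perfectly:
ties = [s for s in range(-4, 5) if mismatch(pattern, pattern, s) == 0]
print(ties)  # → [-4, -2, 0, 2, 4] — five "perfect" answers, four of them wrong
```

With a non-repeating subject there is a single best shift; here the sensor has no way to pick the right one.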
PD autofocus also can’t work with slow or stopped-down lenses because too little of the light makes its way to the autofocus sensor. Typically the aperture has to be f/5.6 or wider (f/8 on some high-end bodies). Note this isn’t a problem with most lenses designed for DSLR autofocus systems, since they stay wide open until you press the shutter release all the way, at which point the aperture closes to its set value. But some very long lenses are slower than f/8 even wide open, so they may not let enough light through for your autofocus system. The same is true if you stop down an older or non-native lens that has only manual aperture control.
DSLR Autofocus Fine Tuning
In a DSLR, the light from your subject takes two different paths. One is to the image sensor, the other to the autofocus sensor. It’s quite possible for there to be slight registration differences between these paths as well as differences for each lens. That is why high-end DSLRs include a feature to fine-tune the autofocusing on a lens-by-lens basis. If you typically shoot with your apertures wide open, you might want to go to the trouble to test, calibrate and adjust the fine tuning for each of your lenses. If you typically stop down two or more stops from wide open, it’s probably not worth going to the trouble. I’ve written a detailed blog post on the process I use with my DSLR cameras and lenses.
Hybrid Autofocus (Pixel Stealing)
All mirrorless cameras begin by necessity with simple contrast autofocus from the image sensor. Because they lack mirrors, these cameras can’t have dedicated autofocus sensors like those in our DSLRs.
But now most manufacturers are using hybrid technologies to add phase-detection elements directly to their image sensors. Combining contrast and PD autofocus along with some smart image processing can potentially result in an autofocus system that has the best of both worlds. At the very least, adding PD to mirrorless cameras can dramatically improve them.
In 2010, Fujifilm released its first camera with a hybrid contrast/PD autofocus system, the F300EXR compact, and is now using this technology in the X100S. I couldn’t find much documentation on the Fuji hybrid AF system — the company doesn’t seem to let much info out. It appears they’ve converted less than 0.1% of the sensor’s pixels (ie, fewer than 16,000 out of 16MP) to a simplified version of a PD element that masks light from the left or right side of the camera lens. In other words, each of these special pixels only receives light from one side or the other. This allows the camera to perform the same phase comparison as is done with dedicated PD sensors.
The disadvantage is that these pixels can no longer be used to capture the image. The camera’s processor therefore “fills in the holes” by looking at the adjacent normal pixels. Because so few pixels are used, we can’t see the difference in the resulting image.
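Here’s a deliberately simplified version of that fill-in step: replace each pixel borrowed for AF with the average of its normal neighbors. Real interpolation (which also has to respect the color filter array) is far more sophisticated; this only illustrates the idea, and the sample values are made up.

```python
# Simplified "filling in the holes": replace each masked AF pixel with the
# average of its normal neighbors. (Real demosaicing-aware interpolation is
# far more sophisticated; this is only the basic idea.)

def fill_af_pixels(row, af_positions):
    """row: one line of sensor values; af_positions: indices used for AF."""
    masked = set(af_positions)
    filled = list(row)
    for i in masked:
        neighbors = [row[j] for j in (i - 1, i + 1)
                     if 0 <= j < len(row) and j not in masked]
        filled[i] = sum(neighbors) / len(neighbors)
    return filled

row = [10, 12, 0, 14, 16, 0, 18]    # zeros mark pixels "stolen" for AF
print(fill_af_pixels(row, [2, 5]))  # → [10, 12, 13.0, 14, 16, 17.0, 18]
```

With fewer than one pixel in a thousand patched this way, the repair is invisible in the final image.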
So far Olympus, Sony and Nikon have all released mirrorless cameras that use some variation on this technology.
The Canon Dual Pixel Autofocus System
With the introduction of the 70D, Canon is showing an even more sophisticated technique that turns 80% of the camera’s 21 million pixels into PD sensor elements. (I’m guessing the remaining 20% are probably too far from the center to capture two phase-differential versions of the image.)
On some advanced image sensors, each of the millions of photodiodes is covered with a tiny microlens that pulls in light from all angles. In Canon’s new sensor, there are actually two separate photodiodes side-by-side under each microlens. Because of the optics of those microlenses, each photodiode in a pair will tend to collect more light from one side of the lens than from the other, which like the Fujifilm system means they can be used in a PD autofocus system. The difference with Canon’s system is that it has 17 million pairs of PD elements rather than Fuji’s few thousand. Nearly any area of the image can be used as the autofocus target. When it comes time to expose the image (ie, when autofocusing has completed) the signals from the two adjacent photodiodes are simply combined.
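Conceptually, the dual-pixel readout has two paths, which the sketch below illustrates with hypothetical numbers: the two half-images are compared like any phase-detect pair for focusing, then simply summed per site for the exposure.

```python
# Sketch of the dual-pixel idea (hypothetical values): each pixel site has a
# left and a right photodiode under one microlens.

left_diodes  = [3, 5, 9, 5, 3]   # half-image seen through one side of the lens
right_diodes = [2, 4, 8, 4, 2]   # half-image seen through the other side

# Autofocus path: the two half-images form a phase-detect pair — compare and
# shift them (as in any PD system) to find the focus direction and distance.

# Imaging path: combine each pair into one ordinary pixel value.
image_row = [l + r for l, r in zip(left_diodes, right_diodes)]
print(image_row)  # → [5, 9, 17, 9, 5]
```

Nothing is sacrificed for autofocus: every photodiode’s charge still ends up in the image.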
Contrast + Phase Detection
Canon claims, “Now Live View shooting will be…equal to optical viewfinder shooting.” I’m not sure it’s quite that good, but certainly the Fuji and Canon hybrid technologies are impressive improvements.
Intelligent combination of contrast and PD means we can, in theory, have the speed of phase detection and the accuracy and low-light capabilities of contrast autofocus. And we’re starting to see that in three shooting modes: video and Live View on all cameras (DSLR or mirrorless) and in normal viewfinder mode on mirrorless cameras. Since “normal viewfinder” mode on a mirrorless camera is in fact the same thing as Live View, it’s not really a separate mode per se.
Given that these hybrid technologies are still new, we can expect much improvement over the next few years. Some of this will come from better sensors, but most I believe will be from the processors. The Canon 70D, for example, has a separate processor chip dedicated just to autofocus. This will become the norm for all cameras. Within the limits of these processors, we can even look forward to incremental firmware upgrades to improve the autofocus capabilities of our existing hybrid autofocus systems. Fujifilm has been the leader in developing and releasing such firmware improvements to its cameras.
The mirrorless revolution continues.