Why is the depth of field so narrow? What is effective aperture?

by Güray Dere

As I briefly touched on when explaining aperture, depth of field deserves a more detailed look. It is commonly abbreviated DOF, short for “Depth of Field,” and since that’s shorter, I’ll call it DOF as well.

First I should say, this article might be a bit boring for some. There are many concepts, diagrams, formulas, and tables. This is as concise as I could make it. In fact, it’s a much deeper and broader topic.

We know that DOF (the in-focus region) narrows as magnification increases and widens as we stop down the aperture. But a physical behavior of light called “diffraction” creates sharpness problems at small apertures, so we know we can’t stop down excessively if we want to preserve sharpness.

If we want to recall DOF according to magnification and aperture, we can look at the table below. Values are in millimeters. You can see how narrow it is.

The table above changes with certain technical characteristics of the lens and sensor sizes, but that’s not very important right now. So how does this optical behavior—namely the relationship between DOF and aperture—arise?

I’ll try to explain using a few diagrams. First, we need to clarify some concepts.

Circle of Confusion

I couldn’t find a Turkish equivalent for this expression. Best to see what it is.

Let’s imagine a point-like object and a lens with theoretically perfect optical properties. When we focus on this object, a perfectly focused point should form on our camera’s sensor. In practice, a point of light never appears as a true point on the sensor, but as a “blur circle.” As long as this circle does not exceed a certain size, the human eye perceives it as sharp at a given print size and viewing distance. This maximum acceptable blur circle diameter is called the “Circle of Confusion (CoC).” Just as each pixel on the sensor has a physical size, the CoC is a threshold determined by camera manufacturers, usually slightly larger than the pixel size. If the projection of a point of light on the sensor does not exceed this CoC diameter, that point is considered sharp.

Back focus

If the subject is closer than the exact focal distance of our lens, the image will hit the sensor before it fully focuses. A larger image will form on the sensor, spreading across many pixels. In this case, we see our point-like subject not as a point but as a wide bokeh ring. In other words, it’s not sharp.

Front focus

Similarly, if the subject is farther than the focal distance, we get the situation shown below. The image focuses before it reaches the sensor. Then the light continues toward the sensor and defocuses again by the time it gets there. As a result, a large but unsharp circle forms on the sensor.

The light falling on the sensor actually progresses as a cone along its path. When we are perfectly focused, the sharp tip of the cone hits the sensor. We see a single sharp pixel. As the subject moves away from the focal plane, a wider region of the cone meets the sensor instead of the tip, and the image spreads across multiple pixels.

So, depending on the subject’s back-and-forth movement, we have a cone whose diameter on the sensor changes. A single pixel on the sensor has a physical size. Whether the lens is exactly focused or very close to focus, as long as the light cone falling on a pixel does not spill beyond the boundaries of what our eyes and device perceive as a single sharp pixel, it remains within DOF (depth of field). And we call the cone width that exactly fits this one-pixel width on the sensor the “circle of confusion.”

In short, even if perfect focus isn’t achieved, if the light doesn’t spill beyond one pixel on the sensor, we perceive the point as sharp. The circle of confusion varies with the sensor; roughly, it is the width of one pixel on the sensor.

I tried to explain this first because it’s necessary to understand the diagrams I use in the explanations below.

Depth of Field in Macro Shooting

To define depth of field: when we focus our lens on a point, we ask how much closer and how much farther than that point things can be while still appearing sharp. The entire distance range we perceive as sharp is the “depth of field.” I’ll continue calling it DOF.

  • If we stop down the aperture, DOF increases; if we open up, DOF decreases.
  • If we focus farther, DOF increases; when working at close distances, DOF decreases.
  • Cameras with fewer megapixels have greater DOF; cameras with higher megapixels have less DOF.

Let’s try to understand how these occur.

Now let’s draw a lens focused on a distant subject. We’re not doing macro. The aperture is wide open. Let’s try to understand why DOF is wide:

Let’s try to understand this graphic, because I worked quite a bit to draw it—let it be worth it, right? You can click to enlarge it a little.

In this diagram, a lens with a wide-open aperture is focused on a distant subject. On the right, C denotes the “circle of confusion” on the sensor; you can think of it as one pixel. Because we’re focused far away, the same C value corresponds to a much larger area on the focus plane (the gray bar on the left) than it does on the sensor. Let’s call that C(o). As an example, imagine a man wearing a jacket standing on that focus plane.

If the man is far away, the jacket’s button may appear in our photo as only a one-pixel dot. Yet if we placed the actual button on the sensor, it might cover millions of pixels. Briefly: the button we see as a one-pixel C on the sensor corresponds to quite a large region C(o) on the man.

We trace light rays from three representative points on the jacket button. The three rays—top, middle, and bottom—pass through the aperture opening and fall on the sensor. In reality, an infinite number of rays arrive and pass through every point of the aperture, but they all focus similarly.

If we make such an assessment for all points we perceive as sharp, we get the very wide distance range shown on the left as DOF. This means: even if we don’t touch the focus, if our subject moves toward us or away within this range, it will appear sharp anywhere from the leftmost to the rightmost positions. The pixel we marked will change too little for our eyes to notice and will continue to give us the impression of sharpness.

Now let’s get to macro shooting. In macro, this diagram changes like this. The aperture is still wide open:

Note that in this diagram, too, the aperture is still fully open.

Our C value on the sensor doesn’t change. One pixel is always the same size. We called the width of the light cone on a single pixel C. However, as our focus plane moves closer to the lens (macro shooting), the C(o) projection on the subject becomes very small. The earlier scenario where a single button covered one pixel now becomes one where each facet of a fly’s compound eye covers one pixel. In other words, our subject-side C(o) shrank to the size of a single facet on the fly’s eye. Since the refraction angle in the lens changes by the same proportion, we end up with a much narrower DOF as shown in the figure.

As the subject gets closer to us—in other words, as we increase magnification—DOF will continue to narrow, depending also on the lens aperture.

Let’s look at how the aperture affects this situation in our new diagram:

If we look again, we now see that the aperture has been stopped down somewhat.

Our C value on the sensor side is always the same. Since the fly’s eye is still at the same distance, the subject-side C(o) on the focus plane is exactly the same size as in the previous figure. However, when we stop down, we reduce the diameter of the light cones passing through the lens and falling on the sensor. The light rays now arrive as a sharper, narrower cone.

In the previous figure, with the aperture wide open, a light cone coming from outside the DOF region created an area on the sensor wider than one pixel and appeared blurry. Now, with the same point, the stopped-down aperture sharpens the cone so it falls within the one-pixel C on the sensor. In other words, it appears sharp now. By stopping down, we’ve increased the depth of field.

If we did this with one of those “pinhole” lenses, which are basically nothing more than a cap with a tiny hole in the middle, we’d have a DOF extending from zero to infinity: everything would look sharp. So why don’t we always use extremely small apertures to get wide DOF? Because as the aperture gets smaller, a completely different light problem arises, and the photo cannot be sharp. Very briefly, due to an effect called “diffraction,” the light waves carrying all that detail are forced through a tiny opening, scatter, and degrade the image by interfering with and overlapping one another.

On APS-C sensor DSLRs—for example, on my Pentax K-x—the C value on the sensor is 0.019 mm. If we shoot at a magnification like 10×, by the same principle as the figure above, the C(o) size on the subject will decrease by 10×, increase the refraction angle of the light, and (as we would expect) leave us in a very difficult situation with respect to DOF.
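To get a feel for the numbers, the common close-up approximation DOF ≈ 2 · C · N · (1 + m) / m² can be sketched in a few lines. This is a standard thin-lens approximation, not a formula quoted in the article, and the function name is mine:

```python
def dof_mm(f_number, magnification, coc_mm=0.019):
    """Approximate total depth of field in mm, using the common
    close-up formula DOF = 2 * C * N * (1 + m) / m**2.
    coc_mm defaults to the 0.019 mm circle of confusion quoted
    in the text for the APS-C Pentax K-x."""
    return 2 * coc_mm * f_number * (1 + magnification) / magnification ** 2

# At 10x, even f/5.6 leaves only about two hundredths of a millimeter:
print(round(dof_mm(5.6, 10), 4))  # 0.0234
```

The exact numbers depend on the CoC you assume, so treat this as an order-of-magnitude sketch rather than a replacement for a proper DOF table.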

Effective Aperture

Turns out this is a thing as well. I learned it later and started paying attention.

The numerical aperture value we set on the lens is valid only when focused at infinity. As we focus closer, the changing optical geometry effectively acts as if we were stopping the aperture down, so we end up shooting at a value other than the one we set. We call this new value the effective aperture, and it is the value that actually matters. In everyday photography, at normal distances of 3 or 5 meters, the change is too small to have any effect. But in macro, especially at higher magnifications, it becomes very important: the aperture changes dramatically.

In macro there are limits that determine DOF and how badly diffraction degrades the image. When calculating these limits, we need to take the effective aperture into account. The simple formula is:

Effective Aperture = Lens f-number × (1 + Magnification)

For example, if we shot at 1:1 with f/8, then in reality the effective aperture for that photo is f/8 × (1+1) = f/16. When modern macro lenses provide 1:1 magnification, the front element protrudes quite a bit. In other words, the lens behaves as if an extension tube is attached at the rear. This appears as a loss in aperture, which the effective aperture calculation reveals.
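As a quick sanity check, the simple formula can be expressed in a couple of lines (the helper name is mine, not a standard function):

```python
def effective_aperture(f_number, magnification):
    """Effective f-number by the simple formula N_eff = N * (1 + m)."""
    return f_number * (1 + magnification)

print(effective_aperture(8, 1))  # 16 -- f/8 at 1:1 behaves like f/16
print(effective_aperture(8, 2))  # 24 -- and like f/24 at 2:1
```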

Those who don’t want to overcomplicate things can use the formula above. But for those interested, there’s a bit more detail.

To make this formula more accurate, we need to include a lens-specific parameter. If we denote the parameter called the “Pupil Magnification Ratio” by P, the formula becomes:

Effective Aperture = Lens f-number × (1 + Magnification / P)

For P = 1 we obtain the simple formula above, but P is not 1 for every lens.

Here’s what the “Pupil Magnification Ratio” is:

In a lens design there are several glass groups, and the diaphragm sits physically somewhere among them. The optical behavior of the elements in front of and behind the diaphragm differs. To illustrate, the photos below, which resemble a solar eclipse, show the front and rear views of my Pentax K 135 mm lens set to f/5.6, held by hand in front of a lamp.

If you noticed, the aperture openings look different from the front and back. From one side the opening appears larger. If we take the ratio of the rear-seen diameter to the front-seen diameter, we get what’s called the “Pupil Magnification Ratio.” In these two photos of the same lens taken from the same distance, let’s measure the aperture opening. The rear view gave 402 pixels, the front view 654 pixels. So P = Rear / Front = 402 / 654 = 0.61. Now I can use this in the formula.
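Plugging the measured pupil diameters into the extended formula is straightforward. This sketch uses the pixel measurements above; the function names are my own:

```python
def pupil_magnification(rear_diameter, front_diameter):
    """P = aperture diameter seen from the rear / seen from the front."""
    return rear_diameter / front_diameter

def effective_aperture(f_number, magnification, p=1.0):
    """Extended formula N_eff = N * (1 + m / P); p=1 gives the simple form."""
    return f_number * (1 + magnification / p)

p = pupil_magnification(402, 654)  # measurements from the photos above
print(round(p, 2))                 # 0.61

# At 1:1 this telephoto loses noticeably more light than the simple formula suggests:
print(round(effective_aperture(2.8, 1, p), 1))  # 7.4, vs. 5.6 with P = 1
```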

Generally, at wide angles P > 1, and with telephoto lenses P < 1. This parameter has a big effect on DOF at high magnifications.

So is there any benefit in knowing these? After all, I have a lens and want to shoot macro—that’s all!

Knowing these, we can do three things.

  1. We can explain them to those who don’t know! 🙂
  2. For the magnification we choose, we know how far we should stop the lens down. Thus we can prevent loss of sharpness.
  3. When using the “focus stacking” technique, we know what step width (and so how many shots) we need for our lens and magnification.

There’s a separate article on Focus Stacking. Those unfamiliar can take a look here: Focus stacking technique

As a reminder, “focus stacking” roughly means this: if due to shallow DOF we can show only one antenna of a fly sharp at a time, by shifting the sharp plane forward a little each time and taking many shots, we get lots of photos each carrying sharpness for a tiny region. Then we can combine them all and obtain a single photo that shows the entire fly in focus.
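A rough step-count estimate follows directly from this idea. A minimal sketch; the overlap factor and function name are my own assumptions, not from the article:

```python
import math

def stack_shots(subject_depth_mm, dof_mm, overlap=0.8):
    """Estimate how many frames a focus stack needs: step a bit less
    than one DOF per frame (overlap < 1 keeps adjacent slices overlapping)."""
    step_mm = dof_mm * overlap
    return math.ceil(subject_depth_mm / step_mm) + 1

# A 5 mm deep subject with 0.05 mm of DOF per frame:
print(stack_shots(5, 0.05))  # 126
```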

Returning to formulas, by assuming P = 1—that is, using the simple formula—we can produce a generally valid aperture/magnification table. This table will tell us which f-numbers we shouldn’t exceed at which magnification, and give the corresponding depth of field. But I’ll do the math for a 24 MP full-frame body, again tailoring it to myself a bit.

Let’s explain a little. The first column is magnification. The columns labeled DOF give the depth of field. In other words, when doing focus stacking, we should apply a step slightly smaller than this value. The columns labeled E show the effective aperture we’re actually facing, which changes with magnification. The colors indicate suitability with respect to interference (diffraction). Green areas indicate usable values. In yellow, diffraction has started; we’re approaching the limit. Red areas are where diffraction is high. We cannot obtain sharpness in the red zone. We should avoid working there.

For example, if we’re working at 5× magnification, f/8 puts us in the red zone. To stay entirely in green and avoid diffraction effects, we shouldn’t even use f/5.6.
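The diffraction limit behind the table’s colors can be sketched with the usual Airy-disk criterion: the Airy disk diameter, roughly 2.44 · λ · N_eff, must not outgrow the circle of confusion. The exact thresholds depend on the CoC and wavelength you assume; the sketch below uses the APS-C CoC quoted earlier in the text, not the article’s 24 MP full-frame table:

```python
def max_f_number(magnification, coc_mm=0.019, wavelength_mm=0.00055):
    """Largest nominal f-number before the Airy disk (2.44 * wavelength * N_eff)
    exceeds the circle of confusion, with N_eff = N * (1 + m)."""
    n_eff_limit = coc_mm / (2.44 * wavelength_mm)
    return n_eff_limit / (1 + magnification)

# At 5x, even modest f-numbers are already diffraction-limited:
print(round(max_f_number(5), 1))  # 2.4
```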

Note: The table above varies with the lens and body used. The green zone doesn’t always give the sharpest photo. The lens must also perform sharply at that f-number.

If the sensor size decreases or the megapixel count increases, diffraction increases and the problem worsens. In that case, we need to use wider apertures. 

So, according to this table, will we never manage 40× magnification, since we’re in the red at every aperture? Yes, if we start from f/1.4, that’s what happens. I kept the table small by using the f/1.4 to f/16 range. With any ordinary lens we can’t get sharp images at those magnifications. For 30× we need a microscope objective. Their f-numbers are extremely small (very wide apertures), like f/0.25. If we start the table at f/0.25, 30× stays in the green zone. Microscope objectives are made specifically for this job and have very low f-numbers. As for the narrow DOF, though: even an onion skin will feel thick 🙂

Note: Microscope lenses are a separate article topic as well: Macro photography with a microscope objective

I’ll wrap up with examples from real photos. The photos consist of raw frames from focus stacking work I did on the same ant with two different lenses. On the left is a single frame showing the DOF; next to it is the stacked result. I recommend clicking the images below to examine the large versions.

Red ant shot at 5× or 6× magnification with an EL-Nikkor 50 mm f/2.8 at f/5.6

The same ant shot at 12× magnification with an Otamat 101 20 mm f/2.8 fixed-aperture lens

In both studies, magnification/aperture values were used at the limits for the lenses. It’s best not to push these two lenses any further 🙂 At the time I took these shots, I didn’t have a micrometric precision rail. Since I used a bellows rail, I increased the focus-stack step width by pushing DOF to the limit. Otherwise, a bellows rail would be insufficient for such high-magnification photography.

Full-frame / APS-C difference

A full-frame sensor is 36 mm wide, while APS-C is around 24 mm. That’s the fundamental difference: with APS-C we always see a cropped image from the center of the frame.

For example, let’s use a 50 mm lens. Other than cropping, there shouldn’t be any difference between an image shot on a full-frame camera and one shot on APS-C with the same lens. And there isn’t.

There’s an oft-repeated calculation: it’s said that an image shot at 50 mm on APS-C is the same as one shot at 75 mm on full-frame. In fact, only the framing is the same; once the lens changes, other things change too. Now let’s think of it another way and advocate for both systems.

APS-C is awesome!

  • It’s cheaper.
  • The lens’s most problematic edge regions aren’t used; only the higher-quality center of the image circle is captured, so edge sharpness is effectively better on APS-C.
  • Magnification is higher. The same lens looks closer. In tele and macro we get more magnification.

Full-frame is awesome!

  • Pixels are larger on the sensor. They gather more light and perform better in low light.
  • With the same lens on full-frame, we see wider; wide-angles become super wide.
  • Because the lens’s full image circle is used without cropping, the lens’s character is revealed better.
  • At the same framing, DOF is shallower. Accordingly (at the same framing), bokeh is creamier.

What’s my personal opinion?

Let’s have two lenses: 100 mm and 150 mm. Mount the 100 mm on the APS-C body and the 150 mm on the full-frame body. Set the same aperture on both. The photos we take will be identical in framing. But DOF will be smaller on full-frame. Great for portraits, but it makes macro a bit trickier.

When I moved from the Pentax K-x to the Sony A7II, I noticed something. When focus stacking we set step size. For the same framing, if I use the same 50-micron step size at the same aperture I used before, there are gaps in the in-focus region. The focus stack can’t provide continuous sharpness. Because to create the same framing I had to increase magnification a bit, and accordingly my DOF narrowed! Now I have to use a 30-micron step, and what I previously finished in 50 shots I now have to take to 80. My workload has increased significantly.

On top of that, because I went from 12 megapixels to 24 megapixels, I saw that the boundaries of areas I previously perceived as sharp now looked blurry. Another blow to DOF!

But when it comes to detail, I couldn’t be happier. With a bit more care in lens choice and aperture, I’m getting much better detail despite these challenges. I can’t say full-frame is always better, but I doubt I’ll give it up easily. For those devoted to APS-C, full-frame bodies also have an APS-C mode. They output a cropped image. We do lose megapixels, of course, but compared to my old Pentax body, it still gives higher resolution.

I don’t shoot only macro. Wide angle and lens character matter a lot to me. I’ll shoot full-frame first, and crop later if I want 🙂

I say full-frame!
