It is worth looking at depth of field in a little more detail; I touched on it briefly earlier when explaining aperture. "Depth of field" is commonly abbreviated as DOF, and since that is shorter, I will use DOF from here on.
First of all, I have to warn you that this article will be a bit dry: there are many concepts, diagrams, formulas and tables, and the subject goes much deeper than I can cover here. I apologize for not translating the Turkish labels in the diagrams; you should be able to follow most of them without a translation.
We know that the in-focus range, the DOF, narrows as magnification increases and widens as we close the aperture. But the physical behavior of light called "diffraction" causes sharpness problems at small apertures (high f-numbers), so we cannot close the aperture too far without losing sharpness.
To recall the depth of field according to magnification and aperture, we can refer to the table below. The values are in millimeters. You can see how narrow it is.
The table above varies with certain technical specifications of the lens and with sensor size, but that is not very important at the moment; what matters is the ratio between the numbers. So how does this optical behavior, the relationship between DOF and aperture, arise?
I will try to explain it in a few diagrams. First of all, we need to clarify some concepts.
Circle of Confusion
Let's look at what it is. Imagine a point-shaped object and, theoretically, a lens with perfect optical properties. When we focus on this object, a perfectly focused spot forms on the camera's sensor. On a sensor divided into millions of pixels, this well-focused spot is so small that it falls on a single pixel, so it is perceived as a sharp point in the photo.
If the object is closer than the exact focus distance of our lens, the lens forms its image behind the sensor plane, so the light reaches the sensor before it has fully converged. A larger patch forms on the sensor, scattered over many pixels. In this case we see our object as a bokeh disc rather than a point of light, so there is no sharp image.
Likewise, if the object is farther away than the focus distance of our lens, we get the situation shown in the illustration below: the image comes to focus in front of the sensor, and the light then continues to travel, spreading out again until it reaches the sensor. The result is once more a large, blurry circle on the sensor.
The light falling on the sensor actually forms a cone shape along the way. When we focus just right, the pointed tip of the cone falls on the sensor. We see a single sharp pixel. As the object moves away from the focus of the lens, a wider area of the cone meets the sensor instead of the pointed tip, and the image spreads across multiple pixels.
So we have a cone of light at the sensor whose diameter changes as our tiny object moves back and forth. One pixel on the sensor has a physical size. Whether the lens is perfectly focused or merely close to focus, as long as the light cone hitting the sensor stays within the boundaries of that one pixel, our eye and the camera perceive the point as sharp: it is within the DOF (depth of field). The width of the cone on the sensor that just fits within this one-pixel width is called the "circle of confusion".
In a nutshell: even without perfect focus, if the light does not spread beyond one pixel on the sensor, we perceive the point as in focus. The circle of confusion depends on the sensor; it is roughly the width of one pixel.
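As a rough back-of-the-envelope sketch of this "one pixel wide" idea (a hypothetical 24 MP full-frame sensor, 6000 pixels across 36 mm, is assumed here; real circle-of-confusion conventions vary):

```python
def pixel_pitch_mm(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Width of one pixel, used here as a rough circle-of-confusion value."""
    return sensor_width_mm / horizontal_pixels

# A hypothetical 24 MP full-frame sensor: 6000 px across 36 mm
coc = pixel_pitch_mm(36.0, 6000)
print(coc)  # 0.006 mm, i.e. 6 microns per pixel
```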
I tried to explain this first because it is necessary to understand the shapes I use in the other explanations below.
Macro Photography Depth of Field
Let’s define depth of field. When we focus our lens on a point, we can clearly see a little closer and a little farther away from that point. This entire range of distance that we can see clearly is defined as “depth of field”. I will call it DOF again.
- If we close the aperture, the DOF increases; if we open it, the DOF decreases.
- If we focus far away, the DOF increases; when working at close range, the DOF decreases.
- Low-megapixel cameras have more DOF; high-megapixel cameras have less.
Let’s try to understand how this happens.
Now let’s draw the shape of a lens focused on a distant object. We’re not shooting macro. Let the aperture be wide open. Let’s try to understand that the DOF is wide:
I had to shrink the image to fit the screen, but you can enlarge it by clicking on it. This is a lens with the aperture wide open, focused on a distant object. On the right is the "circle of confusion", abbreviated C, on the sensor; you can think of it as one pixel. Since we are focusing far away, the same C value corresponds to a much larger area on the focal plane (the gray bar on the left side) than it does on the sensor. Let's call that C(o). As an example, imagine a man wearing a jacket. If the man is far away, a button of his jacket may occupy one pixel in the photo; yet if we held the button against the sensor, it would be large enough to cover millions of pixels. In short, the button we see at size C, one pixel, corresponds to a very large region C(o) on the man.
If we make such an assessment for all the points we perceive as sharp, we get a very wide DOF.
In macro photography, this scheme changes as follows. The aperture is wide open again:
Our C value on the sensor does not change: one pixel is always the same size, and C is the diameter of the light cone at that pixel. However, as our focal plane moves closer to the lens (macro photography), the projection C(o) on the object we focus on becomes very small. The scenario where a jacket button occupies one pixel is replaced by one where each element of a fly's compound eye occupies one pixel: C(o) on the object side has shrunk to the size of a single eye element. Since the angle of the light refracted by the lens has also changed, we face a much narrower DOF, as the figure shows.
Let’s look at our new shape to see how the diaphragm affects this:
Our C value is always the same on the sensor side. Since the fly’s eye is still at the same distance, C(o) on the focal plane (object) side is exactly the same size as in the previous figure. However, when we close the aperture, we reduce the diameter of the light cones that pass through the lens and fall on the sensor.
In the previous figure, when the aperture was open, the cone of light coming from a place outside the DOF limit formed an area larger than 1 pixel on the sensor and looked blurry. Now, when the aperture is closed, the light coming from the same place is shrunk into a 1-pixel C with a smaller light cone. So now they are clearly visible. By reducing the aperture we have increased the depth of field.
If we tried this with a "pinhole" lens, which is nothing more than a cap with a tiny hole in its center, we would have a depth of field from zero to infinity: everything would look sharp. So why don't we simply use such tiny apertures for wide DOF? Because as the aperture shrinks, diffraction takes over and the picture is no longer sharp.
In DSLR cameras with APS-C sensors, for example my Pentax K-x, the C value on the sensor is 0.019mm. If we shoot at 10x magnification, the size of C(o) on the object shrinks by a factor of 10 by the same principle as in the figure above, increasing the angle at which the light is refracted and leaving us (as expected) in a very difficult situation in terms of DOF.
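To put a number on that shrinking object-side circle, here is a tiny sketch (the 0.019 mm value for the Pentax K-x is taken from the text above; the simple relation C(o) = C / magnification is assumed, and the function name is my own):

```python
def object_side_coc_mm(sensor_coc_mm: float, magnification: float) -> float:
    """Object-side projection of the circle of confusion: C(o) = C / m."""
    return sensor_coc_mm / magnification

# Pentax K-x (APS-C), C = 0.019 mm on the sensor, shot at 10x magnification
print(object_side_coc_mm(0.019, 10))  # 0.0019 mm on the subject
```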
There is such a thing as "effective aperture". I only found out about it later, and I have paid attention to it ever since.
The aperture value we set on the lens is only valid when focused at infinity. As we focus closer, the changing optical geometry acts as if it were closing the aperture: we are really shooting at a different value than the one we set. We call this the effective aperture, and it is the value that actually matters. In everyday photography at normal distances the effect is negligible, but in macro photography, especially at high magnifications, the aperture changes dramatically.
In macro there are limits on depth of field and limits imposed by diffraction and loss of sharpness. When calculating these limits we need to use the effective aperture value. The simple formula is as follows:
Effective Aperture = Lens Aperture Value x (1 + Magnification Amount)
For example, if we shoot at f8 at 1:1 magnification, we actually take the photo at an effective aperture of f8 × (1+1) = f16. As a modern macro lens reaches 1:1 magnification, its front element extends quite far out, so it behaves as if an extension tube were attached behind the lens. This causes a loss of aperture, which the effective aperture calculation makes explicit.
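The worked example above can be sketched in a few lines (a minimal illustration of the simple formula, with a hypothetical function name):

```python
def effective_aperture(f_number: float, magnification: float) -> float:
    """Simple bellows-factor formula: N_eff = N * (1 + m)."""
    return f_number * (1 + magnification)

print(effective_aperture(8, 1))  # f8 at 1:1 -> f16
print(effective_aperture(8, 5))  # f8 at 5x  -> f48
```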
Those who do not want to get too confused can use the formula above. But there is a little more detail for those who want.
To make this formula a little more accurate, we need to add a lens-specific parameter. If we call the parameter known as the "Pupil Magnification Ratio" P, the formula becomes:
Effective Aperture = Lens Aperture Value x (1 + Magnification Amount/P)
For P = 1 we get the simple formula above, but P is not 1 for every lens.
“Pupil Magnification Ratio” is something like this:
A lens design contains several groups of elements, with the diaphragm physically located somewhere between them. The elements in front of the diaphragm and those behind it behave differently optically. To illustrate, the eclipse-like photos below are front and rear views of a Pentax K 135mm lens set to f5.6, held in front of a lamp.
Notice that the front and rear views of the aperture differ: from one side, the aperture looks bigger. If we take the ratio of the diameter seen from the rear to the diameter seen from the front, we get the "Pupil Magnification Ratio". In these two photographs, taken from the same distance, the aperture measured 402 pixels in the rear view and 654 pixels in the front view, so P = rear / front = 402 / 654 = 0.61. Now I can use this in the formula.
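The measurement and the refined formula can be sketched like this (the pixel measurements are the ones from the text; the function names are my own):

```python
def pupil_magnification(rear_diameter_px: float, front_diameter_px: float) -> float:
    """P = exit-pupil / entrance-pupil diameter, measured from two
    photos of the aperture taken at the same distance."""
    return rear_diameter_px / front_diameter_px

def effective_aperture(f_number: float, magnification: float, p: float = 1.0) -> float:
    """Refined formula: N_eff = N * (1 + m / P)."""
    return f_number * (1 + magnification / p)

p = pupil_magnification(402, 654)      # the Pentax K 135mm example: ~0.61
n_eff = effective_aperture(5.6, 1, p)  # 1:1 at f5.6 with this tele lens
print(round(p, 2), round(n_eff, 1))    # ~0.61 and roughly f14.7
```

Note how, for this tele lens (P < 1), the effective aperture comes out noticeably smaller than the f11.2 the simple formula would predict.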
Generally, P>1 for wide angles and P<1 for tele lenses. This parameter has a big impact on depth of field at high magnifications.
But is it useful to know these things? I have a lens, I want to shoot macro, that’s all!
Knowing this, we can do three things.
- We can explain it to those who don’t know! 🙂
- For the magnification we choose, we know how far we can stop down the lens, so we can avoid losing sharpness.
- When using the focus stacking technique, we know what step size (the distance to move between shots) to use for our lens and magnification.
There is a separate article on focus stacking. Those who do not know can take a look here: Focus stacking technique
As a reminder, focus stacking is roughly this: if, because of the narrow DOF, we can only get one antenna of a fly in focus at a time, we can take a large number of images, moving the focus area forward a little each time, so that every small part of the fly is in focus in some frame. Then we combine them all into a single sharp image of the whole fly.
Going back to the formulas: assuming P=1, i.e. using the simple formula, we can derive a generally valid aperture/magnification table. This table tells us which aperture value should not be exceeded at which magnification, and the depth of field at that value. I will generalize a bit and calculate for a 24MP full-frame body.
A little explanation. The DOF columns give the depth of field; when focus stacking, we should use a step slightly smaller than this value. The E columns give the effective aperture we actually face, which varies with magnification. The colors indicate suitability with respect to diffraction: green areas are fine to use; in yellow areas diffraction has begun and we are at the limit of usability; red areas have heavy diffraction, where sharpness cannot be achieved, so we should avoid working there.
For example, at 5x magnification we enter the red zone at f8. To stay entirely in the green zone, unaffected by diffraction, we should not even use f4.
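The table's logic can be approximated with a commonly used macro DOF formula, total DOF ≈ 2·C·N·(1+m)/m² (equivalently 2·C·N_eff/m²). A sketch, assuming C = 0.006 mm (one pixel) for a 24 MP full-frame body; the exact table values may differ slightly depending on the CoC convention used:

```python
def macro_dof_mm(coc_mm: float, f_number: float, magnification: float) -> float:
    """Approximate total depth of field: 2 * C * N * (1 + m) / m**2."""
    return 2 * coc_mm * f_number * (1 + magnification) / magnification**2

def stacking_step_mm(coc_mm: float, f_number: float, magnification: float,
                     overlap: float = 0.8) -> float:
    # A step slightly smaller than the DOF, so successive slices overlap
    return overlap * macro_dof_mm(coc_mm, f_number, magnification)

C = 0.006  # assumed CoC (one pixel) for a 24 MP full-frame sensor, in mm
print(macro_dof_mm(C, 4, 5))      # ~0.0115 mm total DOF at 5x, f4
print(stacking_step_mm(C, 4, 5))  # ~0.009 mm, i.e. a step of about 9 microns
```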
Note: The above table varies depending on the lens and body used. The green zone does not always give the sharpest image; the lens itself must also perform sharply at that aperture.
If the sensor gets smaller or the megapixel count increases, diffraction shows up sooner and the problem gets bigger. In that case we need to use wider apertures.
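For those curious where the diffraction limit itself comes from: a common yardstick compares the Airy disk diameter, roughly 2.44 · λ · N_eff, against the circle of confusion we are willing to accept. A hedged sketch (green light at λ = 550 nm is assumed; the exact green/yellow/red thresholds depend on the CoC you choose):

```python
def airy_disk_diameter_mm(effective_f_number: float,
                          wavelength_mm: float = 0.00055) -> float:
    """Diameter of the Airy disk's first minimum: ~2.44 * lambda * N_eff."""
    return 2.44 * wavelength_mm * effective_f_number

def diffraction_limited(effective_f_number: float, coc_mm: float) -> bool:
    # Diffraction starts to dominate once the Airy disk outgrows
    # the circle of confusion we are willing to accept
    return airy_disk_diameter_mm(effective_f_number) > coc_mm

# 5x magnification at f8: N_eff = 8 * (1 + 5) = 48
print(airy_disk_diameter_mm(48))      # ~0.064 mm, far wider than any usable CoC
print(diffraction_limited(48, 0.03))  # True: deep in the red zone
```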
So according to this table we can never shoot at 40x, since every aperture is in the red zone? Yes, if we start from f1.4. I limited the table to the f1.4-f16 range to keep it small. None of the ordinary lenses we know can produce a sharp picture at such magnifications. For 30x we need a microscope objective; their f-values are very low, such as f0.25. If the table started at f0.25, 30x would fall in the green zone. Microscope objectives are made specifically for this job and have very low aperture values. As for the DOF narrowness, though, even an onion membrane will be too thick 🙂
Note: Microscope lenses are a separate article: Macro photography with a microscope lens
I’ll finish by giving examples from real photos. The photos are raw frames taken from a focus stacking work I did on the same ant with 2 different lenses. On the left is a single photo showing the depth of field and on the other is the stacked work. I recommend that you click on the images below to see the larger versions.
Red ant taken with El-Nikkor 50mm f2.8 at f5.6 aperture at 5x or 6x magnification
Same ant taken with Otamat 101 20mm f2.8 fixed aperture lens at 12x magnification
In both shots the magnification/aperture combinations were at the limits for these lenses; they should not be pushed further 🙂 I did not have a micrometer-precision rail setup when I took them. Since I was using the bellows rail, I pushed the depth of field to its limit value in order to use a larger focus-stacking step length; otherwise the bellows rail would have been too coarse for such high-magnification photos.
Full-frame / APS-C difference
A full-frame sensor is 36mm wide; APS-C is around 24mm. That is the main difference: an APS-C sensor always gives us a photo cropped from the center of the image.
For example, let’s use a 50mm lens. There should be no difference between the image taken on a full-frame camera and the image taken with the same lens on an APS-C camera, except for cropping. And there isn’t.
There is a common assumption: that an image taken with 50mm on APS-C is the same as an image taken with 75mm on full-frame. In fact only the framing is the same; things change when the lens changes. Now let's look at it from both sides and defend each system.
APS-C is awesome!
- It is cheaper
- The edges, the most problematic part of a lens's image, are not used; only the high-quality center is cropped in. So edge sharpness is better on APS-C.
- The effective magnification is higher: a 100mm lens frames like a 150mm. We get more reach for tele and macro.
Full-frame is awesome!
- The pixels on the sensor are larger, so they collect more light and perform better in low light.
- We see the world wider, especially wide angles are super wide.
- Since the entire image circle of the lens is used, the lens's character shows more fully.
- In the same frame, the depth of field is narrower. Correspondingly (in the same frame) the bokeh is softer.
What is my personal opinion?
Take two lenses, 100mm and 150mm. Put the 100mm on an APS-C camera and the 150mm on a full-frame camera, and set the same aperture on both. The photos will be framed exactly the same, but the depth of field is shallower on full-frame. That is great for portraits, but it makes macro a bit harder.
When I switched from the Pentax K-x to the Sony A7II, I noticed something. If I shoot the same photo with the same aperture and the same 50-micron step size I used before, gaps appear in the focus area: focus stacking can no longer deliver seamless sharpness. Because I had to increase the magnification a little to get the same framing, my depth of field narrowed! A job I used to finish in 50 shots now takes 80 shots at a 30-micron step size. My workload has increased significantly.
Going from 12 megapixels to 24 megapixels, I also found that the borders of areas I used to see as sharp were now blurred. Another hit to the depth of field!
But I really enjoy the detail. If I pay more attention to lens selection and aperture settings, I get much better detail despite these difficulties. I can't say full-frame is always better, but I don't think I could give it up easily. For those attached to APS-C, full-frame cameras also have an APS-C mode that creates the photo by cropping. We lose megapixels, of course, but it still gives higher resolution than my old Pentax body did.
After all, I don't only shoot macro; wide angle and lens character matter a lot to me. Let me shoot full-frame first, then crop if I want 🙂
I vote full-frame!