THE MYTH OF DEPTH OF FIELD

By Bill Dobbins
www.billdobbinsphotography.com

A traditional view camera includes controls like shifts, swings and tilts to radically change the plane of focus falling on the film plane.

Photographers know that the term “depth of field” refers to the degree to which whatever is in front of the lens is actually in focus.  DOF is affected by aperture: the smaller the aperture, the greater the depth of field.  It is also affected by the lens: the wider the angle of the lens, the greater the DOF.

This is true, but only up to a point.  In actuality, there is only one plane of focus in which everything is actually sharp.  What we generally refer to as depth of field should really be called apparent depth of field.  That is, how much of the image is unsharp to such a small degree that it seems to the eye to be in focus.


In optics, particularly as it relates to film and photography, the optical phenomenon known as depth of field (DOF) is the distance about the plane of focus (POF) where objects appear acceptably sharp in an image. Although an optical imaging system can precisely focus on only one plane at a time, the decrease in sharpness is gradual on each side of the POF, so that within the DOF the unsharpness is imperceptible under normal viewing conditions. – Wikipedia

In the past, using view cameras with shifts, swings and tilts, photographers could change the plane of focus to include different specific areas of the scene.  So when Ansel Adams was creating an image in which the rocks at his feet and the distant mountains all seemed to be in very sharp focus, he not only closed the aperture down to f/64, but he would tilt the back of his camera so that the plane of focus was no longer vertical but diagonal, so as to include both the nearby ground and the distant landscape.

Using the controls on a view camera also allowed for correcting distortion, such as keeping vertical lines parallel, or more nearly so, when shooting up at a tall building.

Clouds high above the skyscrapers of Tokyo, Japan.
An example of both perspective and wide angle lens distortion.

Using techniques like this tends to create images that are hyper-realistic rather than realistic.  What you see in an iconic Ansel Adams landscape photo is not the way the eye sees the world.  We do not see an entire scene mostly in sharp focus; instead, we continually change focus as we look around a scene and use this information to build up a composite view in our minds.

Today’s photographers rarely use view cameras equipped with swings and tilts.  But there are alternatives.  For example, there are lenses that themselves include swing and tilt controls.  The image plane of the camera remains fixed, but the plane of focus that falls on it has been changed by the lens.

A hyper-realistic view in which EVERYTHING is in focus. Clearing Winter Storm, Yosemite National Park, Ansel Adams (American, 1902–1984), about 1937. Photograph, gelatin silver print. The Lane Collection. © The Ansel Adams Publishing Rights Trust. Courtesy, Museum of Fine Arts, Boston.

There are also software techniques that do much of what view cameras with shifts, swings and tilts are able to do.  For example, there is focus stacking, in which multiple images, each focused at a different distance, are combined so that only the areas of each that are in actual focus are used.  This allows for shooting a landscape where the foreground and background are both in actual focus, or shooting tabletop product shots where objects closer to and farther from the lens are all presented as super sharp.
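The selection step at the heart of focus stacking can be sketched in a few lines. The code below is a minimal illustration, not any particular product’s algorithm: for each pixel it keeps the value from whichever frame is locally sharpest, using a discrete Laplacian as a crude sharpness measure. All names and the toy 8 × 8 test images are invented for the demo; real stacking software also aligns the frames and smooths the selection mask.

```python
import numpy as np

def sharpness_map(img):
    """Crude local sharpness: magnitude of a discrete Laplacian."""
    lap = np.zeros_like(img, dtype=float)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] -
                       4.0 * img[1:-1, 1:-1])
    return np.abs(lap)

def focus_stack(images):
    """Per pixel, keep the value from whichever frame is locally sharpest."""
    maps = np.stack([sharpness_map(im) for im in images])
    best = np.argmax(maps, axis=0)       # index of the sharpest frame per pixel
    stacked = np.stack(images)
    rows, cols = np.indices(best.shape)
    return stacked[best, rows, cols]

# Toy demo: frame A is sharp (checkerboard texture) in the top half,
# frame B in the bottom half; out-of-focus areas are featureless grey.
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
img_a = np.full((8, 8), 0.5)
img_a[:4] = checker[:4]
img_b = np.full((8, 8), 0.5)
img_b[4:] = checker[4:]
result = focus_stack([img_a, img_b])     # interior pixels take the sharp frame
```

At each interior pixel the stacked result carries the checkerboard texture from whichever frame was sharp there, which is exactly the landscape foreground-plus-background effect described above.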

Software also makes it possible to correct for distortion, for example restoring parallel lines when shooting upward at tall buildings, or correcting the distortion of very wide angle lenses.

There is apparent depth of field but only one point of actual focus.
The smaller the aperture, the more nearly parallel are the light rays that enter the camera and therefore the greater the apparent depth of field.


So even though most photographers don’t use view cameras, things like extending actual depth of field and compensating for distortion are available with modern cameras, lenses and software for those interested in adding them to their photographic toolbox.

But to begin with, it is important to realize there is really no such thing as actual depth of field – only apparent depth of field, where the lack of focus is so small that the eye tends to be fooled.

Camera lenses are available that allow for the use of shift and tilt controls much like those available on a view camera.
Nikon lens with shift and tilt controls.

*****************************************

MORE FROM WIKIPEDIA:

Effect of aperture on blur and DOF. The points in focus (2) project points onto the image plane (5), but points at different distances (1 and 3) project blurred images, or circles of confusion. Decreasing the aperture size (4) reduces the size of the blur spots for points not in the focused plane, so that the blurring is imperceptible, and all points are within the DOF.

Depth of field is the distance between the nearest and the furthest objects that are in acceptably sharp focus.[1] “Acceptably sharp focus” is defined using a property called the circle of confusion.

The depth of field is determined by focal length, distance to subject, the acceptable circle of confusion size, and aperture.[2] The approximate depth of field can be given by:

DOF ≈ 2u²Nc / f²

for a given circle of confusion, c, focal length, f, F-number, N, and distance to subject, u.[3][4]

As distance or the size of the acceptable circle of confusion increases, the depth of field increases; however, increasing the size of the aperture or the focal length reduces the depth of field. Depth of field changes linearly with F-number and circle of confusion, but in proportion to the square of the distance to the subject and inversely in proportion to the square of the focal length. As a result, photos taken at extremely close range have a proportionally much smaller depth of field.
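The scaling just described is easy to sanity-check in code. This sketch uses the standard thin-lens approximation DOF ≈ 2u²Nc/f², which is only valid when the subject distance is much greater than the focal length; the function name and the example numbers are illustrative, with c = 0.029 mm, a value commonly assumed for full-frame 35 mm sensors.

```python
def dof_mm(u_mm, N, f_mm, c_mm=0.029):
    """Approximate total depth of field: DOF ≈ 2·u²·N·c / f².

    u_mm: distance to subject, N: f-number, f_mm: focal length,
    c_mm: acceptable circle of confusion (0.029 mm is a common
    full-frame assumption). Valid only when u is much larger than f.
    """
    return 2.0 * u_mm**2 * N * c_mm / f_mm**2

# Doubling the f-number doubles the DOF (linear in N) ...
ratio_aperture = dof_mm(3000, 16, 50) / dof_mm(3000, 8, 50)
# ... doubling the subject distance quadruples it (proportional to u²) ...
ratio_distance = dof_mm(6000, 8, 50) / dof_mm(3000, 8, 50)
# ... and doubling the focal length cuts it to a quarter (inverse in f²).
ratio_focal = dof_mm(3000, 8, 100) / dof_mm(3000, 8, 50)
```

The three ratios come out to exactly 2, 4 and 0.25, mirroring the linear, squared and inverse-squared dependencies in the formula.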

Sensor size affects DOF only in that changing the sensor size on a camera requires changing the focal length to get the same picture. It is the change in focal length that then affects the DOF.[5][6][7]

For a given subject framing and camera position, the DOF is controlled by the lens aperture diameter, which is usually specified as the f-number (the ratio of lens focal length to aperture diameter). Reducing the aperture diameter (increasing the f-number) increases the DOF because only the light travelling at shallower angles passes through the aperture. Because the angles are shallow, the light rays are within the circle of confusion for a greater distance.[8]
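The near and far limits themselves can be computed from the standard hyperfocal-distance formulas (H = f²/(Nc) + f; focusing at H makes everything from about H/2 to infinity acceptably sharp). A sketch under those standard formulas, with function names and example numbers chosen for illustration:

```python
def hyperfocal_mm(f_mm, N, c_mm):
    """Hyperfocal distance H = f²/(N·c) + f."""
    return f_mm**2 / (N * c_mm) + f_mm

def dof_limits_mm(u_mm, f_mm, N, c_mm):
    """Near and far limits of acceptable sharpness for focus distance u.

    Standard formulas: near = u(H−f)/(H+u−2f), far = u(H−f)/(H−u),
    with far = infinity once u reaches the hyperfocal distance H.
    """
    H = hyperfocal_mm(f_mm, N, c_mm)
    near = u_mm * (H - f_mm) / (H + u_mm - 2.0 * f_mm)
    far = float("inf") if u_mm >= H else u_mm * (H - f_mm) / (H - u_mm)
    return near, far

# 50 mm lens focused at 3 m on full frame (c = 0.029 mm):
near_f8, far_f8 = dof_limits_mm(3000, 50, 8, 0.029)     # f/8
near_f16, far_f16 = dof_limits_mm(3000, 50, 16, 0.029)  # f/16: wider zone
```

Stopping down from f/8 to f/16 pushes the near limit closer and the far limit farther, exactly the effect the paragraph above attributes to the shallower ray angles.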

Points in an image produce a blur spot shaped like the aperture. Custom shaped apertures and bright points of light can be used artistically to produce images like this

Motion pictures make only limited use of this control; to produce a consistent image quality from shot to shot, cinematographers usually choose a single aperture setting for interiors and another for exteriors, and adjust exposure through the use of camera filters or light levels. Aperture settings are adjusted more frequently in still photography, where variations in depth of field are used to produce a variety of special effects.

Aperture = f/1.4. DOF=0.8 cm
Aperture = f/4.0. DOF=2.2 cm
Aperture = f/22. DOF=12.4 cm
Depth of field for different values of aperture using 50 mm objective lens and full-frame DSLR camera. Focus point is on the first blocks column.[9]


Precise focus is only possible at an exact distance from the lens;[a] at that distance, a point object will produce a point image. Otherwise, a point object will produce a blur spot shaped like the aperture, which is typically approximately circular. When this circular spot is sufficiently small, it is visually indistinguishable from a point, and appears to be in focus. The diameter of the largest circle that is indistinguishable from a point is known as the acceptable circle of confusion, or informally, simply as the circle of confusion. Points that produce a blur spot smaller than this acceptable circle of confusion are considered acceptably sharp.

The acceptable circle of confusion depends on how the final image will be used. It is generally accepted to be 0.25 mm for an image viewed from 25 cm away.[10]
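For the camera sensor itself, the acceptable circle of confusion is commonly derived from the sensor diagonal. One widespread rule of thumb (conventions vary; Zeiss has also used d/1730) divides the diagonal by 1500, which for a full-frame 36 × 24 mm sensor yields roughly the 0.029 mm figure often used in DOF calculators. A small sketch under that assumption:

```python
def coc_mm(sensor_diagonal_mm, divisor=1500.0):
    """Rule-of-thumb acceptable circle of confusion: diagonal / 1500."""
    return sensor_diagonal_mm / divisor

full_frame_diag = (36.0**2 + 24.0**2) ** 0.5   # about 43.3 mm
c_full_frame = coc_mm(full_frame_diag)         # about 0.029 mm
```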

For 35 mm motion pictures, the image area on the film is roughly 22 mm by 16 mm. The limit of tolerable error was traditionally set at 0.05 mm (0.002 in) diameter, while for 16 mm film, where the size is about half as large, the tolerance is stricter, 0.025 mm (0.001 in).[11] More modern practice for 35 mm productions set the circle of confusion limit at 0.025 mm (0.001 in).[12]

Scheimpflug principle.

The plane of focus is normally parallel to the image plane. However, moving the lens relative to the sensor can rotate the plane of focus.

When the plane of focus is rotated, the near and far limits of DOF are no longer parallel; the DOF becomes wedge-shaped, with the apex of the wedge nearest the camera.[13][14]

In some cases, rotating the POF can better fit the DOF to the scene, and achieve the required sharpness at a smaller f-number. Alternatively, rotating the POF, in combination with a small f-number, can minimize the part of an image that is within the DOF.

*******************************************************


Bill Dobbins is a pro photographer located in Los Angeles.  He is a veteran photographer and videographer who has exhibited his fine art in two museums and a number of galleries, and who has published eight books, including two fine art photo books:

The Women: Photographs of The Top Female Bodybuilders (Artisan)
Modern Amazons (Taschen)

WEBSITES

BILL DOBBINS PHOTOGRAPHY
www.billdobbinsphotography.com

BILL DOBBINS ART
www.billdobbinsart.com

FEMALE PHYSIQUE SITES
www.billdobbins.com

EMAIL: billdobbinsphoto@gmail.com

To have both lenses in focus, one technique would be focus stacking – combining two images, each with a different point of focus, and using only the areas actually in sharp focus.
