Physical Limits in Digital Photography
How does the physics of light limit camera design?
We’ve an article by photographer and physicist David B. Goldstein.
David looks at how basic physics limits some aspects of where digital camera design can go.
Note – an updated and considerably longer version of this paper is now available.
Digital photography has made immense progress over the past 15 years, particularly in increasing resolution and improving low-light performance.
But continued progress in these areas cannot go on indefinitely, because cameras are now approaching the limits imposed by the basic nature of light.
This article looks at the physics behind these limits. It shows how the fact that light behaves as both particle and wave puts caps on resolution and “film speed” that are not far beyond the performance of current high-end DSLRs, and that are already affecting point-and-shoot cameras.
Resolution is limited by the wave phenomenon of diffraction: the current generation of full-frame sensors begins to be diffraction-limited at about f/10 and above, and the best APS sensors at about f/7. More pixels can improve resolution significantly only at wider apertures, and only if new lenses are up to the task. Pocket cameras are diffraction-limited at f/2.4 and above, which is faster than all but one available lens.
About the Author
David B. Goldstein is a physicist who applies scientific methods to solving practical problems. Professionally he co-directs the Energy Program of the Natural Resources Defense Council (NRDC). He contributes directly to NRDC blogs. He is the author of Saving Energy, Growing Jobs and can receive email at: firstname.lastname@example.org
He has been a photographer since the mid 1960s and was photography editor of The Daily Californian at the University of California in 1970. He has remained an active photo enthusiast.
David in photographer mode…
High ISO performance is limited by the number of photons available per pixel. ISOs above about 3200 will always result in noise, no matter how much sensors are improved. Actually, the dynamic range of current DSLRs implies that noise in the deepest shadows is limited by photon count even at low ISOs.
These limits change the nature of the trade-offs that photographers have been making concerning film or sensor size, depth of field, and sensitivity.
At the heart of a digital camera is an array of several million photon detectors, arranged in a pattern that allows the user to see:
- The particle nature of light. By looking at the variation in colour and light level from pixel to pixel when the sensor is illuminated by a uniform source, the user is seeing the varying photon counts; and
- The wave nature of light. This is visible as diffraction: a fuzzy, low-contrast image.
Indeed, both of these phenomena are so pronounced that they limit the “film speed” and resolution of digital cameras to levels within a factor of two or so of the performance of current high-quality product offerings.
What is the limit on ISO speed from fundamental physics?
The calculation in the downloadable PDF version of this paper looks at the number of photons incident on a pixel and shows that for full frame 35mm cameras with 21-24 megapixels the number of photons per pixel in the darkest shadows resolved by the sensor is only about 6.
The number of photons is not the same for each pixel, because photon detection is random: randomness is inherent in how photons are measured. If the mean number of photons received is 6, the standard deviation is √6; this can also be expressed as a signal-to-noise ratio of √6 ≈ 2.4. Noise will be a big problem in the deep shadows and a significant problem even in milder shadows.
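The square-root relationship above can be checked in a few lines of Python. This is a sketch only; the 6-photon figure comes from the article's own calculation, not from anything derived here.

```python
import math

def shot_noise_snr(mean_photons: float) -> float:
    # Photon arrival is Poisson-distributed, so the standard deviation of
    # the count equals sqrt(mean), and SNR = mean / sqrt(mean) = sqrt(mean).
    return math.sqrt(mean_photons)

# Deep-shadow pixel on a 21-24 MP full-frame sensor: ~6 photons (per the text)
print(round(shot_noise_snr(6), 1))   # → 2.4
```

The same function shows why pocket cameras fare worse: with fewer than 4 photons per deep-shadow pixel, the SNR drops below 2.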
These issues are plainly visible to the photographer. See Figure 1 (right) for an example that shows evident shadow noise even at ISO 800.
Image is 100% crop from a Canon 50D test image – this is discussed in more detail in the full version of the article.
Speed can only come at the expense of pixel count (by making each pixel bigger) or at the expense of the dynamic range expected of quality photography.
For the smaller sensors typical of moderately priced interchangeable-lens cameras, signal-to-noise ratios will be slightly (but noticeably) worse than full frame.
A typical point-and-shoot camera provides less than 1 photon per pixel in the darkest shadows at ISO 3200 and less than 4 at ISO 800.
This is so troublesome that we can see why pocket cameras may not even offer ISO 1600 and have visible problems with noise at 400 and even lower. (The cameras that offer high ISO (3200 to 6400) do so by combining pixels, effectively meaning that there are about 1/4 as many of them as you would expect.)
It is well known (see the PDF for calculations and discussion) that waves diffract when they pass through an aperture of finite size. Light coming from a single point spreads out and creates a blurred circle.
Full frame 35mm sensors
For a typical lens set at f/10, the calculation shows that about 2000 lines [the word “lines” here refers to cycles of a sine wave, or line pairs] in the vertical dimension is the most that could ever be resolved without noticeable loss of contrast. This is significant because a vertical resolution of 2000 corresponds to 21-24 megapixels, a value already reached by state-of-the-art cameras. It implies that making full use of the sensors in cameras already in production requires apertures wider than about f/10. And since the quality of the optics limits the resolution of most lenses today at apertures much wider than f/5.6 or f/8, there are steeply diminishing returns for sensors of more than about 25-35 megapixels in a 35mm camera, a limit that will hold until there are breakthroughs in optics that perform better at f/4 than today's lenses do at f/10.
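The scaling can be sketched with a cruder criterion than the one in the PDF: count the line pairs that fit across the sensor height if each line pair spans one Airy-disk diameter. The 1.22 factor is the first minimum of the Airy pattern, and 550 nm green light is assumed; the result lands close to the ~2000 lines cited above.

```python
def diffraction_line_pairs(sensor_height_mm: float, f_number: float,
                           wavelength_um: float = 0.55) -> float:
    # Line pairs resolvable across the sensor height when each line pair
    # spans one Airy-disk diameter (2 x the first-minimum radius 1.22*lambda*N)
    airy_radius_um = 1.22 * wavelength_um * f_number
    return sensor_height_mm * 1000.0 / (2.0 * airy_radius_um)

# Full-frame sensor (24 mm tall) at f/10
print(round(diffraction_line_pairs(24, 10)))   # → 1788
```

Squaring the line count (and multiplying by the 3:2 aspect ratio) turns this into a megapixel figure, which is how the 21-24 MP comparison above is made.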
Smaller sensors for interchangeable lens cameras
The pixel size for a smaller sensor is about 20% smaller, so diffraction effects begin to be visible at about f/7 or f/8 instead of f/10.
Typically the sensor is 1/5.5 the size of 35mm film, so the onset of visible diffraction for 12 megapixels occurs at f/2.4. Since only one small-sensor camera currently has a lens that fast, small cameras are effectively always diffraction-limited, and megapixel counts much above the 12 currently offered are almost pointless. This also explains why typical small cameras do not even allow f-stops smaller than f/8: at f/8 the onset-of-diffraction limit is 450 lines, corresponding to about 1.5 megapixels.
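The same back-of-the-envelope formula (line pairs across the sensor height, one Airy-disk diameter per pair, 550 nm light assumed) reproduces the pocket-camera numbers to within the roughness of the criterion, taking the sensor height as 24 mm / 5.5 ≈ 4.4 mm.

```python
def diffraction_line_pairs(sensor_height_mm: float, f_number: float,
                           wavelength_um: float = 0.55) -> float:
    # Line pairs across the sensor height, one Airy-disk diameter per pair
    return sensor_height_mm * 1000.0 / (2.0 * 1.22 * wavelength_um * f_number)

pocket_height_mm = 24 / 5.5   # sensor 1/5.5 the linear size of 35mm film

print(round(diffraction_line_pairs(pocket_height_mm, 2.4)))  # → 1355
print(round(diffraction_line_pairs(pocket_height_mm, 8.0)))  # → 406
```

The f/2.4 figure sits right at the Nyquist limit of a 12 MP sensor (~1500 line pairs vertically), and the f/8 figure is in the same ballpark as the 450 lines quoted above.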
I have also observed this effect myself: pictures taken at f/8 are visibly, disappointingly less sharp than those taken at wider apertures. I have started using a pocket camera with manual override to ensure that I use apertures wider than about f/5, and preferably much wider, whenever possible.
Large and Medium Format Cameras
A 4×5 (inch) film camera has about 4 times the resolving power (in terms of diffraction limit) of a 35mm camera at the same f-stop. But most large format cameras have slow lenses and are used stopped down for depth-of-field reasons, so this increase usually is not fully realized. So if we wanted our 4×5 camera to perform well at f/32, we would only get 25% greater resolution than a 35mm camera at f/10.
Both this discussion of waves and the previous section on photons depend only on the wavelengths and energy levels of light and on lens and sensor dimensions, so they apply equally to human and animal vision. Diffraction would appear to impose a hard limit on how eagle-eyed an eagle can be, and the availability of photons limits what an owl can see in the dark.
A. Physics Conclusions
Technology today is approaching or even at the limits of resolution, film speed, and image size imposed by the laws of physics for cameras in the 35mm format or smaller. Further significant improvements in speed or resolution will require the use of larger sensors and cameras.
Thus today’s equipment is powerful enough to let a physicist see directly (without instruments or calculations beyond the camera itself and a computer monitor) the effects of both the photon nature and the wave nature of light. A single picture taken at f/22 and ISO 3200 will display uniform unsharpness unrelated to motion blur or focus error, as well as noise patterns in each primary colour and in overall brightness that reflect the statistical nature of photon detection.
B. A Photographer’s Conclusions
Currently available products are pushing the limits of what is physically possible in terms of resolution and low-light performance. A photographer must therefore relearn how to choose what size of camera to use for which assignment, and what trade-offs to make when shooting: aperture versus depth of field, and film speed versus image quality. Rules of thumb developed in the film age will give results that differ, both qualitatively and quantitatively, from what intuition suggests.
Larger equipment and sensor size mean better pictures
As equipment pushes closer to the diffraction limit, image quality is directly proportional to image size. Equipment size scales with image size: by simple dimensional analysis, the weight of equipment should scale as the third power of image size. Better pictures require heavier equipment to an even greater extent than was true in the past.
Depth of field
Depth of field at a given f-stop is inversely proportional to image size. In the past, photographers got around this by using smaller apertures on large cameras: a setting of f/64 on an 8×10 inch camera produced the same depth of field as f/8 on a 35mm camera, but allowed greater resolution. With today's higher sensor resolution for 35mm digital cameras, however, both cameras sit at the diffraction limit measured in angular terms, so there is no longer a real advantage to the large format at those settings. Taking advantage of the larger film size now requires larger apertures, limiting creative freedom for 8×10 users who want more resolution in return for the weight and inconvenience.
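The format-scaling rule in this paragraph can be written down directly. This is a sketch under the usual assumptions (same final print size and viewing distance), using the roughly 8× linear size ratio between 8×10 film and 35mm implied by the f/64 vs f/8 comparison above.

```python
def equivalent_f_number(f_number: float, linear_size_ratio: float) -> float:
    # Depth of field at a fixed f-stop scales inversely with image size,
    # so matching DoF across formats means scaling the f-number by the
    # ratio of the formats' linear dimensions.
    return f_number * linear_size_ratio

# f/8 on 35mm gives the same depth of field as f/64 on 8x10 (~8x larger)
print(equivalent_f_number(8, 8))   # → 64
```

The article's point is that the larger format can no longer afford that scaled-down aperture: at f/64 its diffraction blur, measured in angular terms, matches the 35mm camera's at f/8, erasing the resolution advantage.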
What does speed trade off against?
With film, sensitivity traded off against many parameters: resolution, colour accuracy, and grain being the most significant. With digital, the only trade-off of importance is with noise. And even this trade-off refers mainly to noise in the highlights and middle tones, where it is not a big aesthetic problem even at ISOs of 800 or 3200 or even higher (depending on the aesthetic sense of the viewer). Looking at the performance of current best-in-class cameras, higher film speed comes mostly (or in some cases almost entirely) at the expense of dynamic range. While this trade-off may be due in part to engineering choices made by the manufacturer, much of it is fundamental: at the highest dynamic range currently available, even at ISO 100 the noise in the shadows is pushing the limits of what is acceptable artistically.
Moving from Film
Digital equipment performs much better than film of the same sensor size, which is why a discussion of the limits imposed by the physics of light was not interesting in the past. This better performance, which to the author's eyes is a factor of 3 to 5 compared to the best 35mm slide film, means that past rules of thumb and trade-offs must be reconsidered. For example, a rule of thumb for hand-holding the camera used to be that the shutter speed must be at least the reciprocal of the lens focal length: a 50mm lens could only be hand-held reliably at 1/50 second. But if the image can be 3-5 times sharper than film, then a roughly three-times-faster shutter speed is needed. Similarly, since depth of field is based on the smallest blur that can be treated as a point, it is now 3-5 times shallower. On top of that, the small apertures that photographers used to rely on for large depth of field are no longer usable because of diffraction. This is even more the case for smaller sensors: for pocket cameras, sharpness demands the widest aperture available. Depth of field is no longer the creative choice that it used to be.
Update 30th June 2009
David has been following the comments about the article, and will be writing an updated version of the full PDF article addressing some of the comments below.
In the meantime he’s said:
“I want to thank all of the commenters for their thoughtful and thought-provoking technical responses. I have analyzed them at some length, and will provide a revised version of the paper in response. This revision will most likely still reach the same numerical conclusions but will be clearer about both the conditions and caveats that underlie them.
My efforts to develop responses to the comments led me to the conclusion that there are some significant additional factors that should be taken into account in explaining the basis of the broad results in the article. The most important clarification is that the limits imposed by diffraction as I have specified them are not the hard limits, but rather refer to the onset of visible reductions in image quality of ordinary photographs. These problems get worse as apertures decrease beyond the thresholds I identify, but diffraction still allows resolution or contrast data to be gained as the sensor increases from ~24 megapixels potentially to the ~200 megapixel limit suggested by John Green. (However, the last factor-of-two [100 to 200 megapixels] makes at best a very small difference unless we are looking at a signal that is a very deep green and beyond the color gamut of sRGB and Adobe RGB).
The conclusion on the practical limit to megapixels will likely be revised to say that above the stated levels we will see rapidly declining returns to higher megapixel counts rather than allowing the reader to infer that more megapixels are useless.
The primary calculational basis for this conclusion is the case where two points in the signal are separated by two pixels—the Nyquist frequency—and we want the first minima of the two Airy disks to coincide. This case (already in the paper) will be presented first, to make clear that it is the most interesting and relevant one: its predictions are corroborated by real pictures, both those displayed in the article and others I have taken casually or as tests. It also agrees with observations other photographers have published online, and with the numerous lamentations among camera reviewers and bloggers about the megapixel race in pocket cameras.
I will add clarification on the issue of noise. Noise is important not only to prevent grainy appearance but because it eliminates visual information from the picture that cannot be recovered with accuracy by post-processing. This clarification embraces both noise generated by photon statistics and noise generated by point subjects being rendered as reduced-contrast blurs.
I will add more detail and examples related to color. Color is a complex issue, because it is not the same thing as wavelength of light. All pure wavelengths are a mix of at least two primary colors, and all but one, or perhaps three (depending on which version of the chromaticity diagram you use), are a mix of all three; any mixture will therefore contain all three colors. Since at the diffraction limit the color changes from each pixel to its neighbours, this consideration now seems more important than I (or the commenters, or anyone else I have read) had thought.
This observation, which is missing from my current draft, appears to lead to some very interesting and complex interactions between the Bayer filter, the anti-aliasing filter, and the sensor’s resolution limit and artefacting problems. These do not seem to have been discussed quantitatively before now, so it will take me a little time to analyze them. It seems that I will essentially be reverse-engineering an anti-aliasing filter. And of course I will welcome comments on the derivation when I present it: since it appears to be new information (or a rediscovery of information not in the public domain), the potential for error is higher than usual. The preliminary result is that a strong anti-aliasing filter will create severe artefacting for strongly colored (in the sense of the dominant color coordinate being on the order of .7) subjects, while a weak one will usually avoid this problem while also mitigating the color-insensitive aliasing. This result would imply that the AA filter must allow some information content at above-Nyquist frequencies to pass through.
Once again, thanks to everyone for the comments. I hope they will lead us all to new insights.”
Update 18th August
An updated and considerably longer version of this paper is now available.
This not only provides some of the mathematical background, but considers many more factors that may be of relevance. If you've found the article above of interest, it's well worth a read. David says he welcomes comments and discussion…