
Physics and digital camera design limits

Looking at how fundamental physics could limit some aspects of camera design

Some update notes on our original article about physical design limits



Some of the physical limits in digital photography

We’ve just published a lengthy new article on the site. This short introduction dates from before the site had comment features on articles, so its comments appear here.

How does the physics of light limit camera design?

This article represents a bit of a sea change for our site, in that it’s by a guest author – David Goldstein from San Francisco.

We’re looking at expanding the range of photographic / technical articles, and there is a limit to how many I can write while also running Northlight Images’ photography business – or, as I sometimes call it, the ‘real work’.

Have a read of David’s article (the extended PDF version is worth the effort of working through the maths, even if it’s not your strong point) and do feel free to add a comment here with your opinions.


 

Update 30th June 2009

David has been following the comments on the article, and will be writing an updated version of the full PDF article addressing some of the comments below.

In the meantime he’s said:

“I want to thank all of the commenters for their thoughtful and thought-provoking technical responses. I have analyzed them at some length, and will provide a revised version of the paper in response. This revision will most likely still reach the same numerical conclusions but will be clearer about both the conditions and caveats that underlie them.

My efforts to develop responses to the comments led me to the conclusion that there are some significant additional factors that should be taken into account in explaining the basis of the broad results in the article. The most important clarification is that the limits imposed by diffraction as I have specified them are not hard limits, but rather refer to the onset of visible reductions in image quality in ordinary photographs. These problems get worse as apertures decrease beyond the thresholds I identify, but diffraction still allows resolution or contrast data to be gained as the sensor’s pixel count increases from ~24 megapixels potentially to the ~200 megapixel limit suggested by John Green. (However, the last factor of two [100 to 200 megapixels] makes at best a very small difference unless we are looking at a signal that is a very deep green, beyond the color gamut of sRGB and Adobe RGB.)

The conclusion on the practical limit to megapixels will likely be revised to say that above the stated levels we will see rapidly declining returns to higher megapixel counts rather than allowing the reader to infer that more megapixels are useless.

The primary calculational basis for this conclusion is the case where we are looking at a signal in which two points are separated by two pixels—the Nyquist frequency—and we want the first minima of the two Airy disks to coincide. This case (which is already in the paper) will be presented first to clarify that it is the most interesting or relevant one. It is most interesting because its predictions are corroborated by real pictures, both the ones displayed in the article and other ones that I have taken casually or as tests. It also agrees with observations other photographers have published online (for example, the numerous lamentations among camera reviewers and bloggers about the megapixel race in pocket cameras).
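
As a quick check of the arithmetic behind this criterion – a minimal sketch, assuming 550nm green light at f/9 (our assumed values, not taken from the paper):

```python
# Diffraction-limited pixel count for a 36 x 24 mm sensor, using the
# criterion above: two point images two pixels apart, with the first
# minima of their Airy disks coinciding at the midpoint.
wavelength_mm = 550e-6                            # assumed: 550 nm, in mm
f_number = 9.0                                    # assumed: f/9
airy_radius_mm = 1.22 * wavelength_mm * f_number  # radius of first Airy minimum

# The minima coincide midway when the point separation equals twice the
# Airy radius; with that separation spanning two pixels, pitch = one radius.
pitch_mm = airy_radius_mm                         # ~0.006 mm (about 6 um)
megapixels = (36.0 / pitch_mm) * (24.0 / pitch_mm) / 1e6
print(f"pitch {pitch_mm * 1000:.2f} um -> {megapixels:.0f} MP")  # ~24 MP
```

Under those assumptions the pitch comes out at about 6 microns, or roughly 24 megapixels on a full-frame sensor – consistent with the figure David quotes above.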

I will add clarification on the issue of noise. Noise is important not only because it produces a grainy appearance, but because it eliminates visual information from the picture that cannot be recovered accurately by post-processing. This clarification embraces both noise generated by photon statistics and noise generated by point subjects being rendered as reduced-contrast blurs.
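
To illustrate the photon-statistics side of this – a minimal sketch with a hypothetical full-well photon count (illustrative numbers, not from the paper):

```python
# Photon shot noise: a pixel collecting N photons has SNR = sqrt(N),
# so quartering the pixel area (halving the pitch) halves per-pixel SNR.
import math

full_well = 40000                      # photons; hypothetical large-pixel value
for scale in (1, 2, 4):                # pitch divided by 1, 2, 4
    n = full_well / scale**2           # photon count falls with pixel area
    print(f"pitch/{scale}: {n:7.0f} photons, SNR = {math.sqrt(n):5.1f}")
```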

I will add more detail and examples related to color. Color is a complex issue, because it is not the same thing as wavelength of light. All pure wavelengths are a mix of at least two primary colors, and all but one (or perhaps three, depending on which version of the chromaticity diagram you use) particular wavelengths are a mix of all three. Any mixture of wavelengths will therefore contain all three colors. Since, at the level of diffraction limits, the color can change from one pixel to each of its neighbours, this consideration now seems more important than I (or the commenters, or anyone else I have read) had thought.
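
The ‘deep green beyond the sRGB gamut’ point mentioned earlier can be checked directly. A minimal sketch: converting the CIE 1931 chromaticity of monochromatic 520nm light to linear sRGB with the standard matrix gives a negative red component, i.e. the colour cannot be represented:

```python
# Monochromatic 520 nm light expressed in linear sRGB falls outside the
# gamut: the red component comes out negative.
x, y = 0.0743, 0.8338          # CIE 1931 xy for the 520 nm spectral locus
Y = 1.0                        # arbitrary luminance
X, Z = x / y * Y, (1 - x - y) / y * Y

# Standard XYZ -> linear sRGB conversion matrix (IEC 61966-2-1)
r = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
b = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
print(f"linear sRGB = ({r:.2f}, {g:.2f}, {b:.2f})")  # r is about -1.3
```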

This observation, which is missing from my current draft, appears to lead to some very interesting and complex interactions between the Bayer filter, the anti-aliasing filter, and the sensor’s resolution limit and artefacting problems. These do not seem to have been discussed quantitatively before now, so it will take me a little time to analyze them. It seems that I will essentially be reverse-engineering an anti-aliasing filter. And of course I will welcome comments on the derivation when I present it: since it appears to be new information (or a rediscovery of information not in the public domain), the potential for error is higher than usual. The preliminary result is that a strong anti-aliasing filter will create severe artefacting for strongly colored (in the sense of the dominant color coordinate being on the order of 0.7) subjects, while a weak one will usually avoid this problem while also mitigating the color-insensitive aliasing. This result would imply that the AA filter must allow some information content at above-Nyquist frequencies to pass through.
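
One way to see the above-Nyquist point: an idealised one-axis birefringent AA filter splits a point into two spots a distance s apart, giving an MTF of |cos(pi f s)|. A minimal sketch (our simplification, pixel pitch normalised to 1) shows that a full-strength split (s = one pitch) nulls response exactly at Nyquist, while a weaker split leaves contrast there and above:

```python
import numpy as np

p = 1.0                              # pixel pitch (normalised)
nyquist = 1 / (2 * p)                # 0.5 cycles/pixel
f = np.linspace(0, 2 * nyquist, 5)   # 0 .. 2x Nyquist
for s in (1.0 * p, 0.7 * p):         # strong vs weak beam split
    mtf = np.abs(np.cos(np.pi * f * s))
    print(f"split {s:.1f}p: MTF {np.round(mtf, 2)}")  # s = 1p: zero at Nyquist
```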

Once again, thanks to everyone for the comments. I hope they will lead us all to new insights.”

BTW, if there’s anyone reading who feels like writing an article, do let me know what you’re thinking of and we can see how it fits in with what’s already on the site (see the writing an article page for a bit more info). This site had over 5 million visitors last year, so your article will certainly get noticed – we’ll sort out web layout etc. You just need to supply text and some images. Of course, all submitted content remains copyright of the author.

Keith Cooper


7 Comments
  • David Graham | Jul 12, 2009 at 1:54 am

    One problem with the math in the PDF is the assumption that a point will reflect equally through a half-hemisphere. A diffuse white reflection from a limited flat surface (like an 8.5×11 inch sheet of paper) appears equally bright from various angles, but does not present the same apparent width (solid angle) when viewed from different angles.

    This indicates that the light energy is distributed in proportion to the cosine of the angle from the perpendicular, not equally. In short, more light is passing through the aperture than the PDF indicates.

    Thanks,
    David Graham
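
    A quick numerical check of this cosine argument – a minimal sketch comparing a Lambertian (cosine-weighted) emitter with the uniform-hemisphere assumption, for a lens subtending a cone of half-angle a at the subject:

    ```python
    # Fraction of a point's reflected light entering a cone of half-angle a:
    #   Lambertian (cosine-weighted): sin(a)^2
    #   uniform over the hemisphere:  1 - cos(a)
    # The ratio tends to 2 for small apertures, so the uniform assumption
    # undercounts the light passing through the lens by up to a factor of two.
    import math

    for deg in (5, 15, 30):
        a = math.radians(deg)
        lambertian = math.sin(a) ** 2
        uniform = 1 - math.cos(a)
        print(f"{deg:2d} deg: ratio = {lambertian / uniform:.2f}")  # ~2, 1.97, 1.87
    ```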

  • Mark | Jul 10, 2009 at 12:55 pm

    I would recommend those interested in these matters to explore Roger Clark’s site. An excellent introduction to the analysis of sensor performance is here:

    http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/

    with lots of references on how the data were obtained.

    I’m not sure that I agree with David Goldstein about limits of low light performance or resolution criteria. The role of image processing has been largely ignored in his paper, yet there are clear differences in the performance of RAW converters depending on the algorithms they employ – some are illustrated at clarkvision with particular reference to extracting shadow detail. So far as diffraction is concerned, the Rayleigh criterion is far from the only choice that could be made – there are also the Dawes and Sparrow criteria:

    http://www.licha.de/astro_article_mtf_telescope_resolution.php

    Armed with deconvolution algorithms, such as the Richardson-Lucy adaptive algorithm, much more detail can be reconstructed (there is an interesting example at clarkvision). With highly specialised processing you could go further. Nyquist limits apply only if you assume you have no prior knowledge of the data being analysed. They also assume that you are making point samples of the data, rather than taking an average reading over an interval – a pixel has positive dimensions. Once you are able to narrow down the nature of the subject, you can design a purpose-built algorithm that can extract detail well beyond the Nyquist limit. A very simple example would be interpolating the precise location of a sharp boundary between dark and light areas, where sub-pixel resolution is easy to achieve (indeed only a few pixels are required to determine a linear boundary that may extend a considerable distance, just as only two points are required to define a line in geometry).
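
    The sharp-boundary example is easy to demonstrate. A minimal sketch (noise-free, area-averaging pixels; the edge position is made up for illustration):

    ```python
    import numpy as np

    true_edge = 4.30                             # edge position, in pixel units
    x = np.arange(10)
    samples = np.clip(true_edge - x, 0.0, 1.0)   # each pixel averages the step

    i = int(np.argmax((samples > 0) & (samples < 1)))  # the one partial pixel
    estimate = i + samples[i]                    # fractional coverage -> position
    print(f"estimated edge at {estimate:.2f} px (true {true_edge})")  # 4.30
    ```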

    Incidentally, the anti-alias filter uses cross-oriented birefringent layers that spread a notional point into a square. The performance of all the sensor optics becomes more problematic at wide apertures because of the wide range of angles at which the light is incident. Problems range from vignetting by the microlenses, through light bouncing around between the various sensor optical elements, to the diffraction grating formed by the regular pattern of microlenses.

  • Keith | Jul 9, 2009 at 4:05 pm

    Thanks very much for your comments John – I’ve passed them on to the author.

    As I’ve mentioned, having additional articles on the site is a new step for us, so I’m glad to see such considered responses.

  • John Green | Jul 9, 2009 at 3:48 pm

    I just noticed the PDF version of this article and it corrects the glaring error in the synopsis of ignoring the Nyquist criterion, but it leaves intact the incorrect conclusion that we have already reached the pixel size limit. The PDF concentrates only on effects that support the conclusion that the pixel limit has been reached while ignoring the reasons that it hasn’t. Here are some examples:

    The PDF points out that at the Rayleigh criterion spacing the contrast is low, without mentioning that sharpening can be used to boost the contrast. Sharpening does increase the visibility of noise, but that effect is not a large problem at base ISO.

    The PDF contains an argument that a spacing coarser than Nyquist is acceptable because there remains some sampled signal – without realizing that the aliased false detail that results is completely wrong: it has the wrong number of cycles in a sine wave and causes stair-stepping of fine lines. Proper artifact-free imaging requires that the Nyquist limit be met, so the statement that a lower sampling rate is acceptable is simply false.
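
    The ‘wrong number of cycles’ effect is easy to reproduce. A minimal sketch: a 0.7 cycle/pixel sine sampled once per pixel (Nyquist is 0.5) yields exactly the samples of a 0.3 cycle/pixel sine:

    ```python
    import numpy as np

    x = np.arange(20)                            # one sample per pixel
    sampled = np.sin(2 * np.pi * 0.7 * x)        # true detail: 0.7 cycles/pixel
    alias = np.sin(2 * np.pi * (0.7 - 1.0) * x)  # folds to 0.3 cycles/pixel
    print(np.allclose(sampled, alias))           # True: indistinguishable
    ```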

    The PDF also doesn’t properly consider that recovering artifact-free detail from a Bayer array means that the pixel count needs to be doubled compared to a monochrome sensor, because unambiguous luminance information is available at only half the pixel sites and unambiguous color information at only a quarter of them. Raw converters that attempt to recover detail above these counts generally only succeed for test charts, while sometimes generating ugly fine-scale digital artifacts for real-world images.

    The problem of some colors having different resolutions is a non-issue for color sensors since each pixel contains a filter preventing it from sensing other wavelengths. Mismatch of color scale and focus is caused primarily by lens chromatic aberrations but these can be mostly corrected in software and color details are only measured in a Bayer sensor at half the pixel pitch anyway.

    An example of the effect of the Bayer sensor’s anti-aliasing filter is attempted without justification for the poor low pass effect of the filter coefficients chosen. A better analysis would start with the frequency response of a filter that actually prevents aliasing before attempting to show its effect. The result would show the advantage that increasing pixel count has for reducing the softening effect of the low pass filter.

    The PDF also mentions that binning of small pixels can be used to increase dynamic range, without realizing that the same effect happens even without binning, through area averaging by the eye when viewing a very fine pixel pitch print. That is why the dynamic range used should be adjusted for pixel pitch, as it is for the print dynamic range at the DXOmark site. Notice that the camera with the greatest dynamic range is the Nikon D3x, which also happens to be one of the full frame cameras with the highest pixel count. When adjusted for sensor size, though, we find the Fuji FinePix S100FS with only 2.3 micron pixels has one stop more dynamic range per area than the D3x. (Care must be used when comparing DXOmark values for sensors of different area; quadrupling sensor area increases print dynamic range by one stop.)

    Putting together all of the effects above means that for the best base ISO performance with sharpening we need a sensor with twice the pixel count of a Nyquist-spacing sensor at the Rayleigh criterion, which results in a 192-megapixel sensor at f/9. Of course this would only be best if the quantum efficiency can be kept high and the read noise kept low enough, but technology improvements should eventually achieve these.
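
    For reference, a minimal sketch of that arithmetic (assuming 550nm light – the exact wavelength isn’t stated above): the Rayleigh separation at f/9, sampled at two pixels per line pair, then doubled for the Bayer array:

    ```python
    wavelength_mm = 550e-6                          # assumed green wavelength
    f_number = 9.0
    rayleigh_mm = 1.22 * wavelength_mm * f_number   # resolvable separation, ~6 um
    pitch_mm = rayleigh_mm / 2                      # Nyquist: two samples per cycle
    mono_mp = (36.0 / pitch_mm) * (24.0 / pitch_mm) / 1e6
    print(f"mono ~{mono_mp:.0f} MP, Bayer x2 ~{2 * mono_mp:.0f} MP")  # ~95, ~190
    ```

    This lands close to the 192-megapixel figure; the exact number depends on the wavelength chosen.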

  • John Green | Jul 9, 2009 at 2:14 pm

    Unfortunately the math in this article is not correct. The first glaring error concerns the difference between line pairs and pixel pitch. It takes two pixels to sample a line pair cycle (the Nyquist criterion), so “at f/10 4000 lines [The word “lines” refers here to cycles of a sine wave or line pairs.] in the vertical dimension” corresponds to 8000 vertical pixels, not 4000 – for 96 megapixels on a full frame camera. All the other diffraction-limited pixel counts need similar corrections.

    There is a second error in the analysis of the effect of pixel pitch on dynamic range. To compare the dynamic range between two same size sensors with different pixel pitches it is necessary to calculate the dynamic range for an equivalent part of the image (which would include a different number of pixels for the two sensors). If one sensor has half the pixel pitch of the other the fine pitch sensor has four pixels in the area of a single coarse pitch pixel. When the saturation limits and noise floors are correctly adjusted for this difference the dynamic range of the fine pitch sensor over the area of a coarse pixel is one stop greater than it is over just a single fine pitch pixel. Using this adjustment with real sensors we find that the finest pixel pitch sensors have the greatest base ISO dynamic range per sensor area.
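
    The one-stop adjustment follows from how signal and noise combine over an area. A minimal sketch with hypothetical per-pixel numbers:

    ```python
    import math

    sat, noise = 10000.0, 5.0             # electrons; illustrative values only
    dr_pixel = math.log2(sat / noise)     # per-pixel dynamic range, in stops

    sat_4 = 4 * sat                       # 2x2 fine pixels: signal adds linearly
    noise_4 = math.sqrt(4) * noise        # uncorrelated noise adds in quadrature
    dr_area = math.log2(sat_4 / noise_4)
    print(f"per pixel: {dr_pixel:.1f} stops, per 2x2 area: {dr_area:.1f} stops")
    ```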

  • Don Green | Jul 8, 2009 at 8:27 pm

    Great article. Please make it required reading for the marketing departments of all camera manufacturers. Then perhaps they will forget the megapixel race, and start letting the designers make cameras that can be used to take better photos – i.e. fewer megapixels, faster lenses, and larger sensors (especially for compact cameras). In other words, exactly what most photographers have been asking for!

  • Robin Sinton | Jul 8, 2009 at 8:04 am

    A really good, thorough article. It follows up on even more technical articles by Roger Clark (www.clarkvision.com). One thing that has always puzzled me, however, is the concern with high ISO noise. Whilst there is always a case for technical perfection in purely commercial photography, such as the work that Keith does, in aesthetic terms grain is not always a bad thing. In fact a little grain can assist if you want to up-res. An excellent article on this is “The Art of the Up-res” by Jeff Schewe, in which he advocates adding some noise to give better results when up-ressing. Anyone who has ever used Tri-X and pushed it knows all about high ISO noise. I personally find that the digital image is so clean that a little added noise (grain) often gives a more aesthetically pleasing result, especially with black and white. Apart from that, David Goldstein’s article gives a refreshing scientific view of something that most photographers take for granted.
