Photo Friend questions & answers

Q. What does EV or Exposure Value stand for?

A. Exposure Value is an absolute unit of measurement for light, suited to photography.
It is also known as LV or EV_{100}. An EV of 15 corresponds to a 'perfect' sunny day.
The Sun delivers about 1.05kW per square meter, but only 43% of that is visible light.
EV is related to both the power and the color of the light source.
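For concreteness, EV ties together the exposure settings and the scene light roughly like this (a minimal sketch; the function names are mine, and the lux conversion assumes the common incident-meter calibration constant C = 250, i.e. lux ≈ 2.5 × 2^EV):

```python
import math

def ev_from_exposure(f_number, shutter_s, iso=100):
    """EV referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100)

def ev_from_lux(lux):
    """Incident-light EV, assuming lux ~= 2.5 * 2^EV (C = 250)."""
    return math.log2(lux / 2.5)

# Sunny 16 rule: f/16 at 1/125s, ISO 100, on a sunny day (~82,000 lux)
print(round(ev_from_exposure(16, 1/125)))  # 15
print(round(ev_from_lux(82000)))           # 15
```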

Q. What is better for light metering: the camera or the light sensor?

A. It depends a lot on the phone, namely the quality of the components. For scenes with illumination within the ranges of both sensors, EV estimation should be very similar for reflected light and incident light (give or take one stop). It is possible that one sensor is more capable than the other at very low light (below EV2); try them out to see which one goes further down. The light sensor may also have a maximum reading of EV14, so it might be fooled by e.g. a sunny snow scene (EV16).

Q. How is depth-of-field (DoF) estimated?

A. The app uses the classic "exact" formula that can be found in many places, including Wikipedia.
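For reference, the "exact" formula can be sketched like this (my own naming and units, not the app's actual code):

```python
def dof_limits(focal_mm, f_number, subject_mm, coc_mm):
    """'Exact' thin-lens DoF limits, as given on Wikipedia:
    near/far = s * f^2 / (f^2 +/- N*c*(s - f))."""
    f, N, s, c = focal_mm, f_number, subject_mm, coc_mm
    near = s * f**2 / (f**2 + N * c * (s - f))
    denom = f**2 - N * c * (s - f)
    # Past the hyperfocal distance the far limit goes to infinity
    far = s * f**2 / denom if denom > 0 else float("inf")
    return near, far

# 50mm lens at f/8, subject at 3m, full-frame CoC of 0.025mm
near, far = dof_limits(50, 8, 3000, 0.025)
# near ~= 2.43m, far ~= 3.93m
```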

Q. Why are the DoF values in your app not equal to this XYZ DoF table that I have here?

A. Every DoF calculator and DoF table may use a different formula. Some use "approximation" formulas that are simplified versions of the "exact" formula. Some take diffraction into consideration; this app does not. If you think you've found an egregious error in DoF calculation, or you'd argue for the adoption of different formulae, please send an e-mail or post a comment on this page.

Q. How can the sensor size configured in Settings affect DoF calculation?

A. In theory, a smaller sensor crams more megapixels in a smaller space, so the
circle of confusion becomes smaller, and the DoF range is actually **decreased**
when a smaller sensor is used.

I know, I know, this goes against the generally-accepted mantra that a smaller sensor
increases the DoF range. That mantra holds because a smaller sensor calls for a shorter
focal length given the same field-of-view, and reducing the focal length has a
**quadratically increasing** impact on DoF. Also, read the next question.
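A quick numeric sketch of the two competing effects, using the common moderate-distance approximation DoF ≈ 2·N·c·s²/f² (the function name and the 2× crop example are my own, not the app's code):

```python
def dof_total_approx(focal_mm, f_number, subject_mm, coc_mm):
    """Moderate-distance approximation: DoF ~= 2*N*c*s^2 / f^2.
    Halving f quadruples the DoF; halving c only halves it."""
    return 2 * f_number * coc_mm * subject_mm**2 / focal_mm**2

full = dof_total_approx(50, 8, 3000, 0.030)  # full-frame
crop = dof_total_approx(25, 8, 3000, 0.015)  # 2x crop, same field of view
# Same f-number and framing: the crop sensor halves the CoC (halving DoF)
# but also halves the focal length (quadrupling DoF) -- net 2x more DoF.
print(crop / full)  # 2.0
```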

Q. What is the "circle of confusion" in Settings? Why d/number?

A. The circle of confusion (CoC) is the largest blur circle that still looks like a sharp point in a picture. Any feature smaller than the CoC will be blurred, even with perfect focus. Since even perfectly focused objects are slightly blurred due to the CoC, objects slightly out-of-focus look as sharp as perfectly focused ones. This creates the depth-of-field range. A perfect optical system would have a CoC of zero and no DoF range.

The CoC is commonly estimated by dividing the **diagonal sensor size (d)** by an
arbitrary number. The well-known "Zeiss formula" is d/1730.
Other typical values for APS-C and full-frame sensors are d/1000, d/1300
and d/1500.

So, if you choose d/1500 in Settings, the CoC will be estimated as 1/1500th of the sensor's diagonal size. Of course, you need to choose the sensor size correctly as well.
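In code, the Settings choice amounts to something like this (a sketch; the names are mine):

```python
import math

def coc_mm(sensor_w_mm, sensor_h_mm, divider):
    """CoC = sensor diagonal / divider (the d/number chosen in Settings)."""
    diagonal = math.hypot(sensor_w_mm, sensor_h_mm)
    return diagonal / divider

# Full-frame (36x24mm) with the Zeiss formula d/1730:
print(round(coc_mm(36, 24, 1730), 3))  # 0.025 (mm)
```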

Q. Which divider should I choose for the circle of confusion?

A. You should experiment to find the value that best matches the DoF you get with your equipment. Even though I adopted the classic d/1730 as default, my own equipment (APS-C DSLR) is more like d/1300, and some sources say that d/1730 is too optimistic even for full-frame.

Q. How do sensor size and circle of confusion relate?

A. Generally speaking, smaller sensors have more trouble producing sharp images. Optical limitations of the lens become relatively more important, there is more noise, and so on. All these factors make the circle of confusion bigger, therefore the divider should be smaller. For example, d/1730 might be reasonable for a full-frame DSLR, but it is too optimistic for a 1" sensor camera; d/1000 would be a better bet.

Of course, testing is needed to determine the best CoC value for your equipment. This article of mine suggests a method to determine the actual resolution of your camera.

Q. Why don't you take diffraction into account for DoF calculation? Smaller sensors are more affected by diffraction.

A. I have experimented with this. Diffraction only overtakes the CoC at very small apertures (high f-numbers), and I feel that manufacturers already avoid bad combinations of sensors and apertures. For example, new prime lenses for full-frame don't stop down beyond f/16, and phone cameras are fixed at around f/2.
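A back-of-envelope check with the standard Airy disk formula supports this (a sketch, assuming green light at 550nm; not the app's code):

```python
def airy_diameter_mm(f_number, wavelength_nm=550):
    """Airy disk diameter ~= 2.44 * wavelength * N (green light, ~550nm)."""
    return 2.44 * wavelength_nm * 1e-6 * f_number

# Full-frame CoC (Zeiss d/1730) is about 0.025mm:
# f/8  -> ~0.011mm (diffraction well below the CoC)
# f/16 -> ~0.021mm (approaching the CoC)
# f/22 -> ~0.030mm (diffraction now dominates)
```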

Q. Why do you mention effective megapixels (MP) along with the CoC dividers?

A. Because MP is a number that people can grasp. There is a close relationship between CoC size and effective megapixels.

For example, if the Zeiss formula (d/1730) is a good CoC estimate for my camera, it means that my camera can resolve 1730 distinct "pixels" diagonally. Using the Pythagorean theorem, we can convert this to vertical and horizontal pixel counts, and then find the total megapixels. There are additional considerations; in case you are curious, check the final part of this article.

Note the difference between sensor megapixels and **effective** megapixels. If the
sensor has 100MP but the circle of confusion has a diameter equivalent to 4 pixels,
the effective resolution is just 13.5MP. (The extra sensor resolution is not completely
useless; it gives us better color resolution.)
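The raw Pythagorean step described above can be sketched like this (a naive count only; the "additional considerations" mentioned earlier are not applied, so the result will not match adjusted figures, and the names and aspect ratio are my own assumptions):

```python
import math

def effective_mp(divider, aspect=(3, 2)):
    """The diagonal resolves `divider` CoC-sized spots; split that into
    width and height via the aspect ratio (Pythagoras), then multiply."""
    w, h = aspect
    k = divider / math.hypot(w, h)  # CoC-sized spots per aspect unit
    return (w * k) * (h * k) / 1e6  # megapixels

# Raw count for the Zeiss d/1730 on a 3:2 sensor
print(round(effective_mp(1730), 1))  # 1.4
```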

Q. Are the eleven zones of the histogram related to Ansel Adams' zone system?

A. Yes.

Q. How does the spot mode find out the exposure for each part of the picture?

A. It assumes that the general exposure set by the phone camera is "zone 5", which corresponds to middle gray (sRGB 110).

A pitch-black portion would be -5.5EV, and all-white would be +5.5EV. The total EV span is 11 stops, which is consistent with the 11-zone segmentation of the histogram.

Q. The phone camera generates pictures in 8-bit sRGB. How can 8 bits cover 11 stops?

A. The sRGB encoding is non-linear and the theoretical dynamic range is 13 stops. In theory, the conversion from sRGB to zone system is something like

sRGB (0..255) → linear (0..1) → ×2^{13}+1 → log_{2}

But, in our own tests, the most sensible results were obtained by assuming a dynamic range of 11 stops, and using an inverse-gamma mapping instead of log:

sRGB (0..255) → linear (0..1) → n^{1/2.2} → zone (0..1) → ×10
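Both conversions can be sketched like this (a sketch of the pipelines above; the app's exact constants and calibration may differ):

```python
import math

def srgb_to_linear(v):
    """8-bit sRGB value -> linear light in 0..1 (standard sRGB transfer)."""
    s = v / 255
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

def zone_log(v):
    """Theoretical 13-stop log mapping: log2(linear * 2^13 + 1)."""
    return math.log2(srgb_to_linear(v) * 2**13 + 1)

def zone_gamma(v):
    """The mapping actually adopted: inverse-gamma, scaled to 0..10 zones."""
    return srgb_to_linear(v) ** (1 / 2.2) * 10

print(zone_gamma(0))    # 0.0  (pitch black -> zone 0)
print(zone_gamma(255))  # 10.0 (pure white -> zone 10)
```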