The object of this assignment is to carefully measure and document a number of characteristics for different cameras and lenses. In particular, we are interested in quantifying the noise characteristics and photometric response for several different chip/digitizer combinations, and the off-axis dimming and non-perspective distortion for several different lens/camera combinations.
In order to get the work done, the class will be split into teams of 2 to 3 people, with each team taking responsibility for one problem. Each team should hand in a writeup of their methodology and results. Each team will also give a short (5-10 minutes) presentation of their results at the beginning of class on the due date. Currently only people who were in class on Thursday, Sept. 13 are assigned to a team. If you are still in the course, but were not present on the 13th, let me know ASAP, and I will assign you to a team.
Part of the exercise is figuring out what it is reasonable to measure, how to measure it, and how the measurements should be represented. There are some ideas about possible methods in the following description of each problem, and some issues came up in the class discussion, but these should not be taken as representing the only, the best, a complete, or even (necessarily) a workable solution. Ultimately, coming up with something sensible and useful is up to the creativity of your team.
Your job is to explore the dependence of uncertainty on some of these parameters. At a minimum, you should look at 3 or 4 different cameras, and the dependence on illumination and type of lighting. For some of the cameras, you may be able to explore high and low gain regimes by turning on the automatic gain control and varying the overall lighting. You might want to distinguish between intrinsic variation in a single pixel response and local (deterministic) variation in the sensitivity of neighboring pixels. The latter is an effect that could potentially be corrected for (on a chip by chip basis), but more generally will just be lumped into an overall uncertainty.
Some properties you might consider quantifying are the temporal uncertainty of a single pixel's response (as a function of illumination level and gain); local spatial variation in pixel (temporal mean) responses at a given brightness, and whether this variation is stable and correctable; and characteristics of camera dark current (cameras produce a signal even in total darkness). There are others as well.
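As an illustration of the first two of these, here is a minimal sketch (Python with NumPy; the frame stack and the neighborhood location are placeholders you would supply) that separates per-pixel temporal noise from fixed-pattern variation:

    import numpy as np

    # frames: hypothetical array of shape (n_frames, height, width), grabbed
    # from a fixed camera viewing a static, uniformly lit surface
    def noise_summary(frames):
        temporal_mean = frames.mean(axis=0)      # per-pixel mean over time
        temporal_std = frames.std(axis=0)        # per-pixel temporal uncertainty
        # fixed-pattern (spatial) variation: spread of the per-pixel means in a
        # small neighborhood; deterministic, so potentially correctable per chip
        patch = temporal_mean[100:110, 100:110]  # hypothetical 10x10 neighborhood
        fixed_pattern_std = patch.std()
        return temporal_std.mean(), fixed_pattern_std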
Standard deviation (as a function of various factors) is a frequently
used measure of uncertainty, but you might want to check whether the
variation actually matches a Gaussian model, or is better described
by some other function.
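One simple check (a sketch in Python with SciPy, assuming samples holds repeated readings of a single pixel under fixed conditions) is to report the fitted mean and standard deviation alongside a normality test:

    import numpy as np
    from scipy import stats

    # samples: hypothetical 1-D array of repeated readings from one pixel
    def check_gaussian(samples):
        mu, sigma = samples.mean(), samples.std()
        # D'Agostino-Pearson normality test: a small p-value suggests the
        # variation is not well described by a Gaussian
        _, p_value = stats.normaltest(samples)
        return mu, sigma, p_value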
Things to watch out for include aliasing artifacts, variation in the
sensitivity of individual pixels in a neighborhood, variation over large
neighborhoods due to off-axis dimming, digitization artifacts
(does the hardware really produce 256 levels?), effects of
automatic gain control (you can shut this off in some of the cameras
with a switch), effects of dark current, and camera warm-up effects.
Team One:
The challenge is to figure out how to make this measurement with equipment you can easily obtain or make. Some suggestions came up in class, including light meters, commercial reflectance calibration charts, and various methods of directly adjusting the light on a patch in known proportion. There may be other possibilities.
Complicating factors include automatic gain control mechanisms
in the camera and digitizer electronics and software,
and possible automatic contrast adjustment mechanisms.
If either or both of these are operating anywhere in the
transduction chain, it will be impossible to determine the
photometric function from separate, uniform images of different
intensity. A single image with regions of different (known) intensity
is probably a better bet, though in that case you have to watch out
for potential problems due to off-axis dimming.
If you make single-pixel measurements, then uncertainty associated
with these can be an issue.
Automatic gain control may also change the functional form of
the photometric response for different levels of overall illumination.
At any rate, it is something to be aware of.
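As a rough illustration of the single-image approach described above, the following sketch (Python/NumPy; the patch coordinates and known relative intensities are hypothetical) fits a power-law model, response = a * intensity^gamma, to the mean values of calibrated patches:

    import numpy as np

    # image: one digitized frame; patches: hypothetical list of
    # (row_slice, col_slice, known_relative_intensity) for calibrated regions
    def fit_power_law(image, patches):
        intensity = np.array([k for _, _, k in patches])
        response = np.array([image[r, c].mean() for r, c, _ in patches])
        # least-squares fit of log(response) = log(a) + gamma * log(intensity)
        gamma, log_a = np.polyfit(np.log(intensity), np.log(response), 1)
        return np.exp(log_a), gamma

Whether a power law is the right functional form is itself part of the question; the same patch means can simply be plotted against the known intensities to see what the response looks like.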
Team Two:
An obvious methodology is to analyze pictures of a uniformly illuminated white surface. Getting this uniformly illuminated surface is not as easy as you might think. There are many variations, due to shadows, differing distances from light sources and light-reflecting objects, and low-level specularity, that are essentially invisible to the eye because of its own adaptive processing, but that can dominate the measurements you take. Moving the camera so that the central neighborhood of the image samples different regions of the surface is one way of checking (or compensating) for this.
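A sketch of that check (Python/NumPy; the list of frames, each taken with the camera re-aimed at a different part of the surface, is assumed):

    import numpy as np

    # frames: hypothetical list of images, each taken with the camera re-aimed
    # so the central neighborhood sees a different part of the "uniform" surface
    def central_patch_means(frames, half=10):
        means = []
        for f in frames:
            r, c = f.shape[0] // 2, f.shape[1] // 2
            means.append(f[r - half:r + half, c - half:c + half].mean())
        # a large spread here points to non-uniform illumination of the surface,
        # not to the lens
        return np.array(means)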
Complicating factors include possible interference from automatic
gain control systems, especially if the camera is moved around, and
problems with pixel-level uncertainty if values are determined
from individual pixels.
Be careful if the aspect ratio of your pixels is not 1 to 1.
You need to determine the field of view, and at least approximate
the relationship of angle to pixel index in order to compare with the
cos^4 function (though accurate determination of this is the job of
the distortion team).
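A sketch of this angle-to-pixel conversion and the cos^4 prediction (Python/NumPy; it assumes square pixels, a known horizontal field of view, and the optical center at the image center, all of which are only approximations):

    import numpy as np

    # image: grayscale frame of the (approximately) uniform surface
    # fov_deg: hypothetical horizontal field of view in degrees
    def cos4_prediction(image, fov_deg):
        h, w = image.shape
        # focal length in pixel units, from the horizontal field of view
        f = (w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
        y, x = np.indices(image.shape)
        r = np.hypot(x - w / 2.0, y - h / 2.0)    # distance from assumed center
        theta = np.arctan(r / f)                  # off-axis angle at each pixel
        predicted = np.cos(theta) ** 4            # expected relative brightness
        measured = image / image[h // 2, w // 2]  # normalize to the central pixel
        return measured, predicted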
If the lens center (center of projection) is not in the center of the
chip, then this will affect your measurements, producing an offset of the
pattern.
A slight tilt of the lens with respect to the chip can also shift
the pattern. Untangling the two is extremely difficult, but you should
be aware that it is quite likely that the center of your pattern will not
be exactly in the center of the chip.
Team Three:
A challenge is to come up with an intuitive and useful way of representing the distortion. The idea of relating local x and y metrics to the distance from the center of projection was brought up in class. Another idea is simply to functionally relate ideal x and y coordinates to actual x and y coordinates (this function is probably separable). The first representation is related to the partial derivatives of this one. Various graphical representations could also prove useful. Again, they should show the distortion with respect to the perspective model.
An obvious methodology is to take pictures of a regular grid or spot pattern, for which you can compute the perspective projection for your camera. Doing this will probably involve some careful measurements near the center of the field of view (where the perspective model tends to be good for any camera).
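One concrete version of the second representation, sketched here (Python/NumPy; the measured grid positions and the center of distortion are assumed to be available), fits a simple radial model r_actual = r_ideal * (1 + k1 * r_ideal^2) relating perspective-predicted to measured point positions:

    import numpy as np

    # ideal_xy, actual_xy: hypothetical (N, 2) arrays of grid point positions,
    # in pixels, with the center of distortion already subtracted off
    def fit_radial_distortion(ideal_xy, actual_xy):
        r_ideal = np.hypot(ideal_xy[:, 0], ideal_xy[:, 1])
        r_actual = np.hypot(actual_xy[:, 0], actual_xy[:, 1])
        # least-squares fit of r_actual - r_ideal = k1 * r_ideal**3
        k1 = np.linalg.lstsq(
            (r_ideal ** 3)[:, None], r_actual - r_ideal, rcond=None
        )[0][0]
        return k1

Whether one radial coefficient is enough, or whether a full (possibly separable) x/y mapping is needed, is something your measurements should decide.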
Complicating factors include accurately measuring the position of
grid lines in a limited-resolution image.
You might want to contrive the situation so this can be done automatically
with simple techniques you dream up (since we haven't studied line or dot
detection etc. yet).
With care, you should be able to get sub-pixel accuracy in your
measurements.
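One simple technique for sub-pixel localization is an intensity-weighted centroid over a small window around each coarsely located dot (a sketch in Python/NumPy, assuming dark dots on a bright background):

    import numpy as np

    # image: grid/dot image; row, col: coarse integer dot location; half: window radius
    def subpixel_centroid(image, row, col, half=5):
        window = image[row - half:row + half + 1,
                       col - half:col + half + 1].astype(float)
        weights = window.max() - window           # dark dots get large weights
        y, x = np.indices(window.shape)
        cy = (weights * y).sum() / weights.sum()  # centroid within the window
        cx = (weights * x).sum() / weights.sum()
        return row - half + cy, col - half + cx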
Make sure you know and properly account for the aspect ratio.
Offset of the center of projection from the center of the image
can cause complications by shifting the center of distortion.
Other issues arise if your grid is not far enough away to be effectively
at infinity, and even in that case, certain (cheap) wide-angle lenses
may not be able to achieve simultaneous sharp focus over the field of view.