Clean image data was obtained automatically using a combination of a robot-mounted camera and a computer-controlled turntable covered in black velvet. Training data consisted of 53 images per hemisphere, spread fairly uniformly, with approximately 20 degrees between neighboring views. The test data consisted of 24 images per hemisphere, positioned between the training views and taken under the same good conditions. Note that this is essentially a test of invariance under out-of-plane rotation, the most difficult of the 6 orthographic freedoms. The planar invariances are guaranteed by the representation once above the level of feature extraction, and experiments testing this have shown no degradation under translation, rotation, or scaling up to 50%. Larger changes in scale have been accommodated using a multi-resolution feature finder, which gives us 4 or 5 octaves of scale coverage at the cost of doubling the size of the database.
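The claim that additional octaves only double the database admits a quick back-of-the-envelope check. If we assume, purely for illustration, that each coarser octave yields roughly half as many stored features as the one below it, total storage is bounded by a geometric series:
\[
\sum_{k=0}^{n} 2^{-k} \;<\; \sum_{k=0}^{\infty} 2^{-k} \;=\; 2,
\]
so any number of additional octaves costs at most a factor of two over single-scale storage. The halving ratio is an assumption; the exact figure depends on the feature finder.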
We ran tests with databases built for 6, 12, 18, and 24 objects, and obtained overall success rates (correct classification on forced choice) of 99.6%, 98.7%, 97.4%, and 97.0% respectively.
The worst cases were the horse and the wolf in the 24-object test, with 19/24 and 20/24 correct respectively.
On inspection, some of these misclassified views were difficult for human subjects as well.
None of the other examples had more than 2 misses out of the 24 (hemisphere)
or 48 (full sphere) test cases.
Results are shown below.
Overall, the performance is fairly good. In fact, as of the 1999 date of these experiments, these are the best results presented anywhere for this sort of problem. A naive estimate of the theoretical error trends in this sort of matching system would lead us to expect, at best, a linear increase in the error rate as the size of the database increases. Our results are consistent with this, though we do not have enough data points to provide convincing support for a linear trend.
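To see where the naive linear prediction comes from, suppose, as a rough model, that each incorrect model in the database independently has a small probability $p$ of outscoring the correct one on a given test view. For a database of $N$ objects, the expected forced-choice error rate is then
\[
P(\mathrm{error}) \;=\; 1 - (1-p)^{N-1} \;\approx\; (N-1)\,p
\qquad \text{when } (N-1)p \ll 1,
\]
which grows linearly in $N$. The independence assumption is of course only a first approximation.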
The resource requirements are high, but scale more or less linearly with the size of the database. The system is memory intensive, currently using about 3 Mbytes per hemisphere. This could be reduced by a number of schemes, since many of the stored patterns have similarities. The time to identify an object depends more or less linearly on the number of key features fed to the system and on the size of the database. At the moment, overall recognition times on a single-processor UltraSPARC are about 20 seconds for the 6-object database and about 2 minutes for the 24-object database. This could also be improved substantially by pushing on the indexing methods. The process is also efficiently parallelizable, simply by splitting the database among processors.
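A minimal sketch of the database-splitting scheme follows; this is illustrative only, not the actual implementation, and match_score() together with the feature representation are hypothetical stand-ins for the system's matcher.

\begin{verbatim}
from multiprocessing import Pool

def match_score(query_features, model_features):
    # Placeholder similarity: overlap of feature keys. The real system
    # scores configurations of boundary fragments, not raw key overlap.
    return len(set(query_features) & set(model_features))

def match_shard(args):
    # Each worker scores the query against its share of the model views.
    query_features, shard = args
    return {mid: match_score(query_features, feats)
            for mid, feats in shard.items()}

def recognize(query_features, database, n_workers=4):
    ids = list(database)
    shards = [{mid: database[mid] for mid in ids[i::n_workers]}
              for i in range(n_workers)]
    with Pool(n_workers) as pool:
        parts = pool.map(match_shard, [(query_features, s) for s in shards])
    scores = {mid: v for part in parts for mid, v in part.items()}
    # Forced choice: return the best-scoring model view.
    return max(scores, key=scores.get)
\end{verbatim}

Since each shard is scored independently, the expected speedup is close to linear in the number of processors.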
Out of 264 test cases, 252 were classified correctly, which gives a recognition rate of about 95%, compared to 99% for uncluttered test images.
The following table shows the results.
In a second experiment, we took pictures of the objects against a light
background. Clutter in these images arises from shadows, from wrinkles in
the fabric, and from a substantial shading discontinuity between the
turntable and the background.
Unlike in the dark-field pictures, the object in many of these images is not trivially segmentable. In addition, many of the images produce substantial numbers of clutter curves, as shown below.
Out of 264 test cases, 236 were classified correctly, which gives an overall recognition rate of about 90%, lower than the rates reported above. However, almost half the errors were due to instances of the toy bear: the gray level of the bear's body was so close to that of the upper background in low-level shots that many of the main boundaries could not be found. If this case is excluded, the rate is about 94%.
Overall results are shown in the following table.
Background clutter, and particularly texture that seriously disrupts the performance of the contour extraction system, is a more serious problem. The biggest problems arise with ``checkerboard''-like backgrounds, where frequent contrast reversals occur along object boundaries. The underlying model of our contour finder does not deal with this situation, with the result that external boundaries are badly fragmented. The solution is to use a contour extraction algorithm that is resistant to this sort of disturbance (e.g., various perceptual grouping methods).
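The failure mode is easy to see in a toy example. The sketch below (illustrative only, not our contour finder) constructs a uniform object against a checkerboard background and shows the intensity step across the object boundary reversing sign at every square:

\begin{verbatim}
import numpy as np

# A uniform object below a checkerboard background.
ii, jj = np.indices((8, 64))
bg = ((jj // 8) % 2).astype(float)   # 8-pixel checkerboard squares
obj = np.full((8, 64), 0.5)          # uniform object region
img = np.vstack([bg, obj])

# Contrast across the object's top boundary reverses sign at every
# square, so an edge model that assumes consistent polarity along a
# contour fragments the boundary.
contrast = img[8] - img[7]
print(np.sign(contrast)[::8])        # alternating +1 / -1
\end{verbatim}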
The combination of robustness to clutter and occlusion gives the system
considerable ability to identify objects in ordinary environmental settings.
The images below illustrate examples of situations in which the system is able
to correctly identify known objects in the scene. We estimate that forced-choice
accuracy in scenes of this ``complexity'' is on the order of 90%
with a 6-object database.
Multiple known objects in a scene do not pose any difficulty; the system
simply reports each of them.
For this experiment, we gathered multiple examples of objects from
five generic classes: 11 cups, 6 ``normal'' airplanes,
6 fighter jets, 9 sports cars, and 8 snakes.
The recognition system was trained on a subset of each class, and tested
on the remaining elements. The training sets consisted of 4 cups, 3 airplanes,
3 jet fighters, 4 sports cars, and 4 snakes.
The training and test views were taken according to the same protocol as
in the previous experiment.
The cups, planes, and fighter jets were sampled over the full sphere;
the cars and snakes over the top hemisphere (the bottom sides
were not realistically sculpted).
The objects used are shown below. Training objects are on the left,
test objects on the right.
Overall performance on forced-choice classification for 792 test images was 737 correct, or 93.0%. If we instead average the per-class performance, so that the best group (the cups) is not weighted more heavily simply because it had more samples, we get 92% (91.96%). The error matrix is shown in Table \ref{fig:results6}. Performance was best for the cups, at about 98%, and the planes, sports cars, and snakes came in around 92%-94%. The fighter jets were the worst by a significant factor, at about 83%. The reason seems to be that there is quite a bit of difference between the exemplars in some views in terms of the armament carried, which tends to break up some of the lines in a way the current boundary finder does not handle. Two of the test cases also have camouflage patterns painted on them. We expect that a few more training cases would help.
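The two figures are just the micro- and macro-averaged accuracies. A minimal sketch of the computation from an error matrix follows; the counts are made up for illustration, not taken from Table \ref{fig:results6}.

\begin{verbatim}
import numpy as np

# Hypothetical confusion matrix: rows are true classes, columns are
# predicted classes, in the order cup, plane, fighter, car, snake.
conf = np.array([
    [330,   2,   2,   1,   1],   # cups contribute the most test images
    [  2, 134,   6,   1,   1],
    [  1,  12, 120,   6,   5],
    [  0,   1,   3, 112,   4],
    [  1,   1,   2,   2,  90],
])

micro = conf.trace() / conf.sum()               # image-weighted accuracy
per_class = conf.diagonal() / conf.sum(axis=1)  # per-class accuracy
macro = per_class.mean()                        # class-weighted accuracy
\end{verbatim}

Micro-averaging weights each test image equally, so well-sampled classes dominate; macro-averaging weights each class equally, which is why the 92% figure comes out slightly below the 93.0% one here.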
The snakes were a surprise, given the degree of flexibility, and the fact that none of the curves are actually rigidly similar. On close examination, the success seems to be effectively an accidental case of ``default'' reasoning. The snake model has high variability, and a random complex object that does not resemble anything in the database is more likely to get a strong match to a snake exemplar than to anything else. Thus snakes get classed as snakes in a forced-choice experiment, despite the fact that ROC curves for the snake class display poor absolute discrimination.
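The effect is easy to reproduce in miniature. In the synthetic sketch below (illustrative scores only, not our data), a high-variance class captures the forced-choice argmax for inputs that match nothing well, even though its scores, thresholded in isolation, discriminate poorly:

\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

def best_score(spread, n_exemplars=4, trials=10000):
    # Best match over a class's exemplars for unrelated query objects;
    # all classes share the same mean, differing only in variability.
    return rng.normal(0.5, spread, (trials, n_exemplars)).max(axis=1)

tight = best_score(0.05)   # a low-variability class
loose = best_score(0.25)   # a high-variability, snake-like class
print((loose > tight).mean())  # the loose class wins most forced choices
\end{verbatim}

Because the means are identical, neither score separates true class members from random objects, but the wider spread makes the high-variability class the likely argmax.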
These results do not say anything conclusive about the nature of ``generic'' recognition, but they do suggest a route by which generic capability could arise in an appearance-based system that was initially targeted at recognizing specific objects, but needed enough flexibility to deal with inter-pose variability and environmental lighting effects. They also suggest that one way of viewing generic classes is as clusters in a (relatively) spatially uniform metric space defined by a general, context-free classification process. This is in contrast to distinctions such as those needed to tell a cow from a bull, an F16 from an F18, or to distinguish faces: distinctions that, though they may become fast and automatic in people, involve focusing attention on specific small areas and assigning disproportionate weight to differences in those regions.