CSC 249/449 Computer Vision: Face Verification Assignment
Weighting factor 4.
Due: Thursday, April 22, 2004.
The goal of this project is to use a pan-tilt zoom controlled camera
to visually verify the identity of people who sit down in front of a terminal.
The system will monitor activity in front of a terminal, and decide
on the basis of simple motion/change detection, when a new person
sits down in front of a terminal.
When a new person is detected, the system attempts to acquire
a frontal facial shot that can be used for verification.
Once a person is verified, the system attempts to keep track of
that person, so as to avoid reverification as much as possible.
The project is pretty big, so I have divided the class into teams of
three to provide more programming cycles.
On the due date, we will have presentation demos of all the team projects.
The teams are as follows:
Team 1: Dominic Marino, Evan Merz, Istvan Csapo.
Team 2: Dasum Peramunage, Peter Barnum, Scott Cragg.
Team 3: Phil Michalek, Bijun He, Chuanpeng Li.
In more detail, the project involves implementing and integrating
the following 4 processes.
Note that some of these must execute concurrently, in order to
achieve the desired tracking performance.
In some cases the processing could be shipped to another machine.
For example, the verification/identification step is somewhat compute
intensive, but does not have low-latency requirements.
The get-face process, on the other hand, needs more frequent access to
frame buffer information, and the watch and track processes need
the fastest access of all.
- Watch process:
System watches space in front of terminal for someone to sit down.
When the trigger condition is met, the process initiates the track process, and
retires (in this application, we don't try to keep track of multiple
individuals).
- Track process:
System tries to keep track of initial subject, even if they
turn or move away, or hold still for a time.
Some attempt should be made to provide robustness to other people
walking through in the background, or entering the foreground for a time.
If you are feeling ambitious, you might try moving the camera or changing
zoom to maintain the view (though motion processing and change detection are
considerably more involved if the camera is moving).
The process should be able
to provide a bounding box, and possibly other useful information
such as component pixels, likely head size and location, etc. on request,
as well as current ID status (e.g. face not found, unknown face,
verified as Harry, etc.)
(Tracker might display information on live screen image)
Tracker tries to establish ID of tracked individual by
calling the get-face process, and then the verify face process on the
returned image. If verify returns not-a-face, the attempt repeats.
These processes must run concurrently with the tracking process
so as not to lose track.
If track is lost, watch mode is restarted.
Make sure that possible returns from get-face and verify
processes after reversion to watch mode are handled robustly.
- Get face process:
System tries to acquire a good frontal face image of the person who is
being tracked. May involve repeated tries, and/or attempted interaction
with user.
Depending on how fast your identification process is, the process
might make some attempt to verify that
it has a likely face, before returning, so that an expensive
identification/verification process is not called on obvious garbage.
This might involve use of color information, or template matching,
possibly along the lines you used in the earlier project for detection
of eyes.
Must be able to run concurrently with track mode, and make use of
location information provided by that process.
- Verify/Identify face process:
Takes a candidate face image produced by the get-face process,
extracts the face from the background (e.g. by finding the eyes),
normalizes the face (location, size, orientation, intensity; a sketch of
this normalization step appears just after this list),
and attempts to verify/identify it.
Possible responses are: not a face, unknown face, verified/identified
as John/Jill/Jack.
The system should be able to recognize at least everyone on your team.
Must be able to run concurrently with track mode.
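As a concrete illustration of the normalization step referred to above, here is a
rough C++ sketch of one way to do it, assuming you already have the two eye centers
in the candidate image: warp the face so the eyes land at canonical positions in a
fixed-size window, then shift and scale the gray values to zero mean and unit
variance. The struct and function names are only illustrative, not part of any
provided library, and the canonical eye positions are guesses you would tune.

    // Illustrative face normalization: map the detected eye centers to canonical
    // positions in a 64x64 window (fixing location, size, and in-plane rotation),
    // then normalize intensity to zero mean and unit variance.
    #include <cmath>
    #include <vector>

    struct Gray { int w, h; std::vector<float> px; };   // simple grayscale image

    // Bilinear sampling with border clamping.
    static float sample(const Gray& im, float x, float y) {
        if (x < 0) x = 0;  if (y < 0) y = 0;
        if (x > im.w - 2) x = (float)(im.w - 2);
        if (y > im.h - 2) y = (float)(im.h - 2);
        int x0 = (int)x, y0 = (int)y;
        float fx = x - x0, fy = y - y0;
        float a = im.px[y0 * im.w + x0],       b = im.px[y0 * im.w + x0 + 1];
        float c = im.px[(y0 + 1) * im.w + x0], d = im.px[(y0 + 1) * im.w + x0 + 1];
        return (1 - fy) * ((1 - fx) * a + fx * b) + fy * ((1 - fx) * c + fx * d);
    }

    // (lx,ly) and (rx,ry) are the left and right eye centers in the input image.
    Gray normalizeFace(const Gray& in, float lx, float ly, float rx, float ry) {
        const int N = 64;                                   // output window size
        const float cLx = 0.3f * N, cRx = 0.7f * N, cEyeY = 0.35f * N;
        float dx = rx - lx, dy = ry - ly;
        float eyeDist = std::sqrt(dx * dx + dy * dy);
        float scale = eyeDist / (cRx - cLx);
        float ca = dx / eyeDist, sa = dy / eyeDist;         // cos/sin of eye-line angle
        Gray out; out.w = out.h = N; out.px.resize(N * N);
        double sum = 0, sum2 = 0;
        for (int v = 0; v < N; ++v)
            for (int u = 0; u < N; ++u) {
                // rotate/scale canonical coordinates back into the input image
                float cx = (u - cLx) * scale, cy = (v - cEyeY) * scale;
                float g = sample(in, lx + ca * cx - sa * cy, ly + sa * cx + ca * cy);
                out.px[v * N + u] = g; sum += g; sum2 += g * g;
            }
        double mean = sum / (N * N), var = sum2 / (N * N) - mean * mean;
        double sd = var > 0 ? std::sqrt(var) : 1.0;
        for (int i = 0; i < N * N; ++i)
            out.px[i] = (float)((out.px[i] - mean) / sd);   // zero mean, unit variance
        return out;
    }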
Comments on motion detection
The tracking system should run in real time, which means in practice, at least
4-5 frames a second in order to make tracking the user feasible.
In order to get image data from the frame buffer fast enough,
you will want to run the system at reduced resolution
(e.g. one half or one quarter).
Once inside the machine, you may find it makes sense to run different
operations at different resolutions both for efficiency, and algorithmic
reasons.
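For instance, a quick way to get half resolution (a quarter of the pixels) is a 2x2
box average. A minimal sketch, with an assumed image layout:

    // Illustrative 2x2 box-average downsampling: halves each image dimension.
    #include <vector>

    struct Gray8 { int w, h; std::vector<unsigned char> px; };   // assumed layout

    Gray8 halveResolution(const Gray8& in) {
        Gray8 out;
        out.w = in.w / 2; out.h = in.h / 2;
        out.px.resize(out.w * out.h);
        for (int y = 0; y < out.h; ++y)
            for (int x = 0; x < out.w; ++x) {
                int s = in.px[(2 * y)     * in.w + 2 * x] + in.px[(2 * y)     * in.w + 2 * x + 1]
                      + in.px[(2 * y + 1) * in.w + 2 * x] + in.px[(2 * y + 1) * in.w + 2 * x + 1];
                out.px[y * out.w + x] = (unsigned char)(s / 4);   // average of the 2x2 block
            }
        return out;
    }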
An important step is to detect moving pixels in the image.
In this world, we will assume that anything moving is a person.
The easiest way to do this detection is by a technique known
as background subtraction.
Basically, you have a reference image that represents the background,
and any pixel in the current image that differs significantly from
the background is marked. (Assumption: it must have moved in there from
somewhere). This is really change detection rather than motion detection,
but in indoor environments (and even outdoors with a little massaging)
the two phenomena are highly correlated.
Issues include: what constitutes a "significant" difference?
You can use a fixed threshold, but it is also possible to use a spatially
varying threshold (or more complex model) that will permit the system
to ignore fluttering leaves or flickering video displays locally.
Such models are usually learned on-line by letting the system
observe the background for a bit.
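A minimal sketch of this idea in C++, assuming a per-pixel mean and variance
accumulated while the system watches the empty background; the names and the
k = 3 sigma rule are illustrative, not a required design:

    // Per-pixel background model: mean and a variance accumulator.
    #include <cmath>
    #include <vector>

    struct Background { std::vector<float> mean, m2; };   // m2 = sum of squared deviations

    // Call once per background frame with n = 1, 2, 3, ...; mean and m2 start at zero.
    void learnBackground(Background& bg, const std::vector<unsigned char>& frame, int n) {
        for (size_t i = 0; i < frame.size(); ++i) {
            float d = frame[i] - bg.mean[i];
            bg.mean[i] += d / n;                           // running mean
            bg.m2[i]   += (frame[i] - bg.mean[i]) * d;     // running variance accumulator
        }
    }

    // Mark pixels that differ from the reference by more than k standard deviations.
    void detectChange(const Background& bg, const std::vector<unsigned char>& frame,
                      std::vector<unsigned char>& mask, int nFrames, float k = 3.0f) {
        mask.resize(frame.size());
        for (size_t i = 0; i < frame.size(); ++i) {
            float sigma = std::sqrt(bg.m2[i] / nFrames) + 2.0f;   // floor for very quiet pixels
            mask[i] = (std::fabs(frame[i] - bg.mean[i]) > k * sigma) ? 255 : 0;
        }
    }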
Another issue is updating the reference image.
Outdoors, and even indoors, there are occasional global changes
to the reference image, due to change in lighting, or motion of the
camera.
Most systems re-acquire the background, either continuously using
exponential time averaging, or periodically. If you do this, it
makes sense not to update the reference image in any location
where a moving object is currently being detected.
You can also check for global changes (lots of separated locations changing
simultaneously) indicating either camera movement or rapid lighting change,
and shut down change detection during these periods, and then
quickly re-acquire a reference image when global motion stops.
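A sketch of what that bookkeeping might look like; the update rate and the
global-change fraction are illustrative constants you would tune:

    // Blend the current frame into the reference everywhere a moving object is
    // NOT currently detected; small alpha means slow adaptation.
    #include <vector>

    void updateReference(std::vector<float>& reference,
                         const std::vector<unsigned char>& frame,
                         const std::vector<unsigned char>& moving,
                         float alpha = 0.02f) {
        for (size_t i = 0; i < frame.size(); ++i)
            if (!moving[i])
                reference[i] = (1.0f - alpha) * reference[i] + alpha * frame[i];
    }

    // Crude global-change test: if too large a fraction of the image changes at
    // once, suspect camera motion or a lighting shift, suspend change detection,
    // and re-acquire the reference once things settle down.
    bool globalChange(const std::vector<unsigned char>& moving, float fraction = 0.5f) {
        size_t n = 0;
        for (size_t i = 0; i < moving.size(); ++i)
            if (moving[i]) ++n;
        return n > fraction * moving.size();
    }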
An approach that gracefully handles a lot of practical issues that arise
is to maintain multiple models per pixel, where a model consists
of a mean gray-value or color, and a variance (or covariance matrix).
This lets you retain the
old background when something new enters the field of view, and recover
quickly when the object exits (i.e. without having to re-adapt).
3 models per pixel is a good minimum.
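One minimal way to realize the multiple-models idea is sketched below; the constants
are guesses to tune, and the matched-within-k-sigma rule and the "young model counts
as foreground" rule are just one reasonable choice:

    // Each pixel keeps K_MODELS gray-value models; a value matches a model if it
    // lies within k sigma of its mean, otherwise the weakest model is replaced.
    // A pixel counts as changed (foreground) while it only matches a young model.
    #include <cmath>

    struct PixelModel { float mean, var; int hits; };
    const int K_MODELS = 3;

    bool classifyAndUpdate(PixelModel m[K_MODELS], float value,
                           float k = 3.0f, float alpha = 0.05f, int minHits = 30) {
        int best = -1;
        for (int i = 0; i < K_MODELS; ++i) {
            float sigma = std::sqrt(m[i].var) + 1.0f;
            if (std::fabs(value - m[i].mean) < k * sigma) { best = i; break; }
        }
        if (best < 0) {
            // No model matched: replace the least-supported model with a fresh one.
            best = 0;
            for (int i = 1; i < K_MODELS; ++i)
                if (m[i].hits < m[best].hits) best = i;
            m[best].mean = value; m[best].var = 100.0f; m[best].hits = 1;
            return true;
        }
        // Matched: pull that model toward the observed value.
        float d = value - m[best].mean;
        m[best].mean += alpha * d;
        m[best].var   = (1.0f - alpha) * m[best].var + alpha * d * d;
        m[best].hits += 1;
        return m[best].hits < minHits;
    }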
Comments on person detection and tracking
Decisions about when a person sits down at the terminal
will probably involve heuristic rules based on location, number, and
persistence of moving pixels.
The same goes for keeping track of that person.
You need some memory of last seen movement, so that sitting still in
front of the terminal without exiting will not trigger a subject-lost
condition. You probably also need to be robust to additional people
moving in the background, or even someone moving in to confer, and then out
again. Of course there will be situations you can't handle, but try to do as
well as you can.
Some notion of a main subject region that can't change too
fast in extent or location, can't teleport, can't vanish, can't
be too strangely shaped, etc. may be helpful here.
Basically, you need to know where the subject's head is, and when the
subject leaves the field of view, and after that, when a new main subject
enters.
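Here is a sketch of that kind of bookkeeping for the main subject region, under the
assumption that the tracker hands you the bounding box of the largest moving blob
each frame; the jump/growth limits and the lost timeout are illustrative numbers:

    // Main-subject bookkeeping: the box may only move or change size a bounded
    // amount per frame, and the subject is not declared lost until it has gone
    // unseen for a timeout (so sitting still does not end the track).
    #include <cstdlib>

    struct Box { int x, y, w, h; };

    struct Subject { Box box; int framesUnseen; bool present; };

    // Call once per frame with the largest moving blob (found = false if none).
    void updateSubject(Subject& s, Box blob, bool found,
                       int maxJump = 60, int maxGrow = 40, int lostTimeout = 150) {
        if (!s.present) {
            if (found) { s.box = blob; s.framesUnseen = 0; s.present = true; }
            return;
        }
        bool plausible = found
            && std::abs(blob.x - s.box.x) < maxJump       // no teleporting
            && std::abs(blob.y - s.box.y) < maxJump
            && std::abs(blob.w - s.box.w) < maxGrow       // no sudden size change
            && std::abs(blob.h - s.box.h) < maxGrow;
        if (plausible) {
            s.box = blob;              // accept the new observation
            s.framesUnseen = 0;
        } else if (++s.framesUnseen > lostTimeout) {
            s.present = false;         // subject lost: go back to watch mode
        }
        // Otherwise keep the old box: the subject is probably just sitting still,
        // or briefly occluded by someone passing behind.
    }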
Color might be useful in various ways here - e.g. using flesh tones
to aid detection, or using color histograms as models of a tracked
object (capturing clothing color, etc.)
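For example, a simple flesh-tone test can be done in normalized chromaticity space;
the numeric bounds below are rough guesses that you should re-estimate from labeled
pixels under the lab's lighting:

    // Illustrative flesh-tone test in normalized r-g chromaticity space.
    bool isSkinTone(unsigned char R, unsigned char G, unsigned char B) {
        float sum = (float)R + G + B + 1.0f;        // +1 avoids division by zero
        float r = R / sum, g = G / sum;             // normalized chromaticity
        return r > 0.36f && r < 0.55f &&            // reddish...
               g > 0.25f && g < 0.37f &&            // ...but not too green
               R > 60 && R > B;                     // and reasonably bright
    }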
Comments on getting faces
A simple way of trying for a face shot is to periodically grab a window
from the top of the current object (presumably containing the head),
and evaluate it using some simple "faceness" measure.
The problem with using face space directly is that you have to separate a
face candidate from the background before using this method.
So you could just pass all your pictures directly to the face
verifier for analysis - presumably it will tell you if you
don't have a face - but it may be a slow filter.
It is likely that you will want to run some faster filter to exclude
really bad takes before wasting verifier time on them.
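A possible cheap pre-filter along these lines combines a skin-tone fraction (using
something like the isSkinTone test sketched earlier) with an aspect-ratio check; the
thresholds are guesses:

    // Cheap "faceness" pre-filter: reject candidate windows whose aspect ratio
    // or flesh-tone fraction is implausible before calling the slow verifier.
    #include <vector>

    struct RGBWindow { int w, h; std::vector<unsigned char> px; };   // interleaved RGB

    bool isSkinTone(unsigned char R, unsigned char G, unsigned char B);  // sketched earlier

    bool looksLikeFace(const RGBWindow& win) {
        if (win.w <= 0 || win.h <= 0) return false;
        float aspect = (float)win.h / win.w;
        if (aspect < 0.8f || aspect > 2.0f) return false;   // heads are roughly oval
        int skin = 0, total = win.w * win.h;
        for (int i = 0; i < total; ++i)
            if (isSkinTone(win.px[3 * i], win.px[3 * i + 1], win.px[3 * i + 2]))
                ++skin;
        return skin > 0.4f * total;   // enough of the window is flesh-toned
    }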
If system tries for a while and does not succeed in getting a good
face shot, you might consider having it ask for the user's cooperation
("please look at the camera"). Of course the subject might not cooperate.
Comments on verification/identification
For verification, I suggest using some form of the eigenfaces method.
Remember, the key to getting this method to work is careful
normalization.
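To make the overall shape concrete, here is a rough sketch of the verification step
on a normalized face, assuming an eigenface basis and per-person mean coefficient
vectors computed offline from your training images; the two thresholds would be set
from the ROC analysis discussed below:

    // Eigenfaces verification sketch: project onto the basis, use reconstruction
    // error for "not a face", nearest enrolled person in coefficient space for ID.
    #include <cmath>
    #include <string>
    #include <vector>

    struct EigenBasis {
        std::vector<float> meanFace;                    // average training face (D pixels)
        std::vector< std::vector<float> > eigenfaces;   // K basis vectors, D pixels each
    };
    struct Person { std::string name; std::vector<float> meanCoeffs; };

    std::string classifyFace(const std::vector<float>& face, const EigenBasis& basis,
                             const std::vector<Person>& people,
                             float faceSpaceThresh, float identityThresh) {
        size_t D = face.size(), K = basis.eigenfaces.size();
        // Project (face - mean) onto each eigenface.
        std::vector<float> coeffs(K, 0.0f);
        for (size_t k = 0; k < K; ++k)
            for (size_t i = 0; i < D; ++i)
                coeffs[k] += (face[i] - basis.meanFace[i]) * basis.eigenfaces[k][i];
        // Distance from face space: how much of the image the basis fails to explain.
        float residual = 0.0f;
        for (size_t i = 0; i < D; ++i) {
            float recon = basis.meanFace[i];
            for (size_t k = 0; k < K; ++k)
                recon += coeffs[k] * basis.eigenfaces[k][i];
            residual += (face[i] - recon) * (face[i] - recon);
        }
        if (std::sqrt(residual) > faceSpaceThresh) return "not a face";
        // Nearest enrolled person in coefficient space, if close enough.
        std::string bestName = "unknown face";
        float bestDist = identityThresh;
        for (size_t p = 0; p < people.size(); ++p) {
            float d = 0.0f;
            for (size_t k = 0; k < K; ++k) {
                float diff = coeffs[k] - people[p].meanCoeffs[k];
                d += diff * diff;
            }
            d = std::sqrt(d);
            if (d < bestDist) { bestDist = d; bestName = people[p].name; }
        }
        return bestName;
    }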
You should characterize the performance of this portion of the
system in terms of (one or more) ROC curves, which plot false acceptance rate
against false rejection rate for different values of some parameter
(usually an accept/reject threshold).
The point is, it's easy to get 0% false reject (just accept everyone),
and easy to get 0% false acceptance (just reject everyone),
but the interesting issue is what the tradeoffs look like in between
(e.g., what percentage of the ringers do you have to accept in order
that true-bluers are rejected only 10% of the time).
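One straightforward way to generate such a curve is to record the match distances
from a set of genuine trials (the claimed identity is correct) and impostor trials
(it is not), then sweep the accept threshold. A sketch, assuming both lists are
non-empty:

    // ROC sketch: for each candidate threshold, accept when distance <= threshold,
    // and report the false acceptance and false rejection rates.
    #include <algorithm>
    #include <cstdio>
    #include <vector>

    void printROC(std::vector<float> genuine, std::vector<float> impostor) {
        std::sort(genuine.begin(), genuine.end());
        std::sort(impostor.begin(), impostor.end());
        std::vector<float> thresholds(genuine);                      // every observed distance
        thresholds.insert(thresholds.end(), impostor.begin(), impostor.end());
        std::sort(thresholds.begin(), thresholds.end());
        std::printf("threshold  false_accept  false_reject\n");
        for (size_t i = 0; i < thresholds.size(); ++i) {
            float t = thresholds[i];
            size_t fa = std::upper_bound(impostor.begin(), impostor.end(), t)
                        - impostor.begin();                          // impostors accepted
            size_t accepted = std::upper_bound(genuine.begin(), genuine.end(), t)
                              - genuine.begin();
            size_t fr = genuine.size() - accepted;                   // genuines rejected
            std::printf("%9.3f  %12.3f  %12.3f\n", t,
                        (double)fa / impostor.size(), (double)fr / genuine.size());
        }
    }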
You might also look at how often something that is not a face at
all is identified as a face, or as a particular individual.
There are other terms in the literature such as precision, rejection, etc.
You are welcome to any measures you find useful as long as you define them.
Good performance would be 95% acceptance with less than a 5% false positive
rate for all of the subjects.
Note that the face database you used previously is a source of data for
evaluating false positive rates, and possibly for testing
generic face detection.
Comments on pan-tilt cameras
The pan-tilt cameras are controlled by a serial line interface to a computer,
or by a hand-held remote.
The video output is separate and needs to go to a digitizer, which may or may
not be on the same platform that controls the serial line.
This project does not necessarily require computer control of pan-tilt-zoom
(though it might be an interesting addition to the tracking if you end up
with extra time...)
There are a lot of pan-tilt cameras hooked up to the linux boxes in the
vision lab, with bt848 digitizers.
There are also two pan-tilt cameras in the software lab with the
serial interface hooked up to milli and kilo.
If you use these, you give up color, since the KTV digitizers are monochromatic
with these cameras. Also the machines are old (ultra-slow) Ultra 1s.
If you want to control the cameras from the computer, the interface
libraries are in ~nelson/programs/lib/Solaris/libcamera.a,
and the source code and demo programs are in
~nelson/programs/src/robot/pt_camera/[lib, bin].
It is in C++, so you'll have to use that language (with the g++ compiler)
if you go down this road. The CC compiler has/had some odd incompatibility
with the pthreads package that causes problems with the camera control.
Reference Material
Some classic (and newer) vision papers relevant to the project are
listed below. This list may see additions, so check back occasionally.