Video cameras come in two types: analog and digital. This refers to the signal they produce, not to the image sensor inside, which is almost always a digital CCD chip. Currently, most of our cameras are analog, with the exception of a couple of USB cameras. This could change in the near future, as the quality and speed of the cheap USB protocols are rapidly approaching those of the analog formats described below, and the more expensive USB and Firewire cameras supposedly have superior quality. (Getting the software to exploit this higher quality, especially under linux, seems to be an entirely separate issue.)
Black and white (monochrome) cameras are the simplest. We have a couple in our lab: standalone units that plug into the wall and have a single output cable which carries an RS-170 video signal. Usually we run these signals around on the thick coaxial cable with the BNC (twist) connectors, though some devices use thinner coax with the RCA (plug-in) connectors that are familiar from your VCR. This signal contains both image and timing information. The image is sent one line at a time, with intensity encoded as an analog voltage variation along the line. The timing information consists of horizontal synch pulses at the end of every line and vertical synch pulses at the end of each field. There are also so-called horizontal and vertical blanking periods at the end of each line and field respectively, during which no image information is sent.
The RS-170 standard specifies an image with 525 lines per frame, of which about 485 are displayable. The image information is actually sent in what is known as "interlaced" mode: The odd lines (1, 3, 5, ..., 485) are sent first, followed by the even lines (2, 4, 6, ..., 484). Each set of lines constitutes a "field". The non-displayable lines in each field constitute the vertical blanking period. Fields are sent at a rate of 60 per second, which means that the entire image frame is refreshed 30 times a second. The reason for the interlaced format was to reduce perceptual flicker in the image displayed on a TV.
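To make the interlaced format concrete, here is a rough C sketch (with hypothetical buffer names; this is not code from any of our digitizer software) of weaving two already-digitized fields back into a full frame, using the typical 640 x 480 digitization described below:

    /* Weave two interlaced RS-170 fields back into a single frame.
     * Purely illustrative: assumes 8-bit grayscale pixels, 640 pixels
     * per line, and that the digitizer hands us the odd field (lines
     * 1, 3, 5, ...) and even field (lines 2, 4, 6, ...) as separate
     * buffers. */
    #include <string.h>

    #define LINE_WIDTH  640   /* pixels per digitized line            */
    #define FRAME_LINES 480   /* displayable lines we actually keep   */

    void weave_fields(const unsigned char *odd_field,   /* FRAME_LINES/2 lines */
                      const unsigned char *even_field,  /* FRAME_LINES/2 lines */
                      unsigned char *frame)             /* FRAME_LINES lines   */
    {
        int i;
        for (i = 0; i < FRAME_LINES / 2; i++) {
            /* Odd field supplies frame lines 0, 2, 4, ...
             * (1, 3, 5, ... if you count from 1 as above). */
            memcpy(frame + (2 * i) * LINE_WIDTH,
                   odd_field + i * LINE_WIDTH, LINE_WIDTH);
            /* Even field supplies the lines in between. */
            memcpy(frame + (2 * i + 1) * LINE_WIDTH,
                   even_field + i * LINE_WIDTH, LINE_WIDTH);
        }
    }

Since the two fields are exposed 1/60 second apart, anything moving in the scene shows up as a comb-like artifact along its edges in the woven frame.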
Horizontal resolution depends on the camera. Since it is an analog signal, the exact number is not critical; it just limits the detail that can be resolved. Typical resolution specs are on the order of 400-700 elements per line. Digitizers typically produce between 512 and 640 pixels per line. The aspect (width to height) ratio of the RS-170 signal rectangle is nominally 4 to 3. That means if you want square pixels, you have to digitize roughly 646 pixels for each of the 485 lines. A fairly standard policy is to digitize 480 lines at 640 pixels per line.
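If you want to see where that number comes from, the arithmetic is just the line count times the aspect ratio; a throwaway C snippet (purely illustrative, nothing hardware-specific) looks like this:

    /* Trivial check of the square-pixel arithmetic quoted above.
     * The 485-line and 4:3 figures come straight from the text. */
    #include <stdio.h>

    int main(void)
    {
        double displayable_lines = 485.0;      /* displayable RS-170 lines   */
        double aspect_ratio      = 4.0 / 3.0;  /* nominal width:height ratio */

        /* Pixels per line needed so that each pixel is square. */
        double square_pixel_width = displayable_lines * aspect_ratio;

        printf("square pixels need %.1f pixels per line\n", square_pixel_width);
        /* Prints roughly 646.7, hence the figure of about 646 above. */
        return 0;
    }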
The Brooktree (BT848) digitizer cards in the computational sensors will produce a gray 3-band image with all bands more or less the same if fed a mono RS-170 signal via one of the composite ports. The old KTV digitizer chips will produce a monochrome image if a mono signal is fed into the green channel, and the digitizer is initialized in mono mode.
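Under linux the BT848 cards are driven by the bttv driver, which exposes them through the Video4Linux interface. The following is only a rough sketch of grabbing one grayscale frame through the Video4Linux2 API; the device path, input index, and frame size are assumptions about a typical setup, not a description of our standard software:

    /* Rough sketch: grab one grayscale frame from a BT848 card via the
     * Video4Linux2 interface.  Device path, input index, and frame size
     * are assumptions about a typical bttv setup. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        int fd = open("/dev/video0", O_RDWR);   /* bttv device node (assumed) */
        if (fd < 0) { perror("open"); return 1; }

        int input = 0;                          /* first composite port (assumed) */
        if (ioctl(fd, VIDIOC_S_INPUT, &input) < 0)
            perror("VIDIOC_S_INPUT");

        struct v4l2_format fmt;
        memset(&fmt, 0, sizeof(fmt));
        fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width       = 640;          /* the usual digitization size */
        fmt.fmt.pix.height      = 480;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_GREY;  /* single-band 8-bit gray */
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("VIDIOC_S_FMT"); return 1; }

        unsigned char *buf = malloc(fmt.fmt.pix.sizeimage);
        if (buf && read(fd, buf, fmt.fmt.pix.sizeimage) < 0)
            perror("read");                     /* bttv supports read() capture */

        /* ... do something with the frame here ... */

        free(buf);
        close(fd);
        return 0;
    }

Asking for a color pixel format such as V4L2_PIX_FMT_BGR24 instead should give the 3-band, nearly-identical-bands result described above when the input is a mono RS-170 signal.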
Color video signals are a little more complicated. There are three versions related to the RS-170 monochrome standard, running over 1, 2, and 4 wires respectively. Cameras and digitizers may use one or more of these. For example, the Sony pan-tilt-zoom cameras have a composite (1-wire) output (with an RCA connection) as well as the better S-video (2-wire) output.
The one-wire format is known as "composite video" or the NTSC standard. We run composite around on the same coax cables we use for mono signals. It carries intensity, color, and timing information on the same wire. The intensity and timing information is basically consistent with the RS-170 monochrome signal. The color information is modulated onto a subcarrier and mixed into the analog intensity signal. Resolution of the color signal is considerably less than that of the intensity signal. The mixture encoding was designed so that, in general, a monochrome RS-170 device fed a composite signal will produce an acceptable monochrome output. There are exceptions. Our old KTV color digitizers work fine as monochrome digitizers if an RS-170 mono signal is fed into the green input. If you feed in a composite signal, the color subcarrier produces some high-frequency noise in the digitized signal (it looks like a very fine checkerboard overlaid on the picture). They cannot produce a color digital image from composite input. The BT848 digitizers have three selectable composite inputs, and digitize color images from them just fine.
The two-wire version is known as "S-video". In this format, one pair of wires, the Y channel, carries combined intensity and timing signals consistent with RS-170 mono. A second pair, the C channel, carries a separate color signal. The color resolution in this case is potentially higher than for the composite format (and sometimes it actually is). S-video is usually carried on a single, often lightweight, bundled cable with 4-pin connectors on either end. These cables are different from the thick coax cables we use for monochrome and composite video. Conversion plugs to BNC (twist) or RCA (plug-in) connectors are available, though a bit hard to find.
The four-wire format is known as RGB or RGBS, for Red, Green, Blue, Synch. In this case the color signal is broken into three separate and equal channels, each carrying high-resolution information. Timing information is provided on a separate wire - the synch channel. Timing information is sometimes also present on the green channel, but more often not. RGB is the highest resolution of the three formats in terms of color information. We typically run it around on four parallel thick coax cables - the ones with the BNC (twist) connectors. This is quite cumbersome - the wire bundle can weigh more than the camera itself.
We don't have a standard setup for using DV in the lab at present. The program "coriander" will let you see the output of our USB cams, and grab some frames, but I have found it to be a bit buggy, and have even managed to crash the OS when trying to use it and the Brooktree digitizers at the same time. When someone markets a USB/Firewire pan-tilt-zoom camera, we will probably bite the bullet and come up with a working standard system for getting DV images into programs, but as of 08/2005, we are still waiting.