Signal Processing -- The Frequency Domain
Topics and Methods
Read, or have read by now, the "Informal Introduction".
- Classic Comic Book Intro to Frequency Domain
- "Passive" Math (no math. manipulations).
- Very Little Math Justification...have faith.
- Dirac Delta, Sines, Inner Product
- Basis sets of functions.
- Notion of Linear transforms (systems).
- (Invertible) Transform Domains, especially the frequency domain.
- Fourier transformation; properties and use.
- Practical transform-domain signal-processing techniques.
- Web resources
A Primary Goal
- See how to transform a signal into a different representation
(specifically here, a representation of the frequencies in the signal).
- See why such a transform is a useful practical and abstract thing to do.
Dirac Delta Function
- Also called the unit impulse function.
- Not technically a "function"
- Has zero width, unit integral (as a vector [...000000100000...]).
- Integral of δ(x-c)f(x) "samples" f(x)...
returns f(c).
- Dirac comb: [...1000100010001...]
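A minimal numerical sketch (values assumed): the discrete stand-in for the delta
is a unit impulse vector, and its dot product with a signal "samples" the signal
at the impulse position.
f = [5 8 2 9 7 3];
d = [0 0 0 1 0 0];   % a shifted discrete delta: impulse at position 4
disp(dot(d, f));     % returns f(4) = 9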
Sinusoids
Some math leads to: cos(ux + vy) and
sin(ux + vy) are 2-D sinusoids as in the figure. Their ridges
and troughs fall along the parallel lines ux + vy = kπ for
integer k, and their wavelength is 2π / √(u² + v²).
So we can write a 2-D wave as e^{i(ux + vy)}.
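A minimal Matlab sketch (u and v are assumed example values) that renders one such
grating; the ridges run along the lines ux + vy = kπ.
[x, y] = meshgrid(linspace(0, 10, 256));   % image coordinates
u = 3;  v = 1;                             % hypothetical spatial frequencies
grating = cos(u*x + v*y);                  % the 2-D sinusoid cos(ux + vy)
imagesc(grating); axis image; colormap gray;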
Inner Product
Dot Product from high school vectors:
Generalize to n-vectors:
x · y = Σ_i x(i) y(i)  (sum over the n components).
Generalize to continuous functions:
f · g = ∫ f(x) g(x) dx.
- Dot product measures similarity of unit-magnitude vectors (=
cosine of their angular difference).
- Dot product projects one vector onto another.
Again, can think of similarity or matching:
same direction, projection is large; orthogonal vectors, projection is zero.
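A minimal sketch of both views, with assumed example vectors:
a = [3 4] / norm([3 4]);            % unit vectors
b = [4 3] / norm([4 3]);
disp(dot(a, b));                    % cosine of the angle between them: 24/25 = 0.96
x = linspace(0, 2*pi, 1000);  dx = x(2) - x(1);
disp(sum(sin(x) .* cos(x)) * dx);   % ~0: sin and cos are orthogonal over a full period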
Basis Functions
A weighted sum of basis functions
is equal to some other function of interest. Very similar to
expressing a vector's position in another coordinate system.
Family of something like shifted Dirac Deltas (but not quite):
"obviously" true.
Family of Sinusoids: not obviously true (unless your name's Fourier).
But easier to formalize.
In 2-D, implies you can build any image out of gratings(!!)
Basis Functions Cont.
Orthogonal
basis functions have zero inner product with every other function
in the family, but positive inner product with themselves (the Dirac and
sinusoid families are both orthogonal sets; a quick numerical check appears
at the end of this section).
Lots of useful basis functions: Bessel (vibrational modes of drumheads),
Legendre, Laplace (solving differential equations, e.g... damped sinusoids),
spherical harmonics (vib. modes in spheres):
animation and
live.
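The promised numerical check, a minimal sketch with assumed sizes: discrete sine
basis vectors are mutually orthogonal but have positive inner product with themselves.
N = 64;  n = (0:N-1)';
s3 = sin(2*pi*3*n/N);  s5 = sin(2*pi*5*n/N);   % two members of the sinusoid family
fprintf('s3 . s5 = %g,  s3 . s3 = %g\n', dot(s3, s5), dot(s3, s3));   % ~0 and N/2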
Linear and Linear Shift-Invariant Systems: Definition
Here, systems of Linear operators, not systems of linear equations
(related, but...). Generally a linear system can be represented as a
matrix (or some generalization involving continuous functions) operating
on an input vector. y = Mx.
Let f1(t) and f2(t) be input
functions,
g1(t) and g2(t)
their corresponding output functions,
and α and β be scalar weights.
Then for
input function α f1(t) + β f2(t)
the output is
α g1(t) + β g2(t).
This is the Superposition Principle.
In a linear, shift-invariant (LSI) system,
input f(t-h) produces output g(t-h).
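A minimal sketch checking superposition on one concrete linear system (an assumed
3-point moving average, applied by convolution):
h = [1 1 1] / 3;                              % 3-point moving average
f1 = randn(1, 50);  f2 = randn(1, 50);
alpha = 2;  beta = -0.7;
lhs = conv(alpha*f1 + beta*f2, h);            % system applied to the weighted sum
rhs = alpha*conv(f1, h) + beta*conv(f2, h);   % weighted sum of the outputs
fprintf('max difference: %g\n', max(abs(lhs - rhs)));   % ~0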
LSI Systems as Matrix Multiplication
LSI systems are a specialized subcase of linear systems: they are
characterized by a (shiftable) vector or function rather than a full
matrix. An example: a linear imaging system---
If x is a one-dimensional ``scene'' and y its image,
and M models a camera,
[0]   [1 0 0 0 0 0 0]   [0]
[0]   [0 1 0 0 0 0 0]   [0]
[1]   [0 0 1 0 0 0 0]   [1]
[2] = [0 0 0 1 0 0 0] * [2]
[4]   [0 0 0 0 1 0 0]   [4]
[0]   [0 0 0 0 0 1 0]   [0]
[0]   [0 0 0 0 0 0 1]   [0]
Can imagine these as infinite objects.
Each row is a shifted version of the Dirac delta.
An out-of-focus camera C2 might be modeled like this:
1 1 0 0 0 ...
0 1 1 0 0 ...
0 0 1 1 0 ...
0 0 0 1 1 ...
0 0 0 0 1 ...
....
A camera C3 that behaves nicely in the middle of its field of view but
goes blurry around the edges could be:
[ 0 ]   [1/2 1/4  0   0   0   0   0 ]   [0]
[1/4]   [1/4 1/2 1/4  0   0   0   0 ]   [0]
[ 1 ]   [ 0   0   1   0   0   0   0 ]   [1]
[ 2 ] = [ 0   0   0   1   0   0   0 ] * [2]
[ 4 ]   [ 0   0   0   0   1   0   0 ]   [4]
[ 1 ]   [ 0   0   0   0  1/4 1/2 1/4]   [0]
[ 0 ]   [ 0   0   0   0   0  1/4 1/2]   [0]
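A minimal sketch (kernel and scene are assumed values) of the same idea done two
ways: a matrix whose rows are shifted copies of one kernel, and a plain convolution.
The two outputs agree.
x = [0 0 1 2 4 0 0];            % 1-D "scene", as above
h = [1/4 1/2 1/4];              % blur kernel: the one repeated matrix row
n = numel(x);
M = zeros(n);
for k = 1:n                     % row k is h centered at column k, clipped at the edges
    for j = max(1, k-1):min(n, k+1)
        M(k, j) = h(j - k + 2);
    end
end
y_matrix = (M * x')';           % LSI system as matrix multiplication
y_conv   = conv(x, h, 'same');  % the same output via convolution
disp([y_matrix; y_conv]);       % identical rows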
Observations
-
LSIs are even nicer than linear systems because a shifted version of the
same input is always the same output, shifted.
Their matrices have a simple form: just one row, repeated and shifted.
-
The output vector can be considered to be indexed by the amount
the single matrix row has been shifted.
-
For LSIs, the matrix is a wasteful notation.
Each output element is a dot product of the input with a
differently-shifted copy of the ``one row'' in the matrix.
The operation that creates the shifted dot products, and indexes the
output by the shift, is the discrete
convolution operation.
-
All extends to continuous domain: a continuous
function as input, operated on by a shifting continuous function instead of a
matrix row. Continuous addition is integration, so we obtain
the (continuous) convolution integral.
-
LSI systems all compute a convolution, and if a system computes a
convolution it's LSI!
Sinusoids as Eigenfunctions
Sinusoid input to ANY linear, shift-invariant system
(any h(t)) yields sinusoid output of
same frequency (wavelength), possibly shifted (in phase) and
changed in amplitude (amplified or attenuated).
This beautiful fact seems surprising, but arises from
the definition of convolution and
the integral properties of e^{ikt} (sinusoids).
This is essentially what it means to be an eigenvector
(or eigenfunction) of a linear operator.
An eigenvector of a matrix remains in the same "direction" when
multiplied by the matrix: though it may be lengthened or shortened
(technically it may be multiplied by a scalar).
A sinusoid is still a sinusoid of the same frequency when operated on by an LSI.
The possible phase shift is a consequence of the vectors actually being
complex-valued.
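A minimal numerical sketch (kernel and frequency are assumed): push a sinusoid
through an arbitrary LSI system and fit a sinusoid of the same frequency to the
output; the near-zero residual shows only amplitude and phase changed.
t = 0:0.01:10;
f = cos(2*pi*1.5*t);                 % input: a 1.5 Hz sinusoid
h = [0.2 0.5 0.9 0.5 0.2];           % some arbitrary impulse response
g = conv(f, h, 'same');              % system output
idx = 50:numel(t)-50;                % ignore edge effects of the finite signal
X = [cos(2*pi*1.5*t(idx))' sin(2*pi*1.5*t(idx))'];
coef = X \ g(idx)';                  % best-fit sinusoid at the same frequency
fprintf('relative residual: %g\n', norm(g(idx)' - X*coef) / norm(g(idx)'));   % ~0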
Linear Systems: Examples
Our LSI systems will operate both in the time-domain (e.g. sound),
and in the 2-D spatial domain (e.g. images).
We draw them using f(t) as the input,
h(t) as the impulse response or point spread function,
and g(t) as the output.
The box performs a convolution.
The derivative operator d/dt is an LSI operator.
Physical systems can often be approximated by linear systems.
Linear Systems and Reality
An ideal camera's film frame extends infinitely and
exactly records all levels of input, from full dark to
infinitely bright.
An ideal camera's point spread function (PSF) is a Dirac Delta (impulse).
A real PSF is always more complicated!
Linear output response to a linearly increasing input is desired, but...
Convolution and Correlation
A linear system convolves its impulse response (or PSF) with the input.
g(t) = f(t) ⊗ h(t), or g(t) = f(t) * h(t).
Formal def:
(f ⊗ g)(t) = ∫_{-∞}^{∞} f(τ) g(t - τ) dτ.
Note one input is reversed in time (or space) (see camera fig above).
Correlation
Correlation is the same as convolution, except that the second input
is not flipped backwards. It is often notated just the same way,
with * or ⊗.
g(t) = f(t) ⊗ h(t), or g(t) = f(t) * h(t).
Formal def:
(f ⊗ g)(t) = ∫_{-∞}^{∞} f(τ) g(τ - t) dτ.
-
Convolution is commutative (f ⊗ g = g ⊗ f); correlation, in general, is not
(swapping the arguments time-reverses the result).
-
Array correlation:
N-vector ⊗ M-vector = (M+N-1)-vector.
-
In Matlab,
conv(fliplr([ 1 2 3 4]), [2,4,6])
gives
[8 22 40 28 16 6]
-
The correlation also looks like a shifting dot product, and without
the reversal it's easy to see it as a matching function being tried
out at all shifts. Where the arguments best match, the correlation is
highest. A function's autocorrelation (its correlation with itself) is highest at
zero offset.
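A minimal sketch of the shifting dot-product view, using the vectors from the
example above:
f = [1 2 3 4];  g = [2 4 6];
c = conv(fliplr(f), g);     % the correlation computed above: [8 22 40 28 16 6]
disp(c);
disp(dot(f(2:4), g));       % one alignment done by hand: 2*2 + 3*4 + 4*6 = 40 = c(3)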
In-class Examples
- The correlation of the delta function with itself, δ(t) ⊗ δ(t) -- its autocorrelation.
- The autocorrelation of the delta-comb function.
- The correlation of the delta function with an arbitrary function, δ(t) ⊗ h(t).
(Correlation of two different functions is sometimes called their cross-correlation when it
could get mixed up with the autocorrelation in the surrounding prose.)
Note this exercise shows that to discover the impulse response
(PSF) of an unknown system, just give it an impulse: the output is the PSF!
Like 'kicking the tyres' but more informative.
- The correlation of [1 2 3 2 1] with [1 1 1] (see the sketch below).
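A minimal sketch of the exercises (correlation done, as before, by conv with one
argument flipped; h is just an assumed example):
d = [0 0 1 0 0];                           % a discrete "delta"
disp(conv(fliplr(d), d));                  % delta with itself: a single spike, at zero lag
h = [3 1 4 1 5];                           % an arbitrary "unknown system" response
disp(conv(fliplr(d), h));                  % delta with h: a copy of h pops out (the PSF)
disp(conv(fliplr([1 2 3 2 1]), [1 1 1]));  % last exercise: [1 3 6 7 6 3 1]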
Fourier Transform
The Fourier transform is a linear mathematical operation that
takes time- or space-domain input (sound wave, voltage waveform,
image,...) and outputs an equivalent
(spatial) frequency-domain representation.
The operation is lossless and invertible.
Essentially, it
decomposes the input into a number of sinusoids of varying magnitude and phase
(and in two dimensions, directions).
The inverse transform reverses the process.
The formal definition is
F(ν) = ∫ f(t) e^{-2πiνt} dt,
where t is time or space and ν is frequency. The inverse is simply
related (we have left out some normalization constants):
f(t) = ∫ F(ν) e^{2πiνt} dν.
The Fourier transform (FT) is an inner-product integral that answers the
question:
how much of the particular
sine wave e^{-2πiνt}
is in this input function f(t)?
In mathematical terms,
it is projecting f(t) onto the transform basis
space of sinusoids.
The result is generally a complex function (real and imaginary parts).
FT Properties
- FT is linear, i.e., the FT of a weighted sum of functions is the
weighted sum of their FTs.
- FT of any symmetric (even) function is real and even: (the
function is a sum of cosines).
- FT of any antisymmetric (odd) function is imaginary and odd:
(function is a sum of sines).
- FT of a real function is Hermitian:
F(ν) = conj(F(-ν)).
- In the FT of a shifted function, the magnitude of all the components
(the complex numbers) stays the same, but they rotate (their phase changes)
as a function of the shift and their frequency.
- Scaling or similarity: FT[f(ax)] = (1/|a|) F(ν/a)
- Thus a time-reversal property: FT[f(-x)] = F(-ν)
- FT of a Gaussian is a Gaussian.
- FT of a Dirac comb is a Dirac comb.
- Scaling examples:
FT of a narrower (wider) Gaussian is a wider (narrower) one,
FT of a higher (lower) frequency
Dirac comb is a lower (higher) frequency one,
FT of a (single) Dirac function is flat, with
waves of all frequencies and the same magnitude.
- FT has an equivalent for vector (discrete) inputs. It
has a very clever implementation called
the Fast Fourier Transform, or FFT, which we'll be using.
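A minimal numerical check of two of the properties above, using the FFT (the
discrete version of the FT) on assumed example signals:
N = 256;  n = 0:N-1;
g = exp(-0.5*((n - N/2)/8).^2);       % a Gaussian, even about the center
G = fft(fftshift(g));                 % fftshift moves the center to index 1
fprintf('max imaginary part: %g\n', max(abs(imag(G))));   % ~0: real (and even)
f  = double(n >= 100 & n < 120);      % a pulse
f2 = circshift(f, 37);                % the same pulse, shifted
fprintf('magnitude change under shift: %g\n', max(abs(abs(fft(f)) - abs(fft(f2)))));  % ~0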
(Fast) Fourier Transform Issues
- FFT takes N log(N) time for an N-vector (matrix
multiplication, the "Slow FT", takes N² time).
- FFT works on each dimension separately, so the FFT of
multi-dimensional arrays can be computed by repeatedly
using the 1-D algorithm.
The total time required scales exponentially
in number of dimensions (as does the number of data points),
but retains its N log(N) time with respect to the total
number of data points.
- FFT works fastest for arrays whose dimensions are all some power
of two (2^i), but can be adjusted to
work for any size.
- Matlab FFT and FFT⁻¹ commands:
fft, ifft, fft2, ifft2
- fft(X,N) creates a complex N -long output vector. Input is
truncated if longer than N, else padded with 0's.
- Periodic and aperiodic functions: for an aperiodic N-vector
one could use fft(X, 2*N), which effectively embeds the function
in a larger space of zeros (see the sketch at the end of this list).
There may still be high-frequency artifacts
arising from discontinuities at the embedding boundaries.
- Generally the FT takes complex input. That happens especially in
intermediate steps, but most real-world input is real.
So for an N-vector of doubles in, we get an N-vector of complex numbers out.
That is twice as many numbers, but only N are really needed,
since the output is conjugate-symmetric for real input.
- The FT (of anything but a symmetric real input function) is
complex. This is difficult to visualize.
The power spectrum provides a real signal, representing
quantities of engineering interest, that is easy to visualize
(i.e. by plotting it).
- Takes a little experience, so experiment.
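The sketch promised above, illustrating fft(X, N) truncation and zero-padding with
an assumed random input:
X = randn(1, 16);
disp(numel(fft(X, 8)));                % 8: input truncated to 8 samples
disp(numel(fft(X, 32)));               % 32: input zero-padded to 32 samples
Y = fft(X, 32);
disp(max(abs(Y(1:2:end) - fft(X))));   % ~0: every other padded bin equals the unpadded FT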
Power Spectrum
Analyzing time series: one common issue is finding the dominant frequencies.
The power spectrum is the squared magnitude of the FT: (F .* conj(F)).
Below, the PS is displayed with its 0-frequency origin in the middle of the
X-axis (at 32).
PS tells how much power the signal contains at a given frequency.
Sampling Issues: bandlimited input, sample at least twice
the maximum signal frequency.
More Power Spectrum
60, 150, 350 Hz. sines plus 0-mean Gaussian noise.
Power Spectrum.
function thePS = PowSpec1D(X,n)
Y = fft(X,n);                  % n-point FFT of the input
thePS = (Y .* conj(Y)) / n;    % squared magnitude = power at each frequency
end
... % and in the calling script
plot(xaxis, fftshift(thePS));  % fftshift puts the 0-frequency origin in the middle
Complex Numbers
Complex number a + bi
is a 2-vector (a,b) living in the complex plane, which has a
real axis and an imaginary axis.
Or, a complex number is an ordered pair (a,b) that is a
mathematical object having rather funny rules of operation.
The conjugate of a complex number negates the imaginary part:
conj(a + bi) = (a - bi).
You can work out that
(a + bi) · conj(a+bi) is a² + b²,
the squared magnitude (length) of the (a,b) vector in the
complex plane.
Complex numbers are added, subtracted, multiplied, and divided by
formally applying the associative, commutative and distributive laws
of algebra, together with the equation i² = -1:
- Addition: (a + bi) + (c+di) = (a+c) + (b+d)i
- Subtraction: (a + bi) - (c+di) = (a-c) + (b-d)i
- Multiplication: (a + bi)(c + di) = ac + bci + adi + bdi² =
(ac - bd) + (bc + ad)i
- Division: (a + bi)/(c + di) = (ac + bd)/(c² + d²)
+ ((bc - ad)/(c² + d²))i,
where c and d
are not both zero. Derive this by multiplying both
the numerator and the denominator by the conjugate of the denominator
c + di, which is (c - di).
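A minimal sketch checking the rules above against Matlab's built-in complex
arithmetic (example values assumed):
z = 3 + 4i;  w = 1 - 2i;
disp(z * conj(z));    % 25 = 3^2 + 4^2, the squared magnitude
disp(z * w);          % (3*1 - 4*(-2)) + (4*1 + 3*(-2))i = 11 - 2i
disp(z / w);          % ((3*1 + 4*(-2)) + (4*1 - 3*(-2))i) / (1^2 + 2^2) = -1 + 2i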
Phasors
A phasor is a complex number, considered as a vector, and
considered to rotate around the origin without changing its length.
This alters the magnitude of its real and imaginary parts, and is said
to change its phase. FT entries are complex. Considered as phasors,
each corresponds to a sinusoid at a frequency given by its
coordinates in the FT, amplitude equal to its length, and phase given
by its angle to the real axis.
Looking at individual phasors: there are two of them for a sine
wave, 180 degrees apart:
one frequency, but two symmetrical conjugate elements in the FT.
We can see the phasor rotate as the phase changes.
In the image below, for the sine (red, 0 phase angle),
we get the purely imaginary value (0.0000 - 32.0000i).
The shifted sine (blue) gives (29.5641 - 12.2459i).
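A minimal sketch of the two conjugate phasors behind one real sine wave (64 samples,
consistent with the -32i value quoted above; the 3-cycle frequency is an assumed example):
N = 64;  n = 0:N-1;
F = fft(sin(2*pi*3*n/N));   % a 3-cycle sine across 64 samples
disp(F(4));                 % the +3-cycle bin: ~ 0 - 32i
disp(F(62));                % the -3-cycle bin: ~ 0 + 32i (its conjugate)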
The Convolution Theorem
Another FT symmetry property:
Convolution in the time (space) domain is dual to elementwise
multiplication in the frequency domain.
Let FT denote the Fourier transform operation. Then
FT{f ⊗ g} = FT{f} · FT{g},
where · denotes point-wise multiplication.
Also vice-versa:
FT{f · g} = FT{f} ⊗ FT{g}.
Applying the inverse Fourier transform
FT⁻¹ to the first equation, we get:
f ⊗ g = FT⁻¹{FT{f} · FT{g}}
Since convolution and correlation are so important,
the Convolution Theorem is a key idea and technique.
We'll see some applications later.
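A minimal numerical check of the theorem with assumed random signals; both are
zero-padded to the full convolution length so the DFT product matches the linear
convolution exactly.
f = randn(1, 64);  g = randn(1, 32);
L = numel(f) + numel(g) - 1;
lhs = fft(conv(f, g));             % FT of the convolution (length L)
rhs = fft(f, L) .* fft(g, L);      % product of the zero-padded FTs
fprintf('max difference: %g\n', max(abs(lhs - rhs)));   % ~0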
Linear Systems and the Frequency Domain
Use convolution theorem
to make a frequency-domain version of the linear system block diagram:
Inputs and outputs are
FT{f(t)} and FT{g(t)},
the function in the box is FT{h(t)},
and the
operation of the box is elementwise multiplication (Matlab's .*).
Box is a "graphic equalizer": sound (say) comes in as a pressure wave,
the sum of sinusoids of many frequencies. Equalizer amplifies
or attenuates (and changes phase) according to H(&nu), the
box's function.
FT{h(t)} =H(\nu) is called the
Modulation Transfer Function (MTF).
So a graphic equalizer is basically just an MTF. It's a linear system
that's easier to think about in the frequency domain than in the
temporal domain.
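A minimal sketch of a crude "graphic equalizer" working entirely in the frequency
domain (signal and cut-off are assumed): multiply the FT of the input by an H(ν)
that keeps only low frequencies.
N = 1024;  t = (0:N-1)/N;
f = sin(2*pi*5*t) + 0.5*sin(2*pi*200*t);   % a low tone plus a high tone
F = fft(f);
H = double(abs([0:N/2, -N/2+1:-1]) < 50);  % MTF: keep frequencies below 50 cycles
g = real(ifft(F .* H));                    % equalized output
fprintf('distance from pure low tone, before/after: %g  %g\n', ...
        norm(f - sin(2*pi*5*t)), norm(g - sin(2*pi*5*t)));   % large, then ~0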
The Sampling Theorem
Basic Question: Can we exactly
reproduce a continuous signal from a finite
number of discrete samples?
Yes (!!), if...
-
There is a limit to the signal frequency: that is,
it's a band-limited function.
- The sampling is done often enough to reconstruct the
highest-frequency sinusoid in the signal.
The sampling rate must clearly be at least twice the highest signal
frequency; this minimum rate is called the Nyquist rate. It turns out
any rate above that works (given enough signal).
If we sample at the high and low peaks of the red sine, we can get its
amplitude, phase, and frequency.
But the higher-frequency blue sine is undetected, being 0 at all sampling
points.
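A minimal numerical sketch of the blue-sine problem (rates assumed): a 9 Hz sine
sampled at only 10 Hz is indistinguishable from a (sign-flipped) 1 Hz sine at the
sample points.
fs = 10;  t = 0:1/fs:2;      % sample at 10 Hz (a 9 Hz signal would need > 18 Hz)
x9 = sin(2*pi*9*t);          % 9 Hz signal, well above fs/2 = 5 Hz
x1 = sin(2*pi*1*t);          % its 1 Hz alias
fprintf('max difference at the samples: %g\n', max(abs(x9 + x1)));   % ~0: x9 = -x1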
The Sampling Theorem: Frequency Domain
-
The FT of the Dirac comb is another Dirac comb.
- Scaling Property: the closer the time (space) domain peaks are
together, the farther apart the frequency domain peaks are.
-
Multiplying a function f(t) by the Dirac comb effectively gives a
sampled version of f(t).
- Convolution Theorem:
FT{f · g} = FT{f} ⊗ FT{g}.
Sampling at too low a rate means copies of FT{signal} overlap
(causing 'aliasing') and our strategy fails due to addition
and confusion between copies. Faster sampling separates the
FT copies in frequency space, and we may imagine snipping one out
and retrieving
f(t) = FT⁻¹{F(ν)}.
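A minimal sketch of the first two points (sizes assumed): the FFT of a discrete comb
is another comb, and squeezing the time-domain spikes together spreads the
frequency-domain spikes apart.
N = 64;
comb8 = double(mod(0:N-1, 8) == 0);      % spikes every 8 samples
comb4 = double(mod(0:N-1, 4) == 0);      % spikes every 4 samples
disp(find(abs(fft(comb8)) > 0.5) - 1);   % nonzero at every 8th frequency: 0, 8, 16, ...
disp(find(abs(fft(comb4)) > 0.5) - 1);   % nonzero at every 16th frequency: 0, 16, 32, 48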
Three Frequency-domain Operations
Matching
- Matching known signal in Gaussian noise: optimal is
correlation detection or matched filter detection.
- Autocorrelation (f(t) ⊗ f(t)) is highest when
f(t) lines up with itself.
- Crosscorrelation (f(t) ⊗ g(t)) has a peak if
f(t)
lines up with a version of itself in g(t). So peaks
in cross-correlation can signal positive detections.
- Works for any shape anywhere in an image (like airplanes seen from
above in aerial photo): shift invariant. BUT: scale and rotation
variant :-{.
- For best peaks, the autocorrelation of the 'shape' should be as close as
possible to an impulse. This leads to the design of such functions, and to
finding them, to assure good results when matching points in images
(e.g. for automatic panorama construction from multiple images).
A 'chirp' and the autocorrelation of a random vector of 1 and -1.
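A minimal matched-filter sketch (all values assumed): cross-correlate a noisy signal
with a known ±1 template; the correlation peaks where the template is hidden.
template = sign(randn(1, 31));                % random +/-1 vector: sharp autocorrelation
signal = 0.5*randn(1, 200);                   % Gaussian noise
signal(80:110) = signal(80:110) + template;   % hide the template at offset 80
c = conv(fliplr(template), signal);           % cross-correlation via conv
[~, k] = max(c);
fprintf('template found near offset %d\n', k - numel(template) + 1);   % 80, with high probability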
Deconvolution
Produce a good image from one made with an exotic PSF (as in
some of today's cameras; see 'coded aperture' and 'computational
camera'). Or fix an image made with an undesirable PSF
(bad optics (e.g. Hubble), bad 'seeing conditions' (optical astronomy,
atmospheric instability), motion or focus blur, ...).
How? Convolution theorem:
FT{f ⊗ h} = FT{f} · FT{h}
Where LHS is the image from a camera with
point-spread
function h and scene data f.
If h is an ideal Dirac delta, the
output is the scene.
Camera motion: h spreads the
image out along a 2-D path.
Focus blur: the PSF becomes a disk (the sensor intersects the cone of focused
rays NOT at its point).
General treatment:
Assume we know or can guess h.
In the equation above, divide (elementwise) by
FT{h}:
FT{f ⊗ h}/FT{h} = FT{f}, so
f = FT⁻¹[ FT{f ⊗ h} / FT{h} ]
We now have FT of what we want (the input function)
using things we know (PSF and degraded
image). Inverse-transforming both sides recovers
the original input. It almost works.
Consider boxcar (or disk) blur function, which formalizes either
(in 2-D) defocus blur or (in 1-D) straight-line motion blur. Here's
FT{h}: note it crosses zero often, so it is near zero often.
Multiplying by
1/FT{h} where FT{h} is near zero
amplifies the frequencies
there
by huge amounts (or gives Inf where there is an exact divide by zero).
Noise at these amplified frequencies can overwhelm
the signal. Some care (thresholding before multiplying, say) is needed.
The Gaussian is a user-friendly blur function since its FT is a Gaussian,
so always positive; but dividing by it still amplifies high frequencies,
where that Gaussian is small.
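A minimal 1-D deconvolution sketch (scene and PSF assumed; blur and deblur both done
circularly with the FFT for simplicity), including the protective threshold just discussed:
N = 256;
f = zeros(1, N);  f(60) = 1;  f(100:140) = 0.5;   % a 1-D "scene"
h = zeros(1, N);  h(1:5) = 1/5;                   % 5-sample boxcar blur PSF
g = real(ifft(fft(f) .* fft(h)));                 % blurred "image" (convolution theorem)
H = fft(h);
H(abs(H) < 1e-3) = 1e-3;                          % guard near-zero frequencies (vital with noise)
f_hat = real(ifft(fft(g) ./ H));                  % divide in the frequency domain
fprintf('max recovery error: %g\n', max(abs(f_hat - f)));   % ~0 in this noise-free case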
Time-Domain Signal Processing
- Probably most SP done in original (time, space) domain:
Photoshop, guitar effects...
- e.g. Spatial derivative of an image in the x-direction
('vertical edge finder'): correlate with [-1 1]: easy (see the sketch at the
end of this section).
OR we can take FT{d/dt}, get an NxN filter, FT the image,
multiply the two, inverse FT the result... whew. Too much
computation.
- Data analysis usually involves
fitting mathematical models (curves) to experimental data,
smoothing or interpolating data, creating statistics and
visualizations: all spatial or time-domain operations.
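The spatial-domain sketch promised above (image assumed): the 'vertical edge finder'
is just a correlation of each row with [-1 1].
img = [zeros(8, 4) ones(8, 4)];               % dark left half, bright right half
edges = conv2(img, fliplr([-1 1]), 'valid');  % correlation via conv2 with a flipped kernel
disp(edges(1, :));                            % [0 0 0 1 0 0 0]: one strong response at the boundary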
Last update: 04/22/2011: RN