
Example Applications of the DFT

This chapter gives a start on some applications of the DFT. First, we work through a progressive series of spectrum analysis examples using an efficient implementation of the DFT in Matlab or Octave. The various Fourier theorems provide a ``thinking vocabulary'' for understanding elements of spectral analysis. Next, the basics of linear systems theory are presented, relying heavily on the convolution theorem and properties of complex numbers. Finally, some applications of the DFT in statistical signal processing are introduced, including cross-correlation, matched filtering, system identification, power spectrum estimation, and coherence function measurement. A side topic in this chapter is practical usage of matlab for signal processing, including display of signals and spectra.

Why a DFT is usually called an FFT in practice

Practical implementations of the DFT are usually based on one of the Cooley-Tukey ``Fast Fourier Transform'' (FFT) algorithms [16].8.1 For this reason, the matlab DFT function is called `fft', and the actual algorithm used depends primarily on the transform length $ N$.8.2 The fastest FFT algorithms generally occur when $ N$ is a power of 2. In practical audio signal processing, we routinely zero-pad our FFT input buffers to the next power of 2 in length (thereby interpolating our spectra somewhat) in order to enjoy the power-of-2 speed advantage. Finer spectral sampling is a typically welcome side benefit of increasing $ N$ to the next power of 2. Appendix A provides a short overview of some of the better known FFT algorithms, and some pointers to literature and online resources.


Spectrum Analysis of a Sinusoid:
Windowing, Zero-Padding, and FFT

The examples below give a progression from the most simplistic analysis up to a proper practical treatment. Careful study of these examples will teach you a lot about how spectrum analysis is carried out on real data, and provide opportunities to see the Fourier theorems in action.

FFT of a Simple Sinusoid

Our first example is an FFT of the simple sinusoid

$\displaystyle x(n) = \cos(\omega_x n T)
$

where we choose $ \omega_x=2\pi(f_s/4)$ (frequency $ f_s/4$ Hz) and $ T=1$ (sampling rate $ f_s$ set to 1). Since we're using a Cooley-Tukey FFT, the signal length $ N$ should be a power of $ 2$ for fastest results. Here is the Matlab code:

% Example 1: FFT of a DFT-sinusoid

% Parameters:
N = 64;              % Must be a power of two
T = 1;               % Set sampling rate to 1
A = 1;               % Sinusoidal amplitude
phi = 0;             % Sinusoidal phase
f = 0.25;            % Frequency (cycles/sample)
n = [0:N-1];         % Discrete time axis
x = A*cos(2*pi*n*f*T+phi); % Sampled sinusoid
X = fft(x);          % Spectrum

% Plot time data:
figure(1);
subplot(3,1,1);
plot(n,x,'*k');
ni = [0:.1:N-1];     % Interpolated time axis
hold on;
plot(ni,A*cos(2*pi*ni*f*T+phi),'-k'); grid off;
title('Sinusoid at 1/4 the Sampling Rate');
xlabel('Time (samples)');
ylabel('Amplitude');
text(-8,1,'a)');
hold off;

% Plot spectral magnitude:
magX = abs(X);
fn = [0:1/N:1-1/N];  % Normalized frequency axis
subplot(3,1,2);
stem(fn,magX,'ok'); grid on;
xlabel('Normalized Frequency (cycles per sample)');
ylabel('Magnitude (Linear)');
text(-.11,40,'b)');

% Same thing on a dB scale:
spec = 20*log10(magX); % Spectral magnitude in dB
subplot(3,1,3);
plot(fn,spec,'--ok'); grid on;
axis([0 1 -350 50]);
xlabel('Normalized Frequency (cycles per sample)');
ylabel('Magnitude (dB)');
text(-.11,50,'c)');
cmd = ['print -deps ', '../eps/example1.eps'];
disp(cmd); eval(cmd);

Figure 8.1: Sampled sinusoid at frequency $ f=f_s/4$. a) Time waveform. b) Magnitude spectrum. c) dB magnitude spectrum.
\includegraphics[width=\twidth]{eps/example1}

The results are shown in Fig.8.1. The time-domain signal is shown in the upper plot (Fig.8.1a), both in pseudo-continuous and sampled form. In the middle plot (Fig.8.1b), we see two peaks in the magnitude spectrum, each at magnitude $ 32$ on a linear scale, located at normalized frequencies $ f= 0.25$ and $ f= 0.75 =
-0.25$. A spectral peak amplitude of $ 32 = (1/2) 64$ is what we expect, since

$\displaystyle \hbox{\sc DFT}_k(\cos(\omega_x n)) \isdef \sum_{n=0}^{N-1}
\frac{e^{j\omega_x n} + e^{-j\omega_x n}}{2} e^{-j\omega_k n},
$

and when $ \omega_k=\pm\omega_x$, this reduces to

$\displaystyle \sum_{n=0}^{N-1}\frac{e^{j 0 n}}{2} = \frac{N}{2}.
$

For $ N=64$ and $ \omega_x=2\pi f_s/4$, this happens at bin numbers $ k =
0.25 N = 16$ and $ k = 0.75N = 48$. However, recall that array indexes in matlab start at $ 1$, so that these peaks will really show up at indexes $ 17$ and $ 49$ in the magX array.
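
To confirm the bin locations numerically, we can inspect the magX array from Example 1 directly. The following is only a quick check (a sketch), assuming the variables N and magX from the script above are still in the workspace:

magX(17)                       % should equal N/2 = 32 (bin k=16)
magX(49)                       % should equal N/2 = 32 (bin k=48)
max(magX([1:16,18:48,50:64]))  % remaining bins near 0 (roundoff level)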

The spectrum should be exactly zero at the other bin numbers. How accurately this happens can be seen by looking on a dB scale, as shown in Fig.8.1c. We see that the spectral magnitude in the other bins is on the order of $ 300$ dB lower, which is close enough to zero for audio work $ (\stackrel{\mbox{.\,.}}{\smile})$.


FFT of a Not-So-Simple Sinusoid

Now let's increase the frequency in the above example by one-half of a bin:

% Example 2 = Example 1 with frequency between bins

f = 0.25 + 0.5/N;   % Move frequency up 1/2 bin

x = cos(2*pi*n*f*T); % Signal to analyze
X = fft(x);          % Spectrum
...                  % See Example 1 for plots and such

Figure 8.2: Sinusoid at Frequency $ f=0.25+0.5/N$. a) Time waveform. b) Magnitude spectrum. c) dB magnitude spectrum.
\includegraphics[width=\twidth]{eps/example2}

The resulting magnitude spectrum is shown in Fig.8.2b and c. At this frequency, we get extensive ``spectral leakage'' into all the bins. To get an idea of where this is coming from, let's look at the periodic extension (§7.1.2) of the time waveform:

% Plot the periodic extension of the time-domain signal
plot([x,x],'--ok');
title('Time Waveform Repeated Once');
xlabel('Time (samples)'); ylabel('Amplitude');

The result is shown in Fig.8.3. Note the ``glitch'' in the middle where the signal begins its forced repetition.

Figure 8.3: Time waveform repeated to show discontinuity introduced by periodic extension (see midpoint).
\includegraphics[width=\twidth,height=2in]{eps/waveform2}


FFT of a Zero-Padded Sinusoid

Looking back at Fig.8.2c, we see there are no negative dB values. Could this be right? Could the spectral magnitude at all frequencies be 1 or greater? The answer is no. To better see the true spectrum, let's use zero padding in the time domain (§7.2.7) to give ideal interpolation (§7.4.12) in the frequency domain:

zpf = 8;            % zero-padding factor
x = [cos(2*pi*n*f*T),zeros(1,(zpf-1)*N)]; % zero-padded
X = fft(x);         % interpolated spectrum
magX = abs(X);      % magnitude spectrum
...                 % waveform plot as before
nfft = zpf*N;       % FFT size = new frequency grid size
fni = [0:1.0/nfft:1-1.0/nfft]; % normalized freq axis
subplot(3,1,2);
% with interpolation, we can use solid lines '-':
plot(fni,magX,'-k'); grid on;
...
spec = 20*log10(magX); % spectral magnitude in dB
% clip below at -40 dB:
spec = max(spec,-40*ones(1,length(spec)));
...                 % plot as before

Figure 8.4: Zero-padded sinusoid at frequency $ f=0.25+0.5/N$ cycles/sample. a) Time waveform. b) Magnitude spectrum. c) dB magnitude spectrum.
\includegraphics[width=\twidth]{eps/example3}

Figure 8.4 shows the zero-padded data (top) and corresponding interpolated spectrum on linear and dB scales (middle and bottom, respectively). We now see that the spectrum has a regular sidelobe structure. On the dB scale in Fig.8.4c, negative values are now visible. In fact, it was desirable to clip them at $ -40$ dB to prevent deep nulls from dominating the display by pushing the negative vertical axis limit to $ -300$ dB or more, as in Fig.8.1c. This example shows the importance of using zero padding to interpolate spectral displays so that the untrained eye will ``fill in'' properly between the spectral samples.


Use of a Blackman Window

As Fig.8.4a suggests, the previous example can be interpreted as using a rectangular window to select a finite segment (of length $ N$) from a sampled sinusoid that continues for all time. In practical spectrum analysis, such excerpts are normally analyzed using a window that is tapered more gracefully to zero on the left and right. In this section, we will look at using a Blackman window [70]8.3 on our example sinusoid. The Blackman window has good (though suboptimal) characteristics for audio work.

In Octave8.4 or the Matlab Signal Processing Toolbox,8.5 a Blackman window of length $ M=64$ can be designed very easily:

M = 64;
w = blackman(M);

Many other standard windows are defined as well, including hamming, hanning, and bartlett windows.

In Matlab without the Signal Processing Toolbox, the Blackman window is readily computed from its mathematical definition:

w = .42 - .5*cos(2*pi*(0:M-1)/(M-1)) ...
       + .08*cos(4*pi*(0:M-1)/(M-1));
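
As a quick consistency check (a sketch; it assumes blackman from the Signal Processing Toolbox or Octave is available), the explicit formula above should agree with the toolbox window to within rounding error:

max(abs(w(:) - blackman(M)))   % expect a value on the order of eps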

Figure 8.5 shows the Blackman window and its magnitude spectrum on a dB scale. Fig.8.5c uses the more ``physical'' frequency axis in which the upper half of the FFT bin numbers are interpreted as negative frequencies. Here is the complete Matlab script for Fig.8.5:

M = 64;
w = blackman(M);
figure(1);
subplot(3,1,1); plot(w,'*'); title('Blackman Window');
xlabel('Time (samples)'); ylabel('Amplitude'); text(-8,1,'a)');

% Also show the window transform:
zpf = 8;                      % zero-padding factor
xw = [w',zeros(1,(zpf-1)*M)]; % zero-padded window
Xw = fft(xw);                 % Blackman window transform
spec = 20*log10(abs(Xw));     % Spectral magnitude in dB
spec = spec - max(spec);      % Normalize to 0 db max
nfft = zpf*M;
spec = max(spec,-100*ones(1,nfft)); % clip to -100 dB
fni = [0:1.0/nfft:1-1.0/nfft];   % Normalized frequency axis
subplot(3,1,2); plot(fni,spec,'-'); axis([0,1,-100,10]);
xlabel('Normalized Frequency (cycles per sample)');
ylabel('Magnitude (dB)'); grid; text(-.12,20,'b)');

% Replot interpreting upper bin numbers as frequencies<0:
nh = nfft/2;
specnf = [spec(nh+1:nfft),spec(1:nh)];  % see fftshift()
fninf = fni - 0.5;
subplot(3,1,3);
plot(fninf,specnf,'-'); axis([-0.5,0.5,-100,10]); grid;
xlabel('Normalized Frequency (cycles per sample)');
ylabel('Magnitude (dB)');
text(-.62,20,'c)');
cmd = ['print -deps ', '../eps/blackman.eps'];
disp(cmd); eval(cmd);
disp 'pausing for RETURN (check the plot). . .'; pause

Figure 8.5: The Blackman window: a) window itself in the time domain, b) dB magnitude spectrum plotted over normalized frequencies $ [0,1)$, and c) same thing plotted over $ [-0.5,0.5)$.
\includegraphics[width=\twidth]{eps/blackman}

Applying the Blackman Window

Now let's apply the Blackman window to the sampled sinusoid and look at the effect on the spectrum analysis:

% Windowed, zero-padded data:
n = [0:M-1];          % discrete time axis
f = 0.25 + 0.5/M;     % frequency
xw = [w(:)' .* cos(2*pi*n*f),zeros(1,(zpf-1)*M)]; % w(:)' ensures a row vector

% Smoothed, interpolated spectrum:
X = fft(xw);

% Plot time data:
subplot(2,1,1);
plot(xw);
title('Windowed, Zero-Padded, Sampled Sinusoid');
xlabel('Time (samples)');
ylabel('Amplitude');
text(-50,1,'a)');

% Plot spectral magnitude:
spec = 10*log10(conj(X).*X);  % Spectral magnitude in dB
spec = max(spec,-60*ones(1,nfft)); % clip to -60 dB
subplot(2,1,2);
plot(fninf,fftshift(spec),'-');
axis([-0.5,0.5,-60,40]);
title('Smoothed, Interpolated, Spectral Magnitude (dB)');
xlabel('Normalized Frequency (cycles per sample)');
ylabel('Magnitude (dB)'); grid;
text(-.6,40,'b)');

Figure 8.6 plots the zero-padded, Blackman-windowed sinusoid, along with its magnitude spectrum on a dB scale. Note that the first sidelobe (near $ -40$ dB) is nearly 60 dB below the spectral peak (near $ +20$ dB). This is why the Blackman window is considered adequate for many audio applications. From the dual of the convolution theorem discussed in §7.4.6, we know that windowing in the time domain corresponds to smoothing in the frequency domain. Specifically, the complex spectrum with magnitude displayed in Fig.8.4b has been convolved with the Blackman window transform (dB magnitude shown in Fig.8.5c). Thus, the Blackman window Fourier transform has been applied as a smoothing kernel to the Fourier transform of the rectangularly windowed sinusoid to produce the smoothed result in Fig.8.6b. This topic is pursued in detail at the outset of Book IV in the music signal processing series [70].

Figure 8.6: Effect of the Blackman window on the sinusoidal data.
\includegraphics[width=\twidth]{eps/xw}


Hann-Windowed Complex Sinusoid

In this example, we'll perform spectrum analysis on a complex sinusoid having only a single positive frequency. We'll use the Hann window (also known as the Hanning window) which does not have as much sidelobe suppression as the Blackman window, but its main lobe is narrower. Its sidelobes ``roll off'' very quickly versus frequency. Compare with the Blackman window results to see if you can see these differences.

The Matlab script for synthesizing and plotting the Hann-windowed sinusoid is given below:

% Analysis parameters:
M = 31;         % Window length
N = 64;         % FFT length (zero padding factor near 2)

% Signal parameters:
wxT = 2*pi/4;   % Sinusoid frequency (rad/sample)
A = 1;          % Sinusoid amplitude
phix = 0;       % Sinusoid phase

% Compute the signal x:
n = [0:N-1];    % time indices for sinusoid and FFT
x = A * exp(j*(wxT*n+phix)); % complex sine [1,j,-1,-j...]

% Compute Hann window:
nm = [0:M-1];   % time indices for window computation
% Hann window = "raised cosine", normalization (1/M)
% chosen to give spectral peak magnitude at 1/2:
w = (1/M) * (cos((pi/M)*(nm-(M-1)/2))).^2;

wzp = [w,zeros(1,N-M)]; % zero-pad out to the length of x
xw = x .* wzp;          % apply the window w to signal x

figure(1);
subplot(1,1,1);

% Display real part of windowed signal and Hann window
plot(n,wzp,'-k'); hold on; plot(n,real(xw),'*k'); hold off;
title(['Hann Window and Windowed, Zero-Padded, ',...
       'Sinusoid (Real Part)']);
xlabel('Time (samples)'); ylabel('Amplitude');

The resulting plot of the Hann window and its use on sinusoidal data is shown in Fig.8.7.

Figure 8.7: A length 31 Hann window (``raised cosine'') overlaid with the real part of the Hann-windowed complex sinusoid. Zero-padding is also shown. The sampled sinusoid is plotted using `*' with no connecting interpolation lines. You must now imagine the continuous real sinusoid (windowed) threading through the asterisks.
\includegraphics[width=\twidth]{eps/hanning}

Hann Window Spectrum Analysis Results

Finally, the Matlab for computing the DFT of the Hann-windowed complex sinusoid and plotting the results is listed below. To help see the full spectrum, we also compute a heavily interpolated spectrum (via zero padding as before) which we'll draw using solid lines.

% Compute the spectrum and its alternative forms:
Xw = fft(xw);              % FFT of windowed data
fn = [0:1.0/N:1-1.0/N];    % Normalized frequency axis
spec = 20*log10(abs(Xw));  % Spectral magnitude in dB
% Since the nulls can go to minus infinity, clip at -100 dB:
spec = max(spec,-100*ones(1,length(spec)));
phs = angle(Xw);           % Spectral phase in radians
phsu = unwrap(phs);        % Unwrapped spectral phase

% Compute heavily interpolated versions for comparison:
Nzp = 16;                   % Zero-padding factor
Nfft = N*Nzp;               % Increased FFT size
xwi = [xw,zeros(1,Nfft-N)]; % New zero-padded FFT buffer
Xwi = fft(xwi);             % Compute interpolated spectrum
fni = [0:1.0/Nfft:1.0-1.0/Nfft]; % Normalized freq axis
speci = 20*log10(abs(Xwi)); % Interpolated spec mag (dB)
speci = max(speci,-100*ones(1,length(speci))); % clip
phsi = angle(Xwi);          % Interpolated phase
phsiu = unwrap(phsi);       % Unwrapped interpolated phase

figure(1);
subplot(2,1,1);

plot(fn,abs(Xw),'*k'); hold on;
plot(fni,abs(Xwi),'-k'); hold off;
title('Spectral Magnitude');
xlabel('Normalized Frequency (cycles per sample)');
ylabel('Amplitude (linear)');

subplot(2,1,2);

% Same thing on a dB scale
plot(fn,spec,'*k'); hold on; plot(fni,speci,'-k'); hold off;
title('Spectral Magnitude (dB)');
xlabel('Normalized Frequency (cycles per sample)');
ylabel('Magnitude (dB)');

cmd = ['print -deps ', 'specmag.eps']; disp(cmd); eval(cmd);
disp 'pausing for RETURN (check the plot). . .'; pause

figure(1);
subplot(2,1,1);
plot(fn,phs,'*k'); hold on; plot(fni,phsi,'-k'); hold off;
title('Spectral Phase');
xlabel('Normalized Frequency (cycles per sample)');
ylabel('Phase (rad)'); grid;
subplot(2,1,2);
plot(fn,phsu,'*k'); hold on; plot(fni,phsiu,'-k'); hold off;
title('Unwrapped Spectral Phase');
xlabel('Normalized Frequency (cycles per sample)');
ylabel('Phase (rad)'); grid;
cmd = ['print -deps ', 'specphs.eps']; disp(cmd); eval(cmd);

Figure 8.8 shows the spectral magnitude and Fig.8.9 the spectral phase.

Figure 8.8: Spectral magnitude on linear (top) and dB (bottom) scales.
\includegraphics[width=\twidth]{eps/specmag}

There are no negative-frequency components in Fig.8.8 because we are analyzing a complex sinusoid $ x=[1,j,-1,-j,1,j,\ldots\,]$, which has frequency $ f_s/4$ only, with no component at $ -f_s/4$.

Notice how difficult it would be to correctly interpret the shape of the ``sidelobes'' without zero padding. The asterisks correspond to a zero-padding factor of 2, already twice as much as needed to preserve all spectral information faithfully, but not enough to clearly outline the sidelobes in a spectral magnitude plot.


Spectral Phase

As for the phase of the spectrum, what do we expect? We have chosen the sinusoid phase offset to be zero. The window is causal and symmetric about its middle. Therefore, we expect a linear phase term with slope $ -(M-1)/2=-15$ samples (as discussed in connection with the shift theorem in §7.4.4). Also, the window transform has sidelobes which cause a phase of $ \pi $ radians to switch in and out. Thus, we expect to see samples of a straight line (with slope $ -15$ samples) across the main lobe of the window transform, together with a switching offset by $ \pi $ in every other sidelobe away from the main lobe, starting with the immediately adjacent sidelobes.

In Fig.8.9(a), we can see the negatively sloped line across the main lobe of the window transform, but the sidelobes are hard to follow. Even the unwrapped phase in Fig.8.9(b) is not as clear as it could be. This is because phase jumps of $ +\pi$ and $ -\pi$ radians are equally valid, as is any odd multiple of $ \pi $ radians. In the case of the unwrapped phase, all phase jumps are by $ +\pi$ starting near frequency $ 0.3$. Figure 8.9(c) shows what could be considered the ``canonical'' unwrapped phase for this example: We see a linear phase segment across the main lobe as before, and outside the main lobe, we have a continuation of that linear phase across all of the positive sidelobes, and only a $ \pi $-radian deviation from that linear phase across the negative sidelobes. In other words, we see a straight linear phase at the desired slope interrupted by temporary jumps of $ \pi $ radians. To obtain an unwrapped phase of this type, the unwrap function needs to alternate the sign of successive phase-jumps by $ \pi $ radians; this could be implemented, for example, by detecting jumps-by-$ \pi $ to within some numerical tolerance and using a bit of state to enforce alternation of $ +\pi$ with $ -\pi$.
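
A minimal sketch of such an unwrapper is shown below (saved as unwrap_alternating.m). It assumes the input phase has already been conventionally unwrapped (e.g., by unwrap), so that the remaining jumps of roughly $ \pi $ radians come from sidelobe sign changes; the function name, the tolerance tol, and the choice of starting with $ +\pi$ are all arbitrary illustration choices rather than anything prescribed above.

function pu = unwrap_alternating(p, tol)
% Sketch: force successive (approximately) pi-radian phase jumps to
% alternate in sign, starting with +pi.  Input p is assumed to be
% conventionally unwrapped phase; tol sets how close a jump must be
% to pi to count as a sidelobe sign change.
  if nargin < 2, tol = 0.2; end
  pu = p;                      % corrected phase
  offset = 0;                  % running additive correction
  sgn = 1;                     % sign to enforce on the next pi-jump
  for k = 2:length(p)
    d = p(k) - p(k-1);         % raw jump
    if abs(abs(d) - pi) < tol  % jump by (about) +/- pi detected
      offset = offset + sgn*pi - d;  % force the jump to be sgn*pi
      sgn = -sgn;              % alternate sign for the next jump
    end
    pu(k) = p(k) + offset;
  end
end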

To convert the expected phase slope from $ -15$ ``radians per (rad/sec)'' to ``radians per cycle-per-sample,'' we need to multiply by ``radians per cycle,'' or $ 2\pi $. Thus, in Fig.8.9(c), we expect a slope of $ -94.2$ radians per unit normalized frequency, or $ -9.42$ radians per $ 0.1$ cycles-per-sample, and this looks about right, judging from the plot.

Figure 8.9: Spectral phase and two different phase unwrappings.
\includegraphics{eps/specphs-wrapped}
a) Raw spectral phase and its interpolation.
\includegraphics{eps/specphs-unwrapped}
b) Unwrapped spectral phase and its interpolation.
\includegraphics{eps/specphs-unwrapped-linear}
c) Canonically unwrapped spectral phase and its interpolation.



Spectrograms

The spectrogram is a basic tool in audio spectral analysis and other fields. It has been applied extensively in speech analysis [18,64]. The spectrogram can be defined as an intensity plot (usually on a log scale, such as dB) of the Short-Time Fourier Transform (STFT) magnitude. The STFT is simply a sequence of FFTs of windowed data segments, where the windows are usually allowed to overlap in time, typically by 25-50% [3,70]. It is an important representation of audio data because human hearing is based on a kind of real-time spectrogram encoded by the cochlea of the inner ear [49]. The spectrogram has been used extensively in the field of computer music as a guide during the development of sound synthesis algorithms. When working with an appropriate synthesis model, matching the spectrogram often corresponds to matching the sound extremely well. In fact, spectral modeling synthesis (SMS) is based on synthesizing the short-time spectrum directly by some means [86].

Spectrogram of Speech

Figure 8.10: Classic spectrogram of a speech sample.
\includegraphics[width=\twidth]{eps/speechspgm}

An example spectrogram for recorded speech data is shown in Fig.8.10. It was generated using the Matlab code displayed in Fig.8.11. The function spectrogram is listed in §I.5. The spectrogram is computed as a sequence of FFTs of windowed data segments. The spectrogram is plotted by spectrogram using imagesc.

Figure 8.11: Matlab for computing a speech spectrogram.

 
[y,fs,bits] = wavread('SpeechSample.wav');
soundsc(y,fs); % Let's hear it
% for classic look:
colormap('gray'); map = colormap; imap = flipud(map);
M = round(0.02*fs);  % 20 ms window is typical
N = 2^nextpow2(4*M); % zero padding for interpolation
w = 0.54 - 0.46 * cos(2*pi*[0:M-1]/(M-1)); % w = hamming(M);
colormap(imap); % Octave wants it here
spectrogram(y,N,fs,w,-M/8,1,60);
colormap(imap); % Matlab wants it here
title('Hi - This is <you-know-who> ');
ylim([0,(fs/2)/1000]); % don't plot neg. frequencies

In this example, the Hamming window length was chosen to be 20 ms, as is typical in speech analysis. This is short enough so that any single 20 ms frame will typically contain data from only one phoneme,8.6 yet long enough that it will include at least two periods of the fundamental frequency during voiced speech, assuming the lowest voiced pitch to be around 100 Hz.

More generally, for speech and the singing voice (and any periodic tone), the STFT analysis parameters are chosen to trade off among the following conflicting criteria:

  1. The harmonics should be resolved.
  2. Pitch and formant variations should be closely followed.

The formants in speech are the resonances in the vocal tract. They appear as dark groups of harmonics in Fig.8.10. The first two formants largely determine the ``vowel'' in voiced speech. In telephone speech, nominally between 200 and 3200 Hz, only three or four formants are usually present in the band.


Filters and Convolution

A reason for the importance of convolution (defined in §7.2.4) is that every linear time-invariant system8.7 can be represented by a convolution. Thus, in the convolution equation

$\displaystyle y = h\ast x \protect$ (8.1)

we may interpret $ x$ as the input signal to a filter, $ y$ as the output signal, and $ h$ as the digital filter, as shown in Fig.8.12.

Figure 8.12: The filter interpretation of convolution.
\includegraphics[scale=0.8]{eps/filterbox}

The impulse or ``unit pulse'' signal is defined by

$\displaystyle \delta(n) \isdef \left\{\begin{array}{ll}
1, & n=0 \\ [5pt]
0, & n\neq 0. \\
\end{array} \right.
$

For example, for sequences of length $ N=4$, $ \delta = [1,0,0,0]$.

The impulse signal is the identity element under convolution, since

$\displaystyle (x\ast \delta)_n \isdef \sum_{m=0}^{N-1}x(m) \delta(n-m) = x(n).
$

If we set $ x=\delta$ in Eq.$ \,$(8.1) above, we get

$\displaystyle y = h\ast \delta = h.
$

Thus, $ h$, which we introduced as the convolution representation of a filter, has been shown to be more specifically the impulse response of the filter.

It turns out in general that every linear time-invariant (LTI) system (filter) is completely described by its impulse response [68]. No matter what the LTI system is, we can feed it an impulse, record what comes out, call it $ h(n)$, and implement the system by convolving the input signal $ x$ with the impulse response $ h$. In other words, every LTI system has a convolution representation in terms of its impulse response.
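
This can be illustrated numerically in matlab with a short sketch: measure $ h$ by filtering an impulse, then check that convolving the input with $ h$ reproduces the filter output over the first $ N$ samples (the filter coefficients and test length below are arbitrary example values):

% Sketch: impulse-response measurement and the convolution representation
B = [1 0.5 0.25];  A = [1 -0.3];    % some LTI filter (arbitrary example)
N = 16;
h  = filter(B,A,[1,zeros(1,N-1)]);  % impulse response (first N samples)
x  = randn(1,N);                    % arbitrary test input
y1 = filter(B,A,x);                 % output via the filter
y2 = conv(x,h);  y2 = y2(1:N);      % output via convolution with h
max(abs(y1-y2))                     % ~ machine precision for n < N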

Frequency Response


Definition: The frequency response of an LTI filter may be defined as the Fourier transform of its impulse response. In particular, for finite, discrete-time signals $ h\in{\bf C}^N$, the sampled frequency response may be defined as

$\displaystyle H(\omega_k) \isdef \hbox{\sc DFT}_k(h).
$

The complete (continuous) frequency response is defined using the DTFT (see §B.1), i.e.,

$\displaystyle H(\omega) \isdef \hbox{\sc DTFT}_\omega(\hbox{\sc ZeroPad}_\infty(h)) \isdef \sum_{n=0}^{N-1}h(n) e^{-j\omega n}
$

where the summation limits are truncated to $ [0,N-1]$ because $ h(n)$ is zero for $ n<0$ and $ n>N-1$. Thus, the DTFT can be obtained from the DFT by simply replacing $ \omega_k$ by $ \omega$, which corresponds to infinite zero-padding in the time domain. Recall from §7.2.10 that zero-padding in the time domain gives ideal interpolation of the frequency-domain samples $ H(\omega_k)$ (assuming the original DFT included all nonzero samples of $ h$).
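
As a small numerical sketch of this point (the impulse-response values and the zero-padding factor $ L$ below are arbitrary), the DFT bins reappear as every $ L$th sample of the zero-padded transform:

% Sketch: sampled frequency response via fft, denser DTFT sampling
% via zero padding (h values are arbitrary examples):
h = [1 2 3 2 1];                 % example impulse response
N = length(h);  L = 8;           % zero-padding factor
Hk  = fft(h);                    % H(w_k), k = 0,...,N-1
Hzp = fft([h, zeros(1,(L-1)*N)]);% finer sampling of H(w)
max(abs(Hk - Hzp(1:L:end)))      % ~ machine precision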


Amplitude Response


Definition: The amplitude response of a filter is defined as the magnitude of the frequency response

$\displaystyle G(k) \isdef \left\vert H(\omega_k)\right\vert.
$

From the convolution theorem, we can see that the amplitude response $ G(k)$ is the gain of the filter at frequency $ \omega_k$, since

$\displaystyle \left\vert Y(\omega_k)\right\vert = \left\vert H(\omega_k)X(\omega_k)\right\vert
= G(k)\left\vert X(\omega_k)\right\vert,
$

where $ X(\omega_k)$ is the $ k$th sample of the DFT of the input signal $ x(n)$, and $ Y$ is the DFT of the output signal $ y$.


Phase Response


Definition: The phase response of a filter is defined as the phase of its frequency response:

$\displaystyle \Theta(k) \isdef \angle{H(\omega_k)}
$

From the convolution theorem, we can see that the phase response $ \Theta(k)$ is the phase-shift added by the filter to an input sinusoidal component at frequency $ \omega_k$, since

$\displaystyle \angle{Y(\omega_k)} = \angle{\left[H(\omega_k)X(\omega_k)\right]}
= \angle{H(\omega_k)} + \angle{X(\omega_k)}
= \Theta(k) + \angle{X(\omega_k)}.
$

The topics touched upon in this section are developed more fully in the next book [68] in the music signal processing series mentioned in the preface.


Correlation Analysis

The correlation operator (defined in §7.2.5) plays a major role in statistical signal processing. For a proper development, see, e.g., [27,33,65]. This section introduces only some of the most basic elements of statistical signal processing in a simplified manner, with emphasis on illustrating applications of the DFT.

Cross-Correlation


Definition: The circular cross-correlation of two signals $ x$ and $ y$ in $ {\bf C}^N$ may be defined by

$\displaystyle \zbox {{\hat r}_{xy}(l) \isdef \frac{1}{N}(x\star y)(l)
\isdef \frac{1}{N}\sum_{n=0}^{N-1}\overline{x(n)} y(n+l), \; l=0,1,2,\ldots,N-1.}
$

(Note that the ``lag'' $ l$ is an integer variable, not the constant $ 1$.) The DFT correlation operator `$ \star$' was first defined in §7.2.5.

The term ``cross-correlation'' comes from statistics, and what we have defined here is more properly called a ``sample cross-correlation.'' That is, $ {\hat r}_{xy}(l)$ is an estimator8.8 of the true cross-correlation $ r_{xy}(l)$, which is an assumed statistical property of the signal itself. This definition of a sample cross-correlation is only valid for stationary stochastic processes, e.g., ``steady noises'' that sound unchanged over time. The statistics of a stationary stochastic process are by definition time invariant, thereby allowing time-averages to be used for estimating statistics such as cross-correlations. For brevity below, we will typically not include the ``sample'' qualifier, because all computational methods discussed will be sample-based methods intended for use on stationary data segments.

The DFT of the cross-correlation may be called the cross-spectral density, or ``cross-power spectrum,'' or even simply ``cross-spectrum'':

$\displaystyle {\hat R}_{xy}(\omega_k) \isdef \hbox{\sc DFT}_k({\hat r}_{xy}) = \frac{\overline{X(\omega_k)}Y(\omega_k)}{N}
$

The last equality above follows from the correlation theorem (§7.4.7).
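
This relation is easy to verify numerically; here is a short matlab sketch using arbitrary test noise:

% Sketch: circular cross-correlation by definition vs. the DFT route
N = 8;
x = randn(1,N);  y = randn(1,N);
rxy = zeros(1,N);
for l = 0:N-1                       % direct circular cross-correlation
  rxy(l+1) = sum(conj(x) .* y(mod((0:N-1)+l, N) + 1)) / N;
end
Rxy = conj(fft(x)) .* fft(y) / N;   % cross-spectrum via the DFTs
max(abs(fft(rxy) - Rxy))            % ~ machine precision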


Unbiased Cross-Correlation

Recall that the cross-correlation operator is cyclic (circular) since $ n+l$ is interpreted modulo $ N$. In practice, we are normally interested in estimating the acyclic cross-correlation between two signals. For this (more realistic) case, we may define instead the unbiased cross-correlation

$\displaystyle \zbox {{\hat r}^u_{xy}(l) \isdef \frac{1}{N-l}\sum_{n=0}^{N-1-l} \overline{x(n)} y(n+l),\quad
l = 0,1,2,\ldots,L-1}
$

where we choose $ L\ll N$ (e.g., $ L\approx\sqrt{N}$) in order to have enough lagged products $ \overline{x(n)} y(n+l)$ at the highest lag $ L-1$ so that a reasonably accurate average is obtained. Note that the summation stops at $ n=N-l-1$ to avoid cyclic wrap-around of $ n$ modulo $ N$. The term ``unbiased'' refers to the fact that the expected value8.9 [33] of $ {\hat r}^u_{xy}(l)$ is the true cross-correlation $ r_{xy}(l)$ of $ x$ and $ y$ (assumed to be samples from stationary stochastic processes).

An unbiased acyclic cross-correlation may be computed faster via DFT (FFT) methods using zero padding:

$\displaystyle \zbox {{\hat r}^u_{xy}(l) = \frac{1}{N-l}\hbox{\sc IDFT}_l(\overline{X}\cdot Y), \quad
l = 0,1,2,\ldots,L-1}
$

where

\begin{eqnarray*}
X &=& \hbox{\sc DFT}[\hbox{\sc CausalZeroPad}_{N+L-1}(x)]\\
Y &=& \hbox{\sc DFT}[\hbox{\sc CausalZeroPad}_{N+L-1}(y)].
\end{eqnarray*}

Note that $ x$ and $ y$ belong to $ {\bf C}^N$ while $ X$ and $ Y$ belong to $ {\bf C}^{N+L-1}$. The zero-padding may be causal (as defined in §7.2.8) because the signals are assumed to be stationary, in which case all signal statistics are time-invariant. As usual when embedding acyclic correlation (or convolution) within the cyclic variant given by the DFT, sufficient zero-padding is provided so that only zeros are ``time aliased'' (wrapped around in time) by modulo indexing.
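
Here is a minimal matlab sketch of this FFT-based unbiased estimate (the signal length $ N$ and maximum lag $ L$ below are arbitrary choices):

% Sketch: unbiased acyclic cross-correlation via zero-padded FFTs
N = 256;  L = round(sqrt(N));         % lags 0,...,L-1
x = randn(1,N);  y = randn(1,N);      % test signals (stationary noise)
xzp = [x, zeros(1,L-1)];              % causal zero-padding to length N+L-1
yzp = [y, zeros(1,L-1)];
r = ifft(conj(fft(xzp)) .* fft(yzp)); % acyclic correlation, valid for lags < L
lags = 0:L-1;
rxyu = real(r(lags+1)) ./ (N - lags); % unbiased estimate at each lag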

Cross-correlation is used extensively in audio signal processing for applications such as time scale modification, pitch shifting, click removal, and many others.


Autocorrelation

The cross-correlation of a signal with itself gives its autocorrelation:

$\displaystyle \zbox {{\hat r}_x(l) \isdef \frac{1}{N}(x\star x)(l)
\isdef \frac{1}{N}\sum_{n=0}^{N-1}\overline{x(n)} x(n+l)}
$

The autocorrelation function is Hermitian:

$\displaystyle {\hat r}_x(-l) = \overline{{\hat r}_x(l)}
$

When $ x$ is real, its autocorrelation is real and even (symmetric about lag zero).

The unbiased cross-correlation similarly reduces to an unbiased autocorrelation when $ x\equiv y$:

$\displaystyle \zbox {{\hat r}^u_x(l) \isdef \frac{1}{N-l}\sum_{n=0}^{N-1-l} \overline{x(n)} x(n+l),\quad l = 0,1,2,\ldots,L-1} \protect$ (8.2)

The DFT of the true autocorrelation function $ r_x(n)\in{\bf R}^N$ is the (sampled) power spectral density (PSD), or power spectrum, and may be denoted

$\displaystyle R_x(\omega_k) \isdef \hbox{\sc DFT}_k(r_x).
$

The complete (not sampled) PSD is $ R_x(\omega) \isdef \hbox{\sc DTFT}_\omega(r_x)$, where the DTFT is defined in Appendix B (it's just an infinitely long DFT). The DFT of $ {\hat r}_x$ thus provides a sample-based estimate of the PSD:8.10

$\displaystyle {\hat R}_x(\omega_k)=\hbox{\sc DFT}_k({\hat r}_x) = \frac{\left\vert X(\omega_k)\right\vert^2}{N}
$

We could call $ {\hat R}_x(\omega_k)$ a ``sampled sample power spectral density''.

At lag zero, the autocorrelation function reduces to the average power (mean square) which we defined in §5.8:

$\displaystyle {\hat r}_x(0) \isdef \frac{1}{N}\sum_{m=0}^{N-1}\left\vert x(m)\right\vert^2
$

Replacing ``correlation'' with ``covariance'' in the above definitions gives corresponding zero-mean versions. For example, we may define the sample circular cross-covariance as

$\displaystyle \zbox {{\hat c}_{xy}(n)
\isdef \frac{1}{N}\sum_{m=0}^{N-1}\overline{[x(m)-\mu_x]} [y(m+n)-\mu_y].}
$

where $ \mu_x$ and $ \mu_y$ denote the means of $ x$ and $ y$, respectively. We also have that $ {\hat c}_x(0)$ equals the sample variance of the signal $ x$:

$\displaystyle {\hat c}_x(0) \isdef \frac{1}{N}\sum_{m=0}^{N-1}\left\vert x(m)-\mu_x\right\vert^2 \isdef {\hat \sigma}_x^2
$


Matched Filtering

The cross-correlation function is used extensively in pattern recognition and signal detection. We know from Chapter 5 that projecting one signal onto another is a means of measuring how much of the second signal is present in the first. This can be used to ``detect'' the presence of known signals as components of more complicated signals. As a simple example, suppose we record $ x(n)$ which we think consists of a signal $ s(n)$ that we are looking for plus some additive measurement noise $ e(n)$. That is, we assume the signal model $ x(n)=s(n)+e(n)$. Then the projection of $ x$ onto $ s$ is (recalling §5.9.9)

$\displaystyle {\bf P}_s(x) \isdef \frac{\left<x,s\right>}{\Vert s\Vert^2} s
= \frac{\left<s+e,s\right>}{\Vert s\Vert^2} s
= s + \frac{\left<e,s\right>}{\Vert s\Vert^2} s
= s + \frac{N}{\Vert s\Vert^2} {\hat r}_{se}(0)s
\approx s
$

since the projection of random, zero-mean noise $ e$ onto $ s$ is small with probability one. Another term for this process is matched filtering. The impulse response of the ``matched filter'' for a real signal $ s$ is given by $ \hbox{\sc Flip}(s)$.8.11 By time-reversing $ s$, we transform the convolution implemented by filtering into a sliding cross-correlation operation between the input signal $ x$ and the sought signal $ s$. (For complex known signals $ s$, the matched filter is $ \hbox{\sc Flip}(\overline{s})$.) We detect occurrences of $ s$ in $ x$ by detecting peaks in $ {\hat r}_{sx}(l)$.
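
Here is a minimal matched-filtering sketch in matlab; the pulse shape, noise level, and pulse location are all made-up example values:

% Sketch: locate a known real pulse s within a noisy recording x
Ns = 32;
s  = exp(-(0:Ns-1)/8) .* cos(2*pi*0.2*(0:Ns-1)); % known pulse (example)
N  = 256;  n0 = 100;                   % pulse occupies x(n0+1:n0+Ns)
x  = 0.1*randn(1,N);                   % measurement noise
x(n0+(1:Ns)) = x(n0+(1:Ns)) + s;       % bury the pulse in the noise
y  = conv(x, fliplr(s));               % matched filter = Flip(s) (real case)
[ymax, imax] = max(y);                 % peak marks the pulse alignment
nhat = imax - Ns                       % estimate of n0 (should be ~100)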

In the same way that FFT convolution is faster than direct convolution (see Table 7.1), cross-correlation and matched filtering are generally carried out most efficiently using an FFT algorithm (Appendix A).


FIR System Identification

Estimating an impulse response from input-output measurements is called system identification, and a large literature exists on this topic (e.g., [39]).

Cross-correlation can be used to compute the impulse response $ h(n)$ of a filter from the cross-correlation of its input and output signals $ x(n)$ and $ y = h\ast x$, respectively. To see this, note that, by the correlation theorem,

$\displaystyle x\star y \;\longleftrightarrow\;\overline{X}\cdot Y = \overline{X}\cdot (H\cdot X) =
H\cdot\left\vert X\right\vert^2.
$

Therefore, the frequency response equals the input-output cross-spectrum divided by the input power spectrum:

$\displaystyle H = \frac{\overline{X}\cdot Y}{\left\vert X\right\vert^2} = \frac{{\hat R}_{xy}}{{\hat R}_{xx}}
$

where multiplication and division of spectra are defined pointwise, i.e., $ H(\omega_k) = \overline{X(\omega_k)}\cdot Y(\omega_k)/\vert X(\omega_k)\vert^2$. A Matlab program illustrating these relationships is listed in Fig.8.13.

Figure 8.13: FIR system identification example in matlab.

 
% sidex.m - Demonstration of the use of FFT cross-
% correlation to compute the impulse response
% of a filter given its input and output.
% This is called "FIR system identification".

Nx = 32; % input signal length
Nh = 10; % filter length
Ny = Nx+Nh-1; % max output signal length
% FFT size to accommodate cross-correlation:
Nfft = 2^nextpow2(Ny); % FFT wants power of 2

x = rand(1,Nx); % input signal = noise
%x = 1:Nx; 	% input signal = ramp
h = [1:Nh]; 	% the filter
xzp = [x,zeros(1,Nfft-Nx)]; % zero-padded input
yzp = filter(h,1,xzp); % apply the filter
X = fft(xzp);   % input spectrum
Y = fft(yzp);   % output spectrum
Rxx = conj(X) .* X; % energy spectrum of x
Rxy = conj(X) .* Y; % cross-energy spectrum
Hxy = Rxy ./ Rxx;   % should be the freq. response
hxy = ifft(Hxy);    % should be the imp. response

hxy(1:Nh) 	    % print estimated impulse response
freqz(hxy,1,Nfft);  % plot estimated freq response

err = norm(hxy - [h,zeros(1,Nfft-Nh)])/norm(h);
disp(sprintf(['Impulse Response Error = ',...
	'%0.14f%%'],100*err));

err = norm(Hxy-fft([h,zeros(1,Nfft-Nh)]))/norm(h);
disp(sprintf(['Frequency Response Error = ',...
	'%0.14f%%'],100*err));


Power Spectral Density Estimation

Welch's method [85] (or the periodogram method [20]) for estimating power spectral densities (PSD) is carried out by dividing the time signal into successive blocks, and averaging squared-magnitude DFTs of the signal blocks. Let $ x_m(n)=x(n+mN)$, $ n=0,1,\dots,N-1$, denote the $ m$th block of the signal $ x\in{\bf C}^{MN}$, with $ M$ denoting the number of blocks. Then the Welch PSD estimate is given by

$\displaystyle {\hat R}_x(\omega_k) = \frac{1}{M}\sum_{m=0}^{M-1}\left\vert\hbox{\sc DFT}_k(x_m)\right\vert^2 \isdef \left\{\left\vert X_m(\omega_k)\right\vert^2\right\}_m \protect$ (8.3)

where `` $ \{\cdot\}_m$'' denotes time averaging across blocks (or ``frames'') of data indexed by $ m$. The function pwelch implements Welch's method in Octave (Octave-Forge collection) and Matlab (Signal Processing Toolbox).
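
The block averaging in Eq.(8.3) is straightforward to write out directly; the following sketch uses nonoverlapping, unwindowed blocks and arbitrary parameter values:

% Sketch of Eq.(8.3): average squared-magnitude DFTs over M blocks
N = 64;  M = 32;                  % block length and number of blocks
x = randn(1,M*N);                 % stationary test noise
Rxhat = zeros(1,N);
for m = 0:M-1
  xm = x(m*N + (1:N));            % m-th length-N block
  Rxhat = Rxhat + abs(fft(xm)).^2;
end
Rxhat = Rxhat / M;                % averaged periodogram (PSD estimate)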

Recall that $ \left\vert X_m\right\vert^2\;\leftrightarrow\;x\star x$ which is circular (cyclic) autocorrelation. To obtain an acyclic autocorrelation instead, we may use zero padding in the time domain, as described in §8.4.2. That is, we can replace $ x_m$ above by $ \hbox{\sc CausalZeroPad}_{2N-1}(x_m) =
[x_m,0,\ldots,0]$.8.12 Although this fixes the ``wrap-around problem'', the estimator is still biased because its expected value is the true autocorrelation $ r_x(l)$ weighted by $ N-\vert l\vert$. This bias is equivalent to multiplying the correlation in the ``lag domain'' by a triangular window (also called a ``Bartlett window''). The bias can be removed by simply dividing it out, as in Eq.$ \,$(8.2), but it is common to retain the Bartlett weighting since it merely corresponds to smoothing the power spectrum (or cross-spectrum) with a sinc$ ^2$ kernel;8.13 it also down-weights the less reliable large-lag estimates, weighting each lag by the number of lagged products that were summed.

Since $ \vert X_m(\omega_k)\vert^2=N\cdot\hbox{\sc DFT}_k({\hat r}_{x_m})$, and since the DFT is a linear operator (§7.4.1), averaging magnitude-squared DFTs $ \vert X_m(\omega_k)\vert^2$ is equivalent, in principle, to estimating block autocorrelations $ {\hat r}_{x_m}$, averaging them, and taking a DFT of the average. However, this would normally be slower.

We return to power spectral density estimation in Book IV [70] of the music signal processing series.


Coherence Function

A function related to cross-correlation is the coherence function, defined in terms of power spectral densities and the cross-spectral density by

$\displaystyle C_{xy}(\omega) \isdef \frac{\vert R_{xy}(\omega)\vert^2}{R_x(\omega)R_y(\omega)}.
$

In practice, these quantities can be estimated by time-averaging $ \overline{X(\omega_k)}Y(\omega_k)$, $ \left\vert X(\omega_k)\right\vert^2$, and $ \left\vert Y(\omega_k)\right\vert^2$ over successive signal blocks. Let $ \{\cdot\}_m$ denote time averaging across frames as in Eq.$ \,$(8.3) above. Then an estimate of the coherence, the sample coherence function $ {\hat
C}_{xy}(\omega_k)$, may be defined by

$\displaystyle {\hat C}_{xy}(\omega_k) \isdef
\frac{\left\vert\left\{\overline{X_m(\omega_k)}Y_m(\omega_k)\right\}_m\right\vert^2}{\left\{\left\vert X_m(\omega_k)\right\vert^2\right\}_m\cdot\left\{\left\vert Y_m(\omega_k)\right\vert^2\right\}_m}.
$

Note that the averaging in the numerator occurs before the absolute value is taken.
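
A direct block-averaged implementation of the sample coherence might look as follows (a sketch with arbitrary block length and block count, no window, and no overlap):

% Sketch: sample coherence by block-averaging cross- and power spectra
N = 64;  M = 64;                     % block length, number of blocks
x = randn(1,M*N);
y = 0.5*x + 0.5*randn(1,M*N);        % partially correlated with x
Sxy = zeros(1,N);  Sxx = zeros(1,N);  Syy = zeros(1,N);
for m = 0:M-1
  X = fft(x(m*N + (1:N)));  Y = fft(y(m*N + (1:N)));
  Sxy = Sxy + conj(X).*Y;            % average BEFORE taking the magnitude
  Sxx = Sxx + abs(X).^2;  Syy = Syy + abs(Y).^2;
end
Cxy = abs(Sxy).^2 ./ (Sxx .* Syy);   % in [0,1]; the 1/M factors cancel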

The coherence $ C_{xy}(\omega)$ is a real function between zero and one which gives a measure of correlation between $ x$ and $ y$ at each frequency $ \omega$. For example, imagine that $ y$ is produced from $ x$ via an LTI filtering operation:

$\displaystyle y = h\ast x \;\implies\; Y(\omega_k) = H(\omega_k)X(\omega_k)
$

Then the magnitude-normalized cross-spectrum in each frame is

\begin{eqnarray*}
{\hat A}_{x_m y_m}(\omega_k) &\isdef &
\frac{\overline{X_m(\omega_k)}Y_m(\omega_k)}{\left\vert\overline{X_m(\omega_k)}Y_m(\omega_k)\right\vert}
= \frac{H(\omega_k)\left\vert X_m(\omega_k)\right\vert^2}{\left\vert H(\omega_k)\right\vert\left\vert X_m(\omega_k)\right\vert^2}
= \frac{H(\omega_k)}{\left\vert H(\omega_k)\right\vert}
\end{eqnarray*}

so that the coherence function becomes

$\displaystyle \left\vert{\hat C}_{xy}(\omega_k)\right\vert^2 =
\left\vert\frac{H(\omega_k)}{\left\vert H(\omega_k)\right\vert}\right\vert^2 = 1.
$

On the other hand, when $ x$ and $ y$ are uncorrelated (e.g., $ y$ is a noise process not derived from $ x$), the sample coherence converges to zero at all frequencies, as the number of blocks in the average goes to infinity.

A common use for the coherence function is in the validation of input/output data collected in an acoustics experiment for purposes of system identification. For example, $ x(n)$ might be a known signal which is input to an unknown system, such as a reverberant room, say, and $ y(n)$ is the recorded response of the room. Ideally, the coherence should be $ 1$ at all frequencies. However, if the microphone is situated at a null in the room response for some frequency, it may record mostly noise at that frequency. This is indicated in the measured coherence by a significant dip below 1. An example is shown in Book III [69] for the case of a measured guitar-bridge admittance. A more elementary example is given in the next section.

Coherence Function in Matlab

In Matlab and Octave, cohere(x,y,M) computes the coherence function $ C_{xy}$ using successive DFTs of length $ M$ with a Hanning window and 50% overlap. (The window and overlap can be controlled via additional optional arguments.) The matlab listing in Fig.8.14 illustrates cohere on a simple example. Figure 8.15 shows a plot of cxyM for this example. We see a coherence peak at frequency $ 0.25$ cycles/sample, as expected, but there are also two rather large coherence samples on either side of the main peak. These are expected as well, since the true cross-spectrum for this case is a critically sampled Hanning window transform. (A window transform is critically sampled whenever the window length equals the DFT length.)

Figure 8.14: Coherence measurement example in matlab.

 
% Illustrate estimation of coherence function 'cohere'
% in the Matlab Signal Processing Toolbox
% or Octave with Octave Forge:
N = 1024;           % number of samples
x=randn(1,N);       % Gaussian noise
y=randn(1,N);       % Uncorrelated noise
f0 = 1/4;           % Frequency of high coherence
nT = [0:N-1];       % Time axis
w0 = 2*pi*f0;
x = x + cos(w0*nT); % Let something be correlated
p = 2*pi*rand(1,1); % Phase is irrelevant
y = y + cos(w0*nT+p);
M = round(sqrt(N)); % Typical window length
[cxyM,w] = cohere(x,y,M); % Do the work
figure(1); clf;
stem(w/2,cxyM,'*'); % w goes from 0 to 1 (odd convention)
legend('');         % needed in Octave
grid on;
ylabel('Coherence');
xlabel('Normalized Frequency (cycles/sample)');
axis([0 1/2 0 1]);
replot;  % Needed in Octave
saveplot('../eps/coherex.eps'); % compatibility utility

Figure 8.15: Sample coherence function.
\includegraphics[width=\twidth]{eps/coherex}

Note that more than one frame must be averaged to obtain a coherence less than one. For example, changing the cohere call in the above example to ``cxyN = cohere(x,y,N);'' produces all ones in cxyN, because no averaging is performed.


Recommended Further Reading

We are now finished developing the mathematics of the DFT and a first look at some of its applications. The sequel consists of appendices which fill in more elementary background and supplement the prior development with related new topics, such as the Fourier transform and FFT algorithm.

For further study, one may, of course, continue on to Book II (Introduction to Digital Filter Theory [68]) in the music signal processing series (mentioned in the preface). Alternatively and in addition, the references cited in the bibliography can provide further guidance.

