
Fourier Theorems for the DFT

This chapter derives various Fourier theorems for the case of the DFT. Included are symmetry relations, the shift theorem, convolution theorem, correlation theorem, power theorem, and theorems pertaining to interpolation and downsampling. Applications related to certain theorems are outlined, including linear time-invariant filtering, sampling rate conversion, and statistical signal processing.

The DFT and its Inverse Restated

Let $ x(n), n=0,1,2,\ldots,N-1$, denote an $ N$-sample complex sequence, i.e., $ x\in{\bf C}^N$. Then the spectrum of $ x$ is defined by the Discrete Fourier Transform (DFT):

$\displaystyle \zbox {X(k) \isdef \sum_{n=0}^{N-1}x(n) e^{-j 2\pi nk/N},\quad k=0,1,2,\ldots,N-1}

The inverse DFT (IDFT) is defined by

$\displaystyle \zbox {x(n) = \frac{1}{N}\sum_{k=0}^{N-1}X(k) e^{j 2\pi nk/N},\quad n=0,1,2,\ldots,N-1.}

In this chapter, we will omit mention of an explicit sampling interval $ T=1/f_s$, as this is most typical in the digital signal processing literature. It is often said that the sampling frequency is $ f_s=1$. In this case, a radian frequency $ \omega_k \isdef 2\pi k/N$ is in units of ``radians per sample.'' Elsewhere in this book, $ \omega_k$ usually means ``radians per second.'' (Of course, there's no difference when the sampling rate is really $ 1$.) Another term we use in connection with the $ f_s=1$ convention is normalized frequency: All normalized radian frequencies lie in the range $ [-\pi,\pi)$, and all normalized frequencies in Hz lie in the range $ [-0.5,0.5)$.7.1 Note that physical units of seconds and Hertz can be reintroduced by the substitution

$\displaystyle e^{j 2\pi nk/N} = e^{j 2\pi k (f_s/N) nT} \isdef e^{j \omega_k t_n}.
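As a concrete illustration, the two sums above can be transcribed directly into code. The following pure-Python sketch (the names `dft` and `idft` are illustrative, not standard) is an $O(N^2)$ implementation of the definitions, useful for checking on small examples that the IDFT inverts the DFT:

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT: X(k) = sum_n x(n) exp(-j 2 pi n k / N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT: x(n) = (1/N) sum_k X(k) exp(+j 2 pi n k / N)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * n * k / N) for k in range(N)) / N
            for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
X = dft(x)      # X[0] is the dc term, here 1+2+3+4 = 10
x2 = idft(X)    # recovers x up to round-off
```

In practice one would use an FFT routine (such as matlab's fft) rather than the naive sums, but the direct transcription is handy for verifying the theorems in this chapter numerically.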

Notation and Terminology

If $ X$ is the DFT of $ x$, we say that $ x$ and $ X$ form a transform pair and write

$\displaystyle \zbox{x\;\longleftrightarrow\;X} \qquad \mbox{(``$x$ corresponds to $X$'')}. \protect$

Another notation we'll use is

$\displaystyle \hbox{\sc DFT}(x) \isdef X, \quad\mbox{and}\quad \hbox{\sc DFT}_k(x) \isdef X(k).$

If we need to indicate the length of the DFT explicitly, we will write $ \hbox{\sc DFT}_N(x) = X$ and $ \hbox{\sc DFT}_{N,k}(x) = X(k)$. As we've already seen, time-domain signals are consistently denoted using lowercase symbols such as ``$ x(n)$,'' while frequency-domain signals (spectra), are denoted in uppercase (`` $ X(\omega_k)$'').

Modulo Indexing, Periodic Extension

The DFT sinusoids $ s_k(n) \isdef e^{j\omega_k n}$ are all periodic, with periods that divide $ N$. That is, $ s_k(n+mN)=s_k(n)$ for any integer $ m$. Since a length $ N$ signal $ x$ can be expressed as a linear combination of the DFT sinusoids in the time domain,

$\displaystyle x(n) = \frac{1}{N}\sum_k X(k) s_k(n),

it follows that the ``automatic'' definition of $ x(n)$ beyond the range $ [0,N-1]$ is periodic extension, i.e., $ x(n+mN)\isdef x(n)$ for every integer $ m$.

Moreover, the DFT also repeats naturally every $ N$ samples, since

$\displaystyle X(k+mN) \isdef \left<x,s_{k+mN}\right> = \left<x,s_k\right> = X(k)

because $ s_{k+mN}(n) = e^{j2\pi n(k+mN)/N} = e^{j2\pi nk/N}e^{j2\pi n m} = s_k(n)$. (The DFT sinusoids behave identically as functions of $ n$ and $ k$.) Accordingly, for purposes of DFT studies, we may define all signals in $ {\bf C}^N$ as being single periods from an infinitely long periodic signal with period $ N$ samples:

Definition (Periodic Extension): For any signal $ x\in{\bf C}^N$, we define

$\displaystyle x(n+mN)\isdef x(n)

for every integer $ m$.

As a result of this convention, all indexing of signals and spectra7.2 can be interpreted modulo $ N$, and we may write $ x(n\left(\mbox{mod}\;N\right))$ to emphasize this. Formally, `` $ n\left(\mbox{mod}\;N\right)$'' is defined as $ n-mN$ with $ m$ chosen to give $ n-mN$ in the range $ [0,N-1]$.

As an example, when indexing a spectrum $ X$, we have that $ X(N)=X(0)$ which can be interpreted physically as saying that the sampling rate is the same frequency as dc for discrete time signals. Periodic extension in the time domain implies that the signal input to the DFT is mathematically treated as being samples of one period of a periodic signal, with the period being exactly $ NT$ seconds ($ N$ samples). The corresponding assumption in the frequency domain is that the spectrum is exactly zero between frequency samples $ \omega_k$. It is also possible to adopt the point of view that the time-domain signal $ x(n)$ consists of $ N$ samples preceded and followed by zeros. In that case, the spectrum would be nonzero between spectral samples $ \omega_k$, and the spectrum between samples would be reconstructed by means of bandlimited interpolation [72].
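In code, this modulo-indexing convention amounts to reducing every index mod $N$ before the array lookup. A one-line sketch (the helper name `sample` is illustrative):

```python
def sample(x, n):
    """Periodic extension: interpret any integer index modulo N."""
    return x[n % len(x)]

x = [3, 2, 1, 1, 2]   # N = 5
```

Python's `%` operator already returns a value in $[0,N-1]$ for negative arguments, so `sample(x, -1)` fetches $x(N-1)$, and `sample(x, N)` equals `sample(x, 0)`, mirroring $X(N)=X(0)$.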

Signal Operators

It will be convenient in the Fourier theorems of §7.4 to make use of the following signal operator definitions.

Operator Notation

In this book, an operator is defined as a signal-valued function of a signal. Thus, for the space of length $ N$ complex sequences, an operator $ \hbox{\sc Op}$ is a mapping from $ {\bf C}^N$ to $ {\bf C}^N$:

$\displaystyle \hbox{\sc Op}(x) \in{\bf C}^N\, \forall x\in{\bf C}^N

An example is the DFT operator:

$\displaystyle \hbox{\sc DFT}(x) = X

The argument to an operator is always an entire signal. However, its output may be subscripted to obtain a specific sample, e.g.,

$\displaystyle \hbox{\sc DFT}_k(x) = X(k).

Some operators require one or more parameters affecting their definition. For example the shift operator (defined in §7.2.3 below) requires a shift amount $ \Delta\in{\bf Z}$:7.3

$\displaystyle \hbox{\sc Shift}_{\Delta,n}(x) \isdef x(n-\Delta)

A time or frequency index, if present, will always be the last subscript. Thus, the signal $ \hbox{\sc Shift}_{\Delta}(x)$ is obtained from $ x$ by shifting it $ \Delta$ samples.

Note that operator notation is not standard in the field of digital signal processing. It can be regarded as being influenced by the field of computer science. In the Fourier theorems below, both operator and conventional signal-processing notations are provided. In the author's opinion, operator notation is consistently clearer, allowing powerful expressions to be written naturally in one line (e.g., see Eq.$ \,$(7.8)), and it is much closer to how things look in a readable computer program (such as in the matlab language).

Flip Operator

We define the flip operator by

$\displaystyle \hbox{\sc Flip}_n(x) \isdef x(-n), \protect$ (7.1)

for all sample indices $ n\in{\bf Z}$. By modulo indexing, $ x(-n)$ is the same as $ x(N-n)$. The $ \hbox{\sc Flip}()$ operator reverses the order of samples $ 1$ through $ N-1$ of a sequence, leaving sample 0 alone, as shown in Fig.7.1a. Thanks to modulo indexing, it can also be viewed as ``flipping'' the sequence about the time 0, as shown in Fig.7.1b. The interpretation of Fig.7.1b is usually the one we want, and the $ \hbox{\sc Flip}$ operator is usually thought of as ``time reversal'' when applied to a signal $ x$ or ``frequency reversal'' when applied to a spectrum $ X$.

Figure 7.1: Illustration of $ \hbox{\sc Flip}(x)$: a) samples $ 1$ through $ N-1$ reversed, with sample 0 left alone; b) the same operation viewed as ``flipping'' the sequence about time 0.
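A sketch of the flip operator using modulo indexing (illustrative Python, not from the text):

```python
def flip(x):
    """Flip_n(x) = x(-n), indices modulo N:
    sample 0 stays put, samples 1..N-1 are reversed."""
    N = len(x)
    return [x[(-n) % N] for n in range(N)]
```

Note that `flip` is its own inverse, as time reversal should be.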

Shift Operator

The shift operator is defined by

$\displaystyle \hbox{\sc Shift}_{\Delta,n}(x) \isdef x(n-\Delta), \quad \Delta\in{\bf Z},

and $ \hbox{\sc Shift}_{\Delta}(x)$ denotes the entire shifted signal. Note that since indexing is modulo $ N$, the shift is circular (or ``cyclic''). However, we normally use it to represent time delay by $ \Delta$ samples. We often use the shift operator in conjunction with zero padding (appending zeros to the signal $ x$, §7.2.7) in order to avoid the ``wrap-around'' associated with a circular shift.

Figure 7.2: Successive one-sample shifts of a sampled periodic sawtooth waveform having first period $ [0,1,2,3,4]$.

Figure 7.2 illustrates successive one-sample delays of a periodic signal having first period given by $ [0,1,2,3,4]$.


  • $ \hbox{\sc Shift}_1([1,0,0,0]) = [0,1,0,0]\;$ (an impulse delayed one sample).

  • $ \hbox{\sc Shift}_1([1,2,3,4]) = [4,1,2,3]\;$ (a circular shift example).

  • $ \hbox{\sc Shift}_{-2}([1,0,0,0]) = [0,0,1,0]\;$ (another circular shift example).
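The examples above can be reproduced with a one-line circular-shift helper (illustrative Python):

```python
def shift(x, delta):
    """Shift_{delta,n}(x) = x(n - delta), indices modulo N (circular shift)."""
    N = len(x)
    return [x[(n - delta) % N] for n in range(N)]
```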


Convolution Operator

The convolution of two signals $ x$ and $ y$ in $ {\bf C}^N$ may be denoted `` $ x\circledast y$'' and defined by

$\displaystyle \zbox {(x\circledast y)_n \isdef \sum_{m=0}^{N-1}x(m) y(n-m)}

Note that this is circular convolution (or ``cyclic'' convolution).7.4 The importance of convolution in linear systems theory is discussed in §8.3.

Cyclic convolution can be expressed in terms of previously defined operators as

$\displaystyle y(n) \isdef (x\circledast h)_n \isdef \sum_{m=0}^{N-1}x(m)h(n-m) = \left<x,\hbox{\sc Shift}_n(\hbox{\sc Flip}(h))\right> \qquad \mbox{($h$ real)}$

where $ x,y\in{\bf C}^N$ and $ h\in{\bf R}^N$. This expression suggests graphical convolution, discussed below in §7.2.4.
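A direct transcription of the circular-convolution sum (illustrative Python; `cconv` is not a standard name):

```python
def cconv(x, y):
    """Circular convolution: (x conv y)_n = sum_m x(m) y(n - m), indices mod N."""
    N = len(x)
    return [sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)]
```

Convolving with the impulse $[1,0,\dots,0]$ returns the signal unchanged, and convolving with a shifted impulse produces the corresponding circular shift.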

Commutativity of Convolution

Convolution (cyclic or acyclic) is commutative, i.e.,

$\displaystyle \zbox {x\circledast y = y\circledast x .}


(x\circledast y)_n &\isdef & \sum_{m=0}^{N-1}x(m) y(n-m)
\;=\; \sum_{l=n-(N-1)}^{n} y(l)\, x(n-l) \\
&=& \sum_{l=0}^{N-1}y(l) x(n-l) \\
&\isdef & (y \circledast x)_n

In the first step we made the change of summation variable $ l\isdeftext n-m$, and in the second step, we made use of the fact that any sum over all $ N$ terms is equivalent to a sum from 0 to $ N-1$.

Convolution as a Filtering Operation

In a convolution of two signals $ x\circledast y$, where both $ x$ and $ y$ are signals of length $ N$ (real or complex), we may interpret either $ x$ or $ y$ as a filter that operates on the other signal which is in turn interpreted as the filter's ``input signal''.7.5 Let $ h\in{\bf C}^N$ denote a length $ N$ signal that is interpreted as a filter. Then given any input signal $ x\in{\bf C}^N$, the filter output signal $ y\in{\bf C}^N$ may be defined as the cyclic convolution of $ x$ and $ h$:

$\displaystyle y = h\circledast x = x \circledast h

Because the convolution is cyclic, with $ x$ and $ h$ chosen from the set of (periodically extended) vectors of length $ N$, $ h(n)$ is most precisely viewed as the impulse-train-response of the associated filter at time $ n$. Specifically, the impulse-train response $ h\in{\bf C}^N$ is the response of the filter to the impulse-train signal $ \delta\isdeftext [1,0,\ldots,0]\in{\bf R}^N$, which, by periodic extension, is equal to

$\displaystyle \delta(n) = \left\{\begin{array}{ll}
1, & n=0\;\mbox{(mod $N$)} \\ [5pt]
0, & n\ne 0\;\mbox{(mod $N$)}. \\
\end{array} \right.

Thus, $ N$ is the period of the impulse-train in samples--there is an ``impulse'' (a `$ 1$') every $ N$ samples. Neglecting the assumed periodic extension of all signals in $ {\bf C}^N$, we may refer to $ \delta$ more simply as the impulse signal, and $ h$ as the impulse response (as opposed to impulse-train response). In contrast, for the DTFT (§B.1), in which the discrete-time axis is infinitely long, the impulse signal $ \delta(n)$ is defined as

$\displaystyle \delta(n) \isdef \left\{\begin{array}{ll}
1, & n=0 \\ [5pt]
0, & n\ne 0 \\
\end{array} \right.

and no periodic extension arises.

As discussed below (§7.2.7), one may embed acyclic convolution within a larger cyclic convolution. In this way, real-world systems may be simulated using fast DFT convolutions (see Appendix A for more on fast convolution algorithms).

Note that only linear, time-invariant (LTI) filters can be completely represented by their impulse response (the filter output in response to an impulse at time 0). The convolution representation of LTI digital filters is fully discussed in Book II [68] of the music signal processing book series (in which this is Book I).

Convolution Example 1: Smoothing a Rectangular Pulse

Figure 7.3: Illustration of the convolution of a rectangular pulse $ x=[0,0,0,0,1,1,1,1,1,1,0,0,0,0]$ and the impulse response of an ``averaging filter'' $ h=[1/3,1/3,1/3,0,0,0,0,0,0,0,0,0,0,0]$ ($ N=14$).

Filter input signal $ x(n)$.

Filter impulse response $ h(n)$.

Filter output signal $ y(n)$.

Figure 7.3 illustrates convolution of

$\displaystyle x = [0,0,0,0,1,1,1,1,1,1,0,0,0,0]

with

$\displaystyle h = \left[\frac{1}{3},\frac{1}{3},\frac{1}{3},0,0,0,0,0,0,0,0,0,0,0\right]

to get

$\displaystyle y = x\circledast h = \left[0,0,0,0,\frac{1}{3},\frac{2}{3},1,1,1,1,\frac{2}{3},\frac{1}{3},0,0\right] \protect$ (7.2)

as graphed in Fig.7.3(c). In this case, $ h$ can be viewed as a ``moving three-point average'' filter. Note how the corners of the rectangular pulse are ``smoothed'' by the three-point filter. Also note that the pulse is smeared to the ``right'' (forward in time) because the filter impulse response starts at time zero. Such a filter is said to be causal (see [68] for details). By shifting the impulse response left one sample to get

$\displaystyle h=\left[\frac{1}{3},\frac{1}{3},0,0,0,0,0,0,0,0,0,0,0,\frac{1}{3}\right]

(in which case $ \hbox{\sc Flip}(h)=h$), we obtain a noncausal filter $ h$ which is symmetric about time zero so that the input signal is smoothed ``in place'' with no added delay (imagine Fig.7.3(c) shifted left one sample, in which case the input pulse edges align with the midpoint of the rise and fall in the output signal).
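The output in Eq. (7.2) can be verified numerically; the following sketch (illustrative Python) redefines circular convolution inline so it stands alone:

```python
def cconv(x, y):
    """Circular convolution of two equal-length sequences."""
    N = len(x)
    return [sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)]

x = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]   # rectangular pulse, N = 14
h = [1/3, 1/3, 1/3] + [0] * 11                    # causal 3-point averaging filter
y = cconv(x, h)
expected = [0, 0, 0, 0, 1/3, 2/3, 1, 1, 1, 1, 2/3, 1/3, 0, 0]
```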

Convolution Example 2: ADSR Envelope

Figure 7.4: Illustration of the convolution of two rectangular pulses with a truncated exponential impulse response.

Filter input signal $ x(n)$.

Filter impulse response $ h(n)$.

Filter output signal $ y(n)$.

In this example, the input signal is a sequence of two rectangular pulses, creating a piecewise constant function, depicted in Fig.7.4(a). The filter impulse response, shown in Fig.7.4(b), is a truncated exponential.7.6

In this example, $ h$ is again a causal smoothing-filter impulse response, and we could call it a ``moving weighted average'', in which the weighting is exponential into the past. The discontinuous steps in the input become ``asymptotes'' in the output, which are approached exponentially. The overall appearance of the output signal resembles what is called an attack, decay, sustain, and release envelope, or ADSR envelope for short. In a practical ADSR envelope, the time-constants for attack, decay, and release may be set independently. In this example, there is only one time constant, that of $ h$. The two constant levels in the input signal may be called the attack level and the sustain level, respectively. Thus, the envelope approaches the attack level at the attack rate (where the ``rate'' may be defined as the reciprocal of the time constant); it next approaches the sustain level at the ``decay rate'', and finally, it approaches zero at the ``release rate''. These envelope parameters are commonly used in analog synthesizers and their digital descendants, so-called virtual analog synthesizers. Such an ADSR envelope is typically used to multiply the output of a waveform oscillator such as a sawtooth or pulse-train oscillator. For more on virtual analog synthesis, see, for example, [78,77].

Convolution Example 3: Matched Filtering

Figure 7.5: Illustration of convolution of $ y=[1,1,1,1,0,0,0,0]$ and ``matched filter'' $ h=\protect\hbox{\sc Flip}(y)=[1,0,0,0,0,1,1,1]$ ($ N=8$).

Figure 7.5 illustrates convolution of

y&=&[1,1,1,1,0,0,0,0] \\
h&=&[1,0,0,0,0,1,1,1]

to get

$\displaystyle y\circledast h = [4,3,2,1,0,1,2,3]. \protect$ (7.3)

For example, $ y$ could be a ``rectangularly windowed signal, zero-padded by a factor of 2,'' where the signal happened to be dc (all $ 1$s). For the convolution, we need

$\displaystyle \hbox{\sc Flip}(h) = [1,1,1,1,0,0,0,0]

which is the same as $ y$. When $ h=\hbox{\sc Flip}(y)$, we say that $ h$ is a matched filter for $ y$.7.7 In this case, $ h$ is matched to look for a ``dc component,'' and also zero-padded by a factor of $ 2$. The zero-padding serves to simulate acyclic convolution using circular convolution. Note from Eq.$ \,$(7.3) that the maximum is obtained in the convolution output at time 0. This peak (the largest possible if all input signals are limited to $ [-1,1]$ in magnitude), indicates the matched filter has ``found'' the dc signal starting at time 0. This peak would persist in the presence of some amount of noise and/or interference from other signals. Thus, matched filtering is useful for detecting known signals in the presence of noise and/or interference [34].
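This matched-filter example is likewise easy to verify (illustrative Python, with the flip and convolution helpers redefined inline):

```python
def cconv(x, y):
    """Circular convolution of two equal-length sequences."""
    N = len(x)
    return [sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)]

def flip(x):
    """x(-n) with modulo-N indexing."""
    N = len(x)
    return [x[(-n) % N] for n in range(N)]

y = [1, 1, 1, 1, 0, 0, 0, 0]   # dc signal, zero-padded by a factor of 2
h = flip(y)                    # matched filter for y
out = cconv(y, h)              # peaks at lag 0, where the match occurs
```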

Graphical Convolution

As mentioned above, cyclic convolution can be written as

$\displaystyle y(n) \isdef (x\circledast h)_n \isdef \sum_{m=0}^{N-1}x(m)h(n-m) = \left<x,\hbox{\sc Shift}_n(\hbox{\sc Flip}(h))\right> \qquad \mbox{($h$ real)}$

where $ x,y\in{\bf C}^N$ and $ h\in{\bf R}^N$. It is instructive to interpret this expression graphically, as depicted in Fig.7.5 above. The convolution result at time $ n=0$ is the inner product of $ x$ and $ \hbox{\sc Flip}(h)$, or $ y(0)=\left<x,\hbox{\sc Flip}(h)\right>$. For the next time instant, $ n=1$, we shift $ \hbox{\sc Flip}(h)$ one sample to the right and repeat the inner product operation to obtain $ y(1)=\left<x,\hbox{\sc Shift}_1(\hbox{\sc Flip}(h))\right>$, and so on. To capture the cyclic nature of the convolution, $ x$ and $ \hbox{\sc Shift}_n(\hbox{\sc Flip}(h))$ can be imagined plotted on a cylinder. Thus, Fig.7.5 shows the cylinder after being ``cut'' along the vertical line between $ n=N-1$ and $ n=0$ and ``unrolled'' to lay flat.

Polynomial Multiplication

Note that when you multiply two polynomials together, their coefficients are convolved. To see this, let $ p(x)$ denote the $ m$th-order polynomial

$\displaystyle p(x) = p_0 + p_1 x + p_2 x^2 + \cdots + p_m x^m

with coefficients $ p_i$, and let $ q(x)$ denote the $ n$th-order polynomial

$\displaystyle q(x) = q_0 + q_1 x + q_2 x^2 + \cdots + q_n x^n

with coefficients $ q_i$. Then we have [1]

p(x) q(x) &=& p_0 q_0 + (p_0 q_1 + p_1 q_0) x + (p_0 q_2 + p_1 q_1 + p_2 q_0) x^2 + \cdots \\
&& \mathop{+} (p_0 q_{n+m} + p_1 q_{n+m-1} + \cdots + p_{n+m-1} q_1 + p_{n+m} q_0) x^{n+m}.

Denoting $ p(x) q(x)$ by

$\displaystyle r(x) \isdef p(x) q(x) = r_0 + r_1 x + r_2 x^2 + \cdots + r_{m+n} x^{m+n},

we have that the $ i$th coefficient can be expressed as

r_i &=& p_0 q_i + p_1 q_{i-1} + p_2 q_{i-2} + \cdots + p_{i-1} q_1 + p_i q_0 \\
&=& \sum_{j=-\infty}^\infty p_j q_{i-j} \\
&\isdef & (p \circledast q)(i),

where $ p_i$ and $ q_i$ are doubly infinite sequences, defined as zero for $ i<0$ and $ i>m,n$, respectively.

Multiplication of Decimal Numbers

Since decimal numbers are implicitly just polynomials in the powers of 10, e.g.,

$\displaystyle 3819 = 3\cdot 10^3 + 8\cdot 10^2 + 1\cdot 10^1 + 9\cdot 10^0,

it follows that multiplying two numbers convolves their digits. The only twist is that, unlike normal polynomial multiplication, we have carries. That is, when a convolution result (an output digit) reaches 10 or more, we keep only its value modulo 10 and carry the remainder into the next higher place.
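As a sketch of this digit-convolution view (illustrative Python; `conv` and `carry` are made-up names), multiplying 12 by 34 convolves the digit lists and then propagates carries:

```python
def conv(p, q):
    """Acyclic convolution of two coefficient lists (lowest power/place first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def carry(digits):
    """Propagate base-10 carries over a lowest-place-first digit list."""
    out, c = [], 0
    for d in digits:
        c, r = divmod(d + c, 10)
        out.append(r)
    while c:
        c, r = divmod(c, 10)
        out.append(r)
    return out

# 12 * 34 = 408: digits lowest place first
product_digits = carry(conv([2, 1], [4, 3]))
```

The raw convolution of [2,1] and [4,3] is [8,10,3]; carrying turns it into [8,0,4], i.e., 408.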


Correlation Operator

The correlation operator for two signals $ x$ and $ y$ in $ {\bf C}^N$ is defined as

$\displaystyle \zbox {(x\star y)_n \isdef \sum_{m=0}^{N-1}\overline{x(m)} y(m+n)}

We may interpret the correlation operator as

$\displaystyle (x\star y)_n = \left<\hbox{\sc Shift}_{-n}(y), x\right>

which is $ \vert\vert\,x\,\vert\vert ^2=N$ times the coefficient of projection onto $ x$ of $ y$ advanced by $ n$ samples (shifted circularly to the left by $ n$ samples). The time shift $ n$ is called the correlation lag, and $ \overline{x(m)} y(m+n)$ is called a lagged product. Applications of correlation are discussed in §8.4.
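A direct transcription of the correlation sum (illustrative Python):

```python
def ccorr(x, y):
    """Circular cross-correlation:
    (x corr y)_n = sum_m conj(x(m)) y(m + n), indices mod N."""
    N = len(x)
    return [sum(x[m].conjugate() * y[(m + n) % N] for m in range(N))
            for n in range(N)]
```

At lag 0, correlating a real signal with itself gives $\vert\vert x\vert\vert^2$; e.g., for $x=[1,2,3,4]$ the lag-0 value is $1+4+9+16=30$.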

Stretch Operator

Unlike all previous operators, the $ \hbox{\sc Stretch}_L()$ operator maps a length $ N$ signal to a length $ M\isdeftext LN$ signal, where $ L$ and $ N$ are integers. We use ``$ m$'' instead of ``$ n$'' as the time index to underscore this fact.

Figure: Illustration of $ \protect\hbox{\sc Stretch}_3(x)$.

A stretch by factor $ L$ is defined by

$\displaystyle \hbox{\sc Stretch}_{L,m}(x) \isdef \left\{\begin{array}{ll}
x(m/L), & m/L\mbox{ an integer} \\ [5pt]
0, & m/L\mbox{ non-integer.} \\
\end{array} \right.

Thus, to stretch a signal by the factor $ L$, insert $ L-1$ zeros between each pair of samples. An example of a stretch by factor three is shown in Fig.7.6. The example is

$\displaystyle \hbox{\sc Stretch}_3([4,1,2]) = [4,0,0,1,0,0,2,0,0].

The stretch operator is used to describe and analyze upsampling, that is, increasing the sampling rate by an integer factor. A stretch by $ K$ followed by lowpass filtering to the frequency band $ \omega\in(-\pi/K,\pi/K)$ implements ideal bandlimited interpolation (introduced in Appendix D).
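A sketch of the stretch operator (illustrative Python):

```python
def stretch(x, L):
    """Stretch by L: insert L-1 zeros between samples (length N -> LN)."""
    y = [0] * (L * len(x))
    for n, v in enumerate(x):
        y[L * n] = v
    return y
```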

Zero Padding

Zero padding consists of extending a signal (or spectrum) with zeros. It maps a length $ N$ signal to a length $ M>N$ signal, but $ N$ need not divide $ M$.


$\displaystyle \hbox{\sc ZeroPad}_{M,m}(x) \isdef \left\{\begin{array}{ll}
x(m), & \vert m\vert < N/2 \\ [5pt]
0, & \mbox{otherwise} \\
\end{array} \right. \protect$ (7.4)

where $ m=0,\pm1,\pm2,\dots,\pm M_h$, with $ M_h\isdef (M-1)/2$ for $ M$ odd, and $ M/2 - 1$ for $ M$ even. For example,

$\displaystyle \hbox{\sc ZeroPad}_{10}([1,2,3,4,5]) = [1,2,3,0,0,0,0,0,4,5].

In this example, the first sample corresponds to time 0, and five zeros have been inserted between the samples corresponding to times $ n=2$ and $ n=-2$.

Figure 7.7 illustrates zero padding from length $ N=5$ out to length $ M=11$. Note that $ x$ and $ n$ could be replaced by $ X$ and $ k$ in the figure caption.

Figure 7.7: Illustration of zero padding: a) Original signal (or spectrum) $ x=[3,2,1,1,2]$ plotted over the domain $ n\in [0,N-1]$ where $ N=5$ (i.e., as the samples would normally be held in a computer array). b) $ \protect\hbox{\sc ZeroPad}_{11}(x)$. c) The same signal $ x$ plotted over the domain $ n\in [-(N-1)/2, (N-1)/2]$ which is more natural for interpreting negative times (frequencies). d) $ \protect\hbox{\sc ZeroPad}_{11}(x)$ plotted over the zero-centered domain.

Note that we have unified the time-domain and frequency-domain definitions of zero-padding by interpreting the original time axis $ [0,1,\dots,N-1]$ as indexing positive-time samples from 0 to $ N/2-1$ (for $ N$ even), and negative times in the interval $ n\in[N-N/2+1,N-1]\equiv[-N/2+1,-1]$.7.8 Furthermore, we require $ x(N/2)\equiv x(-N/2)=0$ when $ N$ is even, while odd $ N$ requires no such restriction. In practice, we often prefer to interpret time-domain samples as extending from 0 to $ N-1$, i.e., with no negative-time samples. For this case, we define ``causal zero padding'' as described below.
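A sketch of zero-centered zero padding under the above conventions (illustrative Python; for even $N$ this simply carries the sample at index $N/2$ to the negative-time side of the buffer):

```python
def zeropad(x, M):
    """Zero-centered zero padding from length N up to length M >= N.
    Zeros go in the 'middle' of the buffer, between the positive-time
    samples (indices 0 .. ceil(N/2)-1) and the negative-time samples
    stored at the end of the buffer."""
    N = len(x)
    pos = (N + 1) // 2   # number of samples kept at nonnegative times
    return x[:pos] + [0] * (M - N) + x[pos:]
```

For $x=[1,2,3,4,5]$ and $M=10$ this reproduces the example above, $[1,2,3,0,0,0,0,0,4,5]$.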

Causal (Periodic) Signals

A signal $ x\in{\bf C}^N$ may be defined as causal when $ x(n)=0$ for all ``negative-time'' samples (e.g., for $ n=-1,-2,\dots,-N/2$ when $ N$ is even). Thus, the signal $ x=[1,2,3,0,0]\in{\bf R}^5$ is causal while $ x=[1,2,3,4,0]$ is not. For causal signals, zero-padding is equivalent to simply appending zeros to the original signal. For example,

$\displaystyle \hbox{\sc ZeroPad}_{10}([1,2,3,0,0]) = [1,2,3,0,0,0,0,0,0,0].

Therefore, when we simply append zeros to the end of a signal, we call it causal zero padding.

Causal Zero Padding

In practice, a signal $ x\in{\bf R}^N$ is often an $ N$-sample frame of data taken from some longer signal, and its true starting time can be anything. In such cases, it is common to treat the start-time of the frame as zero, with no negative-time samples. In other words, $ x(n)$ represents an $ N$-sample signal-segment that is translated in time to start at time 0. In this case (no negative-time samples in the frame), it is proper to zero-pad by simply appending zeros at the end of the frame. Thus, we define e.g.,

$\displaystyle \hbox{\sc CausalZeroPad}_{10}([1,2,3,4,5]) = [1,2,3,4,5,0,0,0,0,0].
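Causal zero padding is then just appending zeros (illustrative Python):

```python
def causal_zeropad(x, M):
    """Causal zero padding: append zeros out to length M,
    as matlab's fft(x, M) does when M exceeds the signal length."""
    return list(x) + [0] * (M - len(x))
```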

Causal zero-padding should not be used on a spectrum of a real signal because, as we will see in §7.4.3 below, the magnitude spectrum of every real signal is symmetric about frequency zero. For the same reason, we cannot simply append zeros in the time domain when the signal frame is considered to include negative-time samples, as in ``zero-centered FFT processing'' (discussed in Book IV [70]). Nevertheless, in practice, appending zeros is perhaps the most common form of zero-padding. It is implemented automatically, for example, by the matlab function fft(x,N) when the FFT size N exceeds the length of the signal vector x.

In summary, we have defined two types of zero-padding that arise in practice, which we may term ``causal'' and ``zero-centered'' (or ``zero-phase'', or even ``periodic''). The zero-centered case is the more natural with respect to the mathematics of the DFT, so it is taken as the ``official'' definition of $ \hbox{\sc ZeroPad}()$. In both cases, however, when properly used, we will have the basic Fourier theorem (§7.4.12 below) stating that zero-padding in the time domain corresponds to ideal bandlimited interpolation in the frequency domain, and vice versa.

Zero Padding Applications

Zero padding in the time domain is used extensively in practice to compute heavily interpolated spectra by taking the DFT of the zero-padded signal. Such spectral interpolation is ideal when the original signal is time limited (nonzero only over some finite duration spanned by the original samples).

Note that the time-limited assumption directly contradicts our usual assumption of periodic extension. As mentioned in §6.7, the interpolation of a periodic signal's spectrum from its harmonics is always zero; that is, there is no spectral energy, in principle, between the harmonics of a periodic signal, and a periodic signal cannot be time-limited unless it is the zero signal. On the other hand, the interpolation of a time-limited signal's spectrum is nonzero almost everywhere between the original spectral samples. Thus, zero-padding is often used when analyzing data from a non-periodic signal in blocks, and each block, or frame, is treated as a finite-duration signal which can be zero-padded on either side with any number of zeros. In summary, the use of zero-padding corresponds to the time-limited assumption for the data frame, and more zero-padding yields denser interpolation of the frequency samples around the unit circle.

Sometimes people will say that zero-padding in the time domain yields higher spectral resolution in the frequency domain. However, signal processing practitioners should not say that, because ``resolution'' in signal processing refers to the ability to ``resolve'' closely spaced features in a spectrum analysis (see Book IV [70] for details). The usual way to increase spectral resolution is to take a longer DFT without zero padding--i.e., look at more data. In the field of graphics, the term resolution refers to pixel density, so the common terminology confusion is reasonable. However, remember that in signal processing, zero-padding in one domain corresponds to a higher interpolation-density in the other domain--not a higher resolution.

Ideal Spectral Interpolation

Using Fourier theorems, we will be able to show (§7.4.12) that zero padding in the time domain gives exact bandlimited interpolation in the frequency domain.7.9 In other words, for truly time-limited signals $ x$, taking the DFT of the entire nonzero portion of $ x$ extended by zeros yields exact interpolation of the complex spectrum--not an approximation (ignoring computational round-off error in the DFT itself). Because the fast Fourier transform (FFT) is so efficient, zero-padding followed by an FFT is a highly practical method for interpolating spectra of finite-duration signals, and is used extensively in practice.

Before we can interpolate a spectrum, we must be clear on what a ``spectrum'' really is. As discussed in Chapter 6, the spectrum of a signal $ x(\cdot)$ at frequency $ \omega$ is defined as a complex number $ X(\omega)$ computed using the inner product

$\displaystyle X(\omega) \isdef \left<x,s_\omega\right> \isdef \sum_{\mbox{all } n} x(n) e^{-j\omega nT}.

That is, $ X(\omega)$ is the unnormalized coefficient of projection of $ x$ onto the sinusoid $ s_\omega$ at frequency $ \omega$. When $ \omega=\omega_k=2\pi f_s k/N$, for $ k=0,1,\ldots,N-1$, we obtain the special set of spectral samples known as the DFT. For other values of $ \omega$, we obtain spectral points in between the DFT samples. Interpolating DFT samples should give the same result. It is straightforward to show that this ideal form of interpolation is what we call bandlimited interpolation, as discussed further in Appendix D and in Book IV [70] of this series.
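This can be checked numerically: appending zeros does not change any term of the inner-product sum, so the DFT of the zero-padded signal samples the same spectrum on a denser frequency grid. A self-contained sketch (illustrative Python, with $f_s=T=1$):

```python
import cmath

def dft(x):
    """Naive DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def spectrum(x, w):
    """X(w) = sum_n x(n) exp(-j w n), summed over the nonzero portion of x."""
    return sum(v * cmath.exp(-1j * w * n) for n, v in enumerate(x))

x = [1.0, 2.0, 3.0]                  # a short time-limited signal
M = 12
X = dft(x + [0.0] * (M - len(x)))    # zero-padded DFT: M spectral samples
```

Each bin of the padded DFT equals the spectrum sampled at $\omega = 2\pi k'/M$, including the points between the original DFT bins.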

Interpolation Operator

The interpolation operator $ \hbox{\sc Interp}_L()$ interpolates a signal by an integer factor $ L$ using bandlimited interpolation. For frequency-domain signals $ X(\omega_k)$, $ k=0,1,2,\ldots,N-1$, we may write spectral interpolation as follows:

\hbox{\sc Interp}_{L,k^\prime }(X) &\isdef & X(\omega_{k^\prime }), \quad \omega_{k^\prime } \isdef 2\pi k^\prime /M,\; k^\prime =0,1,2,\dots,M-1, \\
M &\isdef & LN.

Since $ X(\omega_k )\isdeftext \hbox{\sc DFT}_{N,k}(x)$ is initially only defined over the $ N$ roots of unity in the $ z$ plane, while $ X(\omega_{k^\prime })$ is defined over $ M=LN$ roots of unity, we define $ X(\omega_{k^\prime })$ for $ \omega_{k^\prime }\neq\omega_k $ by ideal bandlimited interpolation (specifically time-limited spectral interpolation in this case).

For time-domain signals $ x(n)$, exact interpolation is similarly bandlimited interpolation, as derived in Appendix D.

Repeat Operator

Like the $ \hbox{\sc Stretch}_L()$ and $ \hbox{\sc Interp}_L()$ operators, the $ \hbox{\sc Repeat}_L()$ operator maps a length $ N$ signal to a length $ M\isdeftext LN$ signal:

Definition: The repeat $ L$ times operator is defined for any $ x\in{\bf C}^N$ by

$\displaystyle \hbox{\sc Repeat}_{L,m}(x) \isdef x(m), \qquad m=0,1,2,\ldots,M-1,

where $ M\isdef LN$, and indexing of $ x$ is modulo $ N$ (periodic extension). Thus, the $ \hbox{\sc Repeat}_L()$ operator simply repeats its input signal $ L$ times.7.10 An example of $ \hbox{\sc Repeat}_2(x)$ is shown in Fig.7.8. The example is

$\displaystyle \hbox{\sc Repeat}_2([0,2,1,4,3,1]) = [0,2,1,4,3,1,0,2,1,4,3,1].

Figure: Illustration of $ \hbox{\sc Repeat}_2(x)$.

A frequency-domain example is shown in Fig.7.9. Figure 7.9a shows the original spectrum $ X$, Fig.7.9b shows the same spectrum plotted over the unit circle in the $ z$ plane, and Fig.7.9c shows $ \hbox{\sc Repeat}_3(X)$. The $ z=1$ point (dc) is on the right-rear face of the enclosing box. Note that when viewed as centered about $ k=0$, $ X$ is a somewhat ``triangularly shaped'' spectrum. We see three copies of this shape in $ \hbox{\sc Repeat}_3(X)$.
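A sketch of the repeat operator (illustrative Python):

```python
def repeat(x, L):
    """Repeat_L: concatenate L periods of x (modulo-N indexing of the input)."""
    N = len(x)
    return [x[m % N] for m in range(L * N)]
```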

Figure: Illustration of $ \hbox{\sc Repeat}_3(X)$. a) Conventional plot of $ X$. b) Plot of $ X$ over the unit circle in the $ z$ plane. c) $ \hbox{\sc Repeat}_3(X)$.

The repeat operator is used to state the Fourier theorem

$\displaystyle \hbox{\sc Stretch}_L \;\longleftrightarrow\;\hbox{\sc Repeat}_L,

where $ \hbox{\sc Stretch}_L$ is defined in §7.2.6. That is, when you stretch a signal by the factor $ L$ (inserting zeros between the original samples), its spectrum is repeated $ L$ times around the unit circle. The simple proof is given on page [*].
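This theorem is easy to confirm numerically on a small example (illustrative Python, redefining the needed operators inline): the length-$LN$ DFT of a stretched signal equals the original length-$N$ DFT repeated $L$ times around the unit circle.

```python
import cmath

def dft(x):
    """Naive DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def stretch(x, L):
    """Insert L-1 zeros between samples."""
    y = [0.0] * (L * len(x))
    for n, v in enumerate(x):
        y[L * n] = v
    return y

def repeat(X, L):
    """Concatenate L periods (modulo-N indexing)."""
    N = len(X)
    return [X[k % N] for k in range(L * N)]

x = [1.0, -2.0, 3.0, 0.5]
L = 3
lhs = dft(stretch(x, L))     # DFT of the stretched signal (length LN)
rhs = repeat(dft(x), L)      # original spectrum repeated L times
```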

Downsampling Operator

Downsampling by $ L$ (also called decimation by $ L$) is defined for $ x\in{\bf C}^N$ as taking every $ L$th sample, starting with sample zero:

\hbox{\sc Downsample}_{L,m}(x) &\isdef & x(mL), \\
m &=& 0,1,2,\ldots,M-1.
The $ \hbox{\sc Downsample}_L()$ operator maps a length $ N=LM$ signal down to a length $ M$ signal. It is the inverse of the $ \hbox{\sc Stretch}_L()$ operator (but not vice versa), i.e.,

\hbox{\sc Downsample}_L(\hbox{\sc Stretch}_L(x)) &=& x \\
\hbox{\sc Stretch}_L(\hbox{\sc Downsample}_L(x)) &\neq& x\quad \mbox{(in general).}

The stretch and downsampling operations do not commute because they are linear time-varying operators. They can be modeled using time-varying switches controlled by the sample index $ n$.

Figure: Illustration of $ \protect\hbox{\sc Downsample}_2(x)$.

The following example of $ \protect\hbox{\sc Downsample}_2(x)$ is illustrated in Fig.7.10:

$\displaystyle \hbox{\sc Downsample}_2([0,1,2,3,4,5,6,7,8,9]) = [0,2,4,6,8].
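A sketch of downsampling, together with a check that it inverts the stretch operator but not vice versa (illustrative Python):

```python
def downsample(x, L):
    """Downsample_L: keep every Lth sample, starting with sample 0."""
    return x[::L]

def stretch(x, L):
    """Insert L-1 zeros between samples."""
    y = [0] * (L * len(x))
    for n, v in enumerate(x):
        y[L * n] = v
    return y
```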

Note that the term ``downsampling'' may also refer to the more elaborate process of sampling-rate conversion to a lower sampling rate, in which the signal is resampled using bandlimited interpolation. To distinguish these cases, we can call this bandlimited downsampling, because a lowpass filter is needed, in general, prior to downsampling so that aliasing is avoided. This topic is addressed in Appendix D. Early sampling-rate converters were in fact implemented using the $ \hbox{\sc Stretch}_L$ operation, followed by an appropriate lowpass filter, followed by $ \hbox{\sc Downsample}_M$, in order to implement a sampling-rate conversion by the factor $ L/M$.

Alias Operator

Aliasing occurs when a signal is undersampled. If the signal sampling rate $ f_s$ is too low, we get frequency-domain aliasing.

The topic of aliasing normally arises in the context of sampling a continuous-time signal. The sampling theorem (Appendix D) says that we will have no aliasing due to sampling as long as the sampling rate is higher than twice the highest frequency present in the signal being sampled.

In this chapter, we are considering only discrete-time signals, in order to keep the math as simple as possible. Aliasing in this context occurs when a discrete-time signal is downsampled to reduce its sampling rate. You can think of continuous-time sampling as the limiting case for which the starting sampling rate is infinity.

An example of aliasing is shown in Fig.7.11. In the figure, the high-frequency sinusoid is indistinguishable from the lower-frequency sinusoid due to aliasing. We say the higher frequency aliases to the lower frequency.

Figure 7.11: Example of frequency-domain aliasing due to undersampling in the time domain.

Undersampling in the frequency domain gives rise to time-domain aliasing. If time or frequency is not specified, the term ``aliasing'' normally means frequency-domain aliasing (due to undersampling in the time domain).

The aliasing operator for $ N$-sample signals $ x\in{\bf C}^N$ is defined by

$\displaystyle \hbox{\sc Alias}_{L,m}(x) \isdef \sum_{l=0}^{L-1} x\left(m+lM\right),\quad m=0,1,2,\ldots,M-1,$

where $ M\isdef N/L$ (an integer).

Like the $ \hbox{\sc Downsample}_L()$ operator, the $ \hbox{\sc Alias}_L()$ operator maps a length $ N=LM$ signal down to a length $ M$ signal. A way to think of it is to partition the original $ N$ samples into $ L$ blocks of length $ M$, with the first block extending from sample 0 to sample $ M-1$, the second block from $ M$ to $ 2M-1$, etc. Then just add up the blocks. This process is called aliasing. If the original signal $ x$ is a time signal, it is called time-domain aliasing; if it is a spectrum, we call it frequency-domain aliasing, or just aliasing. Note that aliasing is not invertible in general. Once the blocks are added together, it is usually not possible to recover the original blocks.


For example,

$\displaystyle \hbox{\sc Alias}_2([0,1,2,3,4,5]) = [0,1,2] + [3,4,5] = [3,5,7]$
$\displaystyle \hbox{\sc Alias}_3([0,1,2,3,4,5]) = [0,1] + [2,3] + [4,5] = [6,9]$
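The block-partition-and-sum definition is a one-liner to check numerically; a Python sketch (our own helper, not from the text):

```python
def alias(x, L):
    """Partition x into L blocks of length M = len(x)//L and sum them."""
    M = len(x) // L
    return [sum(x[m + l * M] for l in range(L)) for m in range(M)]

# The worked examples from the text:
assert alias([0, 1, 2, 3, 4, 5], 2) == [3, 5, 7]   # [0,1,2] + [3,4,5]
assert alias([0, 1, 2, 3, 4, 5], 3) == [6, 9]      # [0,1] + [2,3] + [4,5]
```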

The alias operator is used to state the Fourier theorem (§7.4.11)

$\displaystyle \hbox{\sc Downsample}_L \;\longleftrightarrow\;\frac{1}{L}\hbox{\sc Alias}_L.

That is, when you downsample a signal by the factor $ L$, its spectrum is aliased by the factor $ L$.

Figure: Illustration of aliasing in the frequency domain. a) $ \hbox{\sc Repeat}_3(X)$ from Fig.7.9c. b) First half of the original unit circle (0 to $ \pi $) wrapped around the new, smaller unit circle (which is magnified to the original size). c) Second half ($ \pi $ to $ 2\pi $), also wrapped around the new unit circle. d) Overlay of components to be summed. e) Sum of components (the aliased spectrum). f) Both sum and overlay.

Figure 7.12 shows the result of $ \hbox{\sc Alias}_2$ applied to $ \hbox{\sc Repeat}_3(X)$ from Figure 7.9c. Imagine the spectrum of Fig.7.12a as being plotted on a piece of paper rolled to form a cylinder, with the edges of the paper meeting at $ z=1$ (upper right corner of Fig.7.12a). Then the $ \hbox{\sc Alias}_2$ operation can be simulated by rerolling the cylinder of paper to cut its circumference in half. That is, reroll it so that at every point, two sheets of paper are in contact at all points on the new, narrower cylinder. Now, simply add the values on the two overlapping sheets together, and you have the $ \hbox{\sc Alias}_2$ of the original spectrum on the unit circle. To alias by $ 3$, we would shrink the cylinder further until the paper edges again line up, giving three layers of paper in the cylinder, and so on.

Figure 7.12b shows what is plotted on the first circular wrap of the cylinder of paper, and Fig.7.12c shows what is on the second wrap. These are overlaid in Fig.7.12d and added together in Fig.7.12e. Finally, Figure 7.12f shows both the addition and the overlay of the two components. We say that the second component (Fig.7.12c) ``aliases'' to new frequency components, while the first component (Fig.7.12b) is considered to be at its original frequencies. If the unit circle of Fig.7.12a covers frequencies 0 to $ f_s$, all other unit circles (Fig.7.12b-c) cover frequencies 0 to $ f_s/2$.

In general, aliasing by the factor $ K$ corresponds to a sampling-rate reduction by the factor $ K$. To prevent aliasing when reducing the sampling rate, an anti-aliasing lowpass filter is generally used. The lowpass filter attenuates all signal components at frequencies outside the interval $ [-f_s/(2K),f_s/(2K)]$ so that all frequency components which would alias are first removed.

Conceptually, in the frequency domain, the unit circle is reduced by $ \hbox{\sc Alias}_2$ to a unit circle half the original size, where the two halves are summed. The inverse of aliasing is then ``repeating'' which should be understood as increasing the unit circle circumference using ``periodic extension'' to generate ``more spectrum'' for the larger unit circle. In the time domain, on the other hand, downsampling is the inverse of the stretch operator. We may interchange ``time'' and ``frequency'' and repeat these remarks. All of these relationships are precise only for integer stretch/downsampling/aliasing/repeat factors; in continuous time and frequency, the restriction to integer factors is removed, and we obtain the (simpler) scaling theorem (proved in §C.2).

Even and Odd Functions

Some of the Fourier theorems can be succinctly expressed in terms of even and odd symmetries.

Definition: A function $ f(n)$ is said to be even if $ f(-n)=f(n)$.

An even function is also symmetric, but the term symmetric applies also to functions symmetric about a point other than 0.

Definition: A function $ f(n)$ is said to be odd if $ f(-n)=-f(n)$.

An odd function is also called antisymmetric.

Note that every finite odd function $ f(n)$ must satisfy $ f(0)=0$.7.11 Moreover, for any $ x\in{\bf C}^N$ with $ N$ even, we also have $ x(N/2)=0$ since $ x(N/2)=-x(-N/2)=-x(-N/2+N)=-x(N/2)$; that is, $ N/2$ and $ -N/2$ index the same point when $ N$ is even.

Theorem: Every function $ f(n)$ can be decomposed into a sum of its even part $ f_e(n)$ and odd part $ f_o(n)$, where

$\displaystyle f_e(n) \isdef \frac{f(n) + f(-n)}{2}$
$\displaystyle f_o(n) \isdef \frac{f(n) - f(-n)}{2}.$

Proof: In the above definitions, $ f_e$ is even and $ f_o$ is odd by construction. Summing, we have

$\displaystyle f_e(n) + f_o(n) = \frac{f(n) + f(-n)}{2} + \frac{f(n) - f(-n)}{2} = f(n).

Theorem: The product of even functions is even, the product of odd functions is even, and the product of an even times an odd function is odd.

Proof: Readily shown.

Since even times even is even, odd times odd is even, and even times odd is odd, we can think of even as $ (+)$ and odd as $ (-)$:

(+)\cdot(+) &=& (+)\\
(-)\cdot(-) &=& (+)\\
(+)\cdot(-) &=& (-)\\
(-)\cdot(+) &=& (-)

Example: $ \cos(\omega_k n)$, $ n\in{\bf Z}$, is an even signal since $ \cos(-\theta) = \cos(\theta)$.

Example: $ \sin(\omega_k n)$ is an odd signal since $ \sin(-\theta) = -\sin(\theta)$.

Example: $ \cos(\omega_k n)\cdot\sin(\omega_l n)$ is an odd signal (even times odd).

Example: $ \sin(\omega_k n)\cdot\sin(\omega_l n)$ is an even signal (odd times odd).

Theorem: The sum of all the samples of an odd signal $ x_o$ in $ {\bf C}^N$ is zero.

Proof: This is readily shown by writing the sum as $ x_o(0) + [x_o(1) + x_o(-1)] + \cdots + x_o(N/2)$, where the last term only occurs when $ N$ is even. Each term so written is zero for an odd signal $ x_o$.

Example: For all DFT sinusoidal frequencies $ \omega_k=2\pi k/N$,

$\displaystyle \sum_{n=0}^{N-1}\sin(\omega_k n) \cos(\omega_k n) = 0, \; k=0,1,2,\ldots,N-1.

More generally,

$\displaystyle \sum_{n=0}^{N-1}x_e(n) x_o(n) = 0,

for any even signal $ x_e$ and odd signal $ x_o$ in $ {\bf C}^N$. In terms of inner products (§5.9), we may say that the even part of every real signal is orthogonal to its odd part:

$\displaystyle \left<x_e,x_o\right> = 0
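The even/odd decomposition and these orthogonality results can be checked numerically for any test signal; a Python sketch (our own code, interpreting $ x(-n)$ modulo $ N$ as $ x(N-n)$, per the modulo indexing used throughout this chapter):

```python
import random

N = 8
x = [random.uniform(-1, 1) for _ in range(N)]   # arbitrary real test signal

# Even and odd parts, with x(-n) read modulo N as x(N-n)
xe = [(x[n] + x[-n % N]) / 2 for n in range(N)]
xo = [(x[n] - x[-n % N]) / 2 for n in range(N)]

assert all(abs(xe[n] + xo[n] - x[n]) < 1e-9 for n in range(N))   # x = xe + xo
assert all(abs(xe[n] - xe[-n % N]) < 1e-9 for n in range(N))     # xe is even
assert all(abs(xo[n] + xo[-n % N]) < 1e-9 for n in range(N))     # xo is odd
assert abs(sum(xo)) < 1e-9                  # samples of an odd signal sum to zero
assert abs(sum(xe[n] * xo[n] for n in range(N))) < 1e-9          # <xe, xo> = 0
```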

Fourier Theorems

In this section the main Fourier theorems are stated and proved. It is no small matter how simple these theorems are in the DFT case relative to the other three cases (DTFT, Fourier transform, and Fourier series, as defined in Appendix B). When infinite summations or integrals are involved, the conditions for the existence of the Fourier transform can be quite difficult to characterize mathematically. Mathematicians have expended a considerable effort on such questions. By focusing primarily on the DFT case, we are able to study the essential concepts conveyed by the Fourier theorems without getting involved with mathematical difficulties.


Linearity

Theorem: For any $ x,y\in{\bf C}^N$ and $ \alpha,\beta\in{\bf C}$, the DFT satisfies

$\displaystyle \zbox {\alpha x + \beta y \;\longleftrightarrow\;\alpha X + \beta Y}

where $ X\isdeftext \hbox{\sc DFT}(x)$ and $ Y\isdeftext \hbox{\sc DFT}(y)$, as always in this book. Thus, the DFT is a linear operator.


Proof:

$\displaystyle \hbox{\sc DFT}_k(\alpha x + \beta y) \isdef \sum_{n=0}^{N-1}\left[\alpha x(n) + \beta y(n)\right] e^{-j 2\pi nk/N} = \alpha \sum_{n=0}^{N-1}x(n) e^{-j 2\pi nk/N} + \beta \sum_{n=0}^{N-1}y(n) e^{-j 2\pi nk/N} \isdef \alpha X(k) + \beta Y(k).$

Conjugation and Reversal

Theorem: For any $ x\in{\bf C}^N$,

$\displaystyle \zbox {\overline{x} \;\longleftrightarrow\;\hbox{\sc Flip}(\overline{X}).}


Proof:

$\displaystyle \hbox{\sc DFT}_k(\overline{x}) \isdef \sum_{n=0}^{N-1}\overline{x(n)} e^{-j 2\pi nk/N} = \overline{\sum_{n=0}^{N-1}x(n) e^{j 2\pi nk/N}} = \overline{\sum_{n=0}^{N-1}x(n) e^{-j 2\pi n(-k)/N}} \isdef \hbox{\sc Flip}_k(\overline{X}).$

Theorem: For any $ x\in{\bf C}^N$,

$\displaystyle \zbox {\hbox{\sc Flip}(\overline{x}) \;\longleftrightarrow\;\overline{X}.}

Proof: Making the change of summation variable $ m\isdeftext N-n$, we get

$\displaystyle \hbox{\sc DFT}_k(\hbox{\sc Flip}(\overline{x})) \isdef \sum_{n=0}^{N-1}\overline{x(N-n)} e^{-j 2\pi nk/N} = \sum_{m=0}^{N-1}\overline{x(m)} e^{-j 2\pi (N-m)k/N} = \overline{\sum_{m=0}^{N-1}x(m) e^{-j 2\pi m k/N}} \isdef \overline{X(k)}.$

Theorem: For any $ x\in{\bf C}^N$,

$\displaystyle \zbox {\hbox{\sc Flip}(x) \;\longleftrightarrow\;\hbox{\sc Flip}(X).}


Proof: Making the change of summation variable $ m\isdef N-n$,

$\displaystyle \hbox{\sc DFT}_k[\hbox{\sc Flip}(x)] \isdef \sum_{n=0}^{N-1}x(N-n) e^{-j 2\pi nk/N} = \sum_{m=0}^{N-1}x(m) e^{j 2\pi mk/N} \isdef X(-k) \isdef \hbox{\sc Flip}_k(X).$

Corollary: For any $ x\in{\bf R}^N$,

$\displaystyle \zbox {\hbox{\sc Flip}(x) \;\longleftrightarrow\;\overline{X}}$   $\displaystyle \mbox{($x$\ real).}$

Proof: Picking up the previous proof at the third formula, remembering that $ x$ is real,

$\displaystyle \sum_{m=0}^{N-1}x(m) e^{j 2\pi mk/N} = \overline{\sum_{m=0}^{N-1}\overline{x(m)} e^{-j 2\pi mk/N}} = \overline{\sum_{m=0}^{N-1}x(m) e^{-j 2\pi mk/N}} \isdef \overline{X(k)}$

when $ x(m)$ is real.

Thus, conjugation in the frequency domain corresponds to reversal in the time domain. Another way to say it is that negating spectral phase flips the signal around backwards in time.

Corollary: For any $ x\in{\bf R}^N$,

$\displaystyle \zbox {\hbox{\sc Flip}(X) = \overline{X}}$   $\displaystyle \mbox{($x$\ real).}$

Proof: This follows from the previous two cases.

Definition: The property $ X(-k)=\overline{X(k)}$ is called Hermitian symmetry or ``conjugate symmetry.'' If $ X(-k)=-\overline{X(k)}$, it may be called skew-Hermitian.

Another way to state the preceding corollary is

$\displaystyle \zbox {x\in{\bf R}^N\;\longleftrightarrow\;X\;\mbox{is Hermitian}.}


In the previous section, we found $ \hbox{\sc Flip}(X) = \overline{X}$ when $ x$ is real. This fact is of high practical importance. It says that the spectrum of every real signal is Hermitian. Due to this symmetry, we may discard all negative-frequency spectral samples of a real signal and regenerate them later if needed from the positive-frequency samples. Also, spectral plots of real signals are normally displayed only for positive frequencies; e.g., spectra of sampled signals are normally plotted over the range 0 Hz to $ f_s/2$ Hz. On the other hand, the spectrum of a complex signal must be shown, in general, from $ -f_s/2$ to $ f_s/2$ (or from 0 to $ f_s$), since the positive and negative frequency components of a complex signal are independent.
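Hermitian symmetry is easy to confirm numerically for an arbitrary real signal; a Python sketch (our own naive $O(N^2)$ DFT helper):

```python
import cmath, random

def dft(x):
    """Naive O(N^2) DFT, adequate for small N."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

N = 8
x = [random.uniform(-1, 1) for _ in range(N)]   # real signal
X = dft(x)

# Hermitian symmetry: X(-k) = conj(X(k)), with -k read modulo N
assert all(abs(X[-k % N] - X[k].conjugate()) < 1e-9 for k in range(N))
```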

Recall from §7.3 that a signal $ x(n)$ is said to be even if $ x(-n)=x(n)$, and odd if $ x(-n)=-x(n)$. Below are Fourier theorems pertaining to even and odd signals and/or spectra.

Theorem: If $ x\in{\bf R}^N$, then re$ \left\{X\right\}$ is even and im$ \left\{X\right\}$ is odd.

Proof: This follows immediately from the conjugate symmetry of $ X$ for real signals $ x$.

Theorem: If $ x\in{\bf R}^N$, then $ \left\vert X\right\vert$ is even and $ \angle{X}$ is odd.

Proof: This follows immediately from the conjugate symmetry of $ X$ expressed in polar form $ X(k)= \left\vert X(k)\right\vert e^{j\angle{X(k)}}$.

The conjugate symmetry of spectra of real signals is perhaps the most important symmetry theorem. However, there are a couple more we can readily show:

Theorem: An even signal has an even transform:

$\displaystyle \zbox {x\;\mbox{even} \;\longleftrightarrow\;X\;\mbox{even}}

Proof: Express $ x$ in terms of its real and imaginary parts by $ x\isdeftext x_r + j
x_i$. Note that for a complex signal $ x$ to be even, both its real and imaginary parts must be even. Then

$\displaystyle X(k) \isdef \sum_{n=0}^{N-1}x(n) e^{-j\omega_k n}$
$\displaystyle \phantom{X(k)} = \sum_{n=0}^{N-1}[x_r(n)+jx_i(n)] \cos(\omega_k n) - j [x_r(n)+jx_i(n)] \sin(\omega_k n)$
$\displaystyle \phantom{X(k)} = \sum_{n=0}^{N-1}\left\{[x_r(n)\cos(\omega_k n) + x_i(n)\sin(\omega_k n)] + j [x_i(n)\cos(\omega_k n) - x_r(n)\sin(\omega_k n)]\right\}. \protect$ (7.5)

Let even$ _n$ denote a function that is even in $ n$, such as $ f(n)=n^2$, and let odd$ _n$ denote a function that is odd in $ n$, such as $ f(n)=n^3$. Similarly, let even$ _{nk}$ denote a function of $ n$ and $ k$ that is even in both $ n$ and $ k$, such as $ f(n,k)=n^2k^2$, and odd$ _{nk}$ mean odd in both $ n$ and $ k$. Then appropriately labeling each term in the last formula above gives

$\displaystyle X(k) = \sum_{n=0}^{N-1}\left(\mbox{even}_n\cdot\mbox{even}_{nk} + \mbox{even}_n\cdot\mbox{odd}_{nk}\right) + j \sum_{n=0}^{N-1}\left(\mbox{even}_n\cdot\mbox{even}_{nk} - \mbox{even}_n\cdot\mbox{odd}_{nk}\right) = \mbox{even}_k + j \cdot \mbox{even}_k = \mbox{even}_k,$

since each term of the form even$ _n\cdot\,$odd$ _{nk}$ is odd in $ n$ and therefore sums to zero over $ n$.

Theorem: A real even signal has a real even transform:

$\displaystyle \zbox {x\;\mbox{real and even} \;\longleftrightarrow\;X\;\mbox{real and even}}$ (7.6)

Proof: This follows immediately from setting $ x_i(n)=0$ in the preceding proof. From Eq.$ \,$(7.5), we are left with

$\displaystyle X(k) = \sum_{n=0}^{N-1}x_r(n)\cos(\omega_k n).

Thus, the DFT of a real and even function reduces to a type of cosine transform.7.12

Instead of adapting the previous proof, we can show it directly:

$\displaystyle X(k) \isdef \sum_{n=0}^{N-1}x(n) e^{-j\omega_k n} = \sum_{n=0}^{N-1}x(n)\cos(\omega_k n) = \sum_{n=0}^{N-1}\mbox{even}_n\cdot\mbox{even}_{nk} = \sum_{n=0}^{N-1}\mbox{even}_{nk} = \mbox{even}_k,$

where the $ \sin(\omega_k n)$ term drops out because $ x(n)\sin(\omega_k n)$ is odd in $ n$ and sums to zero.

Definition: A signal with a real spectrum (such as any real, even signal) is often called a zero phase signal. However, note that when the spectrum goes negative (which it can), the phase is really $ \pm\pi$, not 0. When a real spectrum is positive at dc (i.e., $ X(0)>0$), it is then truly zero-phase over at least some band containing dc (up to the first zero-crossing in frequency). When the phase switches between 0 and $ \pi $ at the zero-crossings of the (real) spectrum, the spectrum oscillates between being zero phase and ``constant phase''. We can say that all real spectra are piecewise constant-phase spectra, where the two constant values are 0 and $ \pi $ (or $ -\pi$, which is the same phase as $ +\pi$). In practice, such zero-crossings typically occur at low magnitude, such as in the ``side-lobes'' of the DTFT of a ``zero-centered symmetric window'' used for spectrum analysis (see Chapter 8 and Book IV [70]).

Shift Theorem

Theorem: For any $ x\in{\bf C}^N$ and any integer $ \Delta$,

$\displaystyle \zbox {\hbox{\sc DFT}_k[\hbox{\sc Shift}_\Delta(x)] = e^{-j\omega_k\Delta} X(k).}


Proof: Making the change of summation variable $ m\isdef n-\Delta$ (modulo $ N$),

$\displaystyle \hbox{\sc DFT}_k[\hbox{\sc Shift}_\Delta(x)] \isdef \sum_{n=0}^{N-1}x(n-\Delta) e^{-j 2\pi nk/N} = \sum_{m=0}^{N-1}x(m) e^{-j 2\pi (m+\Delta)k/N} = e^{-j 2\pi k\Delta/N}\sum_{m=0}^{N-1}x(m) e^{-j 2\pi mk/N} \isdef e^{-j \omega_k \Delta} X(k).$

The shift theorem is often expressed in shorthand as

$\displaystyle \zbox {x(n-\Delta) \longleftrightarrow e^{-j\omega_k\Delta}X(\omega_k).}

The shift theorem says that a delay in the time domain corresponds to a linear phase term in the frequency domain. More specifically, a delay of $ \Delta$ samples in the time waveform corresponds to the linear phase term $ e^{-j \omega_k \Delta}$ multiplying the spectrum, where $ \omega_k\isdeftext 2\pi k/N$.7.13 Note that spectral magnitude is unaffected by a linear phase term. That is, $ \left\vert e^{-j \omega_k \Delta}X(k)\right\vert = \left\vert X(k)\right\vert$.
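The shift theorem, and the invariance of spectral magnitude under the linear phase term, can be checked directly; a Python sketch (our own naive DFT helper on a small example):

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, adequate for small N."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

N, delta = 8, 3
x = [float(n) for n in range(N)]                  # arbitrary test signal
xs = [x[(n - delta) % N] for n in range(N)]       # circular shift by delta samples

X, Xs = dft(x), dft(xs)
for k in range(N):
    wk = 2 * cmath.pi * k / N
    assert abs(Xs[k] - cmath.exp(-1j * wk * delta) * X[k]) < 1e-9
    assert abs(abs(Xs[k]) - abs(X[k])) < 1e-9     # magnitude is unchanged
```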

Linear Phase Terms

The reason $ e^{-j \omega_k \Delta}$ is called a linear phase term is that its phase is a linear function of frequency:

$\displaystyle \angle e^{-j \omega_k \Delta} = - \Delta \cdot \omega_k

Thus, the slope of the phase, viewed as a linear function of radian-frequency $ \omega_k$, is $ -\Delta$. In general, the time delay in samples equals minus the slope of the linear phase term. If we express the original spectrum in polar form as

$\displaystyle X(k) = G(k) e^{j\Theta(k)},

where $ G$ and $ \Theta$ are the magnitude and phase of $ X$, respectively (both real), we can see that a linear phase term only modifies the spectral phase $ \Theta(k)$:

$\displaystyle e^{-j \omega_k \Delta} X(k) \isdef e^{-j \omega_k \Delta} G(k) e^{j\Theta(k)} = G(k) e^{j[\Theta(k)-\omega_k\Delta]}$

where $ \omega_k\isdeftext 2\pi k/N$. A positive time delay (waveform shift to the right) adds a negatively sloped linear phase to the original spectral phase. A negative time delay (waveform shift to the left) adds a positively sloped linear phase to the original spectral phase. If we seem to be belaboring this relationship, it is because it is one of the most useful in practice.

Linear Phase Signals

In practice, a signal may be said to be linear phase when its phase is of the form

$\displaystyle \Theta(\omega_k)= - \Delta \cdot \omega_k\pm \pi I(\omega_k),

where $ \Delta$ is any real constant (usually an integer), and $ I(\omega_k)$ is an indicator function which takes on the values 0 or $ 1$ over the points $ \omega_k$, $ k=0,1,2,\ldots,N-1$. An important class of examples is when the signal is regarded as a filter impulse response.7.14 What all such signals have in common is that they are symmetric about the time $ n=\Delta$ in the time domain (as we will show on the next page). Thus, the term ``linear phase signal'' often really means ``a signal whose phase is linear between $ \pm\pi$ discontinuities.''

Zero Phase Signals

A zero-phase signal is thus a linear-phase signal for which the phase-slope $ \Delta$ is zero. As mentioned above (in §7.4.3), it would be more precise to say ``0-or-$ \pi $-phase signal'' instead of ``zero-phase signal''. Another better term is ``zero-centered signal'', since every real (even) spectrum corresponds to an even (real) signal. Of course, a zero-centered symmetric signal is simply an even signal, by definition. Thus, a ``zero-phase signal'' is more precisely termed an ``even signal''.

Application of the Shift Theorem to FFT Windows

In practical spectrum analysis, we most often use the Fast Fourier Transform7.15 (FFT) together with a window function $ w(n), n=0,1,2,\ldots,N-1$. As discussed further in Chapter 8, windows are normally positive ($ w(n)>0$), symmetric about their midpoint, and look pretty much like a ``bell curve.'' A window multiplies the signal $ x$ being analyzed to form a windowed signal $ x_w(n) = w(n)x(n)$, or $ x_w = w\cdot x$, which is then analyzed using an FFT. The window serves to taper the data segment gracefully to zero, thus eliminating spectral distortions due to suddenly cutting off the signal in time. Windowing is thus appropriate when $ x$ is a short section of a longer signal (not a period or whole number of periods from a periodic signal).

Theorem: Real symmetric FFT windows are linear phase.

Proof: Let $ w(n)$ denote the window samples for $ n=0,1,2,\ldots,M-1$. Since the window is symmetric, we have $ w(n)=w(M-1-n)$ for all $ n$. When $ M$ is odd, there is a sample at the midpoint at time $ n=(M-1)/2$. The midpoint can be translated to the time origin to create an even signal. As established on page [*], the DFT of a real and even signal is real and even. By the shift theorem, the DFT of the original symmetric window is a real, even spectrum multiplied by a linear phase term, yielding a spectrum having a phase that is linear in frequency with possible discontinuities of $ \pm\pi$ radians. Thus, all odd-length real symmetric signals are ``linear phase'', including FFT windows.

When $ M$ is even, the window midpoint at time $ n=(M-1)/2$ lands half-way between samples, so we cannot simply translate the window to zero-centered form. However, we can still factor the window spectrum $ W(\omega_k)$ into the product of a linear phase term $ \exp[-j\omega_k(M-1)/2]$ and a real spectrum (verify this as an exercise), which satisfies the definition of a linear phase signal.
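Both the odd and even cases can be checked numerically. The Python sketch below (our own code; the symmetric, positive ``raised-cosine'' test window is an arbitrary choice, not a standard named window) removes the linear phase term $ \exp[j\omega_k(M-1)/2]$ from the window transform and verifies that a real spectrum remains:

```python
import cmath, math

def dft(x):
    """Naive O(M^2) DFT, adequate for short windows."""
    M = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / M) for n in range(M))
            for k in range(M)]

for M in (7, 8):   # odd and even window lengths
    # A symmetric, positive test window: w(n) = w(M-1-n) > 0
    w = [0.5 - 0.5 * math.cos(2 * math.pi * (n + 0.5) / M) for n in range(M)]
    assert all(abs(w[n] - w[M - 1 - n]) < 1e-12 for n in range(M))
    W = dft(w)
    for k in range(M):
        wk = 2 * math.pi * k / M
        # Removing the linear phase exp(-j wk (M-1)/2) leaves a real spectrum,
        # so the window is linear phase with delay (M-1)/2 samples.
        A = W[k] * cmath.exp(1j * wk * (M - 1) / 2)
        assert abs(A.imag) < 1e-9
```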

Convolution Theorem

Theorem: For any $ x,y\in{\bf C}^N$,

$\displaystyle \zbox {x\circledast y \;\longleftrightarrow\;X\cdot Y.}


Proof:

$\displaystyle \hbox{\sc DFT}_k(x\circledast y) \isdef \sum_{n=0}^{N-1}(x\circledast y)_n e^{-j 2\pi nk/N} \isdef \sum_{n=0}^{N-1}\sum_{m=0}^{N-1}x(m) y(n-m) e^{-j 2\pi nk/N}$
$\displaystyle = \sum_{m=0}^{N-1}x(m)\left(e^{-j 2\pi mk/N} Y(k)\right)\quad\mbox{(by the shift theorem)}$
$\displaystyle \isdef X(k)Y(k).$

This is perhaps the most important single Fourier theorem of all. It is the basis of a large number of FFT applications. Since an FFT provides a fast Fourier transform, it also provides fast convolution, thanks to the convolution theorem. It turns out that using an FFT to perform convolution is really more efficient in practice only for reasonably long convolutions, such as $ N>100$. For much longer convolutions, the savings become enormous compared with ``direct'' convolution. This happens because direct convolution requires on the order of $ N^2$ operations (multiplications and additions), while FFT-based convolution requires on the order of $ N\lg(N)$ operations, where $ \lg(N)$ denotes the logarithm-base-2 of $ N$ (see §A.1.2 for an explanation).
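Before considering speed, the theorem itself can be verified numerically; a small Python sketch (our own naive $O(N^2)$ circular convolution and DFT helpers, not from the text):

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def cconv(x, y):
    """Circular (cyclic) convolution: (x (*) y)(n) = sum_m x(m) y((n-m) mod N)."""
    N = len(x)
    return [sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
y = [1.0, -1.0, 0.5, 0.0]
lhs = dft(cconv(x, y))                                  # DFT(x (*) y)
rhs = [Xk * Yk for Xk, Yk in zip(dft(x), dft(y))]       # X . Y
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```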

The simple matlab example in Fig.7.13 illustrates how much faster convolution can be performed using an FFT.7.16 We see that for a length $ N=1024$ convolution, the fft function is approximately 300 times faster in Octave, and 30 times faster in Matlab. (The conv routine is much faster in Matlab, even though it is a built-in function in both cases.)

Figure 7.13: Matlab/Octave program for comparing the speed of direct convolution with that of FFT convolution.

N = 1024;        % FFT much faster at this length
t = 0:N-1;       % [0,1,2,...,N-1]
h = exp(-t);     % filter impulse response
H = fft(h);      % filter frequency response
x = ones(1,N);   % input = dc (any signal will do)
Nrep = 100;      % number of trials to average
t0 = clock;      % latch the current time
for i=1:Nrep, y = conv(x,h); end      % Direct convolution
t1 = etime(clock,t0)*1000; % elapsed time in msec
t0 = clock;
for i=1:Nrep, y = ifft(fft(x) .* H); end % FFT convolution
t2 = etime(clock,t0)*1000;
disp(sprintf(['Average direct-convolution time = %0.2f msec\n',...
    'Average FFT-convolution time = %0.2f msec\n',...
    'Ratio = %0.2f (Direct/FFT)'],...
    t1/Nrep, t2/Nrep, t1/t2));

% =================== EXAMPLE RESULTS ===================

% Octave:
Average direct-convolution time = 69.49 msec
Average FFT-convolution time = 0.23 msec
Ratio = 296.40 (Direct/FFT)

% Matlab:
Average direct-convolution time = 15.73 msec
Average FFT-convolution time = 0.50 msec
Ratio = 31.46 (Direct/FFT)

A similar program produced the results for different FFT lengths shown in Table 7.1. In this software environment, the fft function is faster starting with length $ 2^6=64$, and it is never significantly slower at short lengths, where ``calling overhead'' dominates.

Table 7.1: Direct versus FFT convolution times in milliseconds (convolution length = $ 2^M$) using Matlab 5.2 on an 800 MHz Athlon Windows PC.
M Direct FFT Ratio
1 0.07 0.08 0.91
2 0.08 0.08 0.92
3 0.08 0.08 0.94
4 0.09 0.10 0.97
5 0.12 0.12 0.96
6 0.18 0.12 1.44
7 0.39 0.15 2.67
8 1.10 0.21 5.10
9 3.83 0.31 12.26
10 15.80 0.47 33.72
11 50.39 1.09 46.07
12 177.75 2.53 70.22
13 709.75 5.62 126.18
14 4510.25 17.50 257.73
15 19050.00 72.50 262.76
16 316375.00 440.50 718.22

A table similar to Table 7.1 in Strum and Kirk [79, p. 521], based on the number of real multiplies, finds that the fft is faster starting at length $ 2^7=128$, and that direct convolution is significantly faster for very short convolutions (e.g., 16 operations for a direct length-4 convolution, versus 176 for the fft function).

See Appendix A for further discussion of FFT algorithms and their applications.

Dual of the Convolution Theorem

The dual7.18 of the convolution theorem says that multiplication in the time domain is convolution in the frequency domain:


$\displaystyle \zbox {x\cdot y \;\longleftrightarrow\;\frac{1}{N} X\circledast Y} $

Proof: The steps are the same as in the convolution theorem.

This theorem also bears on the use of FFT windows. It implies that windowing in the time domain corresponds to smoothing in the frequency domain. That is, the spectrum of $ w\cdot x$ is simply $ X$ filtered by $ W$, or, $ W\circledast X$. This smoothing reduces sidelobes associated with the rectangular window, which is the window one is using implicitly when a data frame $ x$ is considered time limited and therefore eligible for ``windowing'' (and zero-padding). See Chapter 8 and Book IV [70] for further discussion.

Correlation Theorem

Theorem: For all $ x,y\in{\bf C}^N$,

$\displaystyle \zbox {x\star y \;\longleftrightarrow\;\overline{X}\cdot Y}

where the correlation operation `$ \star$' was defined in §7.2.5.


Proof:

$\displaystyle (x\star y)_n \isdef \sum_{m=0}^{N-1}\overline{x(m)}y(n+m)$
$\displaystyle = \sum_{m=0}^{N-1}\hbox{\sc Flip}_m(\overline{x})\,y(n-m) = \left(\hbox{\sc Flip}(\overline{x})\circledast y\right)_n \;\longleftrightarrow\;\overline{X} \cdot Y.$

The last step follows from the convolution theorem and the result $ \hbox{\sc Flip}(\overline{x}) \;\longleftrightarrow\;\overline{X}$ from §7.4.2. Also, the summation range in the second line is equivalent to the range $ [N-1,0]$ because all indexing is modulo $ N$.
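The correlation theorem is also straightforward to verify numerically; a Python sketch with complex test signals (our own naive helpers, not from the text):

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def ccorr(x, y):
    """Circular cross-correlation: (x * y)(n) = sum_m conj(x(m)) y((n+m) mod N)."""
    N = len(x)
    return [sum(x[m].conjugate() * y[(n + m) % N] for m in range(N))
            for n in range(N)]

x = [1 + 1j, 2 - 1j, 0.5j, -1 + 0j]
y = [2 + 0j, -1 + 1j, 1 + 0j, 0.25j]
lhs = dft(ccorr(x, y))                                           # DFT(x * y)
rhs = [Xk.conjugate() * Yk for Xk, Yk in zip(dft(x), dft(y))]    # conj(X) . Y
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```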

Power Theorem

Theorem: For all $ x,y\in{\bf C}^N$,

$\displaystyle \zbox {\left<x,y\right> = \frac{1}{N}\left<X,Y\right>.}


Proof:

$\displaystyle \left<x,y\right> \isdef \sum_{n=0}^{N-1}x(n)\overline{y(n)} = \sum_{n=0}^{N-1}\left[\frac{1}{N}\sum_{k=0}^{N-1}X(k)e^{j\omega_k n}\right]\overline{y(n)} = \frac{1}{N}\sum_{k=0}^{N-1}X(k)\overline{\sum_{n=0}^{N-1}y(n)e^{-j\omega_k n}} = \frac{1}{N}\sum_{k=0}^{N-1}X(k)\overline{Y(k)} \isdef \frac{1}{N} \left<X,Y\right>.$

As mentioned in §5.8, physical power is energy per unit time.7.19 For example, when a force produces a motion, the power delivered is given by the force times the velocity of the motion. Therefore, if $ x(n)$ and $ y(n)$ are in physical units of force and velocity (or any analogous quantities such as voltage and current, etc.), then their product $ x(n)y(n)\isdeftext
f(n)v(n)$ is proportional to the power per sample at time $ n$, and $ \left<f,v\right>$ becomes proportional to the total energy supplied (or absorbed) by the driving force. By the power theorem, $ {F(k)}\overline{V(k)}/N$ can be interpreted as the energy per bin in the DFT, or spectral power, i.e., the energy associated with a spectral band of width $ 2\pi/N$.7.20
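The power theorem (and, setting $ y=x$, the Parseval relation of the next section) can be checked numerically; a Python sketch (our own naive helpers on small complex test signals):

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def inner(a, b):
    """<a, b> = sum_n a(n) conj(b(n))."""
    return sum(an * bn.conjugate() for an, bn in zip(a, b))

x = [1 + 1j, 2 + 0j, -1 + 0.5j, 0.5 - 2j]
y = [0 + 1j, 1 + 1j, 3 + 0j, -0.5 + 0j]
N = len(x)
X, Y = dft(x), dft(y)

assert abs(inner(x, y) - inner(X, Y) / N) < 1e-9   # power theorem
assert abs(inner(x, x) - inner(X, X) / N) < 1e-9   # Parseval special case (y = x)
```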

Normalized DFT Power Theorem

Note that the power theorem would be more elegant if the DFT were defined as the coefficient of projection onto the normalized DFT sinusoids

$\displaystyle \tilde{s}_k(n) \isdef \frac{s_k(n)}{\sqrt{N}}.

That is, for the normalized DFT (§6.10), the power theorem becomes simply

$\displaystyle \left<x,y\right> = \langle \tilde{X},\tilde{Y}\rangle$   (Normalized DFT case)$\displaystyle . \protect$

We see that the power theorem expresses the invariance of the inner product between two signals in the time and frequency domains. If we think of the inner product geometrically, as in Chapter 5, then this result is expected, because $ x$ and $ \tilde{X}$ are merely coordinates of the same geometric object (a signal) relative to two different sets of basis signals (the shifted impulses and the normalized DFT sinusoids).

Rayleigh Energy Theorem (Parseval's Theorem)

Theorem: For any $ x\in{\bf C}^N$,

$\displaystyle \zbox {\left\Vert\,x\,\right\Vert^2 = \frac{1}{N}\left\Vert\,X\,\right\Vert^2.}


$\displaystyle \zbox {\sum_{n=0}^{N-1}\left\vert x(n)\right\vert^2 = \frac{1}{N}\sum_{k=0}^{N-1}\left\vert X(k)\right\vert^2.}

Proof: This is a special case of the power theorem.

Note that again the relationship would be cleaner ( $ \left\Vert\,x\,\right\Vert = \vert\vert\,\tilde{X}\,\vert\vert $) if we were using the normalized DFT.

Stretch Theorem (Repeat Theorem)

Theorem: For all $ x\in{\bf C}^N$,

$\displaystyle \zbox {\hbox{\sc Stretch}_L(x) \;\longleftrightarrow\;\hbox{\sc Repeat}_L(X).}

Proof: Recall the stretch operator:

$\displaystyle \hbox{\sc Stretch}_{L,m}(x) \isdef \left\{\begin{array}{ll} x(m/L), & m/L=\mbox{integer} \\ [5pt] 0, & m/L\neq \mbox{integer} \\ \end{array} \right.$

Let $ y\isdeftext \hbox{\sc Stretch}_L(x)$, where $ y\in{\bf C}^M$, $ M=LN$. Also define the new denser frequency grid associated with length $ M$ by $ \omega^\prime_k \isdeftext 2\pi k/M$, and define $ \omega_k=2\pi k/N$ as usual. Then

$\displaystyle Y(k) \isdef \sum_{m=0}^{M-1} y(m) e^{-j\omega^\prime_k m}
= \sum_{n=0}^{N-1}x(n) e^{-j\omega^\prime_k nL}$   $\displaystyle \mbox{($n\isdef m/L$).}$


$\displaystyle \omega^\prime_k L \isdef \frac{2\pi k}{M} L = \frac{2\pi k}{N} = \omega_k .

Thus, $ Y(k)=X(k)$, and by the modulo indexing of $ X$, $ L$ copies of $ X$ are generated as $ k$ goes from 0 to $ M-1 = LN-1$.

Downsampling Theorem (Aliasing Theorem)

Theorem: For all $ x\in{\bf C}^N$,

$\displaystyle \zbox {\hbox{\sc Downsample}_L(x) \;\longleftrightarrow\;\frac{1}{L}\hbox{\sc Alias}_L(X).}

Proof: Let $ k^\prime \in[0,M-1]$ denote the frequency index in the aliased spectrum, and let $ Y(k^\prime )\isdef \hbox{\sc Alias}_{L,k^\prime }(X)$. Then $ Y$ is length $ M=N/L$, where $ L$ is the downsampling factor. We have

$\displaystyle Y(k^\prime) \isdef \hbox{\sc Alias}_{L,k^\prime}(X) \isdef \sum_{l=0}^{L-1}X(k^\prime + lM) = \sum_{l=0}^{L-1}\sum_{n=0}^{N-1}x(n) e^{-j2\pi (k^\prime + lM) n/N} = \sum_{n=0}^{N-1}x(n) e^{-j2\pi k^\prime n/N} \sum_{l=0}^{L-1}e^{-j2\pi l n M/N}.$

Since $ M/N=1/L$, the sum over $ l$ becomes

$\displaystyle \sum_{l=0}^{L-1}\left[e^{-j2\pi n/L}\right]^l = \left\{\begin{array}{ll} L, & n=0\;\left(\mbox{mod}\;L\right) \\ [5pt] 0, & n\neq 0\;\left(\mbox{mod}\;L\right) \\ \end{array} \right.$

using the closed form expression for a geometric series derived in §6.1. We see that the sum over $ l$ effectively samples $ x$ every $ L$ samples. This can be expressed in the previous formula by defining $ m\isdeftext n/L$, which ranges only over the nonzero samples:

$\displaystyle \hbox{\sc Alias}_{L,k^\prime}(X) = L\sum_{m=0}^{M-1}x(mL)\, e^{-j2\pi k^\prime mL/N} = L\sum_{m=0}^{M-1}\hbox{\sc Downsample}_{L,m}(x)\, e^{-j2\pi m k^\prime/M} = L\cdot \hbox{\sc DFT}_{k^\prime}(\hbox{\sc Downsample}_L(x)).$

Since the above derivation also works in reverse, the theorem is proved.

An illustration of aliasing in the frequency domain is shown in Fig.7.12.

Illustration of the Downsampling/Aliasing Theorem in Matlab

>> N=4;
>> x = 1:N;
>> X = fft(x);
>> x2 = x(1:2:N);
>> fft(x2)                         % FFT(Downsample(x,2))
ans =
    4   -2
>> (X(1:N/2) + X(N/2 + 1:N))/2     % (1/2) Alias(X,2)
ans =
    4   -2

Zero Padding Theorem (Spectral Interpolation)

A fundamental tool in practical spectrum analysis is zero padding. This theorem shows that zero padding in the time domain corresponds to ideal interpolation in the frequency domain (for time-limited signals):

Theorem: For any $ x\in{\bf C}^N$

$\displaystyle \zbox {\hbox{\sc ZeroPad}_{LN}(x) \;\longleftrightarrow\;\hbox{\sc Interp}_L(X)}

where $ \hbox{\sc ZeroPad}()$ was defined in Eq.$ \,$(7.4), followed by the definition of $ \hbox{\sc Interp}()$.

Proof: Let $ M=LN$ with $ L\geq 1$. Then

$\displaystyle \hbox{\sc DFT}_{M,k^\prime }(\hbox{\sc ZeroPad}_M(x))
= \sum_{n=0}^{M-1}\hbox{\sc ZeroPad}_{M,n}(x)\, e^{-j2\pi n k^\prime /M}
= \sum_{n=0}^{N-1}x(n) e^{-j\omega_{k^\prime } n}
\isdef X(\omega_{k^\prime }) = \hbox{\sc Interp}_{L,k^\prime }(X),
$

where $ \omega_{k^\prime }\isdef 2\pi k^\prime /M$.

Thus, this theorem follows directly from the definition of the ideal interpolation operator $ \hbox{\sc Interp}()$. See §8.1.3 for an example of zero-padding in spectrum analysis.
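The theorem is easy to check numerically. The following is a sketch in Python/NumPy (standing in for the book's Matlab examples), assuming trailing zero-padding: appending $N(L-1)$ zeros makes the DFT sample the same underlying DTFT $L$ times more densely, so every $L$-th bin of the interpolated spectrum equals the original DFT.

```python
import numpy as np

# Numerical check of the zero-padding theorem (NumPy sketch).
N, L = 4, 3
x = np.arange(1.0, N + 1)                        # test signal x = [1 2 3 4]
X = np.fft.fft(x)                                # original length-N spectrum
xz = np.concatenate([x, np.zeros(N * (L - 1))])  # ZeroPad_{LN}(x), trailing zeros
Xz = np.fft.fft(xz)                              # Interp_L(X), length LN
assert np.allclose(Xz[::L], X)                   # bins k' = L*k reproduce X(k)
```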

Periodic Interpolation (Spectral Zero Padding)

The dual of the zero-padding theorem states formally that zero padding in the frequency domain corresponds to periodic interpolation in the time domain:

Definition: For all $ x\in{\bf C}^N$ and any integer $ L\geq 1$,

$\displaystyle \zbox {\hbox{\sc PerInterp}_L(x) \isdef \hbox{\sc IDFT}(\hbox{\sc ZeroPad}_{LN}(X))} \protect$ (7.7)

where zero padding is defined in §7.2.7 and illustrated in Figure 7.7. In other words, zero-padding a DFT by the factor $ L$ in the frequency domain (by inserting $ N(L-1)$ zeros at bin number $ k=N/2$ corresponding to the folding frequency7.21) gives rise to ``periodic interpolation'' by the factor $ L$ in the time domain. It is straightforward to show that the interpolation kernel used in periodic interpolation is an aliased sinc function, that is, a sinc function $ \sin(\pi n/L)/(\pi n/L)$ that has been time-aliased on a block of length $ NL$. Such an aliased sinc function is of course periodic with period $ NL$ samples. See Appendix D for a discussion of ideal bandlimited interpolation, in which the interpolating sinc function is not aliased.

Periodic interpolation is ideal for signals that are periodic in $ N$ samples, where $ N$ is the DFT length. For non-periodic signals, which is almost always the case in practice, bandlimited interpolation should be used instead (Appendix D).
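The frequency-domain zero padding in Eq. (7.7) can be sketched as follows in Python/NumPy (the function name `perinterp` is mine, not the book's). Zeros are inserted at bin $k=N/2$, with the Nyquist bin split evenly so the padded spectrum remains Hermitian for real input; the result is scaled by $L$ to preserve amplitude.

```python
import numpy as np

# Sketch of PerInterp_L(x) = IDFT(ZeroPad_{LN}(X)), Eq.(7.7); assumes N even.
def perinterp(x, L):
    N = len(x)
    X = np.fft.fft(x)
    h = N // 2
    # Insert N*(L-1) zeros at the folding frequency k = N/2, splitting
    # the Nyquist bin so the spectrum stays Hermitian for real x:
    Xz = np.concatenate([X[:h], [X[h] / 2],
                         np.zeros(N * (L - 1) - 1),
                         [X[h] / 2], X[h + 1:]])
    return L * np.fft.ifft(Xz)      # scale by L to preserve amplitude

x = np.cos(2 * np.pi * np.arange(8) / 8)   # one period of a sinusoid
y = perinterp(x, 2)
# Every L-th sample of the interpolation reproduces the original signal:
assert np.allclose(y[::2], x)
```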

Relation to Stretch Theorem

It is instructive to interpret the periodic interpolation theorem in terms of the stretch theorem, $ \hbox{\sc Stretch}_L(x) \;\longleftrightarrow\;\hbox{\sc Repeat}_L(X)$. To do this, it is convenient to define a ``zero-centered rectangular window'' operator:

Definition: For any $ X\in{\bf C}^N$ and any odd integer $ M<N$ we define the length $ M$ even rectangular windowing operation by

$\displaystyle \hbox{\sc Chop}_{M,k}(X) \isdef
\left\{\begin{array}{ll}
X(k), & \left\vert k\right\vert \leq \frac{M-1}{2} \\ [5pt]
0, & \frac{M+1}{2} \leq \left\vert k\right\vert \leq \frac{N}{2}. \\
\end{array} \right.
$

Thus, this ``zero-phase rectangular window,'' when applied to a spectrum $ X$, sets the spectrum to zero everywhere outside a zero-centered interval of $ M$ samples. Note that $ \hbox{\sc Chop}_M(X)$ is the ideal lowpass filtering operation in the frequency domain. The ``cut-off frequency'' is $ \omega_c = 2\pi[(M-1)/2]/N$ radians per sample. For even $ M$, we allow $ X(-M/2)$ to be ``passed'' by the window, but in our usage (below), this sample should always be zero anyway. With this notation defined we can efficiently restate periodic interpolation in terms of the $ \hbox{\sc Stretch}()$ operator:

Theorem: When $ x\in{\bf C}^N$ consists of one or more periods from a periodic signal $ x^\prime\in {\bf C}^\infty$,

$\displaystyle \zbox {\hbox{\sc PerInterp}_L(x) = \hbox{\sc IDFT}(\hbox{\sc Chop}_N(\hbox{\sc DFT}(\hbox{\sc Stretch}_L(x)))).}

In other words, ideal periodic interpolation of one period of $ x$ by the integer factor $ L$ may be carried out by first stretching $ x$ by the factor $ L$ (inserting $ L-1$ zeros between adjacent samples of $ x$), taking the DFT, applying the ideal lowpass filter as an $ N$-point rectangular window in the frequency domain, and performing the inverse DFT.

Proof: First, recall that $ \hbox{\sc Stretch}_L(x)\leftrightarrow \hbox{\sc Repeat}_L(X)$. That is, stretching a signal by the factor $ L$ gives a new signal $ y=\hbox{\sc Stretch}_L(x)$ which has a spectrum $ Y$ consisting of $ L$ copies of $ X$ repeated around the unit circle. The ``baseband copy'' of $ X$ in $ Y$ can be defined as the $ N$-sample sequence centered about frequency zero. Therefore, we can use an ``ideal filter'' to ``pass'' the baseband spectral copy and zero out all others, thereby converting $ \hbox{\sc Repeat}_L(X)$ to $ \hbox{\sc ZeroPad}_{LN}(X)$. I.e.,

$\displaystyle \hbox{\sc Chop}_N(\hbox{\sc Repeat}_L(X)) = \hbox{\sc ZeroPad}_{LN}(X)
\;\longleftrightarrow\;\hbox{\sc Interp}_L(x).

The last step is provided by the zero-padding theorem (§7.4.12).
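The Stretch/Chop route to periodic interpolation can likewise be verified numerically. Below is a Python/NumPy sketch (helper names `stretch` and `chop` are mine); for simplicity the window length is the odd value $N-1$ rather than $N$, which is harmless here because bin $-N/2$ of the repeated spectrum is zero for this test signal.

```python
import numpy as np

# Sketch: PerInterp_L(x) = IDFT(Chop_N(DFT(Stretch_L(x)))), up to a scale of L.
def stretch(x, L):
    y = np.zeros(len(x) * L, dtype=complex)
    y[::L] = x                      # insert L-1 zeros between samples
    return y

def chop(X, M):
    # Zero-phase rectangular window: keep M bins centered on frequency zero.
    N = len(X)
    Y = np.zeros(N, dtype=complex)
    h = (M - 1) // 2                # assume M odd for simplicity
    Y[:h + 1] = X[:h + 1]           # bins 0..h (nonnegative frequencies)
    Y[N - h:] = X[N - h:]           # bins -h..-1 (negative frequencies)
    return Y

N, L = 8, 2
x = np.cos(2 * np.pi * np.arange(N) / N)    # one period of a sinusoid
Y = chop(np.fft.fft(stretch(x, L)), N - 1)  # pass only the baseband copy
y = L * np.fft.ifft(Y)                      # scale by L to restore amplitude
assert np.allclose(y[::L], x, atol=1e-12)   # original samples recovered
```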

Bandlimited Interpolation of Time-Limited Signals

The previous result can be extended to bandlimited interpolation of a signal $ x\in{\bf C}^{N_x}$ containing all nonzero samples of an arbitrary time-limited signal $ x^\prime\in {\bf C}^\infty$ (i.e., going beyond interpolation of only periodic bandlimited signals given one or more periods $ x\in{\bf C}^N$) by

  1. replacing the rectangular window $ \hbox{\sc Chop}_N()$ with a smoother spectral window $ H(\omega)$, and
  2. using extra zero-padding in the time domain to convert the cyclic convolution between $ \hbox{\sc Stretch}_L(x)$ and $ h$ into an acyclic convolution between them (recall §7.2.4).
The smoother spectral window $ H$ can be thought of as the frequency response of the FIR7.22 filter $ h$ used as the bandlimited interpolation kernel in the time domain. The number of zeros needed in the zero-padding of $ x$ in the time domain is simply the length of $ h$ minus 1, and the number of zeros to be appended to $ h$ is the length of $ \hbox{\sc Stretch}_L(x)$ minus 1. With this much zero-padding, the cyclic convolution of $ x$ and $ h$ implemented using the DFT becomes equivalent to acyclic convolution, as desired for the time-limited signals $ x$ and $ h$. Thus, if $ N_x$ denotes the nonzero length of $ x$, then the nonzero length of $ \hbox{\sc Stretch}_L(x)$ is $ L(N_x-1)+1$, and we require the DFT length to be $ N\geq L(N_x-1)+N_h$, where $ N_h$ is the filter length. In operator notation, we can express bandlimited sampling-rate up-conversion by the factor $ L$ for time-limited signals $ x\in{\bf C}^{N_x}$ by

$\displaystyle \zbox {\hbox{\sc Interp}_L(x) \approx \hbox{\sc IDFT}(H\cdot\hbox{\sc DFT}(\hbox{\sc ZeroPad}_{N}(\hbox{\sc Stretch}_L(x)))).} \protect$ (7.8)

The approximation symbol `$ \approx$' approaches equality as the spectral window $ H$ approaches $ \hbox{\sc Chop}_{N_x}([1,\dots,1])$ (the frequency response of the ideal lowpass filter passing only the original spectrum $ X$), while at the same time allowing no time aliasing (convolution remains acyclic in the time domain).

Equation (7.8) can provide the basis for a high-quality sampling-rate conversion algorithm. Arbitrarily long signals can be accommodated by breaking them into segments of length $ N_x$, applying the above algorithm to each block, and summing the up-sampled blocks using overlap-add. That is, the lowpass filter $ h$ ``rings'' into the next block and possibly beyond (or even into both adjacent time blocks when $ h$ is not causal), and this ringing must be summed into all affected adjacent blocks. Finally, the filter $ H$ can ``window away'' more than the top $ L-1$ copies of $ X$ in $ Y$, thereby preparing the time-domain signal for downsampling, say by $ M\in{\bf Z}$:

$\displaystyle \zbox {\hbox{\sc Interp}_{L/M}(x) \approx \hbox{\sc Downsample}_M(\hbox{\sc IDFT}(H\cdot\hbox{\sc DFT}(\hbox{\sc ZeroPad}_{N}(\hbox{\sc Stretch}_L(x)))))}
$

where now the lowpass filter frequency response $ H$ must be close to zero for all $ \vert\omega_k\vert\geq\pi/\max(L,M)$. While such a sampling-rate conversion algorithm can be made more efficient by using an FFT in place of the DFT (see Appendix A), it is not necessarily the most efficient algorithm possible. This is because (1) $ M-1$ out of $ M$ output samples from the IDFT need not be computed at all, and (2) $ \hbox{\sc Stretch}_L(x)$ has many zeros in it which do not need explicit handling. For an introduction to time-domain sampling-rate conversion (bandlimited interpolation) algorithms which take advantage of points (1) and (2) in this paragraph, see, e.g., Appendix D and [72].
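The rational-factor converter can be sketched as follows in Python/NumPy (the function name `resample` is mine). An ideal rectangular $H$ with gain $L$ is used on a periodic test signal, so the `$\approx$' of the formula becomes exact here; a practical converter would use a smoother FIR response $H$ plus the zero-padding described above.

```python
import numpy as np

# Sketch of Interp_{L/M}(x) = Downsample_M(IDFT(H . DFT(Stretch_L(x)))),
# using an ideal rectangular H (gain L, cutoff pi/max(L,M)) on a periodic input.
def resample(x, L, M):
    N = len(x) * L                       # DFT length after stretching
    xs = np.zeros(N, dtype=complex)
    xs[::L] = x                          # Stretch_L(x)
    X = np.fft.fft(xs)                   # Repeat_L of the original spectrum
    H = np.zeros(N)
    cutoff = N // (2 * max(L, M))        # pass |w_k| < pi/max(L,M)
    H[:cutoff] = L                       # nonnegative frequencies, gain L
    H[N - cutoff + 1:] = L               # negative frequencies, gain L
    y = np.fft.ifft(H * X)               # baseband copy only
    return y[::M]                        # Downsample_M

x = np.cos(2 * np.pi * np.arange(8) / 8)     # one period at 8 samples
y = resample(x, 3, 2)                        # rate change by 3/2 -> 12 samples
assert np.allclose(y, np.cos(2 * np.pi * np.arange(12) / 12), atol=1e-9)
```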

DFT Theorems Problems

See http://ccrma.stanford.edu/~jos/mdftp/DFT_Theorems_Problems.html
