Signal Operators

It will be convenient in the Fourier theorems of §7.4 to make use of the following signal operator definitions.

Operator Notation

In this book, an operator is defined as a signal-valued function of a signal. Thus, for the space of length $ N$ complex sequences, an operator $ \hbox{\sc Op}$ is a mapping from $ {\bf C}^N$ to $ {\bf C}^N$:

$\displaystyle \hbox{\sc Op}(x) \in{\bf C}^N\, \forall x\in{\bf C}^N
$

An example is the DFT operator:

$\displaystyle \hbox{\sc DFT}(x) = X
$

The argument to an operator is always an entire signal. However, its output may be subscripted to obtain a specific sample, e.g.,

$\displaystyle \hbox{\sc DFT}_k(x) = X(k).
$

Some operators require one or more parameters affecting their definition. For example, the shift operator (defined in §7.2.3 below) requires a shift amount $ \Delta\in{\bf Z}$:

$\displaystyle \hbox{\sc Shift}_{\Delta,n}(x) \isdef x(n-\Delta)
$

A time or frequency index, if present, will always be the last subscript. Thus, the signal $ \hbox{\sc Shift}_{\Delta}(x)$ is obtained from $ x$ by shifting it $ \Delta$ samples.

Note that operator notation is not standard in the field of digital signal processing. It can be regarded as being influenced by the field of computer science. In the Fourier theorems below, both operator and conventional signal-processing notations are provided. In the author's opinion, operator notation is consistently clearer, allowing powerful expressions to be written naturally in one line (e.g., see Eq.$ \,$(7.8)), and it is much closer to how things look in a readable computer program (such as in the matlab language).


Flip Operator

We define the flip operator by

$\displaystyle \hbox{\sc Flip}_n(x) \isdef x(-n),$ (7.1)

for all sample indices $ n\in{\bf Z}$. By modulo indexing, $ x(-n)$ is the same as $ x(N-n)$. The $ \hbox{\sc Flip}()$ operator reverses the order of samples $ 1$ through $ N-1$ of a sequence, leaving sample 0 alone, as shown in Fig.7.1a. Thanks to modulo indexing, it can also be viewed as ``flipping'' the sequence about time 0, as shown in Fig.7.1b. The interpretation of Fig.7.1b is usually the one we want, and the $ \hbox{\sc Flip}$ operator is usually thought of as ``time reversal'' when applied to a signal $ x$ or ``frequency reversal'' when applied to a spectrum $ X$.

Figure 7.1: Illustration of $ \hbox{\sc Flip}(x)$: a) samples $ 1$ through $ N-1$ reversed, sample 0 unchanged; b) the same operation viewed as time reversal about sample 0.
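For concreteness, here is a minimal matlab/Octave sketch of the flip operator; the helper name flipop is ours, not a standard function:

  function y = flipop(x)
    % Flip_n(x) = x(-n), with indices taken modulo N.
    N = length(x);
    n = 0:N-1;                  % zero-based time indices
    y = x(mod(-n, N) + 1);      % "+ 1" converts to matlab's one-based indexing
  end
  % Example: flipop([0 1 2 3]) returns [0 3 2 1] (sample 0 stays put)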


Shift Operator

The shift operator is defined by

$\displaystyle \hbox{\sc Shift}_{\Delta,n}(x) \isdef x(n-\Delta), \quad \Delta\in{\bf Z},
$

and $ \hbox{\sc Shift}_{\Delta}(x)$ denotes the entire shifted signal. Note that since indexing is modulo $ N$, the shift is circular (or ``cyclic''). However, we normally use it to represent time delay by $ \Delta$ samples. We often use the shift operator in conjunction with zero padding (appending zeros to the signal $ x$, §7.2.7) in order to avoid the ``wrap-around'' associated with a circular shift.

Figure 7.2: Successive one-sample shifts of a sampled periodic sawtooth waveform having first period $ [0,1,2,3,4]$.

Figure 7.2 illustrates successive one-sample delays of a periodic signal having first period given by $ [0,1,2,3,4]$.

Examples

  • $ \hbox{\sc Shift}_1([1,0,0,0]) = [0,1,0,0]\;$ (an impulse delayed one sample).

  • $ \hbox{\sc Shift}_1([1,2,3,4]) = [4,1,2,3]\;$ (a circular shift example).

  • $ \hbox{\sc Shift}_{-2}([1,0,0,0]) = [0,0,1,0]\;$ (another circular shift example).
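These examples are easy to reproduce with a minimal matlab/Octave sketch of the circular shift; the helper name shiftop is ours (for vectors, the built-in circshift(x,Delta) computes the same thing):

  function y = shiftop(x, Delta)
    % Shift_{Delta,n}(x) = x(n - Delta), with indices taken modulo N.
    N = length(x);
    n = 0:N-1;
    y = x(mod(n - Delta, N) + 1);
  end
  % shiftop([1 0 0 0],  1) returns [0 1 0 0]
  % shiftop([1 2 3 4],  1) returns [4 1 2 3]
  % shiftop([1 0 0 0], -2) returns [0 0 1 0]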


Convolution

The convolution of two signals $ x$ and $ y$ in $ {\bf C}^N$ may be denoted `` $ x\circledast y$'' and defined by

$\displaystyle \zbox {(x\circledast y)_n \isdef \sum_{m=0}^{N-1}x(m) y(n-m)}
$

Note that this is circular convolution (or ``cyclic'' convolution). The importance of convolution in linear systems theory is discussed in §8.3.

Cyclic convolution can be expressed in terms of previously defined operators as

$\displaystyle y(n) \isdef (x\circledast h)_n \isdef \sum_{m=0}^{N-1}x(m)h(n-m) =
\left<x,\hbox{\sc Shift}_n(\hbox{\sc Flip}(h))\right>$   $\displaystyle \mbox{($h$\ real)}$

where $ x,y\in{\bf C}^N$ and $ h\in{\bf R}^N$. This expression suggests graphical convolution, discussed below in §7.2.4.
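As a reference implementation, here is a direct matlab/Octave sketch of cyclic convolution; the helper name cconvn is ours (the Signal Processing Toolbox function cconv(x,h,N) computes the same thing):

  function y = cconvn(x, h)
    % (x (*) h)_n = sum over m of x(m) h(n-m), indices modulo N.
    N = length(x);              % x and h are assumed to have equal length N
    y = zeros(1, N);
    m = 0:N-1;
    for n = 0:N-1
      y(n+1) = sum( x(m+1) .* h(mod(n-m, N) + 1) );
    end
  end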

Commutativity of Convolution

Convolution (cyclic or acyclic) is commutative, i.e.,

$\displaystyle \zbox {x\circledast y = y\circledast x .}
$


Proof:

\begin{eqnarray*}
(x\circledast y)_n &\isdef & \sum_{m=0}^{N-1}x(m) y(n-m)
= \sum_{l=n-N+1}^{n} x(n-l) y(l) \\
&=& \sum_{l=0}^{N-1}y(l) x(n-l) \\
&\isdef & (y \circledast x)_n
\end{eqnarray*}

In the first step we made the change of summation variable $ l\isdeftext n-m$, and in the second step we used the fact that, thanks to modulo indexing, any sum over $ N$ successive terms is equivalent to a sum from 0 to $ N-1$.


Convolution as a Filtering Operation

In a convolution of two signals $ x\circledast y$, where both $ x$ and $ y$ are signals of length $ N$ (real or complex), we may interpret either $ x$ or $ y$ as a filter that operates on the other signal, which is in turn interpreted as the filter's ``input signal''. Let $ h\in{\bf C}^N$ denote a length $ N$ signal that is interpreted as a filter. Then given any input signal $ x\in{\bf C}^N$, the filter output signal $ y\in{\bf C}^N$ may be defined as the cyclic convolution of $ x$ and $ h$:

$\displaystyle y = h\circledast x = x \circledast h
$

Because the convolution is cyclic, with $ x$ and $ h$ chosen from the set of (periodically extended) vectors of length $ N$, $ h(n)$ is most precisely viewed as the impulse-train-response of the associated filter at time $ n$. Specifically, the impulse-train response $ h\in{\bf C}^N$ is the response of the filter to the impulse-train signal $ \delta\isdeftext [1,0,\ldots,0]\in{\bf R}^N$, which, by periodic extension, is equal to

$\displaystyle \delta(n) = \left\{\begin{array}{ll}
1, & n=0\;\mbox{(mod $N$)} \\ [5pt]
0, & n\ne 0\;\mbox{(mod $N$)}. \\
\end{array} \right.
$

Thus, $ N$ is the period of the impulse-train in samples--there is an ``impulse'' (a `$ 1$') every $ N$ samples. Neglecting the assumed periodic extension of all signals in $ {\bf C}^N$, we may refer to $ \delta$ more simply as the impulse signal, and $ h$ as the impulse response (as opposed to impulse-train response). In contrast, for the DTFT (§B.1), in which the discrete-time axis is infinitely long, the impulse signal $ \delta(n)$ is defined as

$\displaystyle \delta(n) \isdef \left\{\begin{array}{ll}
1, & n=0 \\ [5pt]
0, & n\ne 0 \\
\end{array} \right.
$

and no periodic extension arises.

As discussed below (§7.2.7), one may embed acyclic convolution within a larger cyclic convolution. In this way, real-world systems may be simulated using fast DFT convolutions (see Appendix A for more on fast convolution algorithms).

Note that only linear, time-invariant (LTI) filters can be completely represented by their impulse response (the filter output in response to an impulse at time 0). The convolution representation of LTI digital filters is fully discussed in Book II [68] of the music signal processing book series (in which this is Book I).


Convolution Example 1: Smoothing a Rectangular Pulse

Figure 7.3: Illustration of the convolution of a rectangular pulse $ x=[0,0,0,0,1,1,1,1,1,1,0,0,0,0]$ and the impulse response of an ``averaging filter'' $ h=[1/3,1/3,1/3,0,0,0,0,0,0,0,0,0,0,0]$ ($ N=14$): a) filter input signal $ x(n)$; b) filter impulse response $ h(n)$; c) filter output signal $ y(n)$.


Figure 7.3 illustrates convolution of

$\displaystyle x = [0,0,0,0,1,1,1,1,1,1,0,0,0,0]
$

with

$\displaystyle h = \left[\frac{1}{3},\frac{1}{3},\frac{1}{3},0,0,0,0,0,0,0,0,0,0,0\right]
$

to get

$\displaystyle y = x\circledast h = \left[0,0,0,0,\frac{1}{3},\frac{2}{3},1,1,1,1,\frac{2}{3},\frac{1}{3},0,0\right]$ (7.2)

as graphed in Fig.7.3(c). In this case, $ h$ can be viewed as a ``moving three-point average'' filter. Note how the corners of the rectangular pulse are ``smoothed'' by the three-point filter. Also note that the pulse is smeared to the ``right'' (forward in time) because the filter impulse response starts at time zero. Such a filter is said to be causal (see [68] for details). By shifting the impulse response left one sample to get

$\displaystyle h=\left[\frac{1}{3},\frac{1}{3},0,0,0,0,0,0,0,0,0,0,0,\frac{1}{3}\right]
$

(in which case $ \hbox{\sc Flip}(h)=h$), we obtain a noncausal filter $ h$ which is symmetric about time zero so that the input signal is smoothed ``in place'' with no added delay (imagine Fig.7.3(c) shifted left one sample, in which case the input pulse edges align with the midpoint of the rise and fall in the output signal).
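Equation (7.2) is easy to verify numerically, for example by computing the cyclic convolution via FFTs (anticipating the convolution theorem of §7.4):

  x = [0 0 0 0 1 1 1 1 1 1 0 0 0 0];     % N = 14
  h = [1/3 1/3 1/3 zeros(1, 11)];
  y = real(ifft(fft(x) .* fft(h)))       % real() strips round-off residue
  % y = [0 0 0 0 1/3 2/3 1 1 1 1 2/3 1/3 0 0], matching Eq.(7.2)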


Convolution Example 2: ADSR Envelope

Figure 7.4: Illustration of the convolution of two rectangular pulses with a truncated exponential impulse response: a) filter input signal $ x(n)$; b) filter impulse response $ h(n)$; c) filter output signal $ y(n)$.


In this example, the input signal is a sequence of two rectangular pulses, creating a piecewise constant function, depicted in Fig.7.4(a). The filter impulse response, shown in Fig.7.4(b), is a truncated exponential.

In this example, $ h$ is again a causal smoothing-filter impulse response, and we could call it a ``moving weighted average'', in which the weighting is exponential into the past. The discontinuous steps in the input become new ``asymptotes'' in the output, which the output approaches exponentially. The overall appearance of the output signal resembles what is called an attack, decay, sustain, release envelope, or ADSR envelope for short. In a practical ADSR envelope, the time-constants for attack, decay, and release may be set independently. In this example, there is only one time constant, that of $ h$. The two constant levels in the input signal may be called the attack level and the sustain level, respectively. Thus, the envelope approaches the attack level at the attack rate (where the ``rate'' may be defined as the reciprocal of the time constant), it next approaches the sustain level at the ``decay rate'', and finally, it approaches zero at the ``release rate''. These envelope parameters are commonly used in analog synthesizers and their digital descendants, so-called virtual analog synthesizers. Such an ADSR envelope is typically used to multiply the output of a waveform oscillator such as a sawtooth or pulse-train oscillator. For more on virtual analog synthesis, see, for example, [78,77].


Convolution Example 3: Matched Filtering

Figure 7.5: Illustration of convolution of $ y=[1,1,1,1,0,0,0,0]$ and ``matched filter'' $ h=\hbox{\sc Flip}(y)=[1,0,0,0,0,1,1,1]$ ($ N=8$).

Figure 7.5 illustrates convolution of

\begin{eqnarray*}
y&=&[1,1,1,1,0,0,0,0] \\
h&=&[1,0,0,0,0,1,1,1]
\end{eqnarray*}

to get

$\displaystyle y\circledast h = [4,3,2,1,0,1,2,3].$ (7.3)

For example, $ y$ could be a ``rectangularly windowed signal, zero-padded by a factor of 2,'' where the signal happened to be dc (all $ 1$s). For the convolution, we need

$\displaystyle \hbox{\sc Flip}(h) = [1,1,1,1,0,0,0,0]
$

which is the same as $ y$. When $ h=\hbox{\sc Flip}(y)$, we say that $ h$ is a matched filter for $ y$. In this case, $ h$ is matched to look for a ``dc component,'' and also zero-padded by a factor of $ 2$. The zero-padding serves to simulate acyclic convolution using circular convolution. Note from Eq.$ \,$(7.3) that the maximum is obtained in the convolution output at time 0. This peak (the largest possible if all input signals are limited to $ [-1,1]$ in magnitude) indicates that the matched filter has ``found'' the dc signal starting at time 0. This peak would persist in the presence of some amount of noise and/or interference from other signals. Thus, matched filtering is useful for detecting known signals in the presence of noise and/or interference [34].
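Equation (7.3) can be checked the same way as in Example 1:

  y = [1 1 1 1 0 0 0 0];
  h = [y(1) fliplr(y(2:end))];           % Flip(y) = [1 0 0 0 0 1 1 1]
  real(ifft(fft(y) .* fft(h)))           % returns [4 3 2 1 0 1 2 3]; peak at lag 0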


Graphical Convolution

As mentioned above, cyclic convolution can be written as

$\displaystyle y(n) \isdef (x\circledast h)_n \isdef \sum_{m=0}^{N-1}x(m)h(n-m) =
\left<x,\hbox{\sc Shift}_n(\hbox{\sc Flip}(h))\right>$   $\displaystyle \mbox{($h$\ real)}$

where $ x,y\in{\bf C}^N$ and $ h\in{\bf R}^N$. It is instructive to interpret this expression graphically, as depicted in Fig.7.5 above. The convolution result at time $ n=0$ is the inner product of $ x$ and $ \hbox{\sc Flip}(h)$, or $ y(0)=\left<x,\hbox{\sc Flip}(h)\right>$. For the next time instant, $ n=1$, we shift $ \hbox{\sc Flip}(h)$ one sample to the right and repeat the inner product operation to obtain $ y(1)=\left<x,\hbox{\sc Shift}_1(\hbox{\sc Flip}(h))\right>$, and so on. To capture the cyclic nature of the convolution, $ x$ and $ \hbox{\sc Shift}_n(\hbox{\sc Flip}(h))$ can be imagined plotted on a cylinder. Thus, Fig.7.5 shows the cylinder after being ``cut'' along the vertical line between $ n=N-1$ and $ n=0$ and ``unrolled'' to lay flat.


Polynomial Multiplication

Note that when you multiply two polynomials together, their coefficients are convolved. To see this, let $ p(x)$ denote the $ m$th-order polynomial

$\displaystyle p(x) = p_0 + p_1 x + p_2 x^2 + \cdots + p_m x^m
$

with coefficients $ p_i$, and let $ q(x)$ denote the $ n$th-order polynomial

$\displaystyle q(x) = q_0 + q_1 x + q_2 x^2 + \cdots + q_n x^n
$

with coefficients $ q_i$. Then we have [1]

\begin{eqnarray*}
p(x) q(x) &=& p_0 q_0 + (p_0 q_1 + p_1 q_0) x + (p_0 q_2 + p_1 q_1 + p_2 q_0) x^2 + \cdots \\
& & \mathop{+} (p_0 q_{n+m} + p_1 q_{n+m-1} + \cdots + p_{n+m-1} q_1 + p_{n+m} q_0) x^{n+m}.
\end{eqnarray*}

Denoting $ p(x) q(x)$ by

$\displaystyle r(x) \isdef p(x) q(x) = r_0 + r_1 x + r_2 x^2 + \cdots + r_{m+n} x^{m+n},
$

we have that the $ i$th coefficient can be expressed as

\begin{eqnarray*}
r_i &=& p_0 q_i + p_1 q_{i-1} + p_2 q_{i-2} + \cdots + p_{i-1} q_1 + p_i q_0 \\
&=& \sum_{j=-\infty}^{\infty} p_j q_{i-j} \\
&\isdef & (p \circledast q)(i),
\end{eqnarray*}

where $ p_i$ and $ q_i$ are regarded as doubly infinite sequences that are zero outside their defined ranges, i.e., $ p_i=0$ for $ i<0$ or $ i>m$, and $ q_i=0$ for $ i<0$ or $ i>n$. Note that this is an acyclic (not circular) convolution of the coefficient sequences.
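In matlab/Octave, this acyclic convolution of coefficient sequences is exactly what the built-in conv function computes, so polynomial multiplication is a one-liner (coefficients are listed here in ascending powers to match the text; conv itself does not care about the ordering convention):

  p = [1 2 3];              % p(x) = 1 + 2x + 3x^2
  q = [4 5];                % q(x) = 4 + 5x
  r = conv(p, q)            % [4 13 22 15], i.e., 4 + 13x + 22x^2 + 15x^3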


Multiplication of Decimal Numbers

Since decimal numbers are implicitly just polynomials in the powers of 10, e.g.,

$\displaystyle 3819 = 3\cdot 10^3 + 8\cdot 10^2 + 1\cdot 10^1 + 9\cdot 10^0,
$

it follows that multiplying two numbers convolves their digits. The only twist is that, unlike normal polynomial multiplication, we have carries. That is, when a convolution output (digit sum) reaches 10 or more, we keep only its ones digit and carry the tens into the next higher place.
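As a sketch, here is the digit-convolution view of 123 times 45 in matlab/Octave, with digits stored least-significant first; the carry loop is the only step beyond plain convolution:

  a = [3 2 1];  b = [5 4];          % 123 and 45, ones digit first
  r = conv(a, b);                   % [15 22 13 4]: lagged products, no carries yet
  c = 0;
  for k = 1:length(r)
    t = r(k) + c;
    r(k) = mod(t, 10);              % keep the ones digit
    c = floor(t / 10);              % carry the rest
  end
  while c > 0
    r(end+1) = mod(c, 10);
    c = floor(c / 10);
  end
  % r = [5 3 5 5], i.e., 5535 = 123 * 45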


Correlation

The correlation operator for two signals $ x$ and $ y$ in $ {\bf C}^N$ is defined as

$\displaystyle \zbox {(x\star y)_n \isdef \sum_{m=0}^{N-1}\overline{x(m)} y(m+n)}
$

We may interpret the correlation operator as

$\displaystyle (x\star y)_n = \left<\hbox{\sc Shift}_{-n}(y), x\right>
$

which is $ \Vert x\Vert^2$ times the coefficient of projection onto $ x$ of $ y$ advanced by $ n$ samples (shifted circularly to the left by $ n$ samples); $ \Vert x\Vert^2$ equals $ N$ when $ \vert x(m)\vert=1$ for all $ m$, as for the DFT sinusoids. The time shift $ n$ is called the correlation lag, and $ \overline{x(m)}\,y(m+n)$ is called a lagged product. Applications of correlation are discussed in §8.4.
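A direct matlab/Octave sketch of the cyclic correlation (the helper name ccorrn is ours), together with its FFT form, which follows from the correlation theorem among the Fourier theorems of §7.4:

  function r = ccorrn(x, y)
    % (x star y)_n = sum over m of conj(x(m)) y(m+n), indices modulo N.
    N = length(x);
    m = 0:N-1;
    r = zeros(1, N);
    for n = 0:N-1
      r(n+1) = sum( conj(x(m+1)) .* y(mod(m+n, N) + 1) );
    end
  end
  % Equivalently: r = ifft( conj(fft(x)) .* fft(y) )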


Stretch Operator

Unlike all previous operators, the $ \hbox{\sc Stretch}_L()$ operator maps a length $ N$ signal to a length $ M\isdeftext LN$ signal, where $ L$ and $ N$ are integers. We use ``$ m$'' instead of ``$ n$'' as the time index to underscore this fact.

Figure 7.6: Illustration of $ \hbox{\sc Stretch}_3(x)$.

A stretch by factor $ L$ is defined by

$\displaystyle \hbox{\sc Stretch}_{L,m}(x) \isdef
\left\{\begin{array}{ll}
x(m/L), & m/L\mbox{ an integer} \\ [5pt]
0, & m/L\mbox{ non-integer.} \\
\end{array} \right.
$

Thus, to stretch a signal by the factor $ L$, insert $ L-1$ zeros between each pair of samples. An example of a stretch by factor three is shown in Fig.7.6. The example is

$\displaystyle \hbox{\sc Stretch}_3([4,1,2]) = [4,0,0,1,0,0,2,0,0].
$

The stretch operator is used to describe and analyze upsampling, that is, increasing the sampling rate by an integer factor. A stretch by $ L$ followed by lowpass filtering to the frequency band $ \omega\in(-\pi/L,\pi/L)$ implements ideal bandlimited interpolation (introduced in Appendix D).
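A two-line matlab/Octave sketch (the helper name stretchop is ours):

  function y = stretchop(x, L)
    y = zeros(1, L*length(x));      % L-1 zeros between the original samples
    y(1:L:end) = x;                 % originals land at m = 0, L, 2L, ... (zero-based)
  end
  % stretchop([4 1 2], 3) returns [4 0 0 1 0 0 2 0 0]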


Zero Padding

Zero padding consists of extending a signal (or spectrum) with zeros. It maps a length $ N$ signal to a length $ M>N$ signal, but $ N$ need not divide $ M$.


Definition:

$\displaystyle \hbox{\sc ZeroPad}_{M,m}(x) \isdef \left\{\begin{array}{ll}
x(m), & \vert m\vert < N/2 \\ [5pt]
0, & \mbox{otherwise} \\
\end{array} \right.$ (7.4)

where $ m=0,\pm1,\pm2,\dots,\pm M_h$, with $ M_h\isdef (M-1)/2$ for $ M$ odd, and $ M/2 - 1$ for $ M$ even. For example,

$\displaystyle \hbox{\sc ZeroPad}_{10}([1,2,3,4,5]) = [1,2,3,0,0,0,0,0,4,5].
$

In this example, the first sample corresponds to time 0, and five zeros have been inserted between the samples corresponding to times $ n=2$ and $ n=-2$.

Figure 7.7 illustrates zero padding from length $ N=5$ out to length $ M=11$. Note that $ x$ and $ n$ could be replaced by $ X$ and $ k$ in the figure caption.

Figure 7.7: Illustration of zero padding: a) Original signal (or spectrum) $ x=[3,2,1,1,2]$ plotted over the domain $ n\in [0,N-1]$ where $ N=5$ (i.e., as the samples would normally be held in a computer array). b) $ \hbox{\sc ZeroPad}_{11}(x)$. c) The same signal $ x$ plotted over the domain $ n\in [-(N-1)/2, (N-1)/2]$, which is more natural for interpreting negative times (frequencies). d) $ \hbox{\sc ZeroPad}_{11}(x)$ plotted over the zero-centered domain.

Note that we have unified the time-domain and frequency-domain definitions of zero-padding by interpreting the original time axis $ [0,1,\dots,N-1]$ as indexing positive-time samples from 0 to $ N/2-1$ (for $ N$ even), and negative times in the interval $ n\in[N/2+1,N-1]\equiv[-N/2+1,-1]$. Furthermore, we require $ x(N/2)\equiv x(-N/2)=0$ when $ N$ is even, while odd $ N$ requires no such restriction. In practice, we often prefer to interpret time-domain samples as extending from 0 to $ N-1$, i.e., with no negative-time samples. For this case, we define ``causal zero padding'' as described below.
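Here is a minimal matlab/Octave sketch of zero-centered zero padding, restricted to odd $ N$ for simplicity so that the $ x(N/2)=0$ restriction never arises (the helper name is ours):

  function y = zeropad(x, M)
    % Zero-centered zero padding of a length-N signal (N odd) out to length M.
    N = length(x);
    Nh = (N-1)/2;                   % number of negative-time samples
    y = zeros(1, M);
    y(1:N-Nh) = x(1:N-Nh);          % time 0 and positive times stay at the front
    y(M-Nh+1:M) = x(N-Nh+1:N);      % negative times move to the end
  end
  % zeropad([1 2 3 4 5], 10) returns [1 2 3 0 0 0 0 0 4 5]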


Causal (Periodic) Signals

A signal $ x\in{\bf C}^N$ may be defined as causal when $ x(n)=0$ for all ``negative-time'' samples (e.g., for $ n=-1,-2,\dots,-N/2$ when $ N$ is even). Thus, the signal $ x=[1,2,3,0,0]\in{\bf R}^5$ is causal while $ x=[1,2,3,4,0]$ is not. For causal signals, zero-padding is equivalent to simply appending zeros to the original signal. For example,

$\displaystyle \hbox{\sc ZeroPad}_{10}([1,2,3,0,0]) = [1,2,3,0,0,0,0,0,0,0].
$

Therefore, when we simply append zeros to the end of a signal, we call it causal zero padding.


Causal Zero Padding

In practice, a signal $ x\in{\bf R}^N$ is often an $ N$-sample frame of data taken from some longer signal, and its true starting time can be anything. In such cases, it is common to treat the start-time of the frame as zero, with no negative-time samples. In other words, $ x(n)$ represents an $ N$-sample signal-segment that is translated in time to start at time 0. In this case (no negative-time samples in the frame), it is proper to zero-pad by simply appending zeros at the end of the frame. Thus, we define e.g.,

$\displaystyle \hbox{\sc CausalZeroPad}_{10}([1,2,3,4,5]) = [1,2,3,4,5,0,0,0,0,0].
$

Causal zero-padding should not be used on the spectrum of a real signal because, as we will see in §7.4.3 below, the magnitude spectrum of every real signal is symmetric about frequency zero. For the same reason, we cannot simply append zeros in the time domain when the signal frame is considered to include negative-time samples, as in ``zero-centered FFT processing'' (discussed in Book IV [70]). Nevertheless, in practice, appending zeros is perhaps the most common form of zero-padding. It is implemented automatically, for example, by the matlab function fft(x,N) when the FFT size N exceeds the length of the signal vector x.
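For example, the following two length-10 spectra agree exactly, since fft pads with trailing (causal) zeros whenever the FFT size exceeds the signal length:

  x  = [1 2 3 4 5];
  X1 = fft([x zeros(1, 5)]);        % explicit causal zero padding to length 10
  X2 = fft(x, 10);                  % matlab appends the zeros automatically
  max(abs(X1 - X2))                 % 0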

In summary, we have defined two types of zero-padding that arise in practice, which we may term ``causal'' and ``zero-centered'' (or ``zero-phase'', or even ``periodic''). The zero-centered case is the more natural with respect to the mathematics of the DFT, so it is taken as the ``official'' definition of $ \hbox{\sc ZeroPad}()$. In both cases, however, when properly used, we will have the basic Fourier theorem (§7.4.12 below) stating that zero-padding in the time domain corresponds to ideal bandlimited interpolation in the frequency domain, and vice versa.


Zero Padding Applications

Zero padding in the time domain is used extensively in practice to compute heavily interpolated spectra by taking the DFT of the zero-padded signal. Such spectral interpolation is ideal when the original signal is time limited (nonzero only over some finite duration spanned by the original samples).

Note that the time-limited assumption directly contradicts our usual assumption of periodic extension. As mentioned in §6.7, the interpolation of a periodic signal's spectrum from its harmonics is always zero; that is, there is no spectral energy, in principle, between the harmonics of a periodic signal, and a periodic signal cannot be time-limited unless it is the zero signal. On the other hand, the interpolation of a time-limited signal's spectrum is nonzero almost everywhere between the original spectral samples. Thus, zero-padding is often used when analyzing data from a non-periodic signal in blocks, and each block, or frame, is treated as a finite-duration signal which can be zero-padded on either side with any number of zeros. In summary, the use of zero-padding corresponds to the time-limited assumption for the data frame, and more zero-padding yields denser interpolation of the frequency samples around the unit circle.

Sometimes people will say that zero-padding in the time domain yields higher spectral resolution in the frequency domain. However, signal processing practitioners should not say that, because ``resolution'' in signal processing refers to the ability to ``resolve'' closely spaced features in a spectrum analysis (see Book IV [70] for details). The usual way to increase spectral resolution is to take a longer DFT without zero padding--i.e., look at more data. In the field of graphics, the term resolution refers to pixel density, so the common terminology confusion is reasonable. However, remember that in signal processing, zero-padding in one domain corresponds to a higher interpolation-density in the other domain--not a higher resolution.
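The following fragment illustrates the distinction: zero padding buys a denser sampling of the same underlying spectral curve, not a narrower analysis mainlobe:

  N = 64;  n = 0:N-1;
  x  = cos(2*pi*10.25*n/N);         % frequency falls between DFT bins
  X1 = abs(fft(x));                 % N samples of the spectrum
  X2 = abs(fft(x, 8*N));            % 8x denser samples of the same curve
  % X2 interpolates between the samples of X1; the resolution is still set
  % by the data length N, not by the FFT size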


Ideal Spectral Interpolation

Using Fourier theorems, we will be able to show (§7.4.12) that zero padding in the time domain gives exact bandlimited interpolation in the frequency domain. In other words, for truly time-limited signals $ x$, taking the DFT of the entire nonzero portion of $ x$ extended by zeros yields exact interpolation of the complex spectrum--not an approximation (ignoring computational round-off error in the DFT itself). Because the fast Fourier transform (FFT) is so efficient, zero-padding followed by an FFT is a highly practical method for interpolating spectra of finite-duration signals, and is used extensively in practice.

Before we can interpolate a spectrum, we must be clear on what a ``spectrum'' really is. As discussed in Chapter 6, the spectrum of a signal $ x(\cdot)$ at frequency $ \omega$ is defined as a complex number $ X(\omega)$ computed using the inner product

$\displaystyle X(\omega)
\isdef \left<x,s_\omega\right>
\isdef \sum_{\mbox{all } n} x(n) e^{-j\omega nT}.
$

That is, $ X(\omega)$ is the unnormalized coefficient of projection of $ x$ onto the sinusoid $ s_\omega$ at frequency $ \omega$. When $ \omega=\omega_k=2\pi f_s k/N$, for $ k=0,1,\ldots,N-1$, we obtain the special set of spectral samples known as the DFT. For other values of $ \omega$, we obtain spectral points in between the DFT samples. Interpolating DFT samples should give the same result. It is straightforward to show that this ideal form of interpolation is what we call bandlimited interpolation, as discussed further in Appendix D and in Book IV [70] of this series.


Interpolation Operator

The interpolation operator $ \hbox{\sc Interp}_L()$ interpolates a signal by an integer factor $ L$ using bandlimited interpolation. For frequency-domain signals $ X(\omega_k)$, $ k=0,1,2,\ldots,N-1$, we may write spectral interpolation as follows:

\begin{eqnarray*}
\hbox{\sc Interp}_{L,k^\prime}(X) &\isdef & X(\omega_{k^\prime}),\\
\omega_{k^\prime} &\isdef & \frac{2\pi k^\prime}{M}, \quad k^\prime = 0,1,2,\dots,M-1,\\
M &\isdef & LN.
\end{eqnarray*}

Since $ X(\omega_k )\isdeftext \hbox{\sc DFT}_{N,k}(x)$ is initially only defined over the $ N$ roots of unity in the $ z$ plane, while $ X(\omega_{k^\prime })$ is defined over $ M=LN$ roots of unity, we define $ X(\omega_{k^\prime })$ for $ \omega_{k^\prime }\neq\omega_k $ by ideal bandlimited interpolation (specifically time-limited spectral interpolation in this case).

For time-domain signals $ x(n)$, exact interpolation is similarly bandlimited interpolation, as derived in Appendix D.


Repeat Operator

Like the $ \hbox{\sc Stretch}_L()$ and $ \hbox{\sc Interp}_L()$ operators, the $ \hbox{\sc Repeat}_L()$ operator maps a length $ N$ signal to a length $ M\isdeftext LN$ signal:


Definition: The repeat $ L$ times operator is defined for any $ x\in{\bf C}^N$ by

$\displaystyle \hbox{\sc Repeat}_{L,m}(x) \isdef x(m), \qquad m=0,1,2,\ldots,M-1,
$

where $ M\isdef LN$, and indexing of $ x$ is modulo $ N$ (periodic extension). Thus, the $ \hbox{\sc Repeat}_L()$ operator simply repeats its input signal $ L$ times. An example of $ \hbox{\sc Repeat}_2(x)$ is shown in Fig.7.8. The example is

$\displaystyle \hbox{\sc Repeat}_2([0,2,1,4,3,1]) = [0,2,1,4,3,1,0,2,1,4,3,1].
$

Figure 7.8: Illustration of $ \hbox{\sc Repeat}_2(x)$.

A frequency-domain example is shown in Fig.7.9. Figure 7.9a shows the original spectrum $ X$, Fig.7.9b shows the same spectrum plotted over the unit circle in the $ z$ plane, and Fig.7.9c shows $ \hbox{\sc Repeat}_3(X)$. The $ z=1$ point (dc) is on the right-rear face of the enclosing box. Note that when viewed as centered about $ k=0$, $ X$ is a somewhat ``triangularly shaped'' spectrum. We see three copies of this shape in $ \hbox{\sc Repeat}_3(X)$.

Figure 7.9: Illustration of $ \hbox{\sc Repeat}_3(X)$. a) Conventional plot of $ X$. b) Plot of $ X$ over the unit circle in the $ z$ plane. c) $ \hbox{\sc Repeat}_3(X)$.

The repeat operator is used to state the Fourier theorem

$\displaystyle \hbox{\sc Stretch}_L \;\longleftrightarrow\;\hbox{\sc Repeat}_L,
$

where $ \hbox{\sc Stretch}_L$ is defined in §7.2.6. That is, when you stretch a signal by the factor $ L$ (inserting zeros between the original samples), its spectrum is repeated $ L$ times around the unit circle. The simple proof is given with the stretch theorem among the Fourier theorems of §7.4.
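The theorem is easy to check numerically, using the stretchop sketch from the stretch-operator section above:

  x = [0 2 1 4 3 1];
  X = fft(x);
  max(abs( fft(stretchop(x, 2)) - [X X] ))   % ~ 1e-15; Repeat_2(X) = [X X]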


Downsampling Operator

Downsampling by $ L$ (also called decimation by $ L$) is defined for $ x\in{\bf C}^N$ as taking every $ L$th sample, starting with sample zero:

\begin{eqnarray*}
\hbox{\sc Downsample}_{L,m}(x) &\isdef & x(mL),\\
m &=& 0,1,2,\ldots,M-1\\
N&=&LM.
\end{eqnarray*}

The $ \hbox{\sc Downsample}_L()$ operator maps a length $ N=LM$ signal down to a length $ M$ signal. It is the inverse of the $ \hbox{\sc Stretch}_L()$ operator (but not vice versa), i.e.,

\begin{eqnarray*}
\hbox{\sc Downsample}_L(\hbox{\sc Stretch}_L(x)) &=& x \\
\hbox{\sc Stretch}_L(\hbox{\sc Downsample}_L(x)) &\neq& x\quad \mbox{(in general).}
\end{eqnarray*}

The stretch and downsampling operations do not commute because they are linear time-varying operators. They can be modeled using time-varying switches controlled by the sample index $ n$.

Figure 7.10: Illustration of $ \hbox{\sc Downsample}_2(x)$.

The following example of $ \hbox{\sc Downsample}_2(x)$ is illustrated in Fig.7.10:

$\displaystyle \hbox{\sc Downsample}_2([0,1,2,3,4,5,6,7,8,9]) = [0,2,4,6,8].
$
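In matlab/Octave, downsampling is plain strided indexing; the helper name downsampleop is ours (the Signal Processing Toolbox also provides downsample(x,L)):

  function y = downsampleop(x, L)
    y = x(1:L:end);                 % every Lth sample, starting with sample zero
  end
  % downsampleop(0:9, 2) returns [0 2 4 6 8]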

Note that the term ``downsampling'' may also refer to the more elaborate process of sampling-rate conversion to a lower sampling rate, in which a signal's sampling rate is lowered by resampling using bandlimited interpolation. To distinguish these cases, we can call this bandlimited downsampling, because a lowpass filter is needed, in general, prior to downsampling so that aliasing is avoided. This topic is addressed in Appendix D. Early sampling-rate converters were in fact implemented using the $ \hbox{\sc Stretch}_L$ operation, followed by an appropriate lowpass filter, followed by $ \hbox{\sc Downsample}_M$, in order to implement a sampling-rate conversion by the factor $ L/M$.


Alias Operator

Aliasing occurs when a signal is undersampled. If the signal sampling rate $ f_s$ is too low, we get frequency-domain aliasing.

The topic of aliasing normally arises in the context of sampling a continuous-time signal. The sampling theorem (Appendix D) says that we will have no aliasing due to sampling as long as the sampling rate is higher than twice the highest frequency present in the signal being sampled.

In this chapter, we are considering only discrete-time signals, in order to keep the math as simple as possible. Aliasing in this context occurs when a discrete-time signal is downsampled to reduce its sampling rate. You can think of continuous-time sampling as the limiting case for which the starting sampling rate is infinity.

An example of aliasing is shown in Fig.7.11. In the figure, the high-frequency sinusoid is indistinguishable from the lower-frequency sinusoid due to aliasing. We say the higher frequency aliases to the lower frequency.

Figure 7.11: Example of frequency-domain aliasing due to undersampling in the time domain.

Undersampling in the frequency domain gives rise to time-domain aliasing. If time or frequency is not specified, the term ``aliasing'' normally means frequency-domain aliasing (due to undersampling in the time domain).

The aliasing operator for $ N$-sample signals $ x\in{\bf C}^N$ is defined by

\begin{eqnarray*}
\hbox{\sc Alias}_{L,m}(x) &\isdef & \sum_{l=0}^{L-1} x\left(m+lM\right),\\
m &=& 0,1,2,\ldots,M-1,\\
N&=&LM.
\end{eqnarray*}

Like the $ \hbox{\sc Downsample}_L()$ operator, the $ \hbox{\sc Alias}_L()$ operator maps a length $ N=LM$ signal down to a length $ M$ signal. A way to think of it is to partition the original $ N$ samples into $ L$ blocks of length $ M$, with the first block extending from sample 0 to sample $ M-1$, the second block from $ M$ to $ 2M-1$, etc. Then just add up the blocks. This process is called aliasing. If the original signal $ x$ is a time signal, it is called time-domain aliasing; if it is a spectrum, we call it frequency-domain aliasing, or just aliasing. Note that aliasing is not invertible in general. Once the blocks are added together, it is usually not possible to recover the original blocks.

Example:

\begin{eqnarray*}
\hbox{\sc Alias}_2([0,1,2,3,4,5]) &=& [0,1,2] + [3,4,5] = [3,5,7] \\
\hbox{\sc Alias}_3([0,1,2,3,4,5]) &=& [0,1] + [2,3] + [4,5] = [6,9]
\end{eqnarray*}
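A compact matlab/Octave sketch of the alias operator (the helper name aliasop is ours); reshape collects the $ L$ consecutive length-$ M$ blocks as columns, which are then summed:

  function y = aliasop(x, L)
    M = length(x) / L;              % assumes L divides the signal length
    y = sum(reshape(x, M, L), 2).'; % .' (not ') to avoid conjugating spectra
  end
  % aliasop([0 1 2 3 4 5], 2) returns [3 5 7]
  % aliasop([0 1 2 3 4 5], 3) returns [6 9]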

The alias operator is used to state the Fourier theorem (§7.4.11)

$\displaystyle \hbox{\sc Downsample}_L \;\longleftrightarrow\;\frac{1}{L}\hbox{\sc Alias}_L.
$

That is, when you downsample a signal by the factor $ L$, its spectrum is aliased by the factor $ L$.
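A numerical check of this theorem, using the downsampleop and aliasop sketches above:

  x = randn(1, 12);  L = 3;
  lhs = fft(downsampleop(x, L));    % DFT of the downsampled signal (length 4)
  rhs = aliasop(fft(x), L) / L;     % (1/L) Alias_L(X)
  max(abs(lhs - rhs))               % ~ 1e-15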

Figure 7.12: Illustration of aliasing in the frequency domain. a) $ \hbox{\sc Repeat}_3(X)$ from Fig.7.9c. b) First half of the original unit circle (0 to $ \pi$) wrapped around the new, smaller unit circle (which is magnified to the original size). c) Second half ($ \pi$ to $ 2\pi$), also wrapped around the new unit circle. d) Overlay of components to be summed. e) Sum of components (the aliased spectrum). f) Both sum and overlay.

Figure 7.12 shows the result of $ \hbox{\sc Alias}_2$ applied to $ \hbox{\sc Repeat}_3(X)$ from Figure 7.9c. Imagine the spectrum of Fig.7.12a as being plotted on a piece of paper rolled to form a cylinder, with the edges of the paper meeting at $ z=1$ (upper right corner of Fig.7.12a). Then the $ \hbox{\sc Alias}_2$ operation can be simulated by rerolling the cylinder of paper to cut its circumference in half. That is, reroll it so that at every point, two sheets of paper are in contact at all points on the new, narrower cylinder. Now, simply add the values on the two overlapping sheets together, and you have the $ \hbox{\sc Alias}_2$ of the original spectrum on the unit circle. To alias by $ 3$, we would shrink the cylinder further until the paper edges again line up, giving three layers of paper in the cylinder, and so on.

Figure 7.12b shows what is plotted on the first circular wrap of the cylinder of paper, and Fig.7.12c shows what is on the second wrap. These are overlaid in Fig.7.12d and added together in Fig.7.12e. Finally, Figure 7.12f shows both the addition and the overlay of the two components. We say that the second component (Fig.7.12c) ``aliases'' to new frequency components, while the first component (Fig.7.12b) is considered to be at its original frequencies. If the unit circle of Fig.7.12a covers frequencies 0 to $ f_s$, all other unit circles (Fig.7.12b-c) cover frequencies 0 to $ f_s/2$.

In general, aliasing by the factor $ K$ corresponds to a sampling-rate reduction by the factor $ K$. To prevent aliasing when reducing the sampling rate, an anti-aliasing lowpass filter is generally used. The lowpass filter attenuates all signal components at frequencies outside the interval $ [-f_s/(2K),f_s/(2K)]$ so that all frequency components which would alias are first removed.

Conceptually, in the frequency domain, the unit circle is reduced by $ \hbox{\sc Alias}_2$ to a unit circle half the original size, where the two halves are summed. The inverse of aliasing is then ``repeating'' which should be understood as increasing the unit circle circumference using ``periodic extension'' to generate ``more spectrum'' for the larger unit circle. In the time domain, on the other hand, downsampling is the inverse of the stretch operator. We may interchange ``time'' and ``frequency'' and repeat these remarks. All of these relationships are precise only for integer stretch/downsampling/aliasing/repeat factors; in continuous time and frequency, the restriction to integer factors is removed, and we obtain the (simpler) scaling theorem (proved in §C.2).

