Sampling Theory

In this appendix, sampling theory is derived as an application of the DTFT and the Fourier theorems developed in Appendix C. First, a formula is derived for the aliasing caused by uniformly sampling a continuous-time signal. Next, the sampling theorem is proved. The sampling theorem states that a properly bandlimited continuous-time signal can, in principle, be sampled and reconstructed from its samples without error.

An early derivation of the sampling theorem is often attributed to a 1928 paper by Harold Nyquist, and Claude Shannon is credited with reviving interest in the sampling theorem after World War II, when computers became public.D.1 As a result, the sampling theorem is often called ``Nyquist's sampling theorem,'' ``Shannon's sampling theorem,'' or the like. Also, the sampling rate has been called the Nyquist rate in honor of Nyquist's contributions [48]. In the author's experience, however, modern usage of the term ``Nyquist rate'' refers instead to half the sampling rate. To resolve this clash between historical and current usage, the term Nyquist limit will always mean half the sampling rate in this book series, and the term ``Nyquist rate'' will not be used at all.

Introduction to Sampling

Inside computers and modern ``digital'' synthesizers (as well as on music CDs), sound is sampled into a stream of numbers. Each sample can be thought of as a number specifying the positionD.2 of a loudspeaker at a particular instant. When sound is sampled, we call it digital audio. The sampling rate used for CDs is 44,100 samples per second. That means when you play a CD, the speakers in your stereo system are moved to a new position 44,100 times per second, or roughly once every 23 microseconds. Controlling a speaker this fast enables it to generate any sound in the human hearing range, because we cannot hear frequencies higher than about 20,000 cycles per second, and a sampling rate more than twice the highest frequency present in the sound guarantees that exact reconstruction from the samples is possible.

Reconstruction from Samples--Pictorial Version

Figure D.1 shows how a sound is reconstructed from its samples. Each sample can be considered as specifying the scaling and location of a sinc function. The discrete-time signal being interpolated in the figure is a digital rectangular pulse:

$\displaystyle x = [\dots, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, \dots]
$

The sinc functions are drawn with dashed lines, and they sum to produce the solid curve. An isolated sinc function is shown in Fig.D.2. Note the ``Gibbs overshoot'' near the corners of the continuous rectangular pulse in Fig.D.1 due to bandlimiting. (A true continuous rectangular pulse has infinite bandwidth.)

Figure D.1: Summation of weighted sinc functions to create a continuous waveform from discrete-time samples.
\includegraphics[width=\twidth]{eps/SincSum}

Notice that each sinc function passes through zero at every sample instant but the one it is centered on, where it passes through 1.
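This property is easy to confirm numerically. The following is a minimal Python sketch (the pulse values match Fig.D.1, with the sampling interval taken as 1 for simplicity; the pure-Python sinc is an illustrative implementation): summing the weighted sincs reproduces each sample value exactly at the sample instants, since every other sinc is zero there.

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

# Samples of the digital rectangular pulse of Fig. D.1 (T = 1):
# ones at n = 0..4, zeros at all other integers.
samples = {n: 1.0 for n in range(5)}

def reconstruct(t):
    """Sum of weighted sincs: xhat(t) = sum_n x(n) sinc(t - n)."""
    return sum(x_n * sinc(t - n) for n, x_n in samples.items())

# At a sample instant, only the sinc centered there contributes.
print(reconstruct(2.0))   # very close to 1.0 (inside the pulse)
print(reconstruct(-3.0))  # very close to 0.0 (outside the pulse)
```

Between sample instants, `reconstruct` traces the solid bandlimited curve of Fig.D.1, including the Gibbs overshoot near the pulse corners.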


The Sinc Function

Figure D.2: The sinc function sinc$ (x)\isdef \sin(\pi x)/(\pi x)$.
\includegraphics[width=\twidth]{eps/Sinc}

The sinc function, or cardinal sine function, is the famous ``sine x over x'' curve, and is illustrated in Fig.D.2. For bandlimited interpolation of discrete-time signals, the ideal interpolation kernel is proportional to the sinc function

   sinc$\displaystyle (f_st) \isdef \frac{\sin(\pi f_st)}{\pi f_st},
$

where $ f_s$ denotes the sampling rate in samples-per-second (Hz), and $ t$ denotes time in seconds. Note that the sinc function has zeros at all the integers except 0, where it is 1. For precise scaling, the desired interpolation kernel is $ f_s\,$sinc$ (f_st)$, which has unit algebraic area (time integral equal to 1) regardless of the sampling rate $ f_s$.
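Both stated properties can be checked numerically. In this Python sketch (the sampling rate $f_s=8$ Hz, the integration width, and the step count are arbitrary choices made for the check), the zeros at the nonzero integers are verified directly, and the area of $f_s\,$sinc$(f_st)$ is approximated with the trapezoid rule:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

fs = 8.0  # sampling rate in Hz (an arbitrary choice for this check)

# sinc is 1 at zero and 0 at every other integer argument:
assert sinc(0.0) == 1.0
assert all(abs(sinc(float(k))) < 1e-12 for k in range(1, 20))

def kernel_area(fs, width=100.0, steps=200_000):
    """Trapezoid-rule integral of fs*sinc(fs*t) over [-width, width]."""
    dt = 2.0 * width / steps
    total = 0.0
    for i in range(steps + 1):
        t = -width + i * dt
        w = 0.5 if i in (0, steps) else 1.0
        total += w * fs * sinc(fs * t) * dt
    return total

area = kernel_area(fs)
print(area)  # close to 1, independent of the chosen fs
```

The slow $1/t$ decay of the sinc tails is why a wide integration interval is needed before the numerical area settles near 1.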


Reconstruction from Samples--The Math

Let $ x_d(n) \isdef x(nT)$ denote the $ n$th sample of the original sound $ x(t)$, where $ t$ is time in seconds. Thus, $ n$ ranges over the integers, and $ T$ is the sampling interval in seconds. The sampling rate in Hertz (Hz) is just the reciprocal of the sampling period, i.e.,

$\displaystyle f_s\isdef \frac{1}{T}.
$

To avoid losing any information as a result of sampling, we must assume $ x(t)$ is bandlimited to less than half the sampling rate. This means there can be no energy in $ x(t)$ at frequency $ f_s/2$ or above. We will prove this mathematically when we prove the sampling theorem in §D.3 below.

Let $ X(\omega)$ denote the Fourier transform of $ x(t)$, i.e.,

$\displaystyle X(\omega)\isdef \int_{-\infty}^\infty x(t) e^{-j\omega t} dt .
$

Then we can say $ x$ is bandlimited to less than half the sampling rate if and only if $ X(\omega)=0$ for all $ \vert\omega\vert\geq\pi f_s$. In this case, the sampling theorem gives us that $ x(t)$ can be uniquely reconstructed from the samples $ x(nT)$ by summing up shifted, scaled sinc functions:

$\displaystyle {\hat x}(t) \isdef \sum_{n=-\infty}^\infty x(nT) h_s(t-nT) \equiv x(t)
$

where

$\displaystyle h_s(t) \isdef$   sinc$\displaystyle (f_st) \isdef \frac{\sin(\pi f_st)}{\pi f_st}.
$

The sinc function is the impulse response of the ideal lowpass filter. This means its Fourier transform is a rectangular window in the frequency domain. The particular sinc function used here corresponds to the ideal lowpass filter which cuts off at half the sampling rate. In other words, it has a gain of 1 between frequencies 0 and $ f_s/2$, and a gain of zero at all higher frequencies.

The reconstruction of a sound from its samples can thus be interpreted as follows: convert the sample stream into a weighted impulse train, and pass that signal through an ideal lowpass filter which cuts off at half the sampling rate. These are the fundamental steps of digital to analog conversion (DAC). In practice, neither the impulses nor the lowpass filter are ideal, but they are usually close enough to ideal that one cannot hear any difference. Practical lowpass-filter design is discussed in the context of bandlimited interpolation [72].
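The reconstruction formula can be illustrated numerically. In this Python sketch (the sampling rate, test frequency, window length, and evaluation point are all arbitrary choices, and the infinite sum is necessarily truncated to a finite window), a bandlimited sinusoid is recovered between its samples to within small truncation error:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

fs = 100.0        # sampling rate in Hz (arbitrary test choice)
T = 1.0 / fs
f0 = 12.5         # sinusoid frequency, well below fs/2 = 50 Hz

def x(t):
    return math.sin(2 * math.pi * f0 * t)

# Finite window of samples x(nT); the true sum runs over all integers.
N = 4000
samples = [x(n * T) for n in range(-N, N + 1)]

def xhat(t):
    """Truncated reconstruction: sum_n x(nT) sinc(fs*(t - nT))."""
    return sum(s * sinc(fs * (t - n * T))
               for n, s in zip(range(-N, N + 1), samples))

# Evaluate between sample instants, far from the truncation edges.
t_test = 3 * T + 0.123 * T
print(abs(xhat(t_test) - x(t_test)))  # small truncation error
```

At the sample instants themselves the agreement is essentially exact, since every sinc but one vanishes there; between samples, the residual error shrinks as the truncation window grows, reflecting the slow decay of the sinc tails.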


Aliasing of Sampled Signals

This section quantifies aliasing in the general case. This result is then used in the proof of the sampling theorem in the next section.

It is well known that when a continuous-time signal contains energy at a frequency higher than half the sampling rate $ f_s/2$, sampling at $ f_s$ samples per second causes that energy to alias to a lower frequency. If we write the original frequency as $ f = f_s/2 + \epsilon$, then the new aliased frequency is $ f_a = f_s/2 - \epsilon$, for $ 0\leq\epsilon\leq f_s/2$. This phenomenon is also called ``folding'', since $ f_a$ is a ``mirror image'' of $ f$ about $ f_s/2$. As we will see, however, this is not a complete description of aliasing, as it applies only to real signals. For general (complex) signals, it is better to regard the aliasing due to sampling as a summation over all spectral ``blocks'' of width $ f_s$.
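The folding rule is easy to verify numerically for a real sinusoid. In this Python sketch (the sampling rate and the offset $\epsilon$ are arbitrary choices), a cosine at $f_s/2+\epsilon$ and a cosine at $f_s/2-\epsilon$ produce identical samples, because $\cos(\pi n + \phi)=\cos(\pi n - \phi)$ when $\sin(\pi n)=0$:

```python
import math

fs = 1000.0
T = 1.0 / fs
eps = 130.0                   # offset above/below fs/2 (arbitrary)
f_high = fs / 2 + eps         # 630 Hz, above the Nyquist limit
f_alias = fs / 2 - eps        # 370 Hz, its folded "mirror image"

# The two real sinusoids are indistinguishable after sampling:
for n in range(50):
    s_high = math.cos(2 * math.pi * f_high * n * T)
    s_alias = math.cos(2 * math.pi * f_alias * n * T)
    assert abs(s_high - s_alias) < 1e-9
print("samples at", f_high, "Hz equal samples at", f_alias, "Hz")
```

For a complex sinusoid the picture differs: $e^{j2\pi f_{high} nT}$ aliases to the negative frequency $-(f_s/2-\epsilon)$ rather than folding, which is why the block-summation view of the next theorem is the complete description.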

Continuous-Time Aliasing Theorem

Let $ x(t)$ denote any continuous-time signal having a Fourier Transform (FT)

$\displaystyle X(j\omega)\isdef \int_{-\infty}^\infty x(t) e^{-j\omega t} dt.
$

Let

$\displaystyle x_d(n) \isdef x(nT), \quad n=\ldots,-2,-1,0,1,2,\ldots,
$

denote the samples of $ x(t)$ at uniform intervals of $ T$ seconds, and denote its Discrete-Time Fourier Transform (DTFT) by

$\displaystyle X_d(e^{j\theta})\isdef \sum_{n=-\infty}^\infty x_d(n) e^{-j\theta n}.
$

Then the spectrum $ X_d$ of the sampled signal $ x_d$ is related to the spectrum $ X$ of the original continuous-time signal $ x$ by

$\displaystyle X_d(e^{j\theta}) = \frac{1}{T} \sum_{m=-\infty}^\infty X\left[j\left(\frac{\theta}{T}
+ m\frac{2\pi}{T}\right)\right].
$

The terms in the above sum for $ m\neq 0$ are called aliasing terms. They are said to alias into the base band $ [-\pi/T,\pi/T]$. Note that the summation of a spectrum with aliasing components involves addition of complex numbers; therefore, aliasing components can be removed only if both their amplitude and phase are known.
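The theorem can be checked numerically using a signal whose Fourier transform is known in closed form. In this Python sketch (a Gaussian test signal with arbitrary width and a deliberately coarse sampling interval so that the aliasing terms matter; truncated sums stand in for the infinite ones, which is harmless here because the Gaussian decays so fast), both sides of the aliasing relation agree:

```python
import math

sigma = 0.7
T = 0.5   # coarse sampling interval so aliasing terms are significant

def x(t):
    """Gaussian test signal with a known Fourier transform."""
    return math.exp(-t * t / (2 * sigma * sigma))

def X(w):
    """FT of the Gaussian: sigma*sqrt(2*pi)*exp(-sigma^2 w^2 / 2)."""
    return sigma * math.sqrt(2 * math.pi) * math.exp(-sigma * sigma * w * w / 2)

def dtft(theta, N=200):
    """Left side: sum_n x(nT) e^{-j theta n} (real, since x is even)."""
    return sum(x(n * T) * math.cos(theta * n) for n in range(-N, N + 1))

def alias_sum(theta, M=50):
    """Right side: (1/T) sum_m X(j(theta/T + 2 pi m / T))."""
    return sum(X((theta + 2 * math.pi * m) / T) for m in range(-M, M + 1)) / T

for theta in (0.0, 0.5, 1.0, 2.0, 3.0):
    assert abs(dtft(theta) - alias_sum(theta)) < 1e-9
print("DTFT of the samples matches the aliased sum of the spectrum")
```

This identity is just the Poisson summation formula in disguise, which is one way to remember the theorem.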


Proof: Writing $ x(t)$ as an inverse FT gives

$\displaystyle x(t) = \frac{1}{2\pi}\int_{-\infty}^\infty X(j\omega) e^{j\omega t} d\omega.
$

Writing $ x_d(n)$ as an inverse DTFT gives

$\displaystyle x_d(n) = \frac{1}{2\pi}\int_{-\pi}^\pi X_d(e^{j\theta}) e^{j \theta n} d\theta
$

where $ \theta \isdef \omega_d T$ denotes the normalized discrete-time frequency variable, with $ \omega_d $ denoting continuous radian frequency.

The inverse FT can be broken up into a sum of finite integrals, each of length $ \Omega_s \isdef 2\pi f_s= 2\pi/T$, as follows:

\begin{eqnarray*}
x(t) &=& \frac{1}{2\pi}\int_{-\infty}^\infty X(j\omega) e^{j\omega t} d\omega\\
&=& \frac{1}{2\pi}\sum_{m=-\infty}^\infty\int_{-\Omega_s /2}^{\Omega_s /2}
X\left(j\omega + j m\Omega_s \right) e^{j(\omega + m\Omega_s ) t} d\omega\\
&=& \frac{1}{2\pi}\int_{-\Omega_s /2}^{\Omega_s /2} e^{j\omega t}
\sum_{m=-\infty}^\infty X\left(j\omega + j m\Omega_s \right) e^{j\Omega_s m t} d\omega
\end{eqnarray*}

Let us now sample this representation of $ x(t)$ at $ t=nT$ to obtain $ x_d(n) \isdef x(nT)$, and we have

\begin{eqnarray*}
x_d(n) &=& \frac{1}{2\pi}\int_{-\Omega_s /2}^{\Omega_s /2} e^{j\omega n T}
\sum_{m=-\infty}^\infty X\left(j\omega + j m\Omega_s \right) e^{j\Omega_s m n T} d\omega\\
&=& \frac{1}{2\pi}\int_{-\Omega_s /2}^{\Omega_s /2} e^{j\omega n T}
\sum_{m=-\infty}^\infty X\left(j\omega + j m\Omega_s \right) d\omega
\end{eqnarray*}

since $ n$ and $ m$ are integers. Normalizing frequency as $ \theta^\prime = \omega T$ yields

$\displaystyle x_d(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{j\theta^\prime n}
\frac{1}{T}\sum_{m=-\infty}^\infty X\left[j\left(\frac{\theta^\prime }{T}
+ m\frac{2\pi}{T}\right)\right] d\theta^\prime .
$

Since this is formally the inverse DTFT of $ X_d(e^{j\theta^\prime })$ written in terms of $ X(j\theta^\prime /T)$, the result follows.
$ \Box$


Sampling Theorem

Let $ x(t)$ denote any continuous-time signal having a continuous Fourier transform

$\displaystyle X(j\omega)\isdef \int_{-\infty}^\infty x(t) e^{-j\omega t} dt.
$

Let

$\displaystyle x_d(n) \isdef x(nT), \quad n=\ldots,-2,-1,0,1,2,\ldots,
$

denote the samples of $ x(t)$ at uniform intervals of $ T$ seconds. Then $ x(t)$ can be exactly reconstructed from its samples $ x_d(n)$ if $ X(j\omega)=0$ for all $ \vert\omega\vert\geq\pi/T$.D.3


Proof: From the continuous-time aliasing theorem (§D.2), we have that the discrete-time spectrum $ X_d(e^{j\theta})$ can be written in terms of the continuous-time spectrum $ X(j\omega)$ as

$\displaystyle X_d(e^{j\omega_d T}) = \frac{1}{T} \sum_{m=-\infty}^\infty X[j(\omega_d +m\Omega_s )]
$

where $ \omega_d \in(-\pi/T,\pi/T)$ is the ``digital frequency'' variable. If $ X(j\omega)=0$ for all $ \vert\omega\vert\geq\Omega_s /2$, then the above infinite sum reduces to one term, the $ m=0$ term, and we have

$\displaystyle X_d(e^{j\omega_d T}) = \frac{1}{T} X(j\omega_d ), \quad \omega_d \in\left(-\frac{\pi}{T},\frac{\pi}{T}\right).
$

At this point, we can see that the spectrum of the sampled signal $ x(nT)$ coincides with the nonzero spectrum of the continuous-time signal $ x(t)$. In other words, the DTFT of $ x(nT)$ is equal to the FT of $ x(t)$ between plus and minus half the sampling rate, and the FT is zero outside that range. This makes it clear that spectral information is preserved, so it should now be possible to go from the samples back to the continuous waveform without error, which we now pursue.

To reconstruct $ x(t)$ from its samples $ x(nT)$, we may simply take the inverse Fourier transform of the zero-extended DTFT, because

\begin{eqnarray*}
x(t) &=& \hbox{\sc IFT}_t(X)
\isdef \frac{1}{2\pi}\int_{-\infty}^\infty X(j\omega) e^{j\omega t} d\omega\\
&=& \frac{1}{2\pi}\int_{-\pi/T}^{\pi/T} T\,X_d(e^{j\omega T}) e^{j\omega t} d\omega
\isdef \hbox{\sc IDTFT}_t(X_d).
\end{eqnarray*}

By expanding $ X_d(e^{j\theta})$ as the DTFT of the samples $ x(n)$, the formula for reconstructing $ x(t)$ as a superposition of weighted sinc functions is obtained (depicted in Fig.D.1):

\begin{eqnarray*}
x(t) &=& \hbox{\sc IDTFT}_t(X_d) \\
&\isdef & \frac{1}{2\pi}\int_{-\pi/T}^{\pi/T}
T\left[\sum_{n=-\infty}^{\infty}x(nT) e^{-j\omega n T}\right] e^{j\omega t} d\omega \\
&=& \sum_{n=-\infty}^{\infty}x(nT)\,\frac{T}{2\pi}\int_{-\pi/T}^{\pi/T} e^{j\omega (t-nT)} d\omega \\
&=& \sum_{n=-\infty}^{\infty}x(nT)\, h(t-nT) \\
&\isdef & (x\ast h)(t)
\end{eqnarray*}

where we defined

\begin{eqnarray*}
h(t-nT) &\isdef & \frac{T}{2\pi} \int_{-\pi/T}^{\pi/T} e^{j\omega (t-nT)} d\omega
\;=\; \frac{\sin[\pi(t-nT)/T]}{\pi(t-nT)/T}\\
&=& \mbox{sinc}\left(\frac{t-nT}{T}\right) = \mbox{sinc}\left(\frac{t}{T}-n\right),
\end{eqnarray*}

or

$\displaystyle h(t) =$   sinc$\displaystyle \left(\frac{t}{T}\right),$   where    sinc$\displaystyle (t)\isdef \frac{\sin(\pi t)}{\pi t}.
$

The ``sinc function'' is defined with $ \pi $ in its argument so that it has zero crossings on the nonzero integers, and its peak magnitude is 1. Figure D.2 illustrates the appearance of the sinc function.

We have shown that when $ x(t)$ is bandlimited to less than half the sampling rate, the IFT of the zero-extended DTFT of its samples $ x(nT)$ gives back the original continuous-time signal $ x(t)$. This completes the proof of the sampling theorem.
$ \Box$

Conversely, if $ x(t)$ can be reconstructed from its samples $ x_d(n) \isdef x(nT)$, it must be true that $ x(t)$ is bandlimited to $ [-f_s/2,f_s/2]$, since a sampled signal only supports frequencies up to $ f_s/2$ (see §D.4 below). While a real digital signal $ x(n)$ may have energy at half the sampling rate (frequency $ f_s/2$), the phase is constrained to be either 0 or $ \pi $ there, which is why this frequency had to be excluded from the sampling theorem.

A one-line summary of the essence of the sampling-theorem proof is

$\displaystyle \zbox {x(t) = \hbox{\sc IFT}_t\left\{\hbox{\sc ZeroPad}_\infty\left\{\hbox{\sc DTFT}\{x_d\}\right\}\right\}}
$

where $ x_d \isdef \hbox{\sc Sample}_T(x)$.
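A discrete analog of this one-line summary can be demonstrated with the DFT: zero-padding the middle (high-frequency bins) of a signal's spectrum and inverse-transforming performs bandlimited interpolation. The following Python sketch is illustrative only (the naive DFT, the test signal, and the lengths $N$ and $M$ are arbitrary choices, and the simple bin split assumes no energy at bin $N/2$); it shows the interpolated signal passing exactly through the original samples:

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) DFT, adequate for a small demonstration."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def zero_pad_spectrum(X, M):
    """Insert zeros at the high (center) bins of a length-N DFT to get a
    length M*N spectrum -- the discrete analog of ZeroPad{DTFT{.}}.
    Assumes the signal has no energy at bin N/2."""
    N = len(X)
    half = N // 2
    return X[:half] + [0.0] * (N * (M - 1)) + X[half:]

N, M = 16, 4
x = [math.cos(2 * math.pi * 2 * n / N) for n in range(N)]  # 2 cycles in N samples
y = [M * v.real for v in idft(zero_pad_spectrum(dft(x), M))]  # gain M restores amplitude

# The length-M*N interpolated signal passes through the original samples.
assert all(abs(y[M * n] - x[n]) < 1e-9 for n in range(N))
```

The factor $M$ after the inverse transform compensates for the $1/(MN)$ normalization of the longer inverse DFT, mirroring the $T$ scaling in the continuous-time proof.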

The sampling theorem is easier to show when applied to sampling-rate conversion in discrete time, i.e., when simple downsampling of a discrete-time signal is used to reduce the sampling rate by an integer factor. In analogy with the continuous-time aliasing theorem of §D.2, the downsampling theorem (§7.4.11) states that downsampling a digital signal by an integer factor $ L$ produces a digital signal whose spectrum can be calculated by partitioning the original spectrum into $ L$ equal blocks and then summing (aliasing) those blocks. If only one of the blocks is nonzero, then the original signal at the higher sampling rate is exactly recoverable.
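The downsampling relation can be verified exactly with the DFT. In this Python sketch (the test sequence, $N$, and $L$ are arbitrary choices, and the $1/L$ normalization follows the convention in which the blocks are averaged), the DFT of the downsampled signal equals the aliased sum of $L$ spectral blocks of the original DFT:

```python
import cmath

def dft(x):
    """Naive O(N^2) DFT, adequate for a small demonstration."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N, L = 24, 3
x = [((n * 7) % 5) - 2.0 for n in range(N)]  # arbitrary real test sequence

X = dft(x)        # spectrum at the original rate (length N)
y = x[::L]        # downsample by L: keep every L-th sample
Y = dft(y)        # spectrum at the lower rate (length N/L)

# Downsampling theorem (DFT form): Y[k] = (1/L) sum_{m=0}^{L-1} X[k + m*N/L]
for k in range(N // L):
    blocks = sum(X[k + m * (N // L)] for m in range(L)) / L
    assert abs(Y[k] - blocks) < 1e-9
print("DFT of the downsampled signal equals the aliased sum of", L, "blocks")
```

Because the test sequence has energy in more than one block, the blocks genuinely overlap after summation here; recoverability would require all but one block to be zero.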


Appendix: Frequencies Representable by a Geometric Sequence

Consider $ z_0\in{\bf C}$, with $ \vert z_0\vert=1$. Then we can write $ z_0$ in polar form as

$\displaystyle z_0\isdef e^{j\theta_0 } \isdef e^{j\omega_0 T},
$

where $ \theta_0 $, $ \omega_0$, and $ T$ are real numbers.

Forming a geometric sequence based on $ z_0$ yields the sequence

$\displaystyle x(t_n) \isdef z_0^n = e^{j\theta_0 n} = e^{j\omega_0 t_n}
$

where $ t_n \isdef nT$. Thus, successive integer powers of $ z_0$ produce a sampled complex sinusoid with unit amplitude and zero phase. Defining the sampling interval as $ T$ seconds provides that $ \omega_0$ is the radian frequency in radians per second.
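This correspondence is immediate to check numerically. In the following Python sketch (the sampling interval and the 440 Hz frequency are arbitrary illustrative choices), successive powers of $z_0$ coincide with samples of the complex sinusoid, and the sequence stays on the unit circle:

```python
import cmath
import math

T = 0.001                      # sampling interval in seconds (arbitrary)
w0 = 2 * math.pi * 440.0       # radian frequency: 440 Hz (arbitrary)
z0 = cmath.exp(1j * w0 * T)    # unit-circle point with angular step w0*T

# Powers of z0 are samples of a unit-amplitude, zero-phase complex sinusoid.
for n in range(100):
    assert abs(z0 ** n - cmath.exp(1j * w0 * n * T)) < 1e-9

print(abs(z0))  # unit modulus: the sampled sinusoid never decays or grows
```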

A natural question to investigate is what frequencies $ \omega_0$ are possible. The angular step of the point $ z_0^n$ along the unit circle in the complex plane is $ \theta_0 =\omega_0 T$. Since $ e^{j(\theta_0 n + 2\pi)} = e^{j\theta_0 n}$, an angular step $ \theta_0 >2\pi$ is indistinguishable from the angular step $ \theta_0 -2\pi$. Therefore, we must restrict the angular step $ \theta_0 $ to a length-$ 2\pi $ range in order to avoid ambiguity.

Recall from §4.3.3 that we need support for both positive and negative frequencies since equal magnitudes of each must be summed to produce real sinusoids, as indicated by the identities

\begin{eqnarray*}
\cos(\omega_0 t_n) &=& \frac{1}{2}e^{j\omega_0 t_n} + \frac{1}{2}e^{-j\omega_0 t_n}\\
\sin(\omega_0 t_n) &=& -\frac{j}{2}e^{j\omega_0 t_n} + \frac{j}{2}e^{-j\omega_0 t_n}.
\end{eqnarray*}

The length $ 2\pi $ range which is symmetric about zero is

$\displaystyle \theta_0 \in [-\pi,\pi],
$

which, since $ \theta_0 =\omega_0 T= 2\pi f_0T$, corresponds to the frequency range

\begin{eqnarray*}
\omega_0 &\in& \left[-\frac{\pi}{T},\frac{\pi}{T}\right]\\
f_0&\in& \left[-\frac{f_s}{2},\frac{f_s}{2}\right].
\end{eqnarray*}

However, there is a problem with the point at $ f_0=\pm f_s/2$: Both $ +f_s/2$ and $ -f_s/2$ correspond to the same point $ z_0=-1$ in the $ z$-plane. Technically, this can be viewed as aliasing of the frequency $ -f_s/2$ on top of $ f_s/2$, or vice versa. The practical impact is that it is not possible in general to reconstruct a sinusoid from its samples at this frequency. For an obvious example, consider the sinusoid at half the sampling rate sampled on its zero-crossings: $ \sin(\omega_0 t_n) = \sin(\pi n) \equiv 0$. We cannot be expected to reconstruct a nonzero signal from a sequence of zeros! For the signal $ \cos(\omega_0 t_n) = \cos(\pi n) = (-1)^n$, on the other hand, we sample the positive and negative peaks, and everything looks fine. In general, we either do not know the amplitude, or we do not know the phase, of a sinusoid sampled at exactly twice its frequency, and if we hit the zero crossings, we lose it altogether.
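Both cases are easy to reproduce numerically (a Python sketch; the 48 kHz sampling rate is an arbitrary choice):

```python
import math

fs = 48000.0
T = 1.0 / fs
f0 = fs / 2                    # sinusoid at exactly half the sampling rate

# Sine at fs/2 is sampled on its zero crossings: every sample is 0.
sin_samples = [math.sin(2 * math.pi * f0 * n * T) for n in range(8)]
assert all(abs(s) < 1e-9 for s in sin_samples)

# Cosine at fs/2 is sampled on its peaks: the samples are (-1)^n.
cos_samples = [math.cos(2 * math.pi * f0 * n * T) for n in range(8)]
assert all(abs(c - (-1) ** n) < 1e-9 for n, c in enumerate(cos_samples))
```

Any other phase at $f_s/2$ produces samples $c(-1)^n$ with $|c|<1$, so amplitude and phase cannot be separated from the samples alone.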

In view of the foregoing, we may define the valid range of ``digital frequencies'' to be

\begin{eqnarray*}
\theta_0 &\in& [-\pi,\pi) \\
\omega_0 &\in& [-\pi/T,\pi/T) \\
f_0&\in& [-f_s/2,f_s/2).
\end{eqnarray*}

While one might have expected the open interval $ (-\pi,\pi)$, we are free to give the point $ z_0=-1$ either the largest positive or largest negative representable frequency. Here, we chose the largest negative frequency since it corresponds to the assignment of numbers in two's complement arithmetic, which is often used to implement phase registers in sinusoidal oscillators. Since there is no corresponding positive-frequency component, samples at $ f_s/2$ must be interpreted as samples of clockwise circular motion around the unit circle at two points per revolution. Such signals appear as an alternating sequence of the form $ c(-1)^n$, where $ c$ can be complex. The amplitude at $ -f_s/2$ is then defined as $ \vert c\vert$, and the phase is $ \angle c$.

We have seen that uniformly spaced samples can represent frequencies $ f_0$ only in the range $ [-f_s/2,f_s/2)$, where $ f_s$ denotes the sampling rate. Frequencies outside this range yield sampled sinusoids indistinguishable from frequencies inside the range.

Suppose we henceforth agree to sample at a rate higher than twice the highest frequency in our continuous-time signal. This is normally ensured in practice by lowpass-filtering the input signal to remove all signal energy at $ f_s/2$ and above. Such a filter is called an anti-aliasing filter, and it is a standard first stage in an Analog-to-Digital Converter (ADC). Nowadays, ADCs are normally implemented within a single integrated circuit chip, such as a CODEC (for ``coder/decoder'') or ``multimedia chip''.

