Selected Continuous Fourier Theorems
This section presents continuous-time Fourier theorems that go beyond obvious analogs of the DTFT theorems proved in §2.3 above. The differentiation theorem comes up quite often, and its dual pertains as well to the DTFT. The scaling theorem provides an important basic insight into time-frequency duality. The Poisson Summation Formula (PSF) in continuous time extends the discrete-time version presented in §8.3.1. Finally, the extremely fundamental uncertainty principle is derived from the scaling theorem.
Radians versus Cycles
Our usual frequency variable $\omega$ is in radians per second. However, certain Fourier theorems are undeniably simpler and more elegant when the frequency variable is chosen to be $f$ in cycles per second (Hz). The two are of course related by
$\omega = 2\pi f.$   (B.1)
As an example, $e^{j\omega t}$ is more compact than $e^{j2\pi f t}$. On the other hand, it is nice to get rid of all normalization constants in the Fourier transform and its inverse:
$X(f) \triangleq \int_{-\infty}^{\infty} x(t)\,e^{-j2\pi f t}\,dt$   (B.2)
$x(t) = \int_{-\infty}^{\infty} X(f)\,e^{j2\pi f t}\,df$   (B.3)
The ``editorial policy'' for this book is this: Generally, $\omega$ is preferred, but $f$ is used when considerable simplification results.
Differentiation Theorem
Let $x(t)$ denote a function differentiable for all $t$ such that $x(\pm\infty) = 0$ and the Fourier transforms (FT) of both $x(t)$ and $\dot{x}(t)$ exist, where $\dot{x}(t)$ denotes the time derivative of $x(t)$. Then we have
$\dot{x}(t) \;\leftrightarrow\; j\omega\,X(\omega)$   (B.4)
where $X(\omega)$ denotes the Fourier transform of $x(t)$. In operator notation:
$\mathcal{FT}_\omega\{\dot{x}\} = j\omega\,X(\omega).$   (B.5)
Proof:
This follows immediately from integration by parts:
$\mathcal{FT}_\omega\{\dot{x}\} = \int_{-\infty}^{\infty}\dot{x}(t)\,e^{-j\omega t}\,dt = \left[x(t)\,e^{-j\omega t}\right]_{-\infty}^{\infty} + j\omega\int_{-\infty}^{\infty}x(t)\,e^{-j\omega t}\,dt = j\omega\,X(\omega),$
since $x(\pm\infty) = 0$.
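As a quick worked example, take the standard pair $e^{-t^2/2} \leftrightarrow \sqrt{2\pi}\,e^{-\omega^2/2}$ (a unit-variance Gaussian, cf. §B.11 below). The differentiation theorem then gives, with no further computation,
$\frac{d}{dt}\,e^{-t^2/2} = -t\,e^{-t^2/2} \;\leftrightarrow\; j\omega\,\sqrt{2\pi}\,e^{-\omega^2/2},$
which can also be checked against the dual theorem of the next section.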
Differentiation Theorem Dual
Theorem: Let $x(t)$ denote a signal with Fourier transform $X(\omega)$, and let
$X'(\omega) \triangleq \frac{d}{d\omega}X(\omega)$   (B.6)
denote the derivative of $X(\omega)$ with respect to $\omega$. Then we have
$-jt\,x(t) \;\leftrightarrow\; X'(\omega)$   (B.7)
where $X'(\omega)$ denotes the Fourier transform of $-jt\,x(t)$.
Proof:
We can show this by direct differentiation of the definition of the
Fourier transform:
$X'(\omega) = \frac{d}{d\omega}\int_{-\infty}^{\infty}x(t)\,e^{-j\omega t}\,dt = \int_{-\infty}^{\infty}(-jt)\,x(t)\,e^{-j\omega t}\,dt = \mathcal{FT}_\omega\{-jt\,x(t)\}.$
An alternate method of proof is given in §2.3.13.
The transform-pair may be alternately stated as follows:
$t\,x(t) \;\leftrightarrow\; j\,X'(\omega)$   (B.8)
Scaling Theorem
The scaling theorem (or similarity theorem) states that if you horizontally ``stretch'' a signal by the factor $\alpha$ in the time domain, you ``squeeze'' and amplify its Fourier transform by the same factor in the frequency domain. This is an important general Fourier duality relationship.
Theorem: For all continuous-time functions $x(t)$ possessing a Fourier transform,
$\mathrm{Stretch}_\alpha(x) \;\leftrightarrow\; |\alpha|\,X(\alpha\omega)$   (B.9)
where
$\mathrm{Stretch}_{\alpha,t}(x) \triangleq x\!\left(\frac{t}{\alpha}\right)$   (B.10)
and $\alpha$ is any nonzero real number (the abscissa stretch factor). A more commonly used notation is the following:
$x(\alpha t) \;\leftrightarrow\; \frac{1}{|\alpha|}\,X\!\left(\frac{\omega}{\alpha}\right)$   (B.11)
Proof:
Taking the Fourier transform of the stretched signal gives
$\mathcal{FT}_\omega\{\mathrm{Stretch}_\alpha(x)\} = \int_{-\infty}^{\infty} x\!\left(\frac{t}{\alpha}\right)e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} x(\tau)\,e^{-j\omega\alpha\tau}\,|\alpha|\,d\tau = |\alpha|\,X(\alpha\omega),$
where we substituted $\tau = t/\alpha$. The absolute value appears above because, when $\alpha < 0$, the substitution reverses the limits of integration, which brings out a minus sign in front of the integral from $-\infty$ to $\infty$; combined with $\alpha$, this gives $|\alpha|$.
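For example (an illustrative special case), setting $\alpha = 2$ in (B.9) stretches $x$ to twice its original duration and gives $x(t/2) \leftrightarrow 2\,X(2\omega)$: every spectral feature moves to half its original frequency while the spectral amplitude doubles. Consistently, $x(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty}X(\omega)\,d\omega$ is unchanged, as it must be, since stretching about $t=0$ does not alter $x(0)$.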
Shift Theorem
The shift theorem for Fourier transforms states that delaying a signal $x(t)$ by $\Delta$ seconds multiplies its Fourier transform by $e^{-j\omega\Delta}$.
Proof:
$\mathcal{FT}_\omega\{x(\cdot-\Delta)\} = \int_{-\infty}^{\infty} x(t-\Delta)\,e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} x(\tau)\,e^{-j\omega(\tau+\Delta)}\,d\tau = e^{-j\omega\Delta}\,X(\omega).$
Thus,
$x(t-\Delta) \;\leftrightarrow\; e^{-j\omega\Delta}X(\omega).$   (B.12)
Modulation Theorem (Shift Theorem Dual)
The Fourier dual of the shift theorem is often called the modulation theorem:
$e^{j\nu t}\,x(t) \;\leftrightarrow\; X(\omega - \nu)$   (B.13)
This is proved in the same way as the shift theorem above by starting with the inverse Fourier transform of the right-hand side:
$\mathcal{FT}^{-1}_t\{X(\cdot-\nu)\} = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega-\nu)\,e^{j\omega t}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega')\,e^{j(\omega'+\nu)t}\,d\omega' = e^{j\nu t}\,x(t),$
or, in operator notation,
$\mathcal{FT}_\omega\{e^{j\nu t}\,x(t)\} = X(\omega - \nu).$   (B.14)
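A commonly used corollary (for real modulating sinusoids) follows by writing $\cos(\nu t) = \frac{1}{2}\left(e^{j\nu t} + e^{-j\nu t}\right)$ and applying (B.13) twice:
$x(t)\cos(\nu t) \;\leftrightarrow\; \frac{1}{2}X(\omega - \nu) + \frac{1}{2}X(\omega + \nu).$
That is, multiplying a signal by a sinusoid at frequency $\nu$ rad/s splits its spectrum into two half-amplitude copies, shifted up and down by $\nu$.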
Convolution Theorem
The convolution theorem for Fourier transforms states that convolution in the time domain equals multiplication in the frequency domain. The continuous-time convolution of two signals $x(t)$ and $y(t)$ is defined by
$(x * y)(t) \triangleq \int_{-\infty}^{\infty} x(\tau)\,y(t-\tau)\,d\tau.$   (B.15)
The Fourier transform is then
$\mathcal{FT}_\omega\{x*y\} = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty}x(\tau)\,y(t-\tau)\,d\tau\right]e^{-j\omega t}\,dt = \int_{-\infty}^{\infty}x(\tau)\left[\int_{-\infty}^{\infty}y(t-\tau)\,e^{-j\omega t}\,dt\right]d\tau = \int_{-\infty}^{\infty}x(\tau)\,e^{-j\omega\tau}\,Y(\omega)\,d\tau = X(\omega)\,Y(\omega)$
(using the shift theorem in the third step), or,
$(x*y) \;\leftrightarrow\; X\cdot Y.$   (B.16)
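As an illustrative numerical sketch (using the DFT rather than the continuous-time transform, with arbitrary test lengths), the same convolution-multiplication duality can be checked with NumPy; zero-padding makes the DFT's circular convolution coincide with linear convolution:

import numpy as np

# Minimal sketch: convolution in the time domain vs. multiplication in the
# frequency domain (DFT version, zero-padded to avoid circular wrap-around).
rng = np.random.default_rng(0)
x = rng.standard_normal(50)
y = rng.standard_normal(30)

N = len(x) + len(y) - 1                 # length of the linear convolution
time_domain = np.convolve(x, y)
freq_domain = np.fft.irfft(np.fft.rfft(x, N) * np.fft.rfft(y, N), N)

print(np.max(np.abs(time_domain - freq_domain)))   # should be ~1e-14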
Exercise: Show that
$x\cdot y \;\leftrightarrow\; \frac{1}{2\pi}\,X * Y$   (B.17)
when frequency-domain convolution is defined by
$(X*Y)(\omega) \triangleq \int_{-\infty}^{\infty} X(\nu)\,Y(\omega-\nu)\,d\nu,$   (B.18)
where $\omega$ and $\nu$ are in radians per second, and that
$x\cdot y \;\leftrightarrow\; X * Y$   (B.19)
when frequency-domain convolution is defined by
$(X*Y)(f) \triangleq \int_{-\infty}^{\infty} X(\nu)\,Y(f-\nu)\,d\nu,$   (B.20)
with $f$ and $\nu$ in Hertz.
Flip Theorems
Let the flip operator be denoted by
$\mathrm{Flip}_t(x) \triangleq x(-t), \qquad \mathrm{Flip}_\omega(X) \triangleq X(-\omega),$
where $t$ denotes time in seconds, and $\omega$ denotes frequency in radians per second. The following Fourier pairs are easily verified:
$\mathrm{Flip}(x) \;\leftrightarrow\; \mathrm{Flip}(X), \quad\text{i.e.,}\quad x(-t) \;\leftrightarrow\; X(-\omega)$
$\overline{x} \;\leftrightarrow\; \mathrm{Flip}(\overline{X}), \quad\text{i.e.,}\quad \overline{x(t)} \;\leftrightarrow\; \overline{X(-\omega)}$
$\mathrm{Flip}(\overline{x}) \;\leftrightarrow\; \overline{X}, \quad\text{i.e.,}\quad \overline{x(-t)} \;\leftrightarrow\; \overline{X(\omega)}$
The proof of the first relation is as follows:
$\mathcal{FT}_\omega\{\mathrm{Flip}(x)\} = \int_{-\infty}^{\infty} x(-t)\,e^{-j\omega t}\,dt = \int_{-\infty}^{\infty} x(\tau)\,e^{j\omega\tau}\,d\tau = X(-\omega) = \mathrm{Flip}_\omega(X).$
Power Theorem
The power theorem for Fourier transforms states that the inner product of two signals in the time domain equals their inner product in the frequency domain.
The inner product of two spectra $X(\omega)$ and $Y(\omega)$ may be defined as
$\langle X, Y\rangle \triangleq \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\,\overline{Y(\omega)}\,d\omega.$   (B.21)
This expression can be interpreted as the inverse Fourier transform of $X\cdot\overline{Y}$ evaluated at $t = 0$:
$\langle X, Y\rangle = \left.\mathcal{FT}^{-1}_t\{X\cdot\overline{Y}\}\right|_{t=0}$   (B.22)
By the convolution theorem (§B.7) and flip theorem (§B.8),
$\mathcal{FT}^{-1}_t\{X\cdot\overline{Y}\} = \left(x * \mathcal{FT}^{-1}\{\overline{Y}\}\right)(t) = \left(x * \mathrm{Flip}(\overline{y})\right)(t) = \int_{-\infty}^{\infty} x(\tau)\,\overline{y(\tau - t)}\,d\tau,$   (B.23)
which at $t = 0$ gives
$\int_{-\infty}^{\infty} x(\tau)\,\overline{y(\tau)}\,d\tau = \langle x, y\rangle.$   (B.24)
Thus,
$\langle x, y\rangle = \langle X, Y\rangle.$   (B.25)
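In particular, setting $y = x$ in (B.25) gives the energy form of the theorem (Parseval/Rayleigh) as a special case:
$\int_{-\infty}^{\infty}\left|x(t)\right|^2\,dt \;=\; \frac{1}{2\pi}\int_{-\infty}^{\infty}\left|X(\omega)\right|^2\,d\omega,$
i.e., signal energy may be computed in either domain.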
The Continuous-Time Impulse
An impulse in continuous time must have ``zero width'' and unit area under it. One definition is
$\delta(t) \triangleq \lim_{\Delta\to 0}\begin{cases}\frac{1}{\Delta}, & |t| \le \frac{\Delta}{2} \\ 0, & \text{otherwise.}\end{cases}$
An impulse can be similarly defined as the limit of any pulse shape which maintains unit area and approaches zero width at time 0 [150]. As a result, the impulse under every definition has the so-called sifting property under integration,
$\int_{-\infty}^{\infty}\delta(t)\,f(t)\,dt = f(0),$
provided $f(t)$ is continuous at $t = 0$. This is often taken as the defining property of an impulse, allowing it to be defined in terms of non-vanishing function limits such as
$\delta(t) \triangleq \lim_{\Omega\to\infty}\frac{\sin(\Omega t)}{\pi t}.$   (B.28)
(Note, incidentally, that $\frac{\sin(\Omega t)}{\pi t}$ is in $L_2$ but not $L_1$.)
An impulse is not a function in the usual sense, so it is called instead a distribution or generalized function [36,150]. (It is still commonly called a ``delta function'', however, despite the misnomer.)
Gaussian Pulse
The Gaussian pulse of width $\sigma$ (second central moment $\sigma^2$) centered on time 0 may be defined by
$p_\sigma(t) \triangleq \frac{1}{\sigma\sqrt{2\pi}}\,e^{-t^2/(2\sigma^2)},$   (B.29)
where the normalization scale factor is chosen to give unit area under the pulse. Its Fourier transform is derived in Appendix D to be
$P_\sigma(\omega) = e^{-\frac{\sigma^2\omega^2}{2}}.$   (B.30)
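Since (B.30) is in closed form, it is easy to sanity-check numerically. The following sketch (illustrative only; the grid spacing, truncation, width, and test frequencies are arbitrary choices) approximates the continuous Fourier transform of $p_\sigma$ by a Riemann sum and compares it with $e^{-\sigma^2\omega^2/2}$:

import numpy as np

# Approximate P(w) = integral p(t) exp(-j w t) dt by a Riemann sum and
# compare with the closed form exp(-sigma^2 w^2 / 2).
sigma = 1.5
dt = 0.001
t = np.arange(-20.0, 20.0, dt)                # time grid (seconds)
p = np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

w = np.linspace(-10.0, 10.0, 201)             # test frequencies (rad/s)
P_num = np.array([np.sum(p * np.exp(-1j * wk * t)) * dt for wk in w])
P_exact = np.exp(-(sigma**2) * w**2 / 2)

print(np.max(np.abs(P_num - P_exact)))        # should be very small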
Rectangular Pulse
The rectangular pulse of width $\alpha$ centered on time 0 may be defined by
$p_\alpha(t) \triangleq \begin{cases}1, & |t| \le \frac{\alpha}{2} \\ 0, & \text{otherwise.}\end{cases}$   (B.31)
Its Fourier transform is easily evaluated:
$P_\alpha(\omega) = \int_{-\alpha/2}^{\alpha/2} e^{-j\omega t}\,dt = \left.\frac{e^{-j\omega t}}{-j\omega}\right|_{-\alpha/2}^{\alpha/2} = \frac{2\sin(\alpha\omega/2)}{\omega}.$
Thus, we have derived the Fourier pair
$p_\alpha(t) \;\leftrightarrow\; \frac{\sin(\alpha\omega/2)}{\omega/2}.$   (B.32)
Note that $\mathrm{sinc}(f) \triangleq \frac{\sin(\pi f)}{\pi f}$ is the Fourier transform of the one-second rectangular pulse:
$p_1(t) \;\leftrightarrow\; \mathrm{sinc}(f)$   (B.33)
From this, the scaling theorem implies the more general case:
$p_\alpha(t) \;\leftrightarrow\; \alpha\,\mathrm{sinc}(\alpha f)$   (B.34)
Sinc Impulse
The preceding Fourier pair can be used to show that
$\lim_{\alpha\to\infty}\alpha\,\mathrm{sinc}(\alpha t) = \delta(t).$   (B.35)
Proof: The inverse Fourier transform of $\alpha\,\mathrm{sinc}(\alpha f)$ is
$\int_{-\infty}^{\infty}\alpha\,\mathrm{sinc}(\alpha f)\,e^{j2\pi f t}\,df = p_\alpha(t).$
In particular, in the middle of the rectangular pulse at $t = 0$, we have
$p_\alpha(0) = \int_{-\infty}^{\infty}\alpha\,\mathrm{sinc}(\alpha f)\,df = 1.$   (B.36)
This establishes that the algebraic area under $\alpha\,\mathrm{sinc}(\alpha t)$ is 1 for every $\alpha > 0$. Every delta function (impulse) must have this property.
We now show that $\alpha\,\mathrm{sinc}(\alpha t)$ also satisfies the sifting property in the limit as $\alpha\to\infty$. This property fully establishes the limit as a valid impulse. That is, an impulse is any function $\delta(t)$ having the property that
$\int_{-\infty}^{\infty}\delta(t)\,f(t)\,dt = f(0)$   (B.37)
for every continuous function $f(t)$. In the present case, we need to show, specifically, that
$\lim_{\alpha\to\infty}\int_{-\infty}^{\infty}\alpha\,\mathrm{sinc}(\alpha t)\,f(t)\,dt = f(0).$   (B.38)
Define $g_\alpha(t) \triangleq \alpha\,\mathrm{sinc}(\alpha t)$, whose Fourier transform is the rectangular pulse $G_\alpha(\nu) = p_\alpha(\nu)$ in the frequency domain. Then by the power theorem (§B.9),
$\int_{-\infty}^{\infty} g_\alpha(t)\,f(t)\,dt = \int_{-\infty}^{\infty} G_\alpha(\nu)\,F(-\nu)\,d\nu = \int_{-\alpha/2}^{\alpha/2} F(\nu)\,d\nu.$   (B.39)
Then as $\alpha\to\infty$, the limit converges to the algebraic area under $F(\nu)$, which is $f(0)$ as desired:
$\lim_{\alpha\to\infty}\int_{-\infty}^{\infty} g_\alpha(t)\,f(t)\,dt = \int_{-\infty}^{\infty} F(\nu)\,d\nu = \left.\int_{-\infty}^{\infty} F(\nu)\,e^{j2\pi\nu t}\,d\nu\right|_{t=0} = f(0).$   (B.40)
We have thus established that
$\lim_{\alpha\to\infty} g_\alpha(t) = \delta(t),$   (B.41)
where
$g_\alpha(t) \triangleq \alpha\,\mathrm{sinc}(\alpha t) = \frac{\sin(\pi\alpha t)}{\pi t}.$   (B.42)
For related discussion, see [36, p. 127].
Impulse Trains
The impulse signal $\delta(t)$ (defined in §B.10) has a constant Fourier transform:
$\mathcal{FT}_\omega\{\delta\} = \int_{-\infty}^{\infty}\delta(t)\,e^{-j\omega t}\,dt = 1, \quad \forall\,\omega.$   (B.43)
An impulse train can be defined as a sum of shifted impulses:
$\psi_P(t) \triangleq \sum_{m=-\infty}^{\infty}\delta(t - mP).$   (B.44)
Here, $P > 0$ is the period of the impulse train, in seconds--i.e., the spacing between successive impulses. The $P$-periodic impulse train can also be defined as
$\psi_P(t) = Ш_P(t),$   (B.45)
where $Ш_P(\cdot)$ is the so-called shah symbol [23], scaled to period $P$:
$Ш_P(t) \triangleq \frac{1}{P}\sum_{m=-\infty}^{\infty}\delta\!\left(\frac{t}{P} - m\right) = \sum_{m=-\infty}^{\infty}\delta(t - mP).$   (B.46)
Note that the scaling by $1/P$ in (B.46) is necessary to maintain unit area under each impulse, since $\delta(t/P - m) = P\,\delta(t - mP)$.
We will now show that
$Ш(t) \;\leftrightarrow\; Ш(f).$   (B.47)
That is, the Fourier transform of the normalized (unit-period) impulse train $Ш(t) \triangleq Ш_1(t) = \sum_m\delta(t-m)$ is exactly the same impulse train in the frequency domain, where $t$ denotes time in seconds and $f$ denotes frequency in Hz. By the scaling theorem (§B.4),
$Ш\!\left(\frac{t}{P}\right) \;\leftrightarrow\; P\,Ш(Pf),$   (B.48)
so that the $P$-periodic impulse-train defined in (B.46) transforms to
$\mathcal{FT}_f\{Ш_P\} = \mathcal{FT}_f\!\left\{\frac{1}{P}\,Ш\!\left(\frac{\cdot}{P}\right)\right\} = Ш(Pf) = \sum_{k=-\infty}^{\infty}\delta(Pf - k) = \frac{1}{P}\sum_{k=-\infty}^{\infty}\delta\!\left(f - \frac{k}{P}\right).$
Thus, the $P$-periodic impulse train transforms to a $(1/P)$-periodic impulse train, in which each impulse contains area $1/P$:
$Ш_P(t) = \sum_{m=-\infty}^{\infty}\delta(t - mP) \;\;\leftrightarrow\;\; \frac{1}{P}\sum_{k=-\infty}^{\infty}\delta\!\left(f - \frac{k}{P}\right) = \frac{1}{P}\,Ш_{1/P}(f).$   (B.49)
Proof:
Let's set up a limiting construction by defining the truncated impulse train
$\psi_P^{(M)}(t) \triangleq \sum_{m=-(M-1)/2}^{(M-1)/2}\delta(t - mP)$   (B.50)
(taking $M$ odd for simplicity), so that $\psi_P(t) = \lim_{M\to\infty}\psi_P^{(M)}(t)$. We may interpret $\psi_P^{(M)}$ as a sampled rectangular pulse of width $MP$ seconds (yielding $M$ samples). By linearity of the Fourier transform and the shift theorem (§B.5), we readily obtain the transform of $\psi_P^{(M)}$ to be
$\Psi_P^{(M)}(f) = \sum_{m=-(M-1)/2}^{(M-1)/2} e^{-j2\pi f m P}.$
Using the closed form of a geometric series,
$\sum_{m=0}^{M-1} z^m = \frac{1 - z^M}{1 - z},$   (B.51)
with $z = e^{-j2\pi f P}$ (after factoring out the linear phase term $z^{-(M-1)/2}$), we can write this as
$\Psi_P^{(M)}(f) = \frac{\sin(\pi f M P)}{\sin(\pi f P)} = M\,\mathrm{asinc}_M(2\pi f P),$
where we have used the definition of $\mathrm{asinc}_M(\cdot)$ given in Eq. (3.5) of §3.1. As we would expect from basic sampling theory, the Fourier transform of the sampled rectangular pulse is an aliased sinc function. Figure 3.2 illustrates one period of this transform.
The proof can be completed by expressing the aliased sinc function as a sum of regular sinc functions, and using linearity to distribute the limit over the sum, converting each sinc function into an impulse, in the limit:
$M\,\mathrm{asinc}_M(2\pi f P) = \frac{\sin(\pi f M P)}{\sin(\pi f P)} = \sum_{k=-\infty}^{\infty}\frac{\sin\!\left[\pi(f - k/P)MP\right]}{\pi\left(f - k/P\right)P} \;\longrightarrow\; \frac{1}{P}\sum_{k=-\infty}^{\infty}\delta\!\left(f - \frac{k}{P}\right)$
by §B.13. Note that near $f = 0$, we have
$M\,\mathrm{asinc}_M(2\pi f P) = \frac{\sin(\pi f M P)}{\sin(\pi f P)} \approx \frac{\sin(\pi f M P)}{\pi f P} \;\to\; \frac{1}{P}\,\delta(f)$
as $M\to\infty$, as shown in §B.13. Similarly, near $f = k/P$, for any integer $k$, we have
$M\,\mathrm{asinc}_M(2\pi f P) \approx \frac{\sin\!\left[\pi(f - k/P)MP\right]}{\pi\left(f - k/P\right)P} \;\to\; \frac{1}{P}\,\delta\!\left(f - \frac{k}{P}\right)$   (B.52)
as $M\to\infty$. Finally, we expect that the limit for non-integer $fP$ can be neglected since
$\left|M\,\mathrm{asinc}_M(2\pi f P)\right| = \left|\frac{\sin(\pi f M P)}{\sin(\pi f P)}\right| \le \frac{1}{\left|\sin(\pi f P)\right|} < \infty$   (B.53)
whenever $f \ne k/P$, where $k$ is any integer, as implied by §B.13.
See, e.g., [23,79] for more about impulses and their application in Fourier analysis and linear systems theory.
Exercise: Using a similar limiting construction as before,
(B.54) |
show that a direct inverse-Fourier transform calculation gives
(B.55) |
and verify that the peaks occur every seconds and reach height . Also show that the peak widths, measured between zero crossings, are , so that the area under each peak is of order 1 in the limit as . [Hint: The shift theorem for inverse Fourier transforms is , and .]
Poisson Summation Formula
As shown in §B.14 above, the Fourier transform of an impulse train is an impulse train with inversely proportional spacing:
$\psi_P(t) \;\leftrightarrow\; \frac{1}{P}\,\psi_{1/P}(f),$   (B.56)
where
$\psi_P(t) \triangleq \sum_{m=-\infty}^{\infty}\delta(t - mP).$   (B.57)
Using this Fourier theorem, we can derive the continuous-time PSF using the convolution theorem for Fourier transforms:B.1
$(x * \psi_P)(t) = \sum_{m=-\infty}^{\infty} x(t - mP) \;\;\leftrightarrow\;\; X(f)\cdot\frac{1}{P}\,\psi_{1/P}(f) = \frac{1}{P}\sum_{k=-\infty}^{\infty} X\!\left(\frac{k}{P}\right)\delta\!\left(f - \frac{k}{P}\right).$   (B.58)
Using linearity and the shift theorem for inverse Fourier transforms, the above relation yields
$\sum_{m=-\infty}^{\infty} x(t - mP) = \mathcal{FT}^{-1}_t\!\left\{\frac{1}{P}\sum_{k=-\infty}^{\infty} X\!\left(\frac{k}{P}\right)\delta\!\left(f - \frac{k}{P}\right)\right\} = \frac{1}{P}\sum_{k=-\infty}^{\infty} X\!\left(\frac{k}{P}\right) e^{j2\pi k t/P}.$   (B.59)
We have therefore shown the continuous-time Poisson Summation Formula:
$\sum_{m=-\infty}^{\infty} x(t - mP) \;=\; \frac{1}{P}\sum_{k=-\infty}^{\infty} X\!\left(\frac{k}{P}\right) e^{j2\pi k t/P}.$   (B.60)
Compare this result to Eq. (8.30). The left-hand side of (B.60) can be interpreted as $\tilde{x}(t) \triangleq \sum_m x(t - mP)$, i.e., the time-alias of $x$ on a block of length $P$. The function $\tilde{x}(t)$ is periodic with period $P$ seconds. The right-hand side of (B.60) can be interpreted as the inverse Fourier series of $X$ sampled at intervals of $1/P$ Hz. This sampling of $X$ in the frequency domain corresponds to the aliasing of $x$ in the time domain.
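Because both sides of (B.60) converge very quickly for a Gaussian, the PSF is easy to check numerically. The sketch below (illustrative only; the particular $\sigma$, $P$, $t$, and truncation limits are arbitrary choices) uses the Gaussian pulse of §B.11, whose transform in Hz is $X(f) = e^{-2\pi^2 f^2\sigma^2}$:

import numpy as np

# Compare the time-aliased Gaussian (left side of B.60) with the sampled
# Fourier series reconstruction (right side of B.60) at one test time.
sigma, P = 0.7, 1.0
t = 0.3                                   # arbitrary test time (seconds)
m = np.arange(-200, 201)
k = np.arange(-200, 201)

lhs = np.sum(np.exp(-(t - m*P)**2 / (2*sigma**2)) / (sigma*np.sqrt(2*np.pi)))
rhs = np.real(np.sum(np.exp(-2 * np.pi**2 * (k/P)**2 * sigma**2)
                     * np.exp(2j*np.pi*k*t/P)) / P)
print(lhs, rhs)                           # the two sums should agree closely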
Sampling Theory
The dual of the Poisson Summation Formula is the continuous-time aliasing theorem, which lies at the foundation of elementary sampling theory [264, Appendix G]. If $x(t)$ denotes a continuous-time signal, its sampled version $x(nT)$, $n \in \mathbb{Z}$, is associated with the continuous-time signal
$x_d(t) \triangleq \sum_{n=-\infty}^{\infty} x(nT)\,\delta(t - nT),$   (B.61)
where $T$ denotes the (fixed) sampling interval in seconds. The sampled signal values are thus treated mathematically as coefficients of impulses at the sampling instants. Taking the Fourier transform gives
$X_d(\omega) = \sum_{n=-\infty}^{\infty} x(nT)\,e^{-j\omega nT} = \frac{1}{T}\sum_{k=-\infty}^{\infty} X(\omega - k\Omega_s),$
where $\Omega_s \triangleq 2\pi/T$ denotes the sampling rate in radians per second. Note that $X_d(\omega)$ is periodic with period $\Omega_s$. We see that if $x(t)$ is bandlimited to less than $\Omega_s/2$ radians per second, i.e., if $X(\omega) = 0$ for all $|\omega| \ge \Omega_s/2$, then only the $k = 0$ term will be nonzero in the summation over $k$, and this means there is no aliasing. The terms for $k \ne 0$ are all aliasing terms.
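As a simple, standard illustration of the aliasing terms, consider a sinusoid above half the sampling rate, $x(t) = \cos(\omega_0 t)$ with $\omega_0 = \Omega_s - \omega_1$ and $0 < \omega_1 < \Omega_s/2$. Its samples are
$x(nT) = \cos\!\left[(\Omega_s - \omega_1)\,nT\right] = \cos\!\left(2\pi n - \omega_1 nT\right) = \cos(\omega_1 nT),$
identical to the samples of a sinusoid at the lower frequency $\omega_1$; the spectral lines at $\pm\omega_0$ appear in $X_d(\omega)$ via the $k = \pm 1$ terms of the sum, folded to $\mp\omega_1$.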
The Uncertainty Principle
The uncertainty principle (for Fourier transform pairs) follows immediately from the scaling theorem (§B.4). It may be loosely stated as
Time Duration $\times$ Frequency Bandwidth $\;\ge\; c,$
where $c$ is some constant determined by the precise definitions of ``duration'' in the time domain and ``bandwidth'' in the frequency domain.
If duration and bandwidth are defined as the length of the signal's ``nonzero interval,'' then the time-bandwidth product is infinite for every nonzero signal (a time-limited signal cannot also be bandlimited), which is not useful. This conclusion follows immediately from the definition of the Fourier transform and its inverse (§2.2).
Duration and Bandwidth as Second Moments
More interesting definitions of duration and bandwidth are obtained
using the normalized second moments of the squared magnitude:
$\Delta t \triangleq \sqrt{\frac{1}{E}\int_{-\infty}^{\infty} t^2\,\left|x(t)\right|^2\,dt}, \qquad \Delta \omega \triangleq \sqrt{\frac{1}{E}\,\frac{1}{2\pi}\int_{-\infty}^{\infty} \omega^2\,\left|X(\omega)\right|^2\,d\omega},$   (B.62)
where
$E \triangleq \int_{-\infty}^{\infty}\left|x(t)\right|^2\,dt.$
By the power theorem (§B.9; cf. the DTFT version in §2.3.8), we have $E = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left|X(\omega)\right|^2 d\omega$. Note that writing ``$\Delta t$'' and ``$\Delta\omega$'' is an abuse of notation, but a convenient one. These duration/bandwidth definitions are routinely used in physics, e.g., in connection with the Heisenberg uncertainty principle [59]. Under these definitions, we have the following theorem [202, p. 273-274]:
Theorem: If $\sqrt{t}\,x(t) \to 0$ as $|t|\to\infty$, then
$\Delta t \cdot \Delta\omega \;\ge\; \frac{1}{2},$   (B.63)
with equality if and only if
$x(t) = A\,e^{-\alpha t^2}$
for some constants $A$ and $\alpha > 0$.
That is, only the Gaussian function (also known as the ``bell curve'' or ``normal curve'') achieves the lower bound on the time-bandwidth product.
Proof: Without loss of generality, we may consider $x(t)$ to be real and normalized to have unit $L_2$ norm ($\|x\|_2 = 1$). From the Schwarz inequality [264],B.2
$\left|\int_{-\infty}^{\infty} t\,x(t)\,\dot{x}(t)\,dt\right|^2 \;\le\; \int_{-\infty}^{\infty} t^2\,x^2(t)\,dt \,\cdot \int_{-\infty}^{\infty} \dot{x}^2(t)\,dt.$   (B.64)
The left-hand side can be evaluated using integration by parts:
$\int_{-\infty}^{\infty} t\,x(t)\,\dot{x}(t)\,dt = \frac{1}{2}\left[t\,x^2(t)\right]_{-\infty}^{\infty} - \frac{1}{2}\int_{-\infty}^{\infty} x^2(t)\,dt = -\frac{1}{2},$   (B.65)
where we used the assumption that $t\,x^2(t)\to 0$ as $|t|\to\infty$, together with the unit-norm normalization.
The second term on the right-hand side of (B.64) can be evaluated using the power theorem and differentiation theorem (§B.2):
$\int_{-\infty}^{\infty} \dot{x}^2(t)\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} \left|j\omega\,X(\omega)\right|^2 d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} \omega^2\left|X(\omega)\right|^2 d\omega = \Delta\omega^2.$   (B.66)
(The first term on the right-hand side of (B.64) is $\Delta t^2$ by definition, since $E = 1$.)
Substituting these evaluations into (B.64) gives
$\frac{1}{4} \;\le\; \Delta t^2\,\Delta\omega^2.$   (B.67)
Taking the square root of both sides gives the uncertainty relation sought.
If equality holds in the uncertainty relation (B.63), then equality holds in the Schwarz inequality (B.64) as well, which implies
$\dot{x}(t) = c\,t\,x(t)$   (B.68)
for some constant $c$, implying $x(t) = A\,e^{ct^2/2} = A\,e^{-\alpha t^2}$ for some constants $A$ and $\alpha$ (with $\alpha = -c/2 > 0$ for finite energy).
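As a check on the equality condition (a standard calculation), take the unit-norm Gaussian $x(t) = (\pi\sigma^2)^{-1/4}\,e^{-t^2/(2\sigma^2)}$. Then $|x(t)|^2$ is a normal density with variance $\sigma^2/2$, so $\Delta t = \sigma/\sqrt{2}$, while $|X(\omega)|^2 \propto e^{-\sigma^2\omega^2}$ gives $\Delta\omega = 1/(\sigma\sqrt{2})$. Hence
$\Delta t\,\Delta\omega = \frac{\sigma}{\sqrt{2}}\cdot\frac{1}{\sigma\sqrt{2}} = \frac{1}{2},$
so the Gaussian meets the lower bound (B.63) with equality for every width $\sigma$.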
Time-Limited Signals
If for , then
(B.69) |
where $\Delta\omega$ is as defined above in (B.62).
Proof: See [202, pp. 274-5].
Time-Bandwidth Products Unbounded Above
We have considered two lower bounds for the time-bandwidth product based on two different definitions of duration in time. In the opposite direction, there is no upper bound on the time-bandwidth product. To see this, imagine filtering an arbitrary signal with an allpass filter.B.3 The allpass filter cannot affect the bandwidth $\Delta\omega$, but the duration $\Delta t$ can be arbitrarily extended by successive applications of the allpass filter.
Relation of Smoothness to Roll-Off Rate
In §3.1.1, we found that the side lobes of the rectangular-window transform ``roll off'' as $1/\omega$. In this section we show that this roll-off rate is due to the amplitude discontinuity at the edges of the window. We also show that, more generally, a discontinuity in the $n$th derivative corresponds to a roll-off rate of $1/\omega^{n+1}$.
The Fourier transform of an impulse is simply
$\mathcal{FT}_\omega\{\delta\} = \int_{-\infty}^{\infty}\delta(t)\,e^{-j\omega t}\,dt = 1$   (B.70)
by the sifting property of the impulse under integration. This shows that an impulse consists of Fourier components at all frequencies in equal amounts. The roll-off rate is therefore zero in the Fourier transform of an impulse.
By the differentiation theorem for Fourier transforms (§B.2), if $x(t) \leftrightarrow X(\omega)$, then
$\dot{x}(t) \;\leftrightarrow\; j\omega\,X(\omega),$   (B.71)
where $\dot{x}(t) \triangleq \frac{d}{dt}x(t)$. Consequently, the integral of $x$ transforms to $X(\omega)$ divided by $j\omega$:
$\int_{-\infty}^{t} x(\tau)\,d\tau \;\leftrightarrow\; \frac{X(\omega)}{j\omega}$   (B.72)
(neglecting any term concentrated at $\omega = 0$).
The integral of the impulse is the unit step function:
$u(t) \triangleq \int_{-\infty}^{t}\delta(\tau)\,d\tau = \begin{cases}1, & t \ge 0 \\ 0, & t < 0.\end{cases}$   (B.73)
Therefore,B.4
$u(t) \;\leftrightarrow\; \frac{1}{j\omega}.$   (B.74)
Thus, the unit step function has a roll-off rate of $6$ dB per octave, just like the rectangular window. In fact, the rectangular window can be synthesized as the superposition of two step functions:
$p_\alpha(t) = u\!\left(t + \frac{\alpha}{2}\right) - u\!\left(t - \frac{\alpha}{2}\right).$   (B.75)
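As a consistency check, taking Fourier transforms of both sides of (B.75) using the shift theorem (§B.5) and (B.74) (any terms concentrated at $\omega = 0$ cancel) recovers the rectangular-pulse transform of §B.12:
$P_\alpha(\omega) = \frac{e^{j\omega\alpha/2} - e^{-j\omega\alpha/2}}{j\omega} = \frac{2\sin(\alpha\omega/2)}{\omega},$
again exhibiting the $1/\omega$ ($6$ dB/octave) roll-off of the rectangular window.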
Integrating the unit step function gives a linear ramp function:
$r(t) \triangleq \int_{-\infty}^{t} u(\tau)\,d\tau = t\,u(t).$   (B.76)
Applying the integration theorem again yields
$t\,u(t) \;\leftrightarrow\; \frac{1}{(j\omega)^2} = -\frac{1}{\omega^2}.$   (B.77)
Thus, the linear ramp has a roll-off rate of $12$ dB per octave. Continuing in this way, we obtain the following Fourier pairs (again neglecting terms concentrated at $\omega = 0$):
$\delta(t) \;\leftrightarrow\; 1$
$u(t) \;\leftrightarrow\; \frac{1}{j\omega}$
$t\,u(t) \;\leftrightarrow\; \frac{1}{(j\omega)^2}$
$\frac{t^2}{2}\,u(t) \;\leftrightarrow\; \frac{1}{(j\omega)^3}$
$\;\;\vdots$
$\frac{t^n}{n!}\,u(t) \;\leftrightarrow\; \frac{1}{(j\omega)^{n+1}}$
Now consider the Taylor series expansion of the function $f_n(t) \triangleq \frac{t^n}{n!}\,u(t)$ at $t = 0$:
$f_n(t) = f_n(0) + f_n'(0)\,t + \cdots + f_n^{(n-1)}(0)\,\frac{t^{n-1}}{(n-1)!} + u(t)\,\frac{t^n}{n!}.$   (B.78)
The derivatives up to order $n-1$ are all zero at $t = 0$. The $n$th derivative, however, which equals $u(t)$, has a discontinuous jump at $t = 0$. Since this is the only ``wideband event'' in the signal, we may conclude that a discontinuity in the $n$th derivative corresponds to a roll-off rate of $1/\omega^{n+1}$. The following theorem generalizes this result to a wider class of functions which, for our purposes, will be spectrum analysis window functions (before sampling):
Theorem: (Riemann Lemma):
If the derivatives up to order $k$ of the function $f(t)$ exist and are of bounded variation (defined below), then its Fourier Transform $F(\omega)$ is asymptotically of orderB.5 $1/\omega^{k+1}$, i.e.,
$F(\omega) = \mathcal{O}\!\left(\frac{1}{\omega^{k+1}}\right) \quad\text{as}\quad |\omega|\to\infty.$   (B.79)
Proof: Following [202, p. 95], let $f(t)$ be any real function of bounded variation on the interval $(a,b)$ of the real line, and let
$f(t) = f_{+}(t) + f_{-}(t)$   (B.80)
denote its decomposition into a nondecreasing part $f_{+}$ and nonincreasing part $f_{-}$.B.6 Then, by the second mean value theorem for integrals, there exists $\xi\in(a,b)$ such that
$\int_a^b f_{+}(t)\cos(\omega t)\,dt = f_{+}(a)\int_a^{\xi}\cos(\omega t)\,dt + f_{+}(b)\int_{\xi}^{b}\cos(\omega t)\,dt.$   (B.81)
Since
$\left|\int_{t_1}^{t_2}\cos(\omega t)\,dt\right| = \left|\frac{\sin(\omega t_2) - \sin(\omega t_1)}{\omega}\right| \le \frac{2}{|\omega|},$   (B.82)
we conclude
$\left|\int_a^b f_{+}(t)\cos(\omega t)\,dt\right| \;\le\; \frac{C}{|\omega|} = \mathcal{O}\!\left(\frac{1}{\omega}\right),$   (B.83)
where $C \triangleq 2\left(|f_{+}(a)| + |f_{+}(b)|\right)$, which is finite since $f$ is of bounded variation. Note that the conclusion holds also when $\omega < 0$. Analogous conclusions follow for $\mathrm{im}\{f_{+}(t)e^{-j\omega t}\}$, $\mathrm{re}\{f_{-}(t)e^{-j\omega t}\}$, and $\mathrm{im}\{f_{-}(t)e^{-j\omega t}\}$, leading to the result
$F(\omega) \triangleq \int_a^b f(t)\,e^{-j\omega t}\,dt = \mathcal{O}\!\left(\frac{1}{\omega}\right).$   (B.84)
If in addition the derivative $f'(t)$ exists and is of bounded variation on $(a,b)$, then the above argument applied to $f'$ gives that its transform is asymptotically of order $1/\omega$; by the differentiation theorem (§B.2), $F(\omega)$ is then asymptotically of order $1/\omega^2$, so that $F(\omega) = \mathcal{O}(1/\omega^2)$. Repeating this argument, if the first $k$ derivatives exist and are of bounded variation on $(a,b)$, we have $F(\omega) = \mathcal{O}(1/\omega^{k+1})$.
Since spectrum-analysis windows are often obtained by sampling continuous time-limited functions $w(t)$, we normally see these asymptotic roll-off rates in aliased form, e.g.,
$W_d(\omega) = \frac{1}{T}\sum_{k=-\infty}^{\infty} W(\omega - k\Omega_s),$   (B.85)
where $\Omega_s = 2\pi/T$ denotes the sampling rate in radians per second. This aliasing normally causes the roll-off rate to ``slow down'' near half the sampling rate, as shown in Fig. 3.6 for the rectangular window transform. Every window transform must be continuous at $\omega = \pm\Omega_s/2$ (for finite windows), so the roll-off envelope must reach a slope of zero there.
In summary, we have the following Fourier rule-of-thumb:
a discontinuity in the $n$th derivative of a signal corresponds to a spectral roll-off rate of $1/\omega^{n+1}$, i.e., about $6(n+1)$ dB per octave.   (B.86)
This is also $20(n+1)$ dB per decade.
To apply this result to estimating FFT window roll-off rate (as in Chapter 3), we normally only need to look at the window's endpoints. The interior of the window is usually differentiable of all orders. For discrete-time windows, the roll-off rate ``slows down'' at high frequencies due to aliasing.
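The rule of thumb is easy to observe numerically. The sketch below (illustrative only; the window lengths, FFT size, and measurement bands are arbitrary choices) compares the side-lobe drop over one octave for a rectangular window (amplitude discontinuity, $n = 0$) and a triangular window (slope discontinuity, $n = 1$); as discussed above, aliasing makes the measured slopes somewhat shallower than the ideal $-6$ and $-12$ dB per octave:

import numpy as np

# Compare side-lobe roll-off of rectangular vs. triangular windows by
# measuring the peak side-lobe level in two bands one octave apart.
M, Nfft = 256, 1 << 18

def peak_db(w, f_lo, f_hi):
    # Peak magnitude (dB re. DC) of the zero-padded window transform over
    # [f_lo, f_hi), with frequency in cycles per sample.
    W = np.abs(np.fft.rfft(w, Nfft))
    f = np.arange(len(W)) / Nfft
    band = (f >= f_lo) & (f < f_hi)
    return 20 * np.log10(W[band].max() / W[0])

for name, w in [("rectangular", np.ones(M)), ("triangular", np.bartlett(M))]:
    drop = peak_db(w, 0.10, 0.14) - peak_db(w, 0.05, 0.07)
    print("%-12s side-lobe drop over one octave: %5.1f dB" % (name, drop))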