
Fourier Theorems for the DTFT

This section states and proves selected Fourier theorems for the DTFT. A more complete list for the DFT case is given in [264].3.4 Since this material was originally part of an appendix, it is relatively dry reading. Feel free to skip to the next chapter and refer back as desired when a theorem is invoked.

As introduced in §2.1 above, the Discrete-Time Fourier Transform (DTFT) may be defined as

$\displaystyle X(\omega) \isdef \sum_{n=-\infty}^\infty x(n) e^{-j\omega n}.$ (3.8)

We say that $ X$ is the spectrum of $ x$ .
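
For a signal of finite support, the sum in Eq. (3.8) is finite and can be evaluated directly on any frequency grid. The following NumPy sketch (illustrative only; the test signal and grid are arbitrary choices) does exactly that:

import numpy as np

n = np.arange(8)                                      # support of a short test signal
x = np.cos(2 * np.pi * n / 8)                         # arbitrary finite-support signal
w = np.linspace(-np.pi, np.pi, 1024, endpoint=False)  # dense frequency grid

# X(w) = sum_n x(n) exp(-j w n), i.e., Eq. (3.8) truncated to the signal's support
X = np.exp(-1j * np.outer(w, n)) @ x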

Linearity of the DTFT

$\displaystyle \zbox {\alpha x_1 + \beta x_2 \;\longleftrightarrow\;\alpha X_1 + \beta X_2}$ (3.9)

or

$\displaystyle \hbox{\sc DTFT}(\alpha x_1 + \beta x_2) = \alpha\cdot \hbox{\sc DTFT}(x_1) + \beta \cdot\hbox{\sc DTFT}(x_2)$ (3.10)

where $ \alpha, \beta$ are any scalars (real or complex numbers), $ x_1$ and $ x_2$ are any two discrete-time signals (real- or complex-valued functions of the integers), and $ X_1, X_2$ are their corresponding continuous-frequency spectra defined over the unit circle in the complex plane.


Proof: We have

\begin{eqnarray*}
\hbox{\sc DTFT}_\omega(\alpha x_1 + \beta x_2)
& \isdef & \sum_{n=-\infty}^{\infty}[\alpha x_1(n) + \beta x_2(n)]e^{-j\omega n}\\
&=& \alpha\sum_{n=-\infty}^{\infty}x_1(n)e^{-j\omega n} + \beta \sum_{n=-\infty}^{\infty}x_2(n)e^{-j\omega n}\\
&\isdef & \alpha X_1(\omega) + \beta X_2(\omega)
\end{eqnarray*}

One way to describe the linearity property is to observe that the Fourier transform ``commutes with mixing.''
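
As a quick numerical check of Eq. (3.10), the following NumPy sketch evaluates both sides on a frequency grid (the dtft helper is illustrative and simply evaluates Eq. (3.8) for a finite-support signal):

import numpy as np

def dtft(x, n, w):
    # Evaluate Eq. (3.8) for a finite-support signal x defined on the index grid n
    return np.exp(-1j * np.outer(w, n)) @ x

n = np.arange(16)
w = np.linspace(-np.pi, np.pi, 512, endpoint=False)
x1, x2 = np.random.randn(16), np.random.randn(16)
alpha, beta = 2.0, -1.5 + 0.5j

lhs = dtft(alpha * x1 + beta * x2, n, w)
rhs = alpha * dtft(x1, n, w) + beta * dtft(x2, n, w)
print(np.allclose(lhs, rhs))                          # True: the DTFT commutes with mixing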


Time Reversal

For any complex signal $ x(n)$ , $ n\in(-\infty,\infty)$ , we have

$\displaystyle \zbox {\hbox{\sc Flip}(x) \;\longleftrightarrow\;\hbox{\sc Flip}(X)}$ (3.11)

where $ \hbox{\sc Flip}_n(x)\isdef x(-n)$ .


Proof:

\begin{eqnarray*}
\hbox{\sc DTFT}_\omega(\hbox{\sc Flip}(x))
&\isdef & \sum_{n=-\infty}^{\infty} x(-n)e^{-j\omega n}
\eqsp \sum_{m=\infty}^{-\infty} x(m)e^{-j(-\omega) m}
\eqsp X(-\omega) \\ [5pt]
&\isdef & \hbox{\sc Flip}_\omega(X)
\end{eqnarray*}

Arguably, $ \hbox{\sc Flip}(x)$ should include complex conjugation. Let

$\displaystyle \hbox{\sc Flip}_n'(x)\isdefs \overline{\hbox{\sc Flip}_n(x)}\,\mathrel{\mathop=}\,\overline{x(-n)}$ (3.12)

denote such a definition. Then in this case we have

$\displaystyle \zbox {\hbox{\sc Flip}'(x) \;\longleftrightarrow\;\overline{X}}$ (3.13)


Proof:

$\displaystyle \hbox{\sc DTFT}_\omega(\hbox{\sc Flip}'(x)) \isdefs \sum_{n=-\infty}^{\infty} \overline{x(-n)}e^{-j\omega n} \eqsp \sum_{m=\infty}^{-\infty} \overline{x(m)e^{-j\omega m}} \isdefs \overline{X(\omega)}$ (3.14)

In the typical special case of real signals ( $ x(n)\in{\bf R}$ ), we have $ \hbox{\sc Flip}(x)=\hbox{\sc Flip}'(x)$ so that

$\displaystyle \zbox {\hbox{\sc Flip}(x) \;\longleftrightarrow\;\overline{X}.}$ (3.15)

That is, time-reversing a real signal conjugates its spectrum.
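
This can be checked numerically for a finite-support real signal (a NumPy sketch; the dtft helper evaluates Eq. (3.8), and Flip is obtained simply by negating the index grid):

import numpy as np

def dtft(x, n, w):
    # Evaluate Eq. (3.8) for a finite-support signal x defined on the index grid n
    return np.exp(-1j * np.outer(w, n)) @ x

N = 12
n = np.arange(N)
x = np.random.randn(N)                                # real test signal
w = np.linspace(-np.pi, np.pi, 512, endpoint=False)

X = dtft(x, n, w)
X_flip = dtft(x, -n, w)                               # Flip(x): same samples, indexed at -n
print(np.allclose(X_flip, np.conj(X)))                # True: Flip(x) <-> conj(X) for real x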


Symmetry of the DTFT for Real Signals

Most (if not all) of the signals we deal with in practice are real signals. Here we note some spectral symmetries associated with real signals.

DTFT of Real Signals

The previous section established that the spectrum $ X$ of every real signal $ x$ satisfies

$\displaystyle \hbox{\sc Flip}(X)\eqsp \overline{X}.$ (3.16)

I.e.,

$\displaystyle \zbox {x(n)\in{\bf R}\;\longleftrightarrow\;X(-\omega) = \overline{X(\omega)}.}$ (3.17)

In other terms, if a signal $ x(n)$ is real, then its spectrum is Hermitian (``conjugate symmetric''). Hermitian spectra have the following equivalent characterizations:
  • The real part is even, while the imaginary part is odd:

    \begin{eqnarray*}
\mbox{re}\left\{X(-\omega)\right\} &=& \mbox{re}\left\{X(\omega)\right\}\\
\mbox{im}\left\{X(-\omega)\right\} &=& -\mbox{im}\left\{X(\omega)\right\}
\end{eqnarray*}

  • The magnitude is even, while the phase is odd:

    \begin{eqnarray*}
\left\vert X(-\omega)\right\vert &=& \left\vert X(\omega)\right\vert\\
\angle{X(-\omega)} &=& -\angle{X(\omega)}
\end{eqnarray*}

Note that an even function is symmetric about argument zero while an odd function is antisymmetric about argument zero.


Real Even (or Odd) Signals

If a signal is even in addition to being real, then its DTFT is also real and even. This follows immediately from the Hermitian symmetry of real signals, and the fact that the DTFT of any real even signal is real:

\begin{eqnarray*}
\hbox{\sc DTFT}_\omega(x)
& \isdef & \sum_{n=-\infty}^{\infty}x(n) e^{-j\omega n}\\
& = & \sum_{n=-\infty}^{\infty}x(n) \left[\cos(\omega n) - j\sin(\omega n)\right]\\
& = & \sum_{n=-\infty}^{\infty}x(n) \cos(\omega n) - j\sum_{n=-\infty}^{\infty}x(n)\sin(\omega n)\\
& = & \sum_{n=-\infty}^{\infty}x(n) \cos(\omega n)\\
& = & \hbox{real and even}
\end{eqnarray*}

This is true since cosine is even, sine is odd, even times even is even, even times odd is odd, and the sum over all samples of an odd signal is zero. I.e.,

\begin{eqnarray*}
\sum_{n=-\infty}^{\infty}x(n)\cos(\omega n)
&=& \sum_{n=-\infty}^{\infty}\hbox{(even in $n$)}\;\cdot\;\hbox{(doubly even)}\\
&=& \sum_{n=-\infty}^{\infty}\hbox{(doubly even)} = \hbox{(even in $\omega$)}
\end{eqnarray*}

and

\begin{eqnarray*}
\sum_{n=-\infty}^{\infty}x(n)\sin(\omega n)
&=& \sum_{n=-\infty}^{\infty}\hbox{(even in $n$)}\;\cdot\;\hbox{(doubly odd)}\\
&=& \sum_{n=-\infty}^{\infty}\hbox{(doubly odd)} = 0.
\end{eqnarray*}

If $ x$ is real and even, the following are true:

\begin{eqnarray*}
\hbox{\sc Flip}(x) & = & x \qquad \hbox{($x(-n)=x(n)$)}\\
\overline{x} & = & x\\ [5pt]
\hbox{\sc Flip}(X) & = & X\\
\overline{X} & = & X\\
\angle X(\omega) & =& 0 \hbox{ or } \pi
\end{eqnarray*}
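
A numerical illustration of the real-even case (a NumPy sketch; the signal below is an arbitrary real, even, finite-support example):

import numpy as np

def dtft(x, n, w):
    # Evaluate Eq. (3.8) for a finite-support signal x defined on the index grid n
    return np.exp(-1j * np.outer(w, n)) @ x

N = 6
n = np.arange(-N, N + 1)                              # support symmetric about n = 0
half = np.random.randn(N)
x = np.concatenate([half[::-1], [1.0], half])         # real and even: x(-n) = x(n)
w = np.linspace(-np.pi, np.pi, 512, endpoint=False)

X = dtft(x, n, w)
print(np.allclose(X.imag, 0))                         # True: spectrum is real
print(np.allclose(X, dtft(x, n, -w)))                 # True: spectrum is even, X(-w) = X(w)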

Similarly, if a signal is odd and real, then its DTFT is odd and purely imaginary. This follows from Hermitian symmetry for real signals, and the fact that the DTFT of any real odd signal is purely imaginary.

\begin{eqnarray*}
\hbox{\sc DTFT}_\omega(x)
& \isdef & \sum_{n=-\infty}^{\infty}x(n) e^{-j\omega n}\\
& = & \sum_{n=-\infty}^{\infty}x(n) \cos(\omega n) - j\sum_{n=-\infty}^{\infty}x(n)\sin(\omega n)\\
& = & -j\sum_{n=-\infty}^{\infty}x(n) \sin(\omega n)\\
& = & \hbox{imaginary and odd}
\end{eqnarray*}

where we used the fact that

\begin{eqnarray*}
\sum_{n=-\infty}^{\infty}x(n)\cos(\omega n)
&=& \sum_{n=-\infty}^{\infty}\hbox{(odd in $n$)}\;\cdot\;\hbox{(doubly even)}\\
&=& \sum_{n=-\infty}^{\infty}\hbox{(odd in $n$, even in $\omega$)} = 0
\end{eqnarray*}

and

\begin{eqnarray*}
\sum_{n=-\infty}^{\infty}x(n)\sin(\omega n)
&=& \sum_{n=-\infty}^{\infty}\hbox{(odd in $n$)}\;\cdot\;\hbox{(doubly odd)}\\
&=& \sum_{n=-\infty}^{\infty}\hbox{(even in $n$, odd in $\omega$)} = \hbox{(odd in $\omega$)}.
\end{eqnarray*}


Shift Theorem for the DTFT

We define the shift operator for sampled signals $ x(n)$ by

$\displaystyle \hbox{\sc Shift}_{l,n}(x) \isdefs x(n-l)$ (3.18)

where $ l$ is any integer ( $ l\in{\bf Z}$ ). Thus, $ \hbox{\sc Shift}_l(x)$ is a right-shift or delay by $ l$ samples when $ l>0$ .

The shift theorem states3.5

$\displaystyle \zbox {\hbox{\sc Shift}_l(x) \;\longleftrightarrow\;e^{-j(\cdot)l}X},$ (3.19)

or, in operator notation,

$\displaystyle \hbox{\sc DTFT}_\omega[\hbox{\sc Shift}_l(x)] \eqsp \left( e^{-j\omega l} \right) X(\omega)$ (3.20)


Proof:

\begin{eqnarray*}
\hbox{\sc DTFT}_\omega[\hbox{\sc Shift}_l(x)] &\isdef & \sum_{n=-\infty}^{\infty}x(n-l) e^{-j \omega n} \\
&=& \sum_{m=-\infty}^{\infty} x(m) e^{-j \omega (m+l)}
\qquad(m\isdef n-l) \\
&=& \sum_{m=-\infty}^{\infty}x(m) e^{-j \omega m} e^{-j \omega l} \\
&=& e^{-j \omega l} \sum_{m=-\infty}^{\infty}x(m) e^{-j \omega m} \\
&\isdef & e^{-j \omega l} X(\omega)
\end{eqnarray*}

Note that $ e^{-j\omega l}$ is a linear phase term, so called because it is a linear function of frequency with slope equal to $ -l$ :

$\displaystyle \angle \left(e^{-j \omega l}\right) \eqsp -\omega l$ (3.21)

The shift theorem gives us that multiplying a spectrum $ X(\omega)$ by a linear phase term $ e^{-j\omega l}$ corresponds to a delay in the time domain by $ l$ samples. If $ l<0$ , it is called a time advance by $ \vert l\vert$ samples.
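
A numerical check of the shift theorem for a finite-support signal (a NumPy sketch; delaying the signal is modeled by shifting its index grid, and dtft evaluates Eq. (3.8)):

import numpy as np

def dtft(x, n, w):
    # Evaluate Eq. (3.8) for a finite-support signal x defined on the index grid n
    return np.exp(-1j * np.outer(w, n)) @ x

N, l = 10, 3                                          # delay by l samples
n = np.arange(N)
x = np.random.randn(N)
w = np.linspace(-np.pi, np.pi, 512, endpoint=False)

X = dtft(x, n, w)
X_shift = dtft(x, n + l, w)                           # same samples, now located at n + l
print(np.allclose(X_shift, np.exp(-1j * w * l) * X))  # True: multiplication by the linear phase term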


Convolution Theorem for the DTFT

The convolution of discrete-time signals $ x$ and $ y$ is defined as

$\displaystyle (x \ast y)(n) \isdefs \sum_{m=-\infty}^\infty x(m)y(n-m).$ (3.22)

This is sometimes called acyclic convolution to distinguish it from the cyclic convolution used for length $ N$ sequences in the context of the DFT [264]. Convolution is cyclic in the time domain for the DFT and FS cases (i.e., whenever the time domain has a finite length), and acyclic for the DTFT and FT cases.3.6

The convolution theorem is then

$\displaystyle \zbox {(x\ast y) \;\longleftrightarrow\;X \cdot Y}$ (3.23)

That is, convolution in the time domain corresponds to pointwise multiplication in the frequency domain.


Proof: The result follows immediately from interchanging the order of summations associated with the convolution and DTFT:

\begin{eqnarray*}
\hbox{\sc DTFT}_\omega(x\ast y) &\isdef & \sum_{n=-\infty}^{\infty}(x\ast y)_n e^{-j\omega n} \\
&\isdef & \sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}x(m) y(n-m) e^{-j\omega n} \\
&=& \sum_{m=-\infty}^{\infty}x(m) \underbrace{\sum_{n=-\infty}^{\infty}y(n-m) e^{-j\omega n}}_{e^{-j\omega m}Y(\omega)} \\
&=& \left(\sum_{m=-\infty}^{\infty}x(m) e^{-j\omega m}\right)Y(\omega)\quad\mbox{(by the shift theorem)}\\
&\isdef & X(\omega)Y(\omega)
\end{eqnarray*}
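
A numerical check using two finite-support signals (a NumPy sketch; np.convolve computes the acyclic convolution of Eq. (3.22), and dtft evaluates Eq. (3.8)):

import numpy as np

def dtft(x, n, w):
    # Evaluate Eq. (3.8) for a finite-support signal x defined on the index grid n
    return np.exp(-1j * np.outer(w, n)) @ x

Nx, Ny = 8, 5
x, y = np.random.randn(Nx), np.random.randn(Ny)
w = np.linspace(-np.pi, np.pi, 512, endpoint=False)

xy = np.convolve(x, y)                                # acyclic convolution, support 0..Nx+Ny-2
lhs = dtft(xy, np.arange(Nx + Ny - 1), w)
rhs = dtft(x, np.arange(Nx), w) * dtft(y, np.arange(Ny), w)
print(np.allclose(lhs, rhs))                          # True: convolution <-> multiplication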


Correlation Theorem for the DTFT

We define the correlation of discrete-time signals $ x$ and $ y$ by

$\displaystyle \zbox {(x\star y)_n \isdefs \sum_m \overline{x(m)} y(m+n)}$

The correlation theorem for DTFTs is then

$\displaystyle \zbox {x\star y \;\longleftrightarrow\;\overline{X}\cdot Y}$


Proof:

\begin{eqnarray*}
(x\star y)_n
&\isdef & \sum_m \overline{x(m)}y(n+m) \\
&=& \sum_m \overline{x(-m)}y(n-m) \qquad (m\leftarrow -m)\\
&=& \left(\hbox{\sc Flip}(\overline{x})\ast y\right)_n \\
&\;\longleftrightarrow\;& \overline{X} \cdot Y
\end{eqnarray*}

where the last step follows from the convolution theorem of §2.3.5 and the symmetry result $ \hbox{\sc Flip}(\overline{x}) \;\longleftrightarrow\;\overline{X}$ of §2.3.2.
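
A numerical check, using the fact from the proof that $ x\star y = \hbox{\sc Flip}(\overline{x})\ast y$ (a NumPy sketch for finite-support signals; dtft evaluates Eq. (3.8)):

import numpy as np

def dtft(x, n, w):
    # Evaluate Eq. (3.8) for a finite-support signal x defined on the index grid n
    return np.exp(-1j * np.outer(w, n)) @ x

Nx, Ny = 7, 9
x = np.random.randn(Nx) + 1j * np.random.randn(Nx)
y = np.random.randn(Ny) + 1j * np.random.randn(Ny)
w = np.linspace(-np.pi, np.pi, 512, endpoint=False)

# (x correlated with y) = Flip(conj(x)) convolved with y; support runs -(Nx-1)..Ny-1
xy = np.convolve(np.conj(x[::-1]), y)
lhs = dtft(xy, np.arange(-(Nx - 1), Ny), w)
rhs = np.conj(dtft(x, np.arange(Nx), w)) * dtft(y, np.arange(Ny), w)
print(np.allclose(lhs, rhs))                          # True: correlation <-> conj(X) * Y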


Autocorrelation

The autocorrelation of a signal $ x$ is simply the cross-correlation of $ x$ with itself:

$\displaystyle (x \star x)(n) \isdefs \sum_m\overline{x(m)}x(m+n).$ (3.24)

From the correlation theorem, we have

$\displaystyle \zbox {(x \star x) \;\longleftrightarrow\;\vert X\vert^2}$

Note that this definition of autocorrelation is appropriate for signals having finite support (nonzero over a finite number of samples). For infinite-energy (but finite-power) signals, such as stationary noise processes, we define the sample autocorrelation to include a normalization suitable for this case (see Chapter 6 and Appendix C).

From the autocorrelation theorem we have that a digital-filter impulse-response $ h(n)$ is that of a lossless allpass filter [263] if and only if $ h\star h = \delta \,\leftrightarrow\, 1$ . In other words, the autocorrelation of the impulse-response of every allpass filter is impulsive.
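
A numerical check of the autocorrelation result for a finite-support signal (a NumPy sketch; dtft evaluates Eq. (3.8)):

import numpy as np

def dtft(x, n, w):
    # Evaluate Eq. (3.8) for a finite-support signal x defined on the index grid n
    return np.exp(-1j * np.outer(w, n)) @ x

N = 9
x = np.random.randn(N) + 1j * np.random.randn(N)
w = np.linspace(-np.pi, np.pi, 512, endpoint=False)

r = np.convolve(np.conj(x[::-1]), x)                  # autocorrelation, support -(N-1)..N-1
R = dtft(r, np.arange(-(N - 1), N), w)
X = dtft(x, np.arange(N), w)
print(np.allclose(R, np.abs(X) ** 2))                 # True: autocorrelation <-> |X|^2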


Power Theorem for the DTFT

The inner product of two signals may be defined in the time domain by [264]

$\displaystyle \left<x,y\right> \isdefs \sum_{n=-\infty}^{\infty} x_n\overline{y}_n.$ (3.25)

The inner product of two spectra may be defined as

$\displaystyle \left<X,Y\right> \isdefs \frac{1}{2\pi}\int_{-\pi}^\pi X(\omega)\overline{Y(\omega)} d\omega.$ (3.26)

Note that the frequency-domain inner product includes a normalization factor while the time-domain definition does not.

Using inner-product notation, the power theorem (or Parseval's theorem [202]) for DTFTs can be stated as follows:

$\displaystyle \zbox {\left<x,y\right> \eqsp \left<X,Y\right>}$ (3.27)

That is, the inner product of two signals in the time domain equals the inner product of their respective spectra (a complex scalar in general).

When we consider the inner product of a signal with itself, we have the special case known as the energy theorem (or Rayleigh's energy theorem):

$\displaystyle \Vert x \Vert^2 \eqsp \left<x,x\right> \eqsp \left<X,X\right> \eqsp \Vert X \Vert^2$ (3.28)

where $ \Vert\,\cdot\,\Vert$ denotes the $ L^2$ norm induced by the inner product. It is always real.


Proof:

\begin{eqnarray*}
\left<x,y\right> &\isdef & \sum_{n=-\infty}^{\infty} x(n)\overline{y(n)}
\eqsp (y\star x)_0
\eqsp \hbox{\sc DTFT}_0^{-1}(\overline{Y}\cdot X) \\
&=& \frac{1}{2\pi} \int_{-\pi}^{\pi} X(\omega)\overline{Y(\omega)} d\omega
\isdefs \left<X,Y\right>
\end{eqnarray*}
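
A numerical check of the power theorem (a NumPy sketch; for finite-support signals, $ X\overline{Y}$ is a trigonometric polynomial, so a sufficiently dense uniform Riemann sum computes the frequency-domain integral exactly):

import numpy as np

def dtft(x, n, w):
    # Evaluate Eq. (3.8) for a finite-support signal x defined on the index grid n
    return np.exp(-1j * np.outer(w, n)) @ x

N = 16
n = np.arange(N)
x = np.random.randn(N) + 1j * np.random.randn(N)
y = np.random.randn(N) + 1j * np.random.randn(N)
w = np.linspace(-np.pi, np.pi, 512, endpoint=False)

time_ip = np.sum(x * np.conj(y))                      # <x, y> in the time domain
X, Y = dtft(x, n, w), dtft(y, n, w)
freq_ip = np.mean(X * np.conj(Y))                     # (1/(2 pi)) * integral of X conj(Y) dw
print(np.allclose(time_ip, freq_ip))                  # True: Parseval / power theorem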


Stretch Operator

We define the stretch operator in the time domain by

$\displaystyle \hbox{\sc Stretch}_{L,n}(x) \isdefs \left\{\begin{array}{ll} x\left(\frac{n}{L}\right), & n = 0\;(\hbox{\sc mod}\ L) \\ [5pt] 0, & \hbox{otherwise} \\ \end{array} \right..$ (3.29)

In other terms, we stretch a sampled signal by the factor $ L$ by inserting $ L-1$ zeros in between each pair of samples of the signal.
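
In code, the stretch operator is simply zero insertion (a minimal NumPy sketch; the stretch helper is illustrative):

import numpy as np

def stretch(x, L):
    # Insert L-1 zeros between successive samples of x, per Eq. (3.29)
    y = np.zeros(len(x) * L, dtype=float)
    y[::L] = x
    return y

print(stretch(np.array([1.0, 2.0, 3.0, 4.0]), 3))
# [1. 0. 0. 2. 0. 0. 3. 0. 0. 4. 0. 0.]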

Figure 2.1: Illustration of the stretch operator.

In the literature on multirate filter banks (see Chapter 11), the stretch operator is typically called instead the upsampling operator. That is, stretching a signal by the factor of $ K$ is called upsampling the signal by the factor $ K$ . (See §11.1.1 for the graphical symbol ( $ \uparrow K$ ) and associated discussion.) The term ``stretch'' is preferred in this book because ``upsampling'' is easily confused with ``increasing the sampling rate''; resampling a signal to a higher sampling rate is conceptually implemented by a stretch operation followed by an ideal lowpass filter which moves the inserted zeros to their properly interpolated values.

Note that we could also call the stretch operator the scaling operator, to unify the terminology in the discrete-time case with that of the continuous-time case (§2.4.1 below).


Repeat (Scaling) Operator

We define the repeat operator in the frequency domain as a scaling of the frequency axis by some integer factor $ L>0$ :

$\displaystyle \hbox{\sc Repeat}_{L,\omega}(X) \isdefs X(L\omega), \quad \omega\in[-\pi,\pi),$ (3.30)

where the argument $ L\omega$ traverses $ [-L\pi,L\pi)$ , i.e., $ L$ trips around the unit circle, as $ \omega$ traverses $ [-\pi,\pi)$ .

The repeat operator maps the entire unit circle (taken as $ -\pi$ to $ \pi$ ) to a segment of itself, $ [-\pi/L,\pi/L)$ , centered about $ \omega = 0$ , and repeated $ L$ times. This is illustrated in Fig. 2.2 for $ L=3$ .

Figure 2.2: Illustration of the repeat operator.

Since the frequency axis is continuous and $ 2\pi$ -periodic for DTFTs, the repeat operator is precisely equivalent to a scaling operator for the Fourier transform case (§B.4). We call it ``repeat'' rather than ``scale'' because we are restricting the scale factor to positive integers, and because the name ``repeat'' describes more vividly what happens to a periodic spectrum that is compressively frequency-scaled over the unit circle by an integer factor.


Stretch/Repeat (Scaling) Theorem

Using these definitions, we can compactly state the stretch theorem:

$\displaystyle \zbox {\hbox{\sc Stretch}_L(x) \;\longleftrightarrow\;\hbox{\sc Repeat}_L(X)}$ (3.31)


Proof:

\begin{eqnarray*}
\hbox{\sc DTFT}_\omega[\hbox{\sc Stretch}_L(x)]
&\isdef & \sum_{n=-\infty}^{\infty}\hbox{\sc Stretch}_{L,n}(x)e^{-j\omega n}\\
&=& \sum_{m=-\infty}^{\infty}x(m)e^{-j\omega m L}\qquad \hbox{($m\isdef n/L$)}\\
&\isdef & X(\omega L)
\end{eqnarray*}

As $ \omega$ traverses the interval $ [-\pi,\pi)$ , $ X(\omega L)$ traverses the unit circle $ L$ times, thus implementing the repeat operation on the unit circle. Note also that when $ \omega = 0$ , we have $ \omega L = 0$ , so that dc always maps to dc. At half the sampling rate $ \omega=\pm\pi$ , on the other hand, after the mapping we have either $ Y(\pi)=X(-\pi)$ ($ L$ odd) or $ Y(\pi)=X(0)$ ($ L$ even), where $ Y(\omega) \isdeftext X(\omega L)$ .
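
A numerical check of the stretch theorem for a finite-support signal (a NumPy sketch; dtft evaluates Eq. (3.8), and stretch inserts zeros per Eq. (3.29)):

import numpy as np

def dtft(x, n, w):
    # Evaluate Eq. (3.8) for a finite-support signal x defined on the index grid n
    return np.exp(-1j * np.outer(w, n)) @ x

def stretch(x, L):
    # Insert L-1 zeros between successive samples of x, per Eq. (3.29)
    y = np.zeros(len(x) * L, dtype=float)
    y[::L] = x
    return y

N, L = 8, 3
x = np.random.randn(N)
w = np.linspace(-np.pi, np.pi, 512, endpoint=False)

xs = stretch(x, L)
lhs = dtft(xs, np.arange(len(xs)), w)                 # DTFT of the stretched signal
rhs = dtft(x, np.arange(N), L * w)                    # X(Lw): the L-fold repeat of X
print(np.allclose(lhs, rhs))                          # True: Stretch_L(x) <-> Repeat_L(X)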

The stretch theorem makes it clear how to do ideal sampling-rate conversion for integer upsampling ratios $ L$ : We first stretch the signal by the factor $ L$ (introducing $ L-1$ zeros between each pair of samples), followed by an ideal lowpass filter cutting off at $ \pi/L$ . That is, the filter has a gain of 1 for $ \left\vert\omega\right\vert <\pi/L$ , and a gain of 0 for $ \pi/L < \left\vert\omega\right\vert \leq \pi$ . Such a system (if it were realizable) implements ideal bandlimited interpolation of the original signal by the factor $ L$ .

The stretch theorem is analogous to the scaling theorem for continuous Fourier transforms (introduced in §2.4.1 below).


Downsampling and Aliasing

The downsampling operator $ \hbox{\sc Downsample}_M$ selects every $ M^{th}$ sample of a signal:

$\displaystyle \zbox {\hbox{\sc Downsample}_{M,n}(x) \isdefs x(Mn)}$ (3.32)

The aliasing theorem states that downsampling in time corresponds to aliasing in the frequency domain:

$\displaystyle \zbox {\hbox{\sc Downsample}_M(x) \;\longleftrightarrow\;\frac{1}{M} \hbox{\sc Alias}_M(X)}$ (3.33)

where the $ \hbox{\sc Alias}$ operator is defined as

$\displaystyle \zbox {\hbox{\sc Alias}_{M,\omega}(X) \isdefs \sum_{k=0}^{M-1} X\left(\omega+k\frac{2\pi}{M}\right)}$ (3.34)

for $ \omega\in[-\pi,\pi)$ . The summation terms for $ k\neq 0$ are called aliasing components.

In z transform notation, the $ \hbox{\sc Alias}$ operator can be expressed as [287]

$\displaystyle \hbox{\sc Alias}_{M,z}(X) \eqsp \sum_{k=0}^{M-1} X\left(W_M^k z^\frac{1}{M}\right)$ (3.35)

where $ W_M\isdeftext e^{j2\pi/M}$ is a common notation for the primitive $ M$ th root of unity. On the unit circle of the $ z$ plane, this becomes

$\displaystyle \hbox{\sc Alias}_{M,\omega}(X) \eqsp \sum_{k=0}^{M-1} X\left[e^{j\left(\frac{\omega}{M} + k\frac{2\pi}{M}\right)}\right], \quad -\pi\leq \omega < \pi.$ (3.36)

The frequency scaling corresponds to having a sampling interval of $ T=1$ after downsampling, which corresponds to the interval $ T=1/M$ prior to downsampling.
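
A numerical check of the aliasing theorem, in the unit-circle form of Eq. (3.36) (a NumPy sketch; the signal length is a multiple of $ M$ only for convenience, and dtft evaluates Eq. (3.8)):

import numpy as np

def dtft(x, n, w):
    # Evaluate Eq. (3.8) for a finite-support signal x defined on the index grid n
    return np.exp(-1j * np.outer(w, n)) @ x

M = 3
x = np.random.randn(12 * M)
y = x[::M]                                            # Downsample_M(x): keep every Mth sample
wp = np.linspace(-np.pi, np.pi, 512, endpoint=False)  # frequency axis after downsampling

Y = dtft(y, np.arange(len(y)), wp)
# (1/M) * Alias_M(X), written on the downsampled-frequency axis as in Eq. (3.36)
alias = sum(dtft(x, np.arange(len(x)), (wp + 2 * np.pi * k) / M) for k in range(M))
print(np.allclose(Y, alias / M))                      # True: downsampling <-> aliasing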

The aliasing theorem makes it clear that, in order to downsample by factor $ M$ without aliasing, we must first lowpass-filter the spectrum to $ (-\pi / M, \pi / M)$ . This filtering (when ideal) zeroes out the spectral regions which alias upon downsampling.

Note that any rational sampling-rate conversion factor $ \rho = L/M$ may be implemented as an upsampling by the factor $ L$ followed by downsampling by the factor $ M$ [50,287]. Conceptually, a stretch-by-$ L$ is followed by a lowpass filter cutting off at $ \omega_c \isdeftext \pi/\max(L,M)$ , followed by downsample-by-$ M$ , i.e.,

$\displaystyle x^\prime \eqsp \hbox{\sc Downsample}_M\{\hbox{\sc Lowpass}_{\omega_c}[\hbox{\sc Stretch}_L(x)]\}$ (3.37)

In practice, there are more efficient algorithms for sampling-rate conversion [270,135,78] based on a more direct approach to bandlimited interpolation.
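
The following NumPy sketch implements Eq. (3.37) conceptually (the resample_rational helper, its taps parameter, and the windowed-sinc lowpass are illustrative stand-ins for the ideal filter, not an efficient or exact implementation):

import numpy as np

def resample_rational(x, L, M, taps=101):
    # Conceptual Eq. (3.37): stretch by L, lowpass at wc = pi/max(L, M) using a
    # windowed-sinc FIR (a non-ideal stand-in for the ideal lowpass), then keep
    # every Mth sample.  The gain L restores the passband amplitude reduced by
    # zero insertion.
    up = np.zeros(len(x) * L)
    up[::L] = x                                        # Stretch_L(x)
    wc = np.pi / max(L, M)
    n = np.arange(taps) - (taps - 1) / 2
    h = (wc / np.pi) * np.sinc(wc * n / np.pi) * np.hamming(taps)
    return (L * np.convolve(up, h, mode='same'))[::M]  # Downsample_M

x = np.sin(2 * np.pi * 0.05 * np.arange(200))
y = resample_rational(x, L=3, M=2)                     # convert the sampling rate by 3/2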

Proof of Aliasing Theorem

To show:

$\displaystyle \zbox {\hbox{\sc Downsample}_N(x) \;\longleftrightarrow\;\frac{1}{N} \hbox{\sc Alias}_N(X)}$

or
\fbox{$x(nN) \;\longleftrightarrow\;\dfrac{1}{N} \displaystyle\sum_{m=0}^{N-1} X\left(e^{j2\pi m/N} z^{1/N}\right)$}

From the DFT case [264], we know this is true when $ x$ and $ X$ are each complex sequences of length $ N_s$ , in which case the downsampled signal $ y\isdeftext \hbox{\sc Downsample}_N(x)$ and its transform $ Y$ have length $ N_s/N$ . Thus,

$\displaystyle x(nN) \;\longleftrightarrow\; Y(\omega_k N) \eqsp \frac{1}{N} \sum_{m=0}^{N-1} X\left(\omega_k + \frac{2\pi}{N} m \right), \; k\in [0,N_s/N)$ (3.38)

where we have chosen to keep frequency samples $ \omega_k$ in terms of the original frequency axis prior to downsampling, i.e., $ \omega_k = 2\pi k/ N_s$ for both $ X$ and $ Y$ . This choice allows us to easily take the limit as $ N_s\to\infty$ by simply replacing $ \omega_k$ by $ \omega$ :

$\displaystyle x(nN) \;\longleftrightarrow\; Y(\omega N) \eqsp \frac{1}{N} \sum_{m=0}^{N-1} X\left(\omega + \frac{2\pi}{N} m \right), \; \omega\in[0,2\pi/N)$ (3.39)

Replacing $ \omega$ by $ \omega^\prime =\omega N$ and converting to $ z$ -transform notation $ X(z)$ instead of Fourier transform notation $ X(\omega)$ , with $ z=e^{j\omega^\prime }$ , yields the final result.


Differentiation Theorem Dual


Theorem: Let $ x(n)$ denote a signal with DTFT $ X(e^{j\omega})$ , and let

$\displaystyle X^\prime(e^{j\omega}) \isdefs \frac{d}{d\omega} X(e^{j\omega})$ (3.40)

denote the derivative of $ X$ with respect to $ \omega$ . Then we have

$\displaystyle \zbox {-jn x(n) \;\longleftrightarrow\;\frac{d}{d\omega}X(e^{j\omega})}$

where $ X(e^{j\omega})$ denotes the DTFT of $ x(n)$ .


Proof: Using integration by parts, we obtain

\begin{eqnarray*}
\hbox{\sc IDTFT}_{n}(X^\prime)
&\isdef & \frac{1}{2\pi}\int_{-\pi}^\pi X^\prime(e^{j\omega}) e^{j\omega n} d\omega\\
&=& \left. \frac{1}{2\pi}X(e^{j\omega})e^{j\omega n}\right\vert _{-\pi}^{\pi} -
\frac{1}{2\pi}\int_{-\pi}^\pi X(e^{j\omega}) (jn)e^{j\omega n} d\omega\\
&=& -jn x(n),
\end{eqnarray*}

where the boundary term vanishes because both $ X(e^{j\omega})$ and $ e^{j\omega n}$ (for integer $ n$ ) are periodic in $ \omega$ with period $ 2\pi$ .

An alternate method of proof is given in §B.3.

Corollary: Perhaps a cleaner statement is as follows:

$\displaystyle \zbox {- n x(n) \;\longleftrightarrow\;\frac{d}{d(j\omega)}X(e^{j\omega})}$
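
A numerical check of the relation $ -jn\,x(n) \;\longleftrightarrow\; dX/d\omega$ (a NumPy sketch; dtft evaluates Eq. (3.8), and the derivative of $ X$ is approximated by finite differences on a dense grid):

import numpy as np

def dtft(x, n, w):
    # Evaluate Eq. (3.8) for a finite-support signal x defined on the index grid n
    return np.exp(-1j * np.outer(w, n)) @ x

N = 8
n = np.arange(N)
x = np.random.randn(N)
w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)

lhs = dtft(-1j * n * x, n, w)                         # DTFT of -j n x(n)
dX = np.gradient(dtft(x, n, w), w)                    # numerical dX/dw via finite differences
print(np.max(np.abs(lhs - dX)[1:-1]))                 # small: only finite-difference error remains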

This completes our coverage of selected DTFT theorems. The next section adds some especially useful FT theorems having no precise counterpart in the DTFT (discrete-time) case.

