An Easier Way

We derived the frequency response above using trig identities in order to minimize the mathematical level involved. However, it turns out it is actually easier, though more advanced, to use complex numbers for this purpose. To do this, we need Euler's identity:

$\displaystyle e^{j\theta} = \cos(\theta) + j \sin(\theta)\qquad \hbox{(Euler's Identity)}$ (1.8)

where $ j\isdef \sqrt{-1}$ is the imaginary unit for complex numbers, and $ e$ is a transcendental constant approximately equal to $ 2.718\ldots$. Euler's identity is fully derived in [84]; here we will simply use it ``on faith.'' It can be proved by computing the Taylor series expansion of each side of Eq.$ \,$(1.8) and showing equality term by term [84,14].

Complex Sinusoids

Using Euler's identity to represent sinusoids, we have

$\displaystyle A e^{j(\omega t+\phi)} = A\cos(\omega t+\phi) + j A\sin(\omega t+\phi)$ (1.9)

when time $ t$ is continuous (see §A.1 for a list of notational conventions), and when time is discrete,

$\displaystyle A e^{j(\omega nT+\phi)} = A\cos(\omega nT+\phi) + j A\sin(\omega nT+\phi).$ (1.10)

Any function of the form $ A e^{j(\omega t+\phi)}$ or $ A e^{j(\omega nT+\phi)}$ will henceforth be called a complex sinusoid. We will see that it is easier to manipulate both sine and cosine simultaneously in this form than it is to deal with either sine or cosine separately. One may even take the point of view that $ e^{j\theta}$ is simpler and more fundamental than $ \sin(\theta)$ or $ \cos(\theta)$, as evidenced by the following identities (which are immediate consequences of Euler's identity, Eq.$ \,$(1.8)):

$\displaystyle \cos(\theta) = \frac{e^{j\theta} + e^{-j\theta}}{2}$ (1.11)

$\displaystyle \sin(\theta) = \frac{e^{j\theta} - e^{-j\theta}}{2j}$ (1.12)

Thus, sine and cosine may each be regarded as a combination of two complex sinusoids. Another reason for the success of the complex sinusoid is that we will be concerned only with real linear operations on signals. This means that $ j$ in Eq.$ \,$(1.8) will never be multiplied by $ j$ or raised to a power by a linear filter with real coefficients. Therefore, the real and imaginary parts of that equation are treated independently. Thus, we can feed a complex sinusoid into a filter, and the real part of the output will be the cosine response while the imaginary part of the output will be the sine response.

For the student new to analysis using complex variables, natural questions at this point include ``Why $ e$? Where did the imaginary exponent come from? Are imaginary exponents legal?'' These questions are fully answered in [84] and elsewhere [53,14]. Here, we will look only at some intuitive connections between complex sinusoids and the more familiar real sinusoids.
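This independence of the real and imaginary parts is easy to verify numerically. The following Python sketch (illustrative only; the filter is the simplest low-pass $y(n) = x(n) + x(n-1)$ of Eq.$\,$(1.1), and the sampling rate and test frequency are arbitrary choices) feeds a complex sinusoid into a real-coefficient filter and confirms that the real part of the output is the cosine response and the imaginary part is the sine response:

```python
import cmath
import math

# A real, linear filter: y(n) = x(n) + x(n-1)  (the simplest low-pass).
def simplest_lowpass(x):
    return [x[n] + (x[n - 1] if n > 0 else 0) for n in range(len(x))]

fs = 8.0          # sampling rate in Hz (arbitrary demo value)
f = 1.0           # test frequency in Hz (arbitrary demo value)
N = 16
w = 2 * math.pi * f

# Complex sinusoid input, and its real/imaginary parts as separate signals
x = [cmath.exp(1j * w * n / fs) for n in range(N)]
xc = [math.cos(w * n / fs) for n in range(N)]   # cosine input
xs = [math.sin(w * n / fs) for n in range(N)]   # sine input

y = simplest_lowpass(x)
yc = simplest_lowpass(xc)
ys = simplest_lowpass(xs)

# Real coefficients never mix the real and imaginary parts:
assert all(abs(y[n].real - yc[n]) < 1e-12 for n in range(N))
assert all(abs(y[n].imag - ys[n]) < 1e-12 for n in range(N))
```

Running a single complex filter is therefore equivalent to running two copies of the real filter in parallel, one on the cosine and one on the sine.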

Complex Amplitude

Note that the amplitude $ A$ and phase $ \phi$ can be viewed as the magnitude and angle of a single complex number

$\displaystyle {\cal A}\isdefs A e^{j\phi}$

which is naturally thought of as the complex amplitude of the complex sinusoid defined by the left-hand side of either Eq.$ \,$(1.9) or Eq.$ \,$(1.10). The complex amplitude is the same whether we are talking about the continuous-time sinusoid $ A e^{j(\omega t+\phi)}$ or the discrete-time sinusoid $ A e^{j(\omega nT+\phi)}$.

Phasor Notation

The complex amplitude $ {\cal A}\isdef A e^{j\phi}$ is also defined as the phasor associated with any sinusoid having amplitude $ A$ and phase $ \phi$. The term ``phasor'' is more general than ``complex amplitude'', however, because it also applies to the corresponding real sinusoid given by the real part of Equations (1.9-1.10). In other words, the real sinusoids $ A\cos(\omega t+\phi)$ and $ A\cos(\omega nT+\phi)$ may be expressed as

\begin{eqnarray*}
A\cos(\omega t+\phi) &\isdef & \mbox{re}\left\{A e^{j(\omega t+\phi)}\right\} \eqsp \mbox{re}\left\{{\cal A}e^{j\omega t}\right\}\\
A\cos(\omega nT+\phi) &\isdef & \mbox{re}\left\{A e^{j(\omega nT+\phi)}\right\} \eqsp \mbox{re}\left\{{\cal A}e^{j\omega nT}\right\}
\end{eqnarray*}

and $ {\cal A}$ is the associated phasor in each case. Thus, we say that the phasor representation of $ A\cos(\omega t+\phi)$ is $ {\cal A}\isdef A e^{j\phi}$. Phasor analysis is often used to analyze linear time-invariant systems such as analog electrical circuits.
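The phasor encoding of amplitude and phase as a single complex number is easy to check numerically. The Python sketch below (with arbitrary illustrative values, not taken from the text) builds ${\cal A} = A e^{j\phi}$, recovers $A$ and $\phi$ as its magnitude and angle, and confirms that $\mbox{re}\{{\cal A}e^{j\omega nT}\}$ reproduces the real sinusoid:

```python
import cmath
import math

A, phi = 1.5, math.pi / 3          # amplitude and phase (arbitrary demo values)
calA = A * cmath.exp(1j * phi)     # the phasor (complex amplitude)

# The single complex number encodes both amplitude and phase:
assert abs(abs(calA) - A) < 1e-12
assert abs(cmath.phase(calA) - phi) < 1e-12

# The real sinusoid is the real part of the phasor times e^{j w n T}:
w, T = 2 * math.pi * 100.0, 1 / 1000.0   # arbitrary frequency and sampling interval
for n in range(8):
    direct = A * math.cos(w * n * T + phi)
    via_phasor = (calA * cmath.exp(1j * w * n * T)).real
    assert abs(direct - via_phasor) < 1e-12
```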

Plotting Complex Sinusoids as Circular Motion

Figure 1.8 shows Euler's relation graphically as it applies to sinusoids. A point traveling with uniform velocity around a circle of radius 1 may be represented by $ e^{j\omega t}=e^{j2\pi f t}$ in the complex plane, where $ t$ is time and $ f$ is the number of revolutions per second. The projection of this motion onto the horizontal (real) axis is $ \cos (\omega t)$, and the projection onto the vertical (imaginary) axis is $ \sin (\omega t)$. For discrete-time circular motion, replace $ t$ by $ nT$ to get $ e^{j\omega nT} = e^{j2\pi (f/f_s) n}$, which may be interpreted as a point that jumps an arc length of $ 2\pi f/f_s$ radians along the circle at each sampling instant.
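The sampled circular motion can be sketched numerically. In the following Python fragment (with arbitrary demo values for $f$ and $f_s$, chosen so one revolution takes exactly 8 samples), each sample advances $2\pi f/f_s$ radians along the unit circle, and the projections onto the axes are the cosine and sine:

```python
import cmath
import math

f, fs = 1.0, 8.0                   # one revolution per second, 8 samples/revolution
step = 2 * math.pi * f / fs        # arc traversed per sampling instant (radians)

points = [cmath.exp(1j * step * n) for n in range(int(fs) + 1)]

# Every sample lies on the unit circle...
assert all(abs(abs(p) - 1) < 1e-12 for p in points)
# ...its projections are cos and sin of the accumulated angle...
assert all(abs(p.real - math.cos(step * n)) < 1e-12 for n, p in enumerate(points))
assert all(abs(p.imag - math.sin(step * n)) < 1e-12 for n, p in enumerate(points))
# ...and after fs/f samples the point has completed one full revolution.
assert abs(points[8] - points[0]) < 1e-12
```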

Figure: Relation of uniform circular motion to sinusoidal motion via Euler's identity $ \exp (j\omega t) = \cos (\omega t) + j\sin (\omega t)$ (Eq.$ \,$(1.8)). The projection of $ \exp (j\omega t)$ onto the real axis is $ \cos (\omega t)$, and its projection onto the imaginary axis is $ \sin (\omega t)$.

Figure 1.9: Opposite circular motions add to give real sinusoidal motion, $ e^{j\omega t} + e^{-j\omega t} = 2\cos (\omega t)$.

$\textstyle \parbox{0.8\textwidth}{%
\emph{Euler's identity says that a complex sinusoid is circular motion in the complex plane, and is the vector sum of two
sinusoidal motions.}}$

For circular motion to ensue, the sinusoidal motions must be at the same frequency, one-quarter cycle out of phase, and perpendicular (orthogonal) to each other. (With phase differences other than one-quarter cycle, the motion is generally elliptical.)

The converse of this is also illuminating. Take the usual circular motion $ e^{j\omega t}$ which spins counterclockwise along the unit circle as $ t$ increases, and add to it a similar but clockwise circular motion $ e^{-j\omega t}$. This is shown in Fig.1.9. Next apply Euler's identity to get

\begin{eqnarray*}
e^{j\omega t} + e^{-j\omega t}
&=& \left[\cos(\omega t) + j \sin(\omega t)\right] + \left[\cos(\omega t) - j \sin(\omega t)\right]\\
&=& 2 \cos(\omega t)
\end{eqnarray*}


$\textstyle \parbox{0.8\textwidth}{%
\emph{\emph{Cosine} motion is the vector sum of two circular
motions with the same angular speed but opposite direction.}}$
This statement is a graphical or geometric interpretation of Eq.$ \,$(1.11). A similar derivation (subtracting instead of adding) gives the sine identity Eq.$ \,$(1.12).
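Both identities can be confirmed numerically in a few lines of Python (an illustrative check, with arbitrary test angles): opposite circular motions add to twice the cosine (Eq.$\,$(1.11)), and their difference over $2j$ gives the sine (Eq.$\,$(1.12)):

```python
import cmath

for theta in [0.0, 0.3, 1.0, 2.5, -1.2]:   # arbitrary test angles
    cpos = cmath.exp(1j * theta)           # counterclockwise circular motion
    cneg = cmath.exp(-1j * theta)          # clockwise circular motion
    # Eq.(1.11): cos(theta) = (e^{j theta} + e^{-j theta}) / 2
    assert abs((cpos + cneg) / 2 - cmath.cos(theta)) < 1e-12
    # Eq.(1.12): sin(theta) = (e^{j theta} - e^{-j theta}) / (2j)
    assert abs((cpos - cneg) / (2j) - cmath.sin(theta)) < 1e-12
```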

We call $ e^{j\omega t}$ a positive-frequency sinusoidal component when $ \omega > 0$, and $ e^{-j\omega t}$ is the corresponding negative-frequency component. Note that both sine and cosine signals have equal-amplitude positive- and negative-frequency components (see also [84,53]). This happens to be true of every real signal (i.e., non-complex). To see this, recall that every signal can be represented as a sum of complex sinusoids at various frequencies (its Fourier expansion). For the signal to be real, every positive-frequency complex sinusoid must be summed with a negative-frequency sinusoid of equal amplitude. In other words, any counterclockwise circular motion must be matched by an equal and opposite clockwise circular motion in order that the imaginary parts always cancel to yield a real signal (see Fig.1.9). Thus, a real signal always has a magnitude spectrum which is symmetric about 0 Hz. Fourier symmetries such as this are developed more completely in [84].
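The symmetry of a real signal's magnitude spectrum can be illustrated with a small Python sketch. The naive DFT below is written only for this demo (it is not an implementation from the text, and the signal parameters are arbitrary); for a real input, bins $k$ and $N-k$ (the positive- and negative-frequency components) come out with equal magnitude:

```python
import cmath
import math

# Naive O(N^2) DFT, sufficient for a short demonstration
def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 16
# A real signal: a sinusoid at bin 3 with an arbitrary phase
x = [math.cos(2 * math.pi * 3 * n / N + 0.7) for n in range(N)]
X = dft(x)

# Conjugate symmetry: |X[k]| == |X[N-k]| for every real input,
# i.e., the magnitude spectrum is symmetric about 0 Hz.
assert all(abs(abs(X[k]) - abs(X[N - k])) < 1e-9 for k in range(1, N))
```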

Rederiving the Frequency Response

Let's repeat the mathematical sine-wave analysis of the simplest low-pass filter, but this time using a complex sinusoid instead of a real one. Thus, we will test the filter's response at frequency $ f$ by setting its input to

$\displaystyle x(n) = Ae^{j(2\pi f nT + \phi)} = A\cos(2\pi f n T + \phi) + j A\sin(2\pi f n T + \phi).$
Again, because of time-invariance, the frequency response will not depend on $ \phi$, so let $ \phi = 0$. Similarly, owing to linearity, we may normalize $ A$ to 1. By virtue of Euler's relation Eq.$ \,$(1.8) and the linearity of the filter, setting the input to $ x(n) = e^{j\omega nT}$ is physically equivalent to putting $ \cos(\omega nT)$ into one copy of the filter and $ \sin(\omega nT)$ into a separate copy of the same filter. The signal path where the cosine goes in is the real part of the signal, and the other signal path is simply called the imaginary part. Thus, a complex signal in real life is implemented as two real signals processed in parallel; in particular, a complex sinusoid is implemented as two real sinusoids, side by side, one-quarter cycle out of phase. When the filter itself is real, two copies of it suffice to process a complex signal. If the filter is complex, we must implement complex multiplies between the complex signal samples and the filter coefficients.

Using the normal rules for manipulating exponents, we find that the output of the simple low-pass filter in response to the complex sinusoid at frequency $ \omega/2\pi$ Hz is given by

\begin{eqnarray*}
y(n) &=& x(n) + x(n - 1) \\
&=& e^{j\omega n T} + e^{j\omega (n-1) T}\\
&=& e^{j\omega n T} + e^{j\omega n T}e^{-j\omega T}\\
&=& (1 + e^{-j\omega T})\, e^{j\omega n T}\\
&=& (1 + e^{-j\omega T})\, x(n)\\
&\isdef & H(e^{j\omega T})\, x(n),
\end{eqnarray*}

where we have defined $ H(e^{j\omega T})\isdef (1+e^{-j\omega T})$, which we will show is in fact the frequency response of this filter at frequency $ \omega$. This derivation is clearly easier than the trigonometry approach. What may be puzzling at first, however, is that the filter is expressed as a frequency-dependent complex multiply (when the input signal is a complex sinusoid). What does this mean? Well, the theory we are blindly trusting at this point says it must somehow mean a gain scaling and a phase shift. This is true and easy to see once the complex filter gain is expressed in polar form,

$\displaystyle H(e^{j\omega T}) \eqsp G(\omega)e^{j\Theta(\omega)},$

where the gain versus frequency is given by $ G(\omega)\isdef \vert H(e^{j\omega T})\vert$ (the absolute value, or modulus, of $ H$), and the phase shift in radians versus frequency is given by the phase angle (or argument) $ \Theta(\omega)\isdeftext \angle H(e^{j\omega T})$. In other words, we must find

$\displaystyle G(\omega) \isdefs \left\vert H(e^{j\omega T})\right\vert$

which is the amplitude response, and

$\displaystyle \Theta(\omega) \isdefs \angle H(e^{j\omega T})$

which is the phase response. There is a trick we can call ``balancing the exponents,'' which will work nicely for the simple low-pass of Eq.$ \,$(1.1).

\begin{eqnarray*}
H(e^{j\omega T}) &=& (1 + e^{-j\omega T})\\
&=& (e^{j\omega T/2} + e^{-j\omega T/2})\, e^{-j\omega T/2}\\
&=& 2\cos(\omega T/2)\, e^{-j\omega T/2}
\end{eqnarray*}

It is now easy to see that

\begin{eqnarray*}
G(\omega) &=& \left\vert 2\cos(\omega T/2)e^{-j\omega T/2}\right\vert
\eqsp 2\left\vert\cos(\omega T/2)\right\vert\\
&=& 2\cos(\omega T/2) \eqsp 2\cos(\pi f T), \qquad \left\vert f\right\vert\leq f_s/2.
\end{eqnarray*}


$\displaystyle \Theta(\omega) \eqsp -\frac{\omega T}{2} \eqsp -\pi f T
\eqsp - \pi \frac{f}{f_s},
\qquad \left\vert f\right\vert\leq f_s/2.$

We have derived again the graph of Fig.1.7, which shows the complete frequency response of Eq.$ \,$(1.1). The gain of the simplest low-pass filter varies, as cosine varies, from 2 to 0 as the frequency of an input sinusoid goes from 0 to half the sampling rate. In other words, the amplitude response of Eq.$ \,$(1.1) goes sinusoidally from 2 to 0 as $ \omega T$ goes from 0 to $ \pi $. It is reasonable to call this a low-pass filter, though a poor one in the sense that it is hard to say which frequency should be called the cut-off frequency. The spectral ``roll-off'' is very slow, as low-pass filters go, and this is what we pay for the extreme simplicity of Eq.$ \,$(1.1). The phase response $ \Theta(\omega) = -\omega T/2$ is linear in frequency, which gives rise to a constant time delay irrespective of the signal frequency.
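The whole derivation can be spot-checked numerically. This Python sketch (arbitrary sampling rate and test frequencies; not from the text) measures the complex gain $y(n)/x(n)$ of $y(n) = x(n) + x(n-1)$ on a complex sinusoid, and confirms it matches $H(e^{j\omega T}) = 1 + e^{-j\omega T}$ with magnitude $2\cos(\omega T/2)$ and phase $-\omega T/2$:

```python
import cmath
import math

fs = 10.0                              # sampling rate (arbitrary demo value)
T = 1.0 / fs

for f in [0.5, 1.0, 2.0, 4.0]:         # test frequencies below fs/2
    w = 2 * math.pi * f
    # Measure the complex gain of y(n) = x(n) + x(n-1) on a complex sinusoid
    x = lambda n: cmath.exp(1j * w * n * T)
    measured = (x(5) + x(4)) / x(5)    # y(n)/x(n) at an arbitrary sample
    H = 1 + cmath.exp(-1j * w * T)
    assert abs(measured - H) < 1e-12
    # Amplitude response: G(w) = 2 cos(wT/2), from 2 at DC toward 0 at fs/2
    assert abs(abs(H) - 2 * math.cos(w * T / 2)) < 1e-12
    # Phase response: Theta(w) = -wT/2, linear in frequency
    assert abs(cmath.phase(H) + w * T / 2) < 1e-12
```

Because the filter is time-invariant, the measured ratio is the same at every sample $n$, which is why a single sample suffices for the check.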

It deserves to be emphasized that all a linear time-invariant filter can do to a sinusoid is scale its amplitude and change its phase. Since a sinusoid is completely determined by its amplitude $ A$, frequency $ f$, and phase $ \phi$, the constraint on the filter is that the output must also be a sinusoid, and furthermore it must be at the same frequency as the input sinusoid. More explicitly:

$\textstyle \parbox{0.8\textwidth}{%
If a sinusoid, $A_1\cos(\omega nT + \phi_1)$, is input to a linear time-invariant filter, the output is a sinusoid at the same frequency, $A_2\cos(\omega nT + \phi_2)$. The filter is thus completely characterized by its gain $A_2/A_1$, and phase $\phi_2 - \phi_1$, at
each frequency.}$

Mathematically, a sinusoid has no beginning and no end, so there really are no start-up transients in the theoretical setting. However, in practice, we must approximate eternal sinusoids with finite-time sinusoids whose starting time was so long ago that the filter output is essentially the same as if the input had been applied forever.

Tying it all together, the general output of a linear time-invariant filter with a complex sinusoidal input may be expressed as

\begin{eqnarray*}
y(n) &=& (\textit{Complex Filter Gain}) \;\textit{times}\;\, (\textit{Circular Motion})\\
&=& G(\omega)e^{j\Theta(\omega)} \cdot A e^{j(\omega nT + \phi)}\\
&=& G(\omega)A\, e^{j[\omega nT + \phi + \Theta(\omega)]}\\
&=& \textit{Circular Motion with Radius $[G(\omega)A]$\ and Phase $[\phi + \Theta(\omega)]$}.
\end{eqnarray*}
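This final picture is easy to confirm numerically. The Python sketch below (arbitrary demo values for the sampling interval, frequency, amplitude, and phase) runs a complex sinusoid through $y(n) = x(n) + x(n-1)$ and checks that the output is circular motion with radius $G(\omega)A$ and phase $\phi + \Theta(\omega)$:

```python
import cmath
import math

T = 1.0 / 8.0                      # sampling interval (arbitrary demo value)
w = 2 * math.pi * 1.0              # input radian frequency (arbitrary)
A, phi = 0.7, 0.4                  # input amplitude and phase (arbitrary)

H = 1 + cmath.exp(-1j * w * T)     # frequency response of y(n) = x(n) + x(n-1)
G, Theta = abs(H), cmath.phase(H)  # gain and phase shift at this frequency

for n in range(1, 12):
    x_n = A * cmath.exp(1j * (w * n * T + phi))          # input sample
    y_n = x_n + A * cmath.exp(1j * (w * (n - 1) * T + phi))
    # Output = circular motion with radius G*A and phase phi + Theta
    pred = G * A * cmath.exp(1j * (w * n * T + phi + Theta))
    assert abs(y_n - pred) < 1e-12
```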
