
Analog Filters

For our purposes, an analog filter is any filter which operates on continuous-time signals. In other respects, they are just like digital filters. In particular, linear, time-invariant (LTI) analog filters can be characterized by their (continuous) impulse response $ h(t)$, where $ t$ is time in seconds. Instead of a difference equation, analog filters may be described by a differential equation. Instead of using the z transform to compute the transfer function, we use the Laplace transform (introduced in Appendix D). Every aspect of the theory of digital filters has its counterpart in that of analog filters. In fact, one can think of analog filters as simply the limiting case of digital filters as the sampling rate is allowed to go to infinity.

In the real world, analog filters are often electrical models, or ``analogues'', of mechanical systems working in continuous time. If the physical system is LTI (e.g., consisting of elastic springs and masses which are constant over time), an LTI analog filter can be used to model it. Before the widespread use of digital computers, physical systems were simulated on so-called ``analog computers.'' An analog computer was much like an analog synthesizer providing modular building-blocks (such as ``integrators'') that could be patched together to build models of dynamic systems.

Example Analog Filter

Figure E.1: Simple RC lowpass.

Figure E.1 shows a simple analog filter consisting of one resistor ($ R$ Ohms) and one capacitor ($ C$ Farads). The voltages across these elements are $ v_R(t)$ and $ v_C(t)$, respectively, where $ t$ denotes time in seconds. The filter input is the externally applied voltage $ v_e(t)$, and the filter output is taken to be $ v_C(t)$. By Kirchhoff's loop constraints [20], we have

$\displaystyle v_e(t) = v_R(t) + v_C(t), \protect$ (E.1)

and the loop current is $ i(t)$.


Capacitors

A capacitor can be made physically using two parallel conducting plates which are held close together (but not touching). Electric charge can be stored in a capacitor by applying a voltage across the plates.

The defining equation of a capacitor $ C$ is

$\displaystyle q(t) = Cv(t) \protect$ (E.2)

where $ q(t)$ denotes the capacitor's charge in Coulombs, $ C$ is the capacitance in Farads, and $ v(t)$ is the voltage drop across the capacitor in volts. Differentiating with respect to time gives

$\displaystyle i(t) = C\frac{dv(t)}{dt},
$

where $ i(t)\isdef dq(t)/dt$ is now the current in Amperes. Note that, by convention, the current is taken to be positive when flowing from plus to minus across the capacitor (see the arrow in Fig.E.1 which indicates the direction of current flow--there is only one current $ i(t)$ flowing clockwise around the loop formed by the voltage source, resistor, and capacitor when an external voltage $ v_e$ is applied).

Taking the Laplace transform of both sides gives

$\displaystyle I(s) = Cs V(s) - Cv(0),
$

by the differentiation theorem for Laplace transforms (§D.4.2).

Assuming a zero initial voltage across the capacitor at time 0, we have

$\displaystyle R_C(s) \isdef \frac{V(s)}{I(s)} = \frac{1}{Cs}.
$

We call this the driving-point impedance of the capacitor. The driving-point impedance facilitates steady state analysis (zero initial conditions) by allowing the capacitor to be analyzed like a simple resistor, with value $ 1/(Cs)$ Ohms.

Mechanical Equivalent of a Capacitor is a Spring

The mechanical analog of a capacitor is the compliance of a spring. The voltage $ v(t)$ across a capacitor $ C$ corresponds to the force $ f(t)$ used to displace a spring. The charge $ q(t)$ stored in the capacitor corresponds to the displacement $ x(t)$ of the spring. Thus, Eq.$ \,$(E.2) corresponds to Hooke's law for ideal springs:

$\displaystyle x(t) = \frac{1}{k} f(t),
$

where $ k$ is called the spring constant or spring stiffness. Note that Hooke's law is usually written as $ f(t) =
k\,x(t)$. The quantity $ 1/k$ is called the spring compliance.


Inductors

Figure E.2: An RLC filter, input $ = v_e(t)$, output $ = v_C(t) = v_L(t)$.

An inductor can be made physically using a coil of wire, and it stores magnetic flux when a current flows through it. Figure E.2 shows a circuit in which a resistor $ R$ is in series with the parallel combination of a capacitor $ C$ and inductor $ L$.

The defining equation of an inductor $ L$ is

$\displaystyle \phi(t) = Li(t) \protect$ (E.3)

where $ \phi(t)$ denotes the inductor's stored magnetic flux at time $ t$, $ L$ is the inductance in Henrys (H), and $ i(t)$ is the current through the inductor coil in Amperes (A), where an Ampere is a Coulomb (of electric charge) per second. Differentiating with respect to time gives

$\displaystyle v(t) = L\frac{di(t)}{dt}, \protect$ (E.4)

where $ v(t)= d \phi(t)/ dt$ is the voltage across the inductor in volts. Again, the current $ i(t)$ is taken to be positive when flowing from plus to minus through the inductor.

Taking the Laplace transform of both sides gives

$\displaystyle V(s) = Ls I(s) - LI(0),
$

by the differentiation theorem for Laplace transforms.

Assuming a zero initial current in the inductor at time 0, we have

$\displaystyle R_L(s) \isdef \frac{V(s)}{I(s)} = Ls.
$

Thus, the driving-point impedance of the inductor is $ Ls$. Like the capacitor, it can be analyzed in steady state (initial conditions neglected) as a simple resistor with value $ Ls$ Ohms.

Mechanical Equivalent of an Inductor is a Mass

The mechanical analog of an inductor is a mass. The voltage $ v(t)$ across an inductor $ L$ corresponds to the force $ f(t)$ used to accelerate a mass $ m$. The current $ i(t)$ through the inductor corresponds to the velocity $ {\dot x}(t)$ of the mass. Thus, Eq.$ \,$(E.4) corresponds to Newton's second law for an ideal mass:

$\displaystyle f(t) = m a(t),
$

where $ a(t)$ denotes the acceleration of the mass $ m$.

From the defining equation $ \phi=Li$ for an inductor [Eq.$ \,$(E.3)], we see that the stored magnetic flux in an inductor is analogous to mass times velocity, or momentum. In other words, magnetic flux may be regarded as electric-charge momentum.


RC Filter Analysis

Referring again to Fig.E.1, let's perform an impedance analysis of the simple RC lowpass filter.

Driving Point Impedance

Taking the Laplace transform of both sides of Eq.$ \,$(E.1) gives

$\displaystyle V_e(s) = V_R(s) + V_C(s) = R\, I(s) + \frac{1}{Cs} I(s)
$

where we made use of the fact that the impedance of a capacitor is $ 1/(Cs)$, as derived above. The driving point impedance of the whole RC filter is thus

$\displaystyle R_d(s) \isdef \frac{V_e(s)}{I(s)} = R + \frac{1}{Cs}.
$

Alternatively, we could simply note that impedances always sum in series and write down this result directly.


Transfer Function

Since the input and output signals are defined as $ v_e(t)$ and $ v_C(t)$, respectively, the transfer function of this analog filter is given by the voltage divider rule:

$\displaystyle H(s) = \frac{V_C(s)}{V_e(s)}
= \frac{\frac{1}{Cs}}{R+\frac{1}{Cs}}
= \frac{1}{RCs+1}
= \frac{1}{RC}\frac{1}{s+\frac{1}{RC}}
\isdef \frac{1}{\tau} \frac{1}{s+\frac{1}{\tau}}.
$

The parameter $ \tau\isdef RC$ is called the RC time constant, for reasons we will soon see.
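As a quick numerical check, here is a minimal sketch that evaluates this transfer function along the $ j\omega$ axis with scipy.signal.freqs and locates the $ -3$ dB point, which should fall at $ \omega=1/\tau$ rad/s. The component values $ R$ and $ C$ are assumed for illustration and do not come from the text.

# Sketch: frequency response of the RC lowpass H(s) = (1/tau)/(s + 1/tau),
# using assumed (illustrative) component values.
import numpy as np
from scipy import signal

R = 1e3      # resistance in Ohms (assumed)
C = 1e-7     # capacitance in Farads (assumed)
tau = R * C  # RC time constant in seconds

# H(s) = 1/(tau*s + 1) as numerator/denominator coefficients in s
b, a = [1.0], [tau, 1.0]
w, H = signal.freqs(b, a, worN=np.logspace(2, 6, 200))  # w in rad/s

# The -3 dB (half-power) point should fall near w = 1/tau
w3 = w[np.argmin(np.abs(np.abs(H) - 1/np.sqrt(2)))]
print(f"1/tau = {1/tau:.1f} rad/s, measured -3 dB point ~ {w3:.1f} rad/s")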


Impulse Response

In the same way that the impulse response of a digital filter is given by the inverse z transform of its transfer function, the impulse response of an analog filter is given by the inverse Laplace transform of its transfer function, viz.,

$\displaystyle h(t) = {\cal L}_t^{-1}\{H(s)\} = \frac{1}{\tau}\, e^{-t/\tau} u(t)
$

where $ u(t)$ denotes the Heaviside unit step function

$\displaystyle u(t) \isdef \left\{\begin{array}{ll}
1, & t\geq 0 \\ [5pt]
0, & t<0. \\
\end{array}\right.
$

This result is most easily checked by taking the Laplace transform of an exponential decay with time-constant $ \tau>0$:

\begin{eqnarray*}
{\cal L}_s\{e^{-t/\tau}\}
&\isdef & \int_0^{\infty}e^{-t/\tau}e^{-st}\,dt
\;=\; \int_0^{\infty}e^{-(s+1/\tau)t}\,dt\\
&=& \left.-\frac{e^{-(s+1/\tau)t}}{s+1/\tau}\right\vert _0^\infty\\
&=& \frac{1}{s+1/\tau} = \frac{RC}{RCs+1}.
\end{eqnarray*}

In more complicated situations, any rational $ H(s)$ (ratio of polynomials in $ s$) may be expanded into first-order terms by means of a partial fraction expansion (see §6.8) and each term in the expansion inverted by inspection as above.
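For instance, scipy.signal.residue performs this partial fraction expansion numerically. A minimal sketch, applied to the RC example with an assumed value of $ \tau$ (for illustration only):

# Sketch: partial fraction expansion of a rational H(s) with scipy.signal.residue,
# then inverting each first-order term by inspection as e^{p t} u(t).
import numpy as np
from scipy import signal

tau = 1e-4                      # assumed RC time constant
b, a = [1.0], [tau, 1.0]        # H(s) = 1/(tau*s + 1)

r, p, k = signal.residue(b, a)  # H(s) = sum_i r[i]/(s - p[i]) + direct terms k
print("residues:", r)           # expect [1/tau]
print("poles:   ", p)           # expect [-1/tau]

# h(t) = sum_i r[i] * exp(p[i]*t) for t >= 0
t = np.linspace(0, 5*tau, 6)
h = sum(ri * np.exp(pi * t) for ri, pi in zip(r, p)).real
print(h)                        # matches (1/tau) * exp(-t/tau)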


The Continuous-Time Impulse

The continuous-time impulse response was derived above as the inverse-Laplace transform of the transfer function. In this section, we look at how the impulse itself must be defined in the continuous-time case.

An impulse in continuous time may be loosely defined as any ``generalized function'' having ``zero width'' and unit area under it. A simple valid definition is

$\displaystyle \delta(t) \isdef \lim_{\Delta \to 0} \left\{\begin{array}{ll} \frac{1}{\Delta}, & 0\leq t\leq \Delta \\ [5pt] 0, & \hbox{otherwise}. \\ \end{array} \right. \protect$ (E.5)

More generally, an impulse can be defined as the limit of any pulse shape which maintains unit area and approaches zero width at time 0. As a result, the impulse under every definition has the so-called sifting property under integration,

$\displaystyle \int_{-\infty}^\infty f(t) \delta(t) dt = f(0), \protect$ (E.6)

provided $ f(t)$ is continuous at $ t=0$. This is often taken as the defining property of an impulse, allowing it to be defined in terms of non-vanishing function limits such as

$\displaystyle \delta(t) \isdef \lim_{\Omega\to\infty}\frac{\sin(\Omega t)}{\pi t}.
$

An impulse is not a function in the usual sense, so it is called instead a distribution or generalized function [13,44]. (It is still commonly called a ``delta function'', however, despite the misnomer.)
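The sifting property Eq.$ \,$(E.6) can be illustrated numerically by substituting the unit-area pulse of Eq.$ \,$(E.5) for $ \delta(t)$ and letting its width shrink. A minimal sketch (the choice $ f=\cos$ is arbitrary):

# Sketch: the sifting property, with delta(t) replaced by the unit-area
# rectangular pulse of Eq.(E.5); the integral approaches f(0) as Delta -> 0.
import numpy as np

f = np.cos                              # any function continuous at t = 0; f(0) = 1
for Delta in (1e-1, 1e-2, 1e-3):
    t = np.linspace(0.0, Delta, 1001)   # pulse support [0, Delta]
    pulse = np.full_like(t, 1.0/Delta)  # height 1/Delta => unit area
    integral = np.trapz(f(t) * pulse, t)
    print(f"Delta = {Delta:g}: integral = {integral:.6f} (f(0) = {f(0.0):.6f})")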


Poles and Zeros

In the simple RC-filter example of §E.4.3, the transfer function is

$\displaystyle H(s) = \frac{1}{\tau}\frac{1}{s+1/\tau} = \frac{1}{RCs+1}.
$

Thus, there is a single pole at $ s=-1/\tau=-1/(RC)$, and we can say there is one zero at infinity as well. Since resistors and capacitors always have positive values, the time constant $ \tau = RC$ is always positive. This means the impulse response is always an exponential decay--never a growth. Since the pole is at $ s=-1/\tau$, it is always in the left-half $ s$ plane. This turns out to be the case also for any complex analog one-pole filter. By consideration of the partial fraction expansion of any $ H(s)$, it is clear that, for stability of an analog filter, all poles must lie in the left half of the complex $ s$ plane. This is the analog counterpart of the requirement for digital filters that all poles lie inside the unit circle.


RLC Filter Analysis

Referring now to Fig.E.2, let's perform an impedance analysis of that RLC network.

Driving Point Impedance

By inspection, we can write

$\displaystyle R_d(s) = R + \left(Ls \left\Vert \frac{1}{Cs}\right.\right)
= R + \frac{Ls\cdot\frac{1}{Cs}}{Ls+\frac{1}{Cs}}
= R + \frac{Ls}{1+LCs^2} = R + \frac{1}{C}
\frac{s}{s^2+\frac{1}{LC}},
$

where $ \Vert$ denotes ``in parallel with,'' and we used the general formula, memorized by any electrical engineering student,

$\displaystyle \zbox {R_1 \Vert R_2 = \frac{R_1 R_2}{R_1 + R_2}.}
$

That is, the impedance of the parallel combination of impedances $ R_1$ and $ R_2$ is given by the product divided by the sum of the impedances.


Transfer Function

The transfer function in this example can similarly be found using voltage divider rule:

$\displaystyle H(s) = \frac{V_C(s)}{V_e(s)}
= \frac{Ls \left\Vert \frac{1}{Cs}\right.}{R + \left(Ls \left\Vert \frac{1}{Cs}\right.\right)}
= \frac{\frac{1}{RC}\,s}{s^2 + \frac{1}{RC} s + \frac{1}{LC}}
\isdef 2\eta\cdot\frac{s}{s^2 + 2\eta s + \omega_0^2},
$

where we have defined $ \eta\isdef 1/(2RC)$ and $ \omega_0\isdef 1/\sqrt{LC}$.


Poles and Zeros

From the quadratic formula, the two poles are located at

$\displaystyle s =
-\eta \pm \sqrt{\eta^2 - \omega_0^2}
\;\isdef \;
-\frac{1}{2RC} \pm \sqrt{\left(\frac{1}{2RC}\right)^2 - \frac{1}{LC}}
$

and there is a zero at $ s=0$ and another at $ s=\infty$. If the damping constant $ \eta$ is sufficiently small so that $ \eta^2 < \omega_0^2$, then the poles form a complex-conjugate pair:

$\displaystyle s = -\eta \pm j\sqrt{\omega_0^2 - \eta^2}
$

Since $ \eta = 1/(2RC) > 0$, the poles are always in the left-half plane, and hence the analog RLC filter is always stable. When the damping is zero, the poles go to the $ j\omega$ axis:

$\displaystyle s = \pm j\omega_0
$


Impulse Response

The impulse response is again the inverse Laplace transform of the transfer function. Expanding $ H(s)$ into a sum of complex one-pole sections,

$\displaystyle H(s) = 2\eta\cdot\frac{s}{s^2 + 2\eta s + \omega_0^2}
= \frac{r_1}{s-p_1} + \frac{r_2}{s-p_2}
= \frac{(r_1+r_2)s - (r_1p_2 + r_2p_1)}{s^2-(p_1 + p_2)s + p_1p_2},
$

where $ p_{1,2}\isdef -\eta \pm \sqrt{\eta^2 - \omega_0^2}$. Equating numerator coefficients gives

\begin{eqnarray*}
r_1+r_2 &=& 2\eta \;\mathrel{=}\; \frac{1}{RC}\\
r_1p_2 + r_2p_1 &=& 0.
\end{eqnarray*}

This pair of equations in two unknowns may be solved for $ r_1$ and $ r_2$. The impulse response is then

$\displaystyle h(t) = r_1 e^{p_1 t} u(t) + r_2 e^{p_2 t} u(t).
$
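A minimal numerical sketch of this solution, again with assumed $ R$, $ L$, $ C$ values (chosen so that the poles are complex):

# Sketch: solving for the residues r1, r2 of the RLC transfer function and
# synthesizing h(t) = r1 e^{p1 t} + r2 e^{p2 t}.
import numpy as np

R, L, C = 100.0, 1e-3, 1e-7                   # assumed component values
eta, w0 = 1/(2*R*C), 1/np.sqrt(L*C)

p1 = -eta + np.sqrt(complex(eta**2 - w0**2))  # complex sqrt for eta^2 < w0^2
p2 = -eta - np.sqrt(complex(eta**2 - w0**2))

# Equate numerator coefficients: r1 + r2 = 2*eta, r1*p2 + r2*p1 = 0
A = np.array([[1, 1], [p2, p1]])
r1, r2 = np.linalg.solve(A, np.array([2*eta, 0]))

t = np.linspace(0, 10/eta, 5)
h = (r1*np.exp(p1*t) + r2*np.exp(p2*t)).real  # imaginary parts cancel
print(h)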


Relating Pole Radius to Bandwidth

Consider the continuous-time complex one-pole resonator with $ s$-plane transfer function

$\displaystyle H(s) = \frac{-\sigma_p}{s-p}.
$

where $ s=\sigma + j\omega$ is the Laplace-transform variable, and $ p\isdef \sigma_p+j\omega_p$ is the single complex pole. The numerator scaling has been set to $ -\sigma_p$ so that the frequency response is normalized to unity gain at resonance:

$\displaystyle H(j\omega_p) = \frac{-\sigma_p}{j\omega_p-\sigma_p-j\omega_p} = \frac{-\sigma_p}{-\sigma_p} = 1.
$

The amplitude response at all frequencies is given by

$\displaystyle G(\omega) \isdef \left\vert H(j\omega)\right\vert
= \frac{\left\vert\sigma_p\right\vert}{\sqrt{(\omega-\omega_p)^2 + \sigma_p^2}}.
$

Without loss of generality, we may set $ \omega_p=0$, since changing $ \omega_p$ merely translates the amplitude response with respect to $ \omega$. (We could alternatively define the translated frequency variable $ \nu\isdef \omega-\omega_p$ to get the same simplification.) The squared amplitude response is now

$\displaystyle G^2(\omega) = \frac{\sigma_p^2}{\omega^2+\sigma_p^2}.
$

Note that

\begin{eqnarray*}
G^2(0) &=& 1 = 0 \hbox{ dB},\\
G^2(\pm\sigma_p) &=& \frac{1}{2} = - 3 \hbox{ dB}.
\end{eqnarray*}

This shows that the 3-dB bandwidth of the resonator in radians per second is $ 2\left\vert\sigma_p\right\vert$, or twice the absolute value of the real part of the pole. Denoting the 3-dB bandwidth in Hz by $ B$, we have derived the relation $ 2\pi B = 2\left\vert\sigma_p\right\vert$, or

$\displaystyle \zbox {B=\frac{\left\vert\sigma_p\right\vert}{\pi}=\frac{\left\vert\mbox{re}\left\{p\right\}\right\vert}{\pi}.}
$

Since a $ -3$ dB attenuation is the same thing as a power scaling by $ 1/2$, the 3-dB bandwidth is also called the half-power bandwidth.
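A minimal sketch confirming this half-power bandwidth numerically ($ \sigma_p$ is an assumed illustrative value):

# Sketch: locate the half-power region of G^2(w) = sigma_p^2/(w^2 + sigma_p^2)
# and confirm a 3-dB bandwidth of 2*|sigma_p| rad/s, i.e. B = |sigma_p|/pi Hz.
import numpy as np

sigma_p = -50.0                        # assumed pole real part, rad/s
w = np.linspace(-1000, 1000, 2_000_001)
G2 = sigma_p**2 / (w**2 + sigma_p**2)  # squared amplitude response

half = w[G2 >= 0.5]                    # half-power (>= -3 dB) region
bw_rads = half.max() - half.min()      # 3-dB bandwidth in rad/s
print(bw_rads, 2*abs(sigma_p))         # both ~ 100
print("B =", bw_rads/(2*np.pi), "Hz vs", abs(sigma_p)/np.pi)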

It now remains to ``digitize'' the continuous-time resonator and show that relation Eq.$ \,$(8.7) follows. The most natural mapping of the $ s$ plane to the $ z$ plane is

$\displaystyle z = e^{sT},
$

where $ T$ is the sampling period. This mapping follows directly from sampling the Laplace transform to obtain the z transform. It is also called the impulse invariant transformation [68, pp. 216-219], and for digital poles it is the same as the matched z transformation [68, pp. 224-226]. Applying the matched z transformation to the pole $ p$ in the $ s$ plane gives the digital pole

$\displaystyle p_d = R_d e^{j\theta_d} \isdef e^{p T} = e^{(\sigma_p+j\omega_p)T} = e^{\sigma_p T} e^{j\omega_p T}
$

from which we identify

$\displaystyle R_d = e^{\sigma_p T} = e^{-\pi B T}
$

and the relation between pole radius $ R_d$ and analog 3-dB bandwidth $ B$ (in Hz) is now shown. Since the mapping $ z=e^{sT}$ becomes exact as $ T\to 0$, we have that $ B$ is also the 3-dB bandwidth of the digital resonator in the limit as the sampling rate approaches infinity. In practice, it is a good approximate relation whenever the digital pole is close to the unit circle ( $ R_d \approx 1$).
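The following sketch illustrates the mapping with assumed values for the sampling rate, bandwidth, and resonance frequency (none of which come from the text):

# Sketch: map an analog pole to the z plane via z = e^{sT} and read off
# the digital pole radius R_d = e^{-pi*B*T}.
import numpy as np

fs = 44100.0                        # sampling rate in Hz (assumed)
T = 1.0 / fs
B = 100.0                           # desired 3-dB bandwidth in Hz (assumed)
f_p = 1000.0                        # resonance frequency in Hz (assumed)

sigma_p = -np.pi * B                # pole real part from B = |sigma_p|/pi
p = sigma_p + 2j*np.pi*f_p          # s-plane pole
pd = np.exp(p * T)                  # digital pole via the matched z transformation
print(abs(pd), np.exp(-np.pi*B*T))  # identical: R_d = e^{-pi B T} ~ 0.9929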


Quality Factor (Q)

The quality factor (Q) of a two-pole resonator is defined by [20, p. 184]

$\displaystyle Q \isdef \frac{\omega_0}{2\alpha} \protect$ (E.7)

where $ \omega_0$ and $ \alpha$ are parameters of the resonator transfer function

$\displaystyle H(s) = g\frac{s}{s^2 + 2\alpha s + \omega_0^2}. \protect$ (E.8)

Note that Q is defined in the context of continuous-time resonators, so the transfer function $ H(s)$ is the Laplace transform (instead of the z transform) of the continuous (instead of discrete-time) impulse-response $ h(t)$. An introduction to Laplace-transform analysis appears in Appendix D. The parameter $ \alpha$ is called the damping constant (or ``damping factor'') of the second-order transfer function, and $ \omega_0$ is called the resonant frequency [20, p. 179]. The resonant frequency $ \omega_0$ coincides with the physical oscillation frequency of the resonator impulse response when the damping constant $ \alpha$ is zero. For light damping, $ \omega_0$ is approximately the physical frequency of impulse-response oscillation ($ 2\pi$ times the zero-crossing rate of sinusoidal oscillation under an exponential decay). For larger damping constants, it is better to use the imaginary part of the pole location as a definition of resonance frequency (which is exact in the case of a single complex pole). (See §B.6 for a more complete discussion of resonators, in the discrete-time case.)

By the quadratic formula, the poles of the transfer function $ H(s)$ are given by

$\displaystyle p = -\alpha \pm \sqrt{\alpha^2 - \omega_0^2} \isdef -\alpha \pm \alpha_d . \protect$ (E.9)

Therefore, the poles are complex only when $ Q>1/2$. Since real poles do not resonate, we have $ Q>1/2$ for any resonator. The case $ Q=1/2$ is called critically damped, while $ Q<1/2$ is called overdamped. A resonator ($ Q>1/2$) is said to be underdamped, and the limiting case $ Q=\infty$ is simply undamped.
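For the RLC example of Fig.E.2, $ \alpha=\eta=1/(2RC)$ and $ \omega_0=1/\sqrt{LC}$, so $ Q$ reduces to $ R\sqrt{C/L}$. A minimal sketch classifying the damping for a few assumed resistances:

# Sketch: Q = w0/(2*alpha) for the RLC circuit of Fig.E.2, where
# alpha = 1/(2*R*C) and w0 = 1/sqrt(L*C); component values are assumed.
import numpy as np

def q_factor(R, L, C):
    alpha = 1.0 / (2.0*R*C)          # damping constant of this circuit
    w0 = 1.0 / np.sqrt(L*C)          # resonance frequency in rad/s
    return w0 / (2.0*alpha)          # simplifies to R*sqrt(C/L) here

for R in (10.0, 50.0, 1000.0):       # Ohms; L and C fixed below
    Q = q_factor(R, L=1e-3, C=1e-7)
    if np.isclose(Q, 0.5):
        kind = "critically damped"
    elif Q > 0.5:
        kind = "underdamped (resonator)"
    else:
        kind = "overdamped"
    print(f"R = {R:7.1f} Ohms: Q = {Q:6.3f} ({kind})")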

Relating to the notation of the previous section, in which we defined one of the complex poles as $ p\isdef \sigma_p+j\omega_p$, we have

$\displaystyle \sigma_p = -\alpha \protect$ (E.10)
$\displaystyle \omega_p = \sqrt{\omega_0^2-\alpha^2}. \protect$ (E.11)

For resonators, $ \omega_p$ coincides with the classically defined quantity [20, p. 624]

$\displaystyle \omega_d \isdef \omega_p = \sqrt{\omega_0^2 -\alpha^2} = \frac{\alpha_d}{j}.
$

Since the imaginary parts of the complex resonator poles are $ \pm\omega_d$, the zero-crossing rate of the resonator impulse response is $ \omega_d/\pi$ crossings per second. Moreover, $ \omega_d$ is very close to the peak-magnitude frequency in the resonator amplitude response. If we eliminate the negative-frequency pole, $ \omega_d$ becomes exactly the peak frequency. In other words, as a measure of resonance peak frequency, $ \omega_d$ only neglects the interaction of the positive- and negative-frequency resonance peaks in the frequency response, which is usually negligible except for highly damped, low-frequency resonators. For any amount of damping, $ \omega_d/\pi$ gives the impulse-response zero-crossing rate exactly, as is immediately seen from the derivation in the next section.

Decay Time is Q Periods

Another well known rule of thumb is that the $ Q$ of a resonator is the number of ``periods'' under the exponential decay of its impulse response. More precisely, we will show that, for $ Q\gg 1/2$, the impulse response decays by the factor $ e^{-\pi}$ in $ Q$ cycles, which is about 96 percent decay, or -27 dB.

The impulse response corresponding to Eq.$ \,$(E.8) is found by inverting the Laplace transform of the transfer function $ H(s)$. Since it is only second order, the solution can be found in many tables of Laplace transforms. Alternatively, we can break it up into a sum of first-order terms which are invertible by inspection (possibly after rederiving the Laplace transform of an exponential decay, which is very simple). Thus we perform the partial fraction expansion of Eq.$ \,$(E.8) to obtain

$\displaystyle H(s) = \frac{g_1}{s-p_1} + \frac{g_2}{s-p_2}
$

where $ p_i$ are given by Eq.$ \,$(E.9), and some algebra gives
$\displaystyle g_1 = -g\frac{p_1}{p_2-p_1} \protect$ (E.12)
$\displaystyle g_2 = g\frac{p_2}{p_2-p_1} \protect$ (E.13)

as the respective residues of the poles $ p_i$.

The impulse response is thus

$\displaystyle h(t) = g_1 e^{p_1t} + g_2 e^{p_2t}.
$

Assuming a resonator, $ Q>1/2$, we have $ p_2 = \overline{p}_1$, where $ p_1=\sigma_p +j\omega_p = -\alpha + j\omega_d$ (using notation of the preceding section), and the impulse response reduces to

$\displaystyle h(t) = g_1\,e^{p_1 t} + \overline{g}_1\,e^{\overline{p}_1 t} = A\,e^{-\alpha t} \cos(\omega_p t + \phi)
$

where $ A$ and $ \phi$ are overall amplitude and phase constants, respectively.

We have shown so far that the impulse response $ h(t)$ decays as $ e^{-\alpha t}$ with a sinusoidal radian frequency $ \omega_p=\omega_d$ under the exponential envelope. After Q periods at frequency $ \omega_p$, time has advanced to

$\displaystyle t_Q = Q\frac{2\pi}{\omega_p}
\approx \frac{2\pi Q}{\omega_0}
= \frac{\pi}{\alpha},
$

where we have used the definition Eq.$ \,$(E.7) $ Q\isdef \omega_0/(2\alpha)$. Thus, after $ Q$ periods, the amplitude envelope has decayed to

$\displaystyle e^{-\alpha t_Q} = e^{-\pi} \approx 0.043\dots
$

which is about 96 percent decay. The only approximation in this derivation was

$\displaystyle \omega_p = \sqrt{\omega_0^2 - \alpha^2} \approx \omega_0
$

which holds whenever $ \alpha\ll\omega_0$, or $ Q\gg 1/2$.
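A minimal numerical check of this rule of thumb, with assumed $ \omega_0$ and $ Q$:

# Sketch: after Q periods, the impulse-response envelope of a resonator
# has decayed by approximately e^{-pi} (about 96 percent), for Q >> 1/2.
import numpy as np

w0 = 2*np.pi*1000.0              # resonance frequency (assumed), rad/s
Q = 25.0                         # assumed quality factor
alpha = w0 / (2*Q)               # damping constant from Q = w0/(2*alpha)
wp = np.sqrt(w0**2 - alpha**2)   # actual oscillation frequency

tQ = Q * 2*np.pi / wp            # time elapsed after Q periods at frequency wp
print(np.exp(-alpha*tQ))         # ~ e^{-pi} ~ 0.0432
print(np.exp(-np.pi))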


Q as Energy Stored over Energy Dissipated

Yet another meaning for $ Q$ is as follows [20, p. 326]

$\displaystyle Q = 2\pi\frac{\hbox{Stored Energy}}{\hbox{Energy Dissipated in One Cycle}}
$

where the resonator is freely decaying (unexcited).

Proof. The total stored energy at time $ t$ is equal to the total energy of the remaining response. After an impulse at time 0, the stored energy in a second-order resonator is

$\displaystyle {\cal E}(0) = \int_0^\infty h^2(t)dt \propto \int_0^\infty e^{-2\alpha t}dt
= \frac{1}{2\alpha}.
$

The energy dissipated in the first period $ P = 2\pi/\omega_p$ is $ {\cal E}(0)-{\cal E}(P)$, where

\begin{eqnarray*}
{\cal E}(P) &=& \int_P^\infty h^2(t)dt \propto \int_P^\infty e^{-2\alpha t}dt
\;=\; \frac{e^{-2\alpha P}}{2\alpha}\\
&=& \frac{e^{-2\alpha (2\pi/\omega_p)}}{2\alpha}.
\end{eqnarray*}

Assuming $ Q\gg 1/2$ as before, $ \omega_p\approx\omega_0$ so that

$\displaystyle {\cal E}(P) \approx \frac{e^{-2\pi/Q}}{2\alpha}.
$

Assuming further that $ Q\gg 2\pi$, we obtain

$\displaystyle {\cal E}(0)-{\cal E}(P) \approx \frac{1}{2\alpha} \left(1-e^{-\frac{2\pi}{Q}}\right)
\approx \frac{1}{2\alpha}\frac{2\pi}{Q}.
$

This is the energy dissipated in one cycle. Dividing this into the total stored energy at time zero, $ {\cal E}(0)=1/(2\alpha)$, gives

$\displaystyle \frac{{\cal E}(0)}{{\cal E}(0)-{\cal E}(P)} \approx \frac{Q}{2\pi}
$

whence

$\displaystyle Q = 2\pi \frac{{\cal E}(0)}{{\cal E}(0)-{\cal E}(P)}
$

as claimed. Note that this rule of thumb requires $ Q\gg 2\pi$, while the one of the previous section only required $ Q\gg 1/2$.
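As a numerical illustration, the following sketch integrates $ h^2(t)$ for $ h(t)=e^{-\alpha t}\cos(\omega_p t)$, using assumed values with $ Q\gg 2\pi$ as the derivation requires, and recovers $ Q$ from the energy ratio:

# Sketch: estimate Q as 2*pi times stored energy over energy dissipated in one
# cycle, by numerically integrating h^2(t) for h(t) = e^{-alpha t} cos(wp t).
import numpy as np

w0 = 2*np.pi*100.0                    # resonance frequency (assumed), rad/s
Q = 200.0                             # assumed quality factor, >> 2*pi
alpha = w0 / (2*Q)                    # damping constant
wp = np.sqrt(w0**2 - alpha**2)
P = 2*np.pi / wp                      # one period of oscillation, seconds

t = np.linspace(0.0, 3.0, 300_001)    # long enough for essentially full decay
h2 = (np.exp(-alpha*t) * np.cos(wp*t))**2
E0 = np.trapz(h2, t)                  # stored energy at t = 0 (up to a constant)
EP = np.trapz(h2[t >= P], t[t >= P])  # energy remaining after one period
print(2*np.pi * E0 / (E0 - EP))       # ~203 vs Q = 200; error shrinks as Q grows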


Analog Allpass Filters

It turns out that analog allpass filters are considerably simpler mathematically than digital allpass filters (discussed in §B.2). In fact, when working with digital allpass filters, it can be fruitful to convert to the analog case using the bilinear transform (§I.3.1), so that the filter may be manipulated in the analog $ s$ plane rather than the digital $ z$ plane. The analog case is simpler because analog allpass filters may be described as having a zero at $ s=-\overline{p}$ for every pole at $ s=p$, while digital allpass filters must have a zero at $ z=1/\overline{p}$ for every pole at $ z=p$. In particular, the transfer function of every first-order analog allpass filter can be written as

$\displaystyle H(s) = e^{j\phi}\frac{s+\overline{p}}{s-p}
$

where $ \phi\in[-\pi,\pi)$ is any constant phase offset. To see why $ H(s)$ must be allpass, note that its frequency response is given by

$\displaystyle H(j\omega)
= e^{j\phi}\frac{j\omega+\overline{p}}{j\omega-p}
= - e^{j\phi}\frac{\overline{j\omega-p}}{j\omega-p},
$

which clearly has modulus 1 for all $ \omega$ (since $ \vert\overline{z}/z\vert=1,\,\forall z\neq 0$). For real allpass filters, complex poles must occur in conjugate pairs, so that the ``allpass rule'' for poles and zeros may be simplified to state that a zero is required at minus the location of every pole, i.e., every real first-order allpass filter is of the form

$\displaystyle H(s) = \pm\frac{s+p}{s-p},
$

and, more generally, every real allpass transfer function can be factored as

$\displaystyle H(s) = \pm\frac{(s+p_1)(s+p_2)\cdots(s+p_N)}{(s-p_1)(s-p_2)\cdots(s-p_N)}. \protect$ (E.14)

This simplified rule works because every complex pole $ p_i$ is accompanied by its conjugate $ p_k=\overline{p_i}$ for some $ k\in[1:N]$.
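Here is a minimal numerical check of Eq.$ \,$(E.14): build $ H(s)$ from an assumed left-half-plane conjugate pole pair, place a zero at minus each pole location, and verify unit modulus along the $ j\omega$ axis:

# Sketch: real analog allpass from a conjugate pole pair; a zero at -p
# for every pole p gives |H(j*omega)| = 1 at all frequencies.
import numpy as np

poles = np.array([-3.0 + 7.0j, -3.0 - 7.0j])  # assumed left-half-plane pair
zeros = -poles                                 # allpass rule: zero at -p per pole p

w = np.logspace(-2, 4, 7)
s = 1j * w
H = np.prod(s[:, None] - zeros, axis=1) / np.prod(s[:, None] - poles, axis=1)
print(np.abs(H))                               # all ~ 1.0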

Multiplying out the terms in Eq.$ \,$(E.14), we find that the numerator polynomial $ B(s)$ is simply related to the denominator polynomial $ A(s)$:

$\displaystyle H(s)
= \pm(-1)^N\frac{A(-s)}{A(s)}
= \pm(-1)^N\frac{s^N - a_{N-1}s^{N-1} + \cdots - a_1 s + a_0}{s^N + a_{N-1}s^{N-1} + \cdots + a_1 s + a_0}
$

Since the roots of $ A(s)$ must be in the left-half $ s$-plane for stability, $ A(s)$ must be a Hurwitz polynomial, which implies that all of its coefficients are nonnegative. The polynomial

$\displaystyle A(-s)=A\left(e^{j\pi}s\right)
$

can be seen as a $ \pi $-rotation of $ A(s)$ in the $ s$ plane; therefore, its roots must have non-positive real parts, and its coefficients form an alternating sequence.
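The sign alternation is easy to exhibit numerically. A minimal sketch, for an assumed Hurwitz $ A(s)$, that forms $ B(s)=(-1)^N A(-s)$ by flipping the signs of alternate coefficients and confirms the resulting allpass property:

# Sketch: numerator B(s) = (-1)^N A(-s) obtained by flipping the signs of
# alternate coefficients of A(s), for an assumed Hurwitz A(s).
import numpy as np

a = np.array([1.0, 6.0, 58.0])        # A(s) = s^2 + 6s + 58 (poles -3 +/- 7j)
N = len(a) - 1
signs = (-1.0)**np.arange(N, -1, -1)  # (-1)^k applied to the s^k coefficient
b = (-1)**N * signs * a               # coefficients of B(s) = (-1)^N A(-s)
print(b)                              # [ 1. -6. 58.]: alternating signs

w = np.array([0.1, 1.0, 10.0, 100.0])
print(np.abs(np.polyval(b, 1j*w) / np.polyval(a, 1j*w)))  # all ~ 1.0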

As an example of the greater simplicity of analog allpass filters relative to the discrete-time case, the graphical method for computing phase response from poles and zeros (§8.3) gives immediately that the phase response of every real analog allpass filter is equal to twice the phase response of its numerator (plus $ \pi $ when the frequency response is negative at dc). This is because the angle of a vector from a pole at $ s=p$ to the point $ s=j\omega$ along the frequency axis is $ \pi $ minus the angle of the vector from a zero at $ s=-p$ to the point $ j\omega$.

Lossless Analog Filters

As discussed in §B.2, an allpass filter can be defined as any filter that preserves signal energy for every input signal $ x(t)$. In the continuous-time case, this means

$\displaystyle \left\Vert\,x\,\right\Vert _2^2
\isdef \int_{-\infty}^\infty \left\vert x(t)\right\vert^2 dt
\;=\; \int_{-\infty}^\infty \left\vert y(t)\right\vert^2 dt
\isdef \left\Vert\,y\,\right\Vert _2^2
$

where $ y(t)$ denotes the output signal, and $ \left\Vert\,y\,\right\Vert$ denotes the L2 norm of $ y$. Using the Rayleigh energy theorem (Parseval's theorem) for Fourier transforms [87], energy preservation can be expressed in the frequency domain by

$\displaystyle \left\Vert\,X\,\right\Vert _2 = \left\Vert\,Y\,\right\Vert _2
$

where $ X$ and $ Y$ denote the Fourier transforms of $ x$ and $ y$, respectively, and frequency-domain L2 norms are defined by

$\displaystyle \left\Vert\,X\,\right\Vert _2 \isdef \sqrt{\frac{1}{2\pi}\int_{-\infty}^\infty \left\vert X(j\omega)\right\vert^2 d\omega}.
$

If $ h(t)$ denotes the impulse response of the allpass filter, then its transfer function $ H(s)$ is given by the Laplace transform of $ h$,

$\displaystyle H(s) = \int_0^{\infty} h(t)e^{-st}dt,
$

and we have the requirement

$\displaystyle \left\Vert\,X\,\right\Vert _2 = \left\Vert\,Y\,\right\Vert _2 = \left\Vert\,H\cdot X\,\right\Vert _2.
$

Since this equality must hold for every input signal $ x$, it must be true in particular for complex sinusoidal inputs of the form $ x(t) = \exp(j2\pi f_xt)$, in which case [87]

\begin{eqnarray*}
X(f) &=& \delta(f-f_x)\\
Y(f) &=& H(j2\pi f_x)\delta(f-f_x),
\end{eqnarray*}

where $ \delta(f)$ denotes the Dirac ``delta function'' or continuous impulse function (§E.4.3). Thus, the allpass condition becomes

$\displaystyle \left\Vert\,X\,\right\Vert _2 = \left\Vert\,Y\,\right\Vert _2 = \left\vert H(j2\pi f_x)\right\vert\cdot\left\Vert\,X\,\right\Vert _2
$

which implies

$\displaystyle \left\vert H(j\omega)\right\vert = 1, \quad \forall\, \omega\in(-\infty,\infty). \protect$ (E.15)

Suppose $ H$ is a rational analog filter, so that

$\displaystyle H(s) = \frac{B(s)}{A(s)}
$

where $ B(s)$ and $ A(s)$ are polynomials in $ s$:

\begin{eqnarray*}
B(s) &=& b_M s^M + b_{M-1}s^{M-1} + \cdots + b_1 s + b_0\\
A(s) &=& s^N + a_{N-1}s^{N-1} + \cdots + a_1 s + a_0
\end{eqnarray*}

(We have normalized $ B(s)$ so that $ A(s)$ is monic ($ a_N=1$) without loss of generality.) Equation (E.15) implies

$\displaystyle \left\vert A(j\omega)\right\vert = \left\vert B(j\omega)\right\vert, \quad \forall\, \omega\in(-\infty,\infty).
\protect$

If $ M=N=0$, then the allpass condition reduces to $ \vert b_0\vert=\vert a_0\vert=1$, which implies

$\displaystyle b_0 = e^{j\phi} a_0 = e^{j\phi}
$

where $ \phi\in[-\pi,\pi)$ is any real phase constant. In other words, $ b_0$ can be any unit-modulus complex number. If $ M = N = 1$, then the filter is allpass provided

$\displaystyle \left\vert b_1j\omega + b_0\right\vert = \left\vert j\omega + a_0\right\vert, \quad \forall\, \omega\in(-\infty,\infty).
$

Since this must hold for all $ \omega$, there are only two solutions:
  1. $ b_0=a_0$ and $ b_1=1$, in which case $ H(s)=B(s)/A(s)=1$ for all $ s$.
  2. $ b_0=\overline{a_0}$ and $ b_1=1$, i.e.,

    $\displaystyle B(j\omega)=e^{j\phi}\overline{A(j\omega)}.
$

Case (1) is trivially allpass, while case (2) is the one discussed above in the introduction to this section.

By analytic continuation, we have

$\displaystyle 1 = \left\vert H(j\omega)\right\vert = \left\vert H(j\omega)\right\vert^2 = \left. H(s)\overline{H(s)}\right\vert _{s=j\omega}
$

If $ h(t)$ is real, then $ \overline{H(j\omega)} = H(-j\omega)$, and we can write

$\displaystyle 1 = \left. H(s)H(-s)\right\vert _{s=j\omega}.
$

To have $ H(s)H(-s)=1$, every pole at $ s=p$ in $ H(s)$ must be canceled by a zero at $ s=p$ in $ H(-s)$, which is a zero at $ s=-p$ in $ H(s)$. Thus, we have derived the simplified ``allpass rule'' for real analog filters.

