
Analog Allpass Filters

It turns out that analog allpass filters are considerably simpler mathematically than digital allpass filters (discussed in §B.2). In fact, when working with digital allpass filters, it can be fruitful to convert to the analog case using the bilinear transform (§I.3.1), so that the filter may be manipulated in the analog $ s$ plane rather than the digital $ z$ plane. The analog case is simpler because analog allpass filters may be described as having a zero at $ s=-\overline{p}$ for every pole at $ s=p$, while digital allpass filters must have a zero at $ z=1/\overline{p}$ for every pole at $ z=p$. In particular, the transfer function of every first-order analog allpass filter can be written as

$\displaystyle H(s) = e^{j\phi}\frac{s+\overline{p}}{s-p}
$

where $ \phi\in[-\pi,\pi)$ is any constant phase offset. To see why $ H(s)$ must be allpass, note that its frequency response is given by

$\displaystyle H(j\omega)
= e^{j\phi}\frac{j\omega+\overline{p}}{j\omega-p}
= - e^{j\phi}\frac{\overline{j\omega-p}}{j\omega-p},
$

which clearly has modulus 1 for all $ \omega$ (since $ \vert\overline{z}/z\vert=1,\,\forall z\neq 0$). For real allpass filters, complex poles must occur in conjugate pairs, so that the ``allpass rule'' for poles and zeros may be simplified to state that a zero is required at minus the location of every pole, i.e., every real first-order allpass filter is of the form

$\displaystyle H(s) = \pm\frac{s+p}{s-p},
$

and, more generally, every real allpass transfer function can be factored as

$\displaystyle H(s) = \pm\frac{(s+p_1)(s+p_2)\cdots(s+p_N)}{(s-p_1)(s-p_2)\cdots(s-p_N)}. \protect$ (E.14)

This simplified rule works because every complex pole $ p_i$ is accompanied by its conjugate $ p_k=\overline{p_i}$ for some $ k\in[1:N]$.
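As a numerical sanity check of Eq. (E.14), the sketch below (Python/NumPy; the pole locations are arbitrary example values) builds $ B(s)$ and $ A(s)$ from a conjugate-symmetric set of left-half-plane poles and verifies that $ \vert H(j\omega)\vert=1$ on a dense frequency grid:

```python
import numpy as np

# Conjugate-symmetric pole set in the left-half plane (arbitrary example values).
poles = np.array([-1.0, -0.5 + 2.0j, -0.5 - 2.0j])

# Eq. (E.14): a zero at minus the location of every pole.
B = np.poly(-poles)   # numerator  (s + p1)(s + p2)...(s + pN)
A = np.poly(poles)    # denominator (s - p1)(s - p2)...(s - pN)

w = np.linspace(-50.0, 50.0, 2001)                    # frequency grid (rad/s)
H = np.polyval(B, 1j * w) / np.polyval(A, 1j * w)

# Allpass: unit modulus at every frequency.
assert np.allclose(np.abs(H), 1.0)
```

Note that `np.poly` returns real coefficients here because both root sets are conjugate-symmetric, matching the real-allpass case in the text.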

Multiplying out the terms in Eq. (E.14), we find that the numerator polynomial $ B(s)$ is simply related to the denominator polynomial $ A(s)$:

$\displaystyle H(s)
= \pm(-1)^N\frac{A(-s)}{A(s)}
= \pm\frac{s^N - a_{N-1}s^{N-1} + a_{N-2}s^{N-2} - \cdots}{s^N + a_{N-1}s^{N-1} + \cdots + a_1 s + a_0}.
$
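The relation $ B(s)=(-1)^N A(-s)$ used above can be confirmed numerically; in this sketch (NumPy, with arbitrarily chosen example poles), $ A(-s)$ is formed by flipping the sign of every odd-power coefficient of $ A(s)$:

```python
import numpy as np

poles = np.array([-1.0, -0.5 + 2.0j, -0.5 - 2.0j])   # arbitrary example poles
N = len(poles)

A = np.poly(poles)    # A(s), monic
B = np.poly(-poles)   # numerator of Eq. (E.14)

# A(-s): the coefficient of s^k picks up a factor (-1)^k. Coefficient order
# in np.poly is [s^N, s^(N-1), ..., s^0], i.e. index k holds power N-k.
A_minus = A * (-1.0) ** (N - np.arange(N + 1))

# B(s) = (s+p1)...(s+pN) = (-1)^N (-s-p1)...(-s-pN) = (-1)^N A(-s).
assert np.allclose(B, (-1) ** N * A_minus)
```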

Since the roots of $ A(s)$ must be in the left-half $ s$-plane for stability, $ A(s)$ must be a Hurwitz polynomial, which implies that all of its coefficients are positive. The polynomial

$\displaystyle A(-s)=A\left(e^{j\pi}s\right)
$

can be seen as a $ \pi $-rotation of $ A(s)$ in the $ s$ plane; therefore, its roots must have nonnegative real parts, and its coefficients alternate in sign.
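Both coefficient properties can be checked numerically for an example Hurwitz polynomial (the pole locations are arbitrary assumed values):

```python
import numpy as np

# Stable (left-half-plane) poles, conjugate-symmetric so A(s) is real.
poles = np.array([-0.3, -1.0 + 0.7j, -1.0 - 0.7j, -2.0])
A = np.poly(poles)

# Hurwitz: all coefficients of A(s) are positive.
assert np.all(A > 0)

# A(-s): flip the sign of every odd-power coefficient of A(s).
N = len(poles)
A_minus = A * (-1.0) ** (N - np.arange(N + 1))

# Its coefficients strictly alternate in sign.
signs = np.sign(A_minus)
assert np.all(signs[:-1] * signs[1:] == -1)
```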

As an example of the greater simplicity of analog allpass filters relative to the discrete-time case, the graphical method for computing phase response from poles and zeros (§8.3) immediately gives that the phase response of every real analog allpass filter equals twice the phase response of its numerator (plus $ \pi $ when the frequency response is negative at dc). This is because the angle of the vector from a pole at $ s=p$ to the point $ s=j\omega$ on the frequency axis is $ \pi $ minus the angle of the vector from a zero at $ s=-p$ to the same point $ j\omega$.
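This phase-doubling property can be checked numerically. Since the numerator of Eq. (E.14) satisfies $ B(j\omega)=(-1)^N\overline{A(j\omega)}$, we have $ H(j\omega)=\pm\left[B(j\omega)/\vert B(j\omega)\vert\right]^2$, with the sign fixed by the response at dc; the sketch below (arbitrary example poles) verifies this identity directly, which sidesteps $ 2\pi$ branch-cut issues in comparing phases:

```python
import numpy as np

poles = np.array([-1.0, -0.5 + 2.0j, -0.5 - 2.0j])   # arbitrary example poles
B = np.poly(-poles)
A = np.poly(poles)

w = np.linspace(-20.0, 20.0, 1001)
Bw = np.polyval(B, 1j * w)
H = Bw / np.polyval(A, 1j * w)

# Sign of the frequency response at dc (real for a real filter).
s0 = np.sign(np.real(np.polyval(B, 0.0) / np.polyval(A, 0.0)))

# H(jw) equals the squared unit-modulus numerator factor, times that dc sign,
# i.e. the phase of H is twice the numerator phase (plus pi when s0 < 0).
assert np.allclose(H, s0 * (Bw / np.abs(Bw)) ** 2)
```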

Lossless Analog Filters

As discussed in §B.2, an allpass filter can be defined as any filter that preserves signal energy for every input signal $ x(t)$. In the continuous-time case, this means

$\displaystyle \left\Vert\,x\,\right\Vert _2^2
\isdef \int_{-\infty}^\infty \left\vert x(t)\right\vert^2 dt
= \int_{-\infty}^\infty \left\vert y(t)\right\vert^2 dt
\isdef \left\Vert\,y\,\right\Vert _2^2
$

where $ y(t)$ denotes the output signal, and $ \left\Vert\,y\,\right\Vert$ denotes the L2 norm of $ y$. Using the Rayleigh energy theorem (Parseval's theorem) for Fourier transforms [87], energy preservation can be expressed in the frequency domain by

$\displaystyle \left\Vert\,X\,\right\Vert _2 = \left\Vert\,Y\,\right\Vert _2
$

where $ X$ and $ Y$ denote the Fourier transforms of $ x$ and $ y$, respectively, and frequency-domain L2 norms are defined by

$\displaystyle \left\Vert\,X\,\right\Vert _2 \isdef \sqrt{\frac{1}{2\pi}\int_{-\infty}^\infty \left\vert X(j\omega)\right\vert^2 d\omega}.
$
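As a discrete, FFT-based illustration of this energy-preservation property (the continuous-time integrals are approximated by DFT sums, and the pole location is an arbitrary example), any unit-modulus frequency response leaves the signal energy unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
x = rng.standard_normal(n)              # arbitrary real test signal

X = np.fft.fft(x)
w = 2 * np.pi * np.fft.fftfreq(n)       # bin frequencies (rad/sample)

# First-order allpass shape H(jw) = (jw + conj(p)) / (jw - p), assumed p = -1.
p = -1.0
H = (1j * w + np.conj(p)) / (1j * w - p)
Y = H * X

# |H| = 1 at every bin, so the frequency-domain energies match exactly,
assert np.allclose(np.sum(np.abs(X) ** 2), np.sum(np.abs(Y) ** 2))
# and the discrete Parseval relation ties this back to time-domain energy.
assert np.allclose(np.sum(x ** 2), np.sum(np.abs(X) ** 2) / n)
```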

If $ h(t)$ denotes the impulse response of the allpass filter, then its transfer function $ H(s)$ is given by the Laplace transform of $ h$,

$\displaystyle H(s) = \int_0^{\infty} h(t)e^{-st}dt,
$

and we have the requirement

$\displaystyle \left\Vert\,X\,\right\Vert _2 = \left\Vert\,Y\,\right\Vert _2 = \left\Vert\,H\cdot X\,\right\Vert _2.
$

Since this equality must hold for every input signal $ x$, it must be true in particular for complex sinusoidal inputs of the form $ x(t) =
\exp(j2\pi f_xt)$, in which case [87]

\begin{eqnarray*}
X(f) &=& \delta(f-f_x)\\
Y(f) &=& H(j2\pi f_x)\delta(f-f_x),
\end{eqnarray*}

where $ \delta(f)$ denotes the Dirac ``delta function'' or continuous impulse function (§E.4.3). Thus, the allpass condition becomes

$\displaystyle \left\Vert\,X\,\right\Vert _2 = \left\Vert\,Y\,\right\Vert _2 = \left\vert H(j2\pi f_x)\right\vert\cdot\left\Vert\,X\,\right\Vert _2
$

which implies

$\displaystyle \left\vert H(j\omega)\right\vert = 1, \quad \forall\, \omega\in(-\infty,\infty). \protect$ (E.15)

Suppose $ H$ is a rational analog filter, so that

$\displaystyle H(s) = \frac{B(s)}{A(s)}
$

where $ B(s)$ and $ A(s)$ are polynomials in $ s$:

\begin{eqnarray*}
B(s) &=& b_M s^M + b_{M-1}s^{M-1} + \cdots + b_1 s + b_0\\
A(s) &=& s^N + a_{N-1}s^{N-1} + \cdots + a_1 s + a_0
\end{eqnarray*}

(We have normalized $ H(s)$ so that $ A(s)$ is monic ($ a_N=1$), without loss of generality.) Equation (E.15) implies

$\displaystyle \left\vert A(j\omega)\right\vert = \left\vert B(j\omega)\right\vert, \quad \forall\, \omega\in(-\infty,\infty).
\protect$

If $ M=N=0$, then the allpass condition reduces to $ \vert b_0\vert=\vert a_0\vert=1$, which implies

$\displaystyle b_0 = e^{j\phi} a_0 = e^{j\phi}
$

where $ \phi\in[-\pi,\pi)$ is any real phase constant. In other words, $ b_0$ can be any unit-modulus complex number. If $ M = N = 1$, then the filter is allpass provided

$\displaystyle \left\vert b_1j\omega + b_0\right\vert = \left\vert j\omega + a_0\right\vert, \quad \forall\, \omega\in(-\infty,\infty).
$

Since this must hold for all $ \omega$, there are only two solutions, up to an overall phase factor $ e^{j\phi}$:
  1. $ b_1=e^{j\phi}$ and $ b_0=e^{j\phi}a_0$, in which case $ H(s)=B(s)/A(s)=e^{j\phi}$ for all $ s$.
  2. $ b_1=-e^{j\phi}$ and $ b_0=e^{j\phi}\overline{a_0}$, i.e.,

    $\displaystyle B(j\omega)=e^{j\phi}\overline{A(j\omega)}.
$

Case (1) is trivially allpass, while case (2) is the one discussed above in the introduction to this section.
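Case (2) can be checked numerically. In the sketch below (Python/NumPy), the coefficient $ a_0$ is an arbitrary example value and the phase offset is taken as $ \phi=0$, so that $ B(j\omega)=\overline{A(j\omega)}$:

```python
import numpy as np

a0 = 0.8 + 0.4j          # arbitrary example; A(s) = s + a0 (pole at -a0, stable)

# Case (2) with phase offset phi = 0: B(jw) = conj(A(jw)).
b1, b0 = -1.0, np.conj(a0)

w = np.linspace(-30.0, 30.0, 1501)
Aw = 1j * w + a0
Bw = b1 * 1j * w + b0

# Allpass condition |B(jw)| = |A(jw)| for all sampled frequencies.
assert np.allclose(np.abs(Bw), np.abs(Aw))
```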

By analytic continuation, we have

$\displaystyle 1 = \left\vert H(j\omega)\right\vert = \left\vert H(j\omega)\right\vert^2 = \left. H(s)\overline{H(s)}\right\vert _{s=j\omega}
$

If $ h(t)$ is real, then $ \overline{H(j\omega)} = H(-j\omega)$, and we can write

$\displaystyle 1 = \left. H(s)H(-s)\right\vert _{s=j\omega}.
$

To have $ H(s)H(-s)=1$, every pole at $ s=p$ in $ H(s)$ must be canceled by a zero at $ s=p$ in $ H(-s)$, which is a zero at $ s=-p$ in $ H(s)$. Thus, we have derived the simplified ``allpass rule'' for real analog filters.
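The identity $ H(s)H(-s)=1$ holds at every point of the $ s$ plane (away from poles and zeros), not just on the frequency axis, and can be spot-checked numerically (example poles assumed; any conjugate-symmetric stable set works):

```python
import numpy as np

poles = np.array([-0.7, -0.5 + 1.5j, -0.5 - 1.5j])   # arbitrary stable example
B = np.poly(-poles)    # zeros at minus the pole locations
A = np.poly(poles)

def H(s):
    """Allpass transfer function evaluated at complex s."""
    return np.polyval(B, s) / np.polyval(A, s)

# Sample H(s)H(-s) at random points of the s plane (poles almost surely missed).
rng = np.random.default_rng(1)
s = rng.standard_normal(200) + 1j * rng.standard_normal(200)
assert np.allclose(H(s) * H(-s), 1.0)
```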

