For our purposes, an analog filter is any filter which operates on continuous-time signals. In other respects, they are just like digital filters. In particular, linear, time-invariant (LTI) analog filters can be characterized by their (continuous) impulse response $h(t)$, where $t$ is time in seconds. Instead of a difference equation, analog filters may be described by a differential equation. Instead of using the $z$ transform to compute the transfer function, we use the Laplace transform (introduced in Appendix D). Every aspect of the theory of digital filters has its counterpart in that of analog filters. In fact, one can think of analog filters as simply the limiting case of digital filters as the sampling rate is allowed to go to infinity.
In the real world, analog filters are often electrical models, or ``analogues'', of mechanical systems working in continuous time. If the physical system is LTI (e.g., consisting of elastic springs and masses which are constant over time), an LTI analog filter can be used to model it. Before the widespread use of digital computers, physical systems were simulated on so-called ``analog computers.'' An analog computer was much like an analog synthesizer providing modular building-blocks (such as ``integrators'') that could be patched together to build models of dynamic systems.
Example Analog Filter
Figure E.1 shows a simple analog filter consisting of one resistor ($R$ Ohms) and one capacitor ($C$ Farads). The voltages across these elements are $v_R(t)$ and $v_C(t)$, respectively, where $t$ denotes time in seconds. The filter input is the externally applied voltage $v_e(t)$, and the filter output is taken to be $v_C(t)$. By Kirchhoff's loop constraints, we have
$$v_e(t) = v_R(t) + v_C(t),$$
and the loop current is $i(t) = v_R(t)/R$.
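As a numerical sanity check of the loop equation (not part of the original text; the component values here are arbitrary illustrations), we can integrate $C\,\dot{v}_C(t) = i(t) = (v_e(t) - v_C(t))/R$ by forward Euler for a step input and compare with the familiar RC step response $1 - e^{-t/RC}$:

```python
import numpy as np

# Forward-Euler simulation of the RC loop: v_e(t) = R i(t) + v_C(t),
# with capacitor law C dv_C/dt = i(t).  Values are illustrative.
R, C = 1e3, 1e-6            # 1 kOhm, 1 uF -> time constant RC = 1 ms
tau = R * C
dt = tau / 1000.0           # small step for accuracy
t = np.arange(0, 5 * tau, dt)
v_e = np.ones_like(t)       # unit step input
v_c = np.zeros_like(t)      # capacitor initially uncharged
for n in range(len(t) - 1):
    i = (v_e[n] - v_c[n]) / R          # loop current through R
    v_c[n + 1] = v_c[n] + dt * i / C   # C dv_C/dt = i
analytic = 1.0 - np.exp(-t / tau)      # known RC step response
err = np.max(np.abs(v_c - analytic))
```

The maximum discrepancy shrinks as `dt` is reduced, as expected for first-order Euler integration.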
A capacitor can be made physically using two parallel conducting plates which are held close together (but not touching). Electric charge can be stored in a capacitor by applying a voltage across the plates.
The defining equation of a capacitor is
$$q(t) = C\,v_C(t),$$
where $q(t)$ denotes the capacitor's charge in Coulombs, $C$ is the capacitance in Farads, and $v_C(t)$ is the voltage drop across the capacitor in volts. Differentiating with respect to time gives
$$i(t) = \dot{q}(t) = C\,\dot{v}_C(t),$$
where $i(t)$ denotes the current through the capacitor in Amperes.
Taking the Laplace transform of both sides gives
$$I(s) = C\left[s\,V_C(s) - v_C(0)\right].$$
Assuming a zero initial voltage across the capacitor at time 0, we have
$$V_C(s) = \frac{1}{Cs}\,I(s).$$
Mechanical Equivalent of a Capacitor is a Spring
The mechanical analog of a capacitor is the compliance of a spring. The voltage $v_C(t)$ across a capacitor corresponds to the force $f(t)$ used to displace a spring. The charge $q(t)$ stored in the capacitor corresponds to the displacement $x(t)$ of the spring. Thus, Eq.(E.2) corresponds to Hooke's law for ideal springs:
$$x(t) = C\,f(t),$$
where $C$ denotes the compliance of the spring (the reciprocal of its stiffness).
An inductor can be made physically using a coil of wire, and it stores magnetic flux when a current flows through it. Figure E.2 shows a circuit in which a resistor $R$ is in series with the parallel combination of a capacitor $C$ and inductor $L$.
The defining equation of an inductor is
$$\phi(t) = L\,i(t),$$
where $\phi(t)$ denotes the inductor's stored magnetic flux at time $t$, $L$ is the inductance in Henrys (H), and $i(t)$ is the current through the inductor coil in Amperes (A), where an Ampere is a Coulomb (of electric charge) per second. Differentiating with respect to time gives
$$v_L(t) = \dot{\phi}(t) = L\,\frac{di(t)}{dt},$$
where $v_L(t)$ is the voltage across the inductor in volts. Again, the current $i(t)$ is taken to be positive when flowing from plus to minus through the inductor.
Taking the Laplace transform of both sides gives
$$V_L(s) = L\left[s\,I(s) - i(0)\right].$$
Assuming a zero initial current in the inductor at time 0, we have
$$V_L(s) = Ls\,I(s).$$
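The two element laws just derived can be checked numerically (this example is mine, with arbitrary illustrative values): for a sinusoidal current $i(t) = \cos(\omega t)$, the capacitor law $v_C = q/C$ with $\dot{q} = i$ should give a voltage amplitude $1/(\omega C)$, and the inductor law $v_L = L\,di/dt$ an amplitude $\omega L$, matching the impedances $1/(Cs)$ and $Ls$ evaluated at $s = j\omega$:

```python
import numpy as np

# Verify |Z_C| = 1/(wC) and |Z_L| = wL for a sinusoidal current.
C, L, w = 2e-6, 3e-3, 2 * np.pi * 1000.0   # illustrative values, 1 kHz
t = np.linspace(0.0, 10.0 / 1000.0, 200001)  # 10 cycles
dt = t[1] - t[0]
i = np.cos(w * t)
q = np.cumsum(i) * dt              # q(t) = integral of i(t) (rectangle rule)
v_cap = q / C                      # capacitor law  v_C = q/C
v_ind = L * np.gradient(i, dt)     # inductor law   v_L = L di/dt
amp_cap = 0.5 * (v_cap.max() - v_cap.min())  # amplitude, ignoring any offset
amp_ind = 0.5 * (v_ind.max() - v_ind.min())
```

Both measured amplitudes agree with the impedance magnitudes to well under one percent at this sampling density.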
The mechanical analog of an inductor is a mass. The voltage $v_L(t)$ across an inductor corresponds to the force $f(t)$ used to accelerate a mass $m$. The current $i(t)$ through the inductor corresponds to the velocity $v(t)$ of the mass. Thus, Eq.(E.4) corresponds to Newton's second law for an ideal mass:
$$f(t) = m\,a(t),$$
where $a(t) = \dot{v}(t)$ denotes the acceleration of the mass.
From the defining equation for an inductor [Eq.(E.3)], we see that the stored magnetic flux in an inductor is analogous to mass times velocity, or momentum. In other words, magnetic flux may be regarded as electric-charge momentum.
RC Filter Analysis
Driving Point Impedance
In the same way that the impulse response of a digital filter is given by the inverse $z$ transform of its transfer function, the impulse response $h(t)$ of an analog filter is given by the inverse Laplace transform of its transfer function $H(s)$, viz.,
$$h(t) = \mathcal{L}^{-1}\{H(s)\}.$$
In more complicated situations, any rational $H(s)$ (a ratio of polynomials in $s$) may be expanded into first-order terms by means of a partial fraction expansion (see §6.8) and each term in the expansion inverted by inspection as above.
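The partial-fraction step can be sketched numerically. In this example (my own, not from the text), $H(s) = 1/((s+1)(s+2))$ is expanded by the cover-up method into residues $r_1, r_2$, the expansion is checked against $H(s)$ at test points, and each term $r/(s-p)$ is inverted by inspection to $r\,e^{pt}$:

```python
import numpy as np

# Partial fraction expansion of H(s) = 1/((s - p1)(s - p2)) and
# term-by-term inverse Laplace transform (illustrative poles).
p1, p2 = -1.0, -2.0
H = lambda s: 1.0 / ((s - p1) * (s - p2))
r1 = 1.0 / (p1 - p2)     # residue at p1: (s - p1)H(s) evaluated at s = p1
r2 = 1.0 / (p2 - p1)     # residue at p2
s_test = np.array([0.5 + 1.0j, -0.3 + 2.0j, 3.0 + 0.0j])
expansion = r1 / (s_test - p1) + r2 / (s_test - p2)
t = np.linspace(0.0, 5.0, 6)
h = r1 * np.exp(p1 * t) + r2 * np.exp(p2 * t)   # impulse response, t >= 0
```

The expansion reproduces $H(s)$ exactly, and $h(0) = r_1 + r_2 = 0$ for this example, as the residues must sum to the coefficient of $1/s$ at high frequency.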
The Continuous-Time Impulse
The continuous-time impulse response was derived above as the inverse-Laplace transform of the transfer function. In this section, we look at how the impulse itself must be defined in the continuous-time case.
An impulse in continuous time may be loosely defined as any ``generalized function'' having ``zero width'' and unit area under it. A simple valid definition is
$$\delta(t) \triangleq \lim_{\Delta\to 0}\begin{cases}\dfrac{1}{\Delta}, & 0\le t\le \Delta\\[2pt] 0, & \mbox{otherwise.}\end{cases}$$
More generally, an impulse can be defined as the limit of any pulse shape which maintains unit area and approaches zero width at time 0. As a result, the impulse under every definition has the so-called sifting property under integration,
$$\int_{-\infty}^{\infty} f(t)\,\delta(t)\,dt = f(0),$$
provided $f(t)$ is continuous at $t=0$. This is often taken as the defining property of an impulse, allowing it to be defined in terms of non-vanishing function limits such as
$$\delta(t) \triangleq \lim_{\Omega\to\infty}\frac{\sin(\Omega t)}{\pi t}.$$
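The sifting property is easy to see numerically (my illustration, with an arbitrary continuous test function): integrating $f(t)$ against a unit-area rectangular pulse of shrinking width $\Delta$ converges to $f(0)$:

```python
import numpy as np

# Unit-area pulse of width Delta picks out f(0) as Delta -> 0.
f = lambda t: np.cos(3.0 * t) + 2.0     # any function continuous at t = 0; f(0) = 3
results = {}
for Delta in [1e-1, 1e-4]:
    n = 100000
    dt = Delta / n
    t = (np.arange(n) + 0.5) * dt       # midpoint rule over [0, Delta]
    results[Delta] = np.sum(f(t) / Delta) * dt   # integral of f(t) * (1/Delta)
```

The narrower pulse yields a value much closer to $f(0) = 3$, illustrating the limit.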
RLC Filter Analysis
Driving Point Impedance
By inspection, we can write
$$R_d(s) = R + \frac{Ls\cdot\frac{1}{Cs}}{Ls + \frac{1}{Cs}} = R + \frac{Ls}{1 + LCs^2},$$
where the second term is the parallel combination of the inductor impedance $Ls$ and the capacitor impedance $1/(Cs)$.
The transfer function in this example can similarly be found using the voltage divider rule:
$$H(s) = \frac{\frac{Ls}{1+LCs^2}}{R + \frac{Ls}{1+LCs^2}} = \frac{Ls}{R + Ls + RLCs^2}.$$
This pair of equations in two unknowns may be solved for the residues $r_1$ and $r_2$. The impulse response is then
$$h(t) = r_1 e^{p_1 t} + r_2 e^{p_2 t}, \qquad t\ge 0.$$
This shows that the 3-dB bandwidth of the resonator in radians per second is $\omega_0/Q$, or twice the absolute value of the real part of the pole. Denoting the 3-dB bandwidth in Hz by $B$, we have derived the relation $2\pi B = \omega_0/Q$, or
$$Q = \frac{f_0}{B},$$
where $f_0 = \omega_0/(2\pi)$ is the resonance frequency in Hz.
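The bandwidth relation can be verified numerically. In this sketch (mine; the resonance frequency and $Q$ are arbitrary illustrations, and the resonator form $H(s) = \omega_0^2/(s^2 + (\omega_0/Q)s + \omega_0^2)$ is assumed), we measure the $-3$ dB width of $|H(j\omega)|$ about its peak and compare it with $\omega_0/Q$:

```python
import numpy as np

# Measure the -3 dB bandwidth of a second-order resonator and
# compare with the predicted w0/Q (radians per second).
w0, Q = 2 * np.pi * 440.0, 20.0
w = np.linspace(0.8 * w0, 1.2 * w0, 200001)
H = w0**2 / ((1j * w)**2 + (w0 / Q) * (1j * w) + w0**2)
mag = np.abs(H)
peak = mag.max()
band = w[mag >= peak / np.sqrt(2.0)]   # half-power region about the peak
B_meas = band[-1] - band[0]            # measured bandwidth in rad/s
B_theory = w0 / Q
rel_err = abs(B_meas - B_theory) / B_theory
```

For this lightly damped case ($Q = 20$) the measured bandwidth matches $\omega_0/Q$ to a fraction of a percent; the agreement degrades gracefully as $Q$ approaches $1/2$.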
It now remains to ``digitize'' the continuous-time resonator and show that relation Eq.(8.7) follows. The most natural mapping of the $s$ plane to the $z$ plane is
$$z = e^{sT},$$
where $T$ denotes the sampling interval in seconds.
where $\omega_0$ and $Q$ are parameters of the resonator transfer function
$$H(s) = \frac{g}{s^2 + \frac{\omega_0}{Q}\,s + \omega_0^2}.$$
Note that $Q$ is defined in the context of continuous-time resonators, so the transfer function $H(s)$ is the Laplace transform (instead of the $z$ transform) of the continuous (instead of discrete-time) impulse response $h(t)$. An introduction to Laplace-transform analysis appears in Appendix D. The parameter $\sigma \triangleq \omega_0/(2Q)$ is called the damping constant (or ``damping factor'') of the second-order transfer function, and $\omega_0$ is called the resonant frequency [20, p. 179]. The resonant frequency $\omega_0$ coincides with the physical oscillation frequency of the resonator impulse response when the damping constant $\sigma$ is zero. For light damping, $\omega_0$ is approximately the physical frequency of impulse-response oscillation ($\pi$ times the zero-crossing rate of sinusoidal oscillation under an exponential decay). For larger damping constants, it is better to use the imaginary part of the pole location, $\omega_d \triangleq \sqrt{\omega_0^2 - \sigma^2}$, as a definition of resonance frequency (which is exact in the case of a single complex pole). (See §B.6 for a more complete discussion of resonators, in the discrete-time case.)
By the quadratic formula, the poles of the transfer function are given by
$$s = -\frac{\omega_0}{2Q} \pm \omega_0\sqrt{\frac{1}{4Q^2} - 1} = -\sigma \pm \sqrt{\sigma^2 - \omega_0^2}.$$
Therefore, the poles are complex only when $Q > 1/2$. Since real poles do not resonate, we have $Q > 1/2$ for any resonator. The case $Q = 1/2$ is called critically damped, while $Q < 1/2$ is called overdamped. A resonator ($Q > 1/2$) is said to be underdamped, and the limiting case $Q = \infty$ is simply undamped.
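The damping cases can be checked directly from the quadratic (this check is mine; $\omega_0$ is normalized to 1 for simplicity): the roots of $s^2 + (\omega_0/Q)s + \omega_0^2$ are complex exactly when $Q > 1/2$.

```python
import numpy as np

# Overdamped, critically damped, and underdamped pole configurations.
w0 = 1.0
results = []
for Q in [0.3, 0.5, 2.0]:                     # over-, critically, under-damped
    poles = np.roots([1.0, w0 / Q, w0**2])    # roots of s^2 + (w0/Q)s + w0^2
    results.append(bool(np.abs(poles.imag).max() > 1e-6))
```

Only the $Q = 2$ case yields a complex-conjugate pole pair; the $Q \le 1/2$ cases give real poles, which do not resonate.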
Relating to the notation of the previous section, in which we defined one of the complex poles as $p = -\sigma + j\omega_d$, we have
$$Q = \frac{\omega_0}{2\sigma}.$$
For resonators, $\omega_d$ coincides with the classically defined quantity [20, p. 624]
$$\omega_d \triangleq \sqrt{\omega_0^2 - \sigma^2}.$$
Since the imaginary parts of the complex resonator poles are $\pm\omega_d$, the zero-crossing rate of the resonator impulse response is $2f_d$ crossings per second, where $f_d = \omega_d/(2\pi)$. Moreover, $\omega_d$ is very close to the peak-magnitude frequency in the resonator amplitude response. If we eliminate the negative-frequency pole, $\omega_d$ becomes exactly the peak frequency. In other words, as a measure of resonance peak frequency, $\omega_d$ only neglects the interaction of the positive- and negative-frequency resonance peaks in the frequency response, which is usually negligible except for highly damped, low-frequency resonators. For any amount of damping, $\omega_d$ gives the impulse-response zero-crossing rate exactly, as is immediately seen from the derivation in the next section.
Decay Time is Q Periods
Another well known rule of thumb is that the $Q$ of a resonator is the number of ``periods'' under the exponential decay of its impulse response. More precisely, we will show that, for $Q \gg 1/2$, the impulse response decays by the factor $e^{-\pi}$ in $Q$ cycles, which is about 96 percent decay, or about $-27$ dB.
The impulse response corresponding to Eq.(E.8) is found by inverting the Laplace transform of the transfer function $H(s)$. Since it is only second order, the solution can be found in many tables of Laplace transforms. Alternatively, we can break it up into a sum of first-order terms which are invertible by inspection (possibly after rederiving the Laplace transform of an exponential decay, which is very simple). Thus we perform the partial fraction expansion of Eq.(E.8) to obtain
$$H(s) = \frac{r_1}{s - p_1} + \frac{r_2}{s - p_2},$$
with $r_1$ and $r_2$ as the respective residues of the poles $p_1$ and $p_2$.
The impulse response is thus
$$h(t) = r_1 e^{p_1 t} + r_2 e^{p_2 t}, \qquad t \ge 0.$$
Assuming a resonator, $Q > 1/2$, we have $p_2 = \overline{p_1}$, where $p_1 = -\sigma + j\omega_d$ (using notation of the preceding section), and the impulse response reduces to
$$h(t) = 2\left|r_1\right|\,e^{-\sigma t}\cos\!\left(\omega_d t + \angle r_1\right), \qquad t \ge 0.$$
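The ``$Q$ periods'' rule is a one-line computation given the envelope $e^{-\sigma t}$ with $\sigma = \omega_0/(2Q)$ (the parameter values below are arbitrary illustrations): after $Q$ periods of the resonant frequency $f_0$, the envelope has decayed by exactly $e^{-\pi}$.

```python
import numpy as np

# Envelope decay of exp(-sigma t) after Q periods of f0 = w0/(2 pi),
# with sigma = w0/(2Q):  exp(-sigma * Q/f0) = exp(-pi) for any Q.
w0, Q = 2 * np.pi * 100.0, 10.0
sigma = w0 / (2 * Q)
f0 = w0 / (2 * np.pi)
t_Q = Q / f0                      # duration of Q periods at f0
decay = np.exp(-sigma * t_Q)      # envelope decay factor after Q periods
db = 20 * np.log10(decay)         # about -27 dB
```

This confirms the quoted figures: $e^{-\pi} \approx 0.043$, i.e. roughly 96 percent decay, or about $-27$ dB.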
Yet another meaning for $Q$ is as follows [20, p. 326]:
$$Q = 2\pi\,\frac{\mbox{stored energy}}{\mbox{energy dissipated in one cycle}}.$$

Proof. The total stored energy at time $t$ is equal to the total energy of the remaining response. After an impulse at time 0, the stored energy in a second-order resonator is proportional to the squared amplitude envelope, $\mathcal{E}(t)\propto e^{-2\sigma t}$.

Assuming $Q \gg 1/2$ as before, so that $\omega_d\approx\omega_0$, the energy dissipated in one cycle of duration $T_0 = 2\pi/\omega_0$ is $\Delta\mathcal{E} = \mathcal{E}(t)\left(1 - e^{-2\sigma T_0}\right) \approx \mathcal{E}(t)\,2\sigma T_0$, so that
$$2\pi\,\frac{\mathcal{E}(t)}{\Delta\mathcal{E}} \approx \frac{2\pi}{2\sigma T_0} = \frac{\omega_0}{2\sigma} = Q.$$
Analog Allpass Filters
It turns out that analog allpass filters are considerably simpler mathematically than digital allpass filters (discussed in §B.2). In fact, when working with digital allpass filters, it can be fruitful to convert to the analog case using the bilinear transform (§I.3.1), so that the filter may be manipulated in the analog $s$ plane rather than the digital $z$ plane. The analog case is simpler because analog allpass filters may be described as having a zero at $s = -\overline{p}$ for every pole at $s = p$, while digital allpass filters must have a zero at $z = 1/\overline{p}$ for every pole at $z = p$. In particular, the transfer function of every first-order analog allpass filter can be written as
$$H(s) = \pm\frac{s + \overline{p_0}}{s - p_0},$$
where $p_0$ denotes the pole location.
This simplified rule works because every complex pole $p_i$ is accompanied by its conjugate $p_j = \overline{p_i}$ for some $j \ne i$.
Multiplying out the terms in Eq.(E.14), we find that the numerator polynomial $B(s)$ is simply related to the denominator polynomial $A(s)$:
$$B(s) = (-1)^N A(-s),$$
where $N$ is the order of the filter.
As an example of the greater simplicity of analog allpass filters relative to the discrete-time case, the graphical method for computing phase response from poles and zeros (§8.3) gives immediately that the phase response of every real analog allpass filter is equal to twice the phase response of its numerator (plus $\pi$ when the frequency response is negative at dc). This is because the angle of a vector from a pole at $s = p$ to the point $s = j\omega$ along the frequency axis is $\pi$ minus the angle of the vector from a zero at $s = -\overline{p}$ to the point $s = j\omega$.
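Both allpass properties can be confirmed numerically for a first-order section (this sketch is mine; the pole location is an arbitrary real example): the magnitude response is identically 1, and the phase equals twice the numerator phase minus $\pi$ (modulo $2\pi$), since $H(0) < 0$ here.

```python
import numpy as np

# First-order analog allpass: pole at p0 = -3, zero mirrored at -p0 = +3.
p0 = -3.0
w = np.linspace(-50.0, 50.0, 10001)
jw = 1j * w
H = (jw + p0) / (jw - p0)          # numerator (s + p0): zero at +3; pole at -3
mag = np.abs(H)                    # should be 1 at every frequency
phase = np.angle(H)
num_phase = np.angle(jw + p0)      # phase of the numerator alone
# phase == 2*num_phase - pi (mod 2*pi); compare on the unit circle:
wrap_err = np.abs(np.exp(1j * phase) - np.exp(1j * (2 * num_phase - np.pi))).max()
```

Comparing phases via `exp(1j*...)` sidesteps $2\pi$ wrapping, which would otherwise make a direct subtraction of phase arrays misleading.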
Lossless Analog Filters
Suppose $H(s) = B(s)/A(s)$ is a rational analog filter, so that
$$H(s) = \frac{B(s)}{A(s)} = \frac{b_0 s^M + b_1 s^{M-1} + \cdots + b_M}{s^N + a_1 s^{N-1} + \cdots + a_N}.$$
(We have normalized so that $A(s)$ is monic ($a_0 = 1$) without loss of generality.) Equation (E.15) implies
- $M = N$ and $B(s) = \pm A(-s)$, in which case $\left|H(j\omega)\right| = 1$ for all $\omega$.
and $\overline{H(j\omega)} = H(-j\omega)$ (since the coefficients are real), i.e.,
$$H(j\omega)\,H(-j\omega) = 1.$$
By analytic continuation, we have
$$H(s)\,H(-s) = 1.$$
Matrix Filter Representations
Introduction to Laplace Transform Analysis