
IIR Filter Realizations

Started by naebad October 1, 2006
Randy Yates wrote:
> "John E. Hadstate" <jh113355@hotmail.com> writes: > > [...] > > I'm sorry. You are mistaken, and all of your conclusions > > about canonic vs. direct-form IIR's are wrong. Moreover, I > > proved this more than thirty years ago. > > Reference?
John, you're gonna have to "put your money where your mouth is". you
say you "proved this more than thirty years ago". i just proved it in
the very post previously. but let's get really specific so that no one
can wiggle out with what Wikipedia likes to call "weasel words".

1st, if the filter itself, by "virtue" of its spec and design, has
gain above 0 dB at some frequency, there is an inherent risk of
clipping at the output. that, of course, cannot be helped without
scaling down the input (which will decrease S/N). i am not addressing
that issue. i am addressing the issue of (unnecessary) clipping of
internal signals (or "nodes" or states), even if the output would not
otherwise be clipped. so, to illustrate, let's use a 2nd-order
bandstop or "notch" filter with a passband gain of 0 dB. and, let's
consider the case where the notch is narrow, requiring a pretty high
Q. what is the pole/zero constellation for such a filter?

2nd, we're talking about fixed-point arithmetic with the input,
output, and states having the same word width. but the word width of
doing the intermediate calculations is wide enough to have no
quantization error. the multiply-accumulate operation has, until we
decide to lose the bits, no loss of precision.

so, for the DF1

   y[n] = Q{ b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2] }

the Q{ .. } operation is whatever quantization and saturation is
necessary to turn the wider word back into the original-sized word of
x[n] and y[n]. for the DF1, we need only one point of quantization, at
the output of the summation. there is no reason (in the modern world)
why the quantization needs to happen on every multiplication, such as:

   y[n] = Q{b0*x[n]} + Q{b1*x[n-1]} + Q{b2*x[n-2]}
          - Q{a1*y[n-1]} - Q{a2*y[n-2]}

nor is there any need to quantize the conceptual output of the
feedforward filter before applying that to the feedback filter:

   v[n] = Q{ b0*x[n] + b1*x[n-1] + b2*x[n-2] }
   y[n] = Q{ v[n] - a1*y[n-1] - a2*y[n-2] }

in that case, the all-pole filter that comes second will have some
noisy input and the gain from the poles will boost that noise. but i
cannot see the need for that architecture anywhere, be it a DSP chip
or someone designing an ASIC or FPGA. for the DF1, "v[n]" is a
conceptual signal (to illustrate that the zeros come before the
poles); there is no need for it to exist anywhere in an
implementation.

but, for the DF2, unless your states are double-wide (and your
feedback multipliers are also double-wide for the signal input at
least - the coef input to the multiplier can still be single width),
even in the "modern world" you still need to quantize (*and* saturate,
if necessary) at a minimum of two locations:

   v[n] = Q{ x[n] - a1*v[n-1] - a2*v[n-2] }
   y[n] = Q{ b0*v[n] + b1*v[n-1] + b2*v[n-2] }

the all-pole filter that has x[n] for input and v[n] for its output
will have tremendous gain for frequencies close to the resonant
frequency, and unlike the DF1, there is no help from the zeros in
beating that gain down. v[n], a *real* intermediate signal (as opposed
to a "conceptual" signal in the DF1) - a "node" in the signal path -
will clip if the input is at normal levels and the Q of the resonance
is high. then the quantized and clipped signal, v[n], is input into
the second all-zero filter, which will reduce the gain at the
frequencies where the all-pole filter boosted the gain, but the damage
from the clipping has already been done.

John, *please*, avail us of your wisdom of 30 years hence.
but we have to be careful of precisely what the signal architecture
is, how wide which words are where. but nonetheless, in the modern
world of fixed-point CPUs and DSPs with double-wide accumulators, and
ASICs and FPGAs that can have accumulators as wide as the designer
deigns to make them, for filters with enough Q to make them
interesting in an application, the DF1 does *much* better than the
DF2. i've proven it here, John, and you need to back up your words
with something more than bluster.

r b-j
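(To make the comparison above concrete, here is a minimal C sketch of
the two structures as described in the post: Q15 samples and
coefficients, a 64-bit integer standing in for the double-wide
accumulator, and one Q{..} round/saturate/narrow step per quantization
point. The function names and the Q15 scaling are illustrative, not
from any particular chip or library; note also that real biquad
coefficients can exceed 1.0 in magnitude, which plain Q15 cannot hold,
so a production version would scale them.)

   #include <stdint.h>

   /* Q{..}: round a Q30 accumulator to Q15, with saturation.  this is
      the only place bits are thrown away.  (assumes arithmetic right
      shift of negative values, true on essentially all DSP targets.) */
   static int16_t sat_q15(int64_t acc)
   {
       acc = (acc + (1 << 14)) >> 15;           /* round Q30 -> Q15 */
       if (acc >  32767) return  32767;         /* saturate */
       if (acc < -32768) return -32768;
       return (int16_t)acc;
   }

   /* DF1: zeros first, poles second, ONE quantization point. */
   int16_t df1_biquad(int16_t x, const int16_t b[3], const int16_t a[2],
                      int16_t s[4])             /* s = {x1, x2, y1, y2} */
   {
       int64_t acc = (int64_t)b[0]*x    + (int64_t)b[1]*s[0]
                   + (int64_t)b[2]*s[1]
                   - (int64_t)a[0]*s[2] - (int64_t)a[1]*s[3];
       int16_t y = sat_q15(acc);                /* the only Q{..} */
       s[1] = s[0];  s[0] = x;
       s[3] = s[2];  s[2] = y;
       return y;
   }

   /* DF2: poles first.  the shared state v must be narrowed back to an
      ordinary-width word, so there are TWO quantization points, and the
      first one sits right where a high-Q resonance has its peak gain. */
   int16_t df2_biquad(int16_t x, const int16_t b[3], const int16_t a[2],
                      int16_t s[2])             /* s = {v1, v2} */
   {
       int64_t acc = ((int64_t)x << 15)         /* promote x to Q30 */
                   - (int64_t)a[0]*s[0] - (int64_t)a[1]*s[1];
       int16_t v = sat_q15(acc);                /* Q{..} #1: rounds and,
                                                   if need be, clips the
                                                   all-pole output */
       acc = (int64_t)b[0]*v + (int64_t)b[1]*s[0] + (int64_t)b[2]*s[1];
       int16_t y = sat_q15(acc);                /* Q{..} #2 */
       s[1] = s[0];  s[0] = v;
       return y;
   }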
"robert bristow-johnson" <rbj@audioimagination.com> wrote in 
message 
news:1159892351.396306.133220@i42g2000cwa.googlegroups.com...
> Randy Yates wrote:
>> "John E. Hadstate" <jh113355@hotmail.com> writes:
>> > [...]
>> > I'm sorry. You are mistaken, and all of your conclusions
>> > about canonic vs. direct-form IIR's are wrong. Moreover, I
>> > proved this more than thirty years ago.
>>
>> Reference?
>
> John,
>
> you're gonna have to "put your money where your mouth is". you say you
> "proved this more than thirty years ago". i just proved it in the very
> post previously. but let's get really specific so that no one can
> wiggle out with what Wikipedia likes to call "weasel words".
You didn't prove anything. All you did was set up a straw-man situation that was friendly to your erroneous assertions and then proceed to set fire to it.
> 1st, if the filter itself, by "virtue" of its spec and design, has
> gain above 0 dB at some frequency, there is an inherent risk of
> clipping at the output.
That is bullshit on a Caesar salad, and you know it.
> 2nd, we're talking about fixed-point arithmetic with the input,
> output, and states having the same word width.
No, *you're* talking about fixed-point arithmetic, I haven't made any assumptions about the numerical representation. But it doesn't matter...
> there is no reason (in the modern world) why the quantization
> needs to happen on every multiplication, such as:
Round-off error happens on nearly every multiplication where the terms are not perfectly representable as integers. There's nothing that the "modern world" can do about that.
> in that case, the all-pole filter that comes second will have some
> noisy input and the gain from the poles will boost that noise. but i
> cannot see the need for that architecture anywhere,
I don't believe this! It sounds like we are in violent agreement. I can't see the need for "an all-pole filter that comes second" either. In fact, I can see good reasons for not using such an architecture. In the canonic form identified in Gold and Rader, the poles are processed first, followed by the zeroes. The poles do not see round-off error produced by the zeroes, and consequently, do not re-circulate the resulting noise. This, and the relatively improved stability that results for hi-Q poles, is why the canonic form is preferable for any implementation above about 3rd order. You know this to be true. Why are you contradicting me? Did you simply misread what I wrote?
> John, *please*, avail us of your wisdom of 30 years hence.
Unfortunately, I am not clairvoyant, so I have no idea what my wisdom will be 30 years hence. Stop using 50 cent words to express nickel ideas, especially if you don't know what they mean.
> but we have to be careful of precisely what the signal
> architecture is, how wide which words are where.
No, we don't. That is just more smoke and mirrors. Nothing has changed in mathematics or the theory of numerical computation in thirty years that changes the correctness of my conclusions into the incorrectness of your assertions.
"John E. Hadstate" <jh113355@hotmail.com> writes:

> Round-off error happens on nearly every multiplication where
> the terms are not perfectly representable as integers.
> There's nothing that the "modern world" can do about that.
John,

When dealing with fixed-point arithmetic, I would separate the
absolute error of representing a real value A on a computer (for
floating point systems, Burden denotes this as "fl(A)" [burden]) from
the round-off error that occurs when performing arithmetic
operations. That is because, once the "representation error" has been
accounted for, fixed-point values can indeed be multiplied with
absolutely no loss of precision on practically any modern fixed-point
machine.

--Randy Yates <yates@ieee.org>

[burden] Richard L. Burden, J. Douglas Faires, and Albert C. Reynolds,
Numerical Analysis, 2nd ed., Prindle, Weber and Schmidt, 1981.
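(A small C illustration of the distinction being drawn here, with
made-up Q15 constants: the one-time error is in converting 0.1 to Q15;
the multiply itself is exact.)

   #include <stdint.h>
   #include <stdio.h>

   int main(void)
   {
       /* representation error: 0.1 is not exactly representable in Q15.
          fl(0.1) = 3277/32768 = 0.100006..., and that error is incurred
          once, before any arithmetic happens. */
       int16_t a = (int16_t)(0.1 * 32768.0 + 0.5);   /* 3277 */
       int16_t b = (int16_t)(0.3 * 32768.0 + 0.5);   /* 9830 */

       /* round-off error: none.  a 16x16 product fits exactly in 32
          bits, so the stored Q15 values are multiplied losslessly. */
       int32_t p = (int32_t)a * (int32_t)b;          /* exact Q30 */

       printf("a = %d, b = %d, a*b = %ld (no bits lost)\n",
              a, b, (long)p);
       return 0;
   }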
Randy Yates skrev:
> "John E. Hadstate" <jh113355@hotmail.com> writes: > > > Round-off error happens on nearly every multiplication where > > the terms are not perfectly representable as integers. > > There's nothing that the "modern world" can do about that. > > John, > > When dealing with fixed-point arithmetic, I would separate the > absolute error of representing a real value A on a computer (for > floating point systems, Burden denotes this as "fl(A)" [burden]) from > the round-off error that occurs when performing arithmetic operations.
Could you elaborate, and show examples of "representation errors" and "round-off errors", and point out how they are different? I can sense a "philosophical" difference lurking in the shadows, but I am not quite certain I see how to differentiate between the two in practice.
> That is because, once the "representation error" has been accounted
> for, fixed-point values can indeed be multiplied with absolutely no
> loss of precision on practically any modern fixed-point machine.
Again, is this book's discussion purely theoretical, or is it
intended to be applicable in practice? If the latter, how would you
go about actually doing it? Obtaining "absolutely no loss of
precision" in a fixed-point machine, I mean?

Rune
John E. Hadstate wrote:
> > 2nd, we're talking about fixed-point arithmetic with the
> > input, output, and states having the same word width.
>
> No, *you're* talking about fixed-point arithmetic, I haven't
> made any assumptions about the numerical representation.
> But it doesn't matter...
you're right that i'm the one who brought fixed-point into the
discussion. you said (in your first post of this thread):
> The main reason for preferring canonic over direct is higher
> stability and lower internal generation of noise caused by
> round-off error in finite precision implementations.
>
> There may also be a minor performance difference since one has to
> manage only half as many delay elements in the canonic form.
when it's N/2 cascaded biquad stages, the number of delay elements of
the DF2 is N, which is not half of the number of delay elements of
the DF1, which is N+2, unless N=2 (a single biquad).
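(to spell out that count with a worked example: for N = 8, i.e. four
cascaded biquad sections, the DF2 cascade needs 2 delays per section =
8, while the DF1 cascade needs 2 + 2 per section = 10, because each
section's output delays can be shared as the next section's input
delays. only at N = 2 is the DF2 count, 2, actually half of the DF1
count, 4.)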
> One of the reasons for the improved noise performance of the > canonic form is that the signal is first processed by the > poles, then by the zeroes.
correct (the signal is processed by an all-pole filter and then an
all-zero filter), except that is precisely why the DF2 is at a
disadvantage relative to the DF1.
> In the direct form, noise
> generated by the zeroes is introduced to the poles, which
> recirculate it over and over (IIR) through their inherent
> feedback mechanism.
noise is not generated by zeros. noise (in this case) is generated by
quantization. if there is no quantization in the all-zero filter
(coming first in the DF1), there is no noise from that particular
source to "recirculate .. over and over". however, in the all-pole
filter (which comes first in the DF2 and second in the DF1) there is
quantization noise in any case, and that gets recirculated over and
over. and you're right that it doesn't matter (much). you're still
wrong. i make a case, you're relying on invective. if you don't like
> > there is no reason (in the modern world) why the quantization
> > needs to happen on every multiplication, such as:
>
> Round-off error happens on nearly every multiplication where
> the terms are not perfectly representable as integers.
no, whether it's fixed or floating point, if the destination register has twice the width (or in floating point, has a mantissa of twice the width) of the operands (forgive us for assuming the two operands have the same width), there is no multiplication error.
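(a quick C check of the floating-point half of that claim, with
arbitrary constants: two 24-bit float mantissas multiply into at most
48 significant bits, which a double's 53-bit mantissa holds exactly,
whereas a float-width destination has to round.)

   #include <stdio.h>

   int main(void)
   {
       float a = 0.333333f, b = 0.123457f;   /* arbitrary operands */

       /* destination with twice the operand mantissa width: the 48-bit
          product fits in double's 53-bit mantissa, so this is exact. */
       double wide = (double)a * (double)b;

       /* destination the same width as the operands: the product gets
          rounded back to 24 bits. */
       float narrow = a * b;

       /* the difference is the round-off error (typically nonzero). */
       printf("round-off error = %g\n", wide - (double)narrow);
       return 0;
   }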
> There's nothing that the "modern world" can do about that.
>
> > in that case, the all-pole filter that comes second will have some
> > noisy input and the gain from the poles will boost that noise. but i
> > cannot see the need for that architecture anywhere,
>
> I don't believe this! It sounds like we are in violent agreement.
except that in any "modern" system where one is not forced to quantize the double-width accumulator, there is no quantization noise from the leading all-zero filter. that's maybe what you're missing, John.
> I can't see the need for "an all-pole filter
> that comes second" either. In fact, I can see good reasons
> for not using such an architecture. In the canonic form
> identified in Gold and Rader, the poles are processed first,
> followed by the zeroes.
i got that one. there are lots of old textbooks that have used the
DF2 as their only illustration (i think because it was "canonical"
and saved on the number of states) as if the DF1 never existed. that
oversight by the ivory tower is not the only one (as people here
know, i've been complaining about the dimensional factor "T" in the
passband gain of the reconstruction filter used in the Sampling
Theorem of every textbook save for Pohlmann's Principles of Digital
Audio). there are other oversights.
> The poles do not see round-off error produced by the zeroes,
zeros don't produce round-off error. quantization, errr... rounding off does.
> and consequently, do not re-circulate the resulting noise.
but they recirculate their own quantization noise.
> This, and the relatively improved stability that results for hi-Q poles,
whooo, boy! that's when (hi-Q poles) the DF2 fails the worst, in fixed-point cases.
> is why the canonic form is preferable for any implementation above
> about 3rd order. You know this to be true.
i know a lot of things, even some things i've forgotten. i don't think i'd use the DF2 or the DF1 for anything over 2nd order. i would cascade DF1 biquads.
> Why are you contradicting me?
only because you're wrong (as well as nakedly arrogant which is what makes it a little recreational to do so).
> Did you simply misread what I wrote?
no, you're just mistaken.
> > John, *please*, avail us of your wisdom of 30 years hence.
>
> Unfortunately, I am not clairvoyant, so I have no idea what
> my wisdom will be 30 years hence.
"hence" means (among other things) "from this time; from now" ( http://www.thefreedictionary.com/hence ), it doesn't have to mean in the direction of the future.
> Stop using 50 cent words to express nickel ideas,
ah, yes, 50 cent words like "bullshit on a Caesar salad". that's mighty persuasive.
> > but we have to be careful of precisely what the signal
> > architecture is, how wide which words are where.
>
> No, we don't. That is just more smoke and mirrors. Nothing
> has changed in mathematics or the theory of numerical
> computation in thirty years that changes the correctness of
> my conclusions into the incorrectness of your assertions.
i guess we've been here before, John. (what was it before, dynamic
range and word width?)

r b-j
"Rune Allnor" <allnor@tele.ntnu.no> writes:

> Randy Yates skrev:
>> "John E. Hadstate" <jh113355@hotmail.com> writes:
>>
>> > Round-off error happens on nearly every multiplication where
>> > the terms are not perfectly representable as integers.
>> > There's nothing that the "modern world" can do about that.
>>
>> John,
>>
>> When dealing with fixed-point arithmetic, I would separate the
>> absolute error of representing a real value A on a computer (for
>> floating point systems, Burden denotes this as "fl(A)" [burden]) from
>> the round-off error that occurs when performing arithmetic operations.
>
> Could you elaborate, and show examples of "representation errors"
> and "round-off errors", and point out how they are different?
I think it's reasonably self-explanatory.
> Again, is this book's discussion purely theoretical, or is it
> intended to be applicable in practice? If the latter, how would you
> go about actually doing it? Obtaining "absolutely no loss of
> precision" in a fixed-point machine, I mean?
The C54x has a 16-bit data path, 17-bit multiplier, and a 40-bit
accumulator. A minimum of 256 multiply-accumulates can be performed
on this machine without losing even one bit of accuracy.

--Randy Yates <yates@ieee.org>
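(The arithmetic behind that figure, sketched in portable C with
int64_t standing in for the 40-bit accumulator: each 16x16 product
fits exactly in 32 bits, and the 8 guard bits above that absorb up to
2^8 = 256 worst-case accumulations. `mac256` is just an illustrative
name, not a TI routine.)

   #include <stdint.h>
   #include <assert.h>

   int64_t mac256(const int16_t *x, const int16_t *c, int n)
   {
       assert(n <= 256);                  /* the guard-bit guarantee  */
       int64_t acc = 0;                   /* plays the 40-bit acc     */
       for (int i = 0; i < n; i++)
           acc += (int32_t)x[i] * c[i];   /* each product is exact    */
       /* worst case |sum| = 256 * 2^30 = 2^38 < 2^39, so a 40-bit
          register never overflows and no rounding ever occurs. */
       return acc;
   }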
>> Could you elaborate, and show examples of "representation errors"
>> and "round-off errors", and point out how they are different?
>
> I think it's reasonably self-explanatory.
>
>> Again, is this book's discussion purely theoretical, or is it
>> intended to be applicable in practice? If the latter, how would you
>> go about actually doing it? Obtaining "absolutely no loss of
>> precision" in a fixed-point machine, I mean?
>
> The C54x has a 16-bit data path, 17-bit multiplier, and a 40-bit
> accumulator. A minimum of 256 multiply-accumulates can be performed
> on this machine without losing even one bit of accuracy.
I think that some of the confusion stems from disregarding the
underlying structure available in the hardware.

If you have infinite bits of precision (no quantization error), the
performance will be the same, and the DF2 is probably cheaper in
MIPS. Every fixed-point DSP chip that I have used has a double-width
accumulator. This is true of TI, ADI, Motorola, etc. There are also
guard bits, as Randy illustrates with his TI example. A SHARC, for
example, uses 32-bit inputs and an 80-bit accumulator when performing
fixed-point MACs. This works very well for DF1, as rb-j suggests. You
can also add first-order error shaping at very little additional
computational cost with this structure.

Generally, floating-point results are the same size as the inputs, so
floating-point implementations do not benefit from the DF1 structure.
Most floating-point implementations use DF2 or DF2 transpose. On a
SHARC this can be with either 32-bit or 40-bit floating-point
numbers. In most cases, a fixed-point IIR using DF1 will be superior
to a DF2 implementation using 32-bit (and probably 40-bit) floats,
especially if you use error shaping. There are papers in the JAES
(Moorer, Wilson, etc.) that have studied this. Mark Allie gave a
paper on this specific topic at the comp.dsp conference two years ago.

--
Al Clark
Danville Signal Processing, Inc.
http://www.danvillesignal.com
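(A sketch of the first-order error shaping mentioned above, grafted
onto the Q15 DF1 biquad from earlier in the thread. This is
illustrative code, not from any of the cited papers: the low bits that
the output quantizer throws away are saved and added back into the
next sample's accumulation, which first-order highpass-shapes the
quantization noise. Saturation is omitted for brevity, and an
arithmetic right shift of negative values is assumed.)

   #include <stdint.h>

   typedef struct {
       int16_t x1, x2, y1, y2;   /* ordinary-width DF1 delay elements */
       int64_t e;                /* last quantization error, Q30      */
   } df1ns_t;

   int16_t df1_biquad_ns(df1ns_t *s, int16_t x,
                         const int16_t b[3], const int16_t a[2])
   {
       /* full-precision DF1 sum in the double-wide accumulator,
          plus the error fed back from the previous sample */
       int64_t acc = (int64_t)b[0]*x     + (int64_t)b[1]*s->x1
                   + (int64_t)b[2]*s->x2
                   - (int64_t)a[0]*s->y1 - (int64_t)a[1]*s->y2
                   + s->e;

       int16_t y = (int16_t)(acc >> 15);     /* truncate Q30 -> Q15   */
       s->e = acc - ((int64_t)y << 15);      /* save the dropped bits */

       s->x2 = s->x1;  s->x1 = x;
       s->y2 = s->y1;  s->y1 = y;
       return y;
   }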