Hi,
I have a conundrum which I'm tying myself in knots thinking about; if
anyone can shed any light it would be much appreciated!
Take as a simple example the moving-sum filter:
           L-1
    H(z) = Sum { z^-n }
           n=0

           1 - z^-L
         = --------
           1 - z^-1
The numerator has a zero at z = 1 (i.e. at DC), and the denominator
gives a pole at z = 1, so these cancel, giving a non-zero response at DC.
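As a quick sanity check (an illustrative sketch, not part of the original
post; L = 4 and the length 12 are arbitrary), the closed form can be
verified by comparing the impulse response of the L-tap FIR sum against
the recursive form y[n] = y[n-1] + x[n] - x[n-L]:

```python
# Compare the L-tap FIR moving sum with its recursive equivalent,
# both driven by a unit impulse and started from zero state.
L = 4
N = 12

x = [1.0] + [0.0] * (N - 1)          # unit impulse

# Direct FIR: h[n] = 1 for 0 <= n < L
fir = [sum(x[n - k] for k in range(L) if n - k >= 0) for n in range(N)]

# Recursive form y[n] = y[n-1] + x[n] - x[n-L], zero initial conditions
rec = []
y_prev = 0.0
for n in range(N):
    x_L = x[n - L] if n - L >= 0 else 0.0
    y_prev = y_prev + x[n] - x_L
    rec.append(y_prev)

print(fir)  # L ones, then zeros
print(rec)  # identical to fir
```

Both lists come out as four ones followed by zeros, i.e. the two
realisations share the same impulse response.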
However, if we implement the filter recursively:
(I hope my ASCII art comes out ok...)
        +------+
 ---+-->| z^-L |------+
    |   +------+      | -
    |                 v
    +---------------->0----------------+----->
                      ^                |
                      |   +------+     |
                      +---| z^-1 |<----+
                          +------+
This is equivalent to:
        +------+
 ---+-->| z^-L |------+
    |   +------+      | -
    |                 v
    +---------------->0--->0---------------+----->
                           ^               |
                           |   +------+    |
                           +---| z^-1 |<---+
                               +------+
i.e. implementing the feedforward section before the feedback section,
and doing the two additions separately.
The feedforward section has already killed the DC component, so we can't
recover it.
So my question is, how does the recursive moving-sum filter manage to
work (and we know it does), if it appears to be exactly equivalent to
something that blocks DC?
--
Oli
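(A small editorial sketch of the behaviour being asked about; L = 8 and a
unit step standing in for "DC" are arbitrary choices. Starting from zero
state, the recursive structure does pass DC, with gain L.)

```python
# Feed a DC input (switched on at n = 0) into the recursive moving sum
# y[n] = y[n-1] + x[n] - x[n-L] and watch it settle at L, not at zero.
L = 8
N = 40
x = [1.0] * N                        # constant input from n = 0 onwards

y = []
y_prev = 0.0
for n in range(N):
    x_L = x[n - L] if n - L >= 0 else 0.0
    y_prev = y_prev + x[n] - x_L
    y.append(y_prev)

print(y[-1])   # 8.0: the DC component comes through with gain L
```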
Question on pole-zero cancellation
Started by ●June 7, 2006
Reply by ●June 7, 2006
Oli Filth skrev:
> [snip problem statement]
>
> So my question is, how does the recursive moving-sum filter manage to
> work (and we know it does), if it appears to be exactly equivalent to
> something that blocks DC?

Because the zero at DC is exactly cancelled by a pole.

Note that the second block scheme of yours is an expanded representation
of the total system, not the actual system.  You can multiply any number
by 1, no matter how the factor 1 is represented, provided the
representation is exact.  Any transfer function H(z) can be expanded in
a similar way:

    H(z) = 1*H(z) = (1-z)/(1-z)*H(z)

(* means multiplication) and we have the same "paradox" you found above.

Rune
Reply by ●June 7, 2006
Rune Allnor said the following on 07/06/2006 21:20:
> Oli Filth skrev:
>> [snip]
>>
>> So my question is, how does the recursive moving-sum filter manage to
>> work (and we know it does), if it appears to be exactly equivalent to
>> something that blocks DC?
>
> Because the zero at DC is exactly cancelled by a pole.

Indeed, this is mathematically the case.  But if we implement the
unit-circle zero first, then the output of that section contains a
perfect spectral null.  If you put in a true DC signal, the output will
be zero.  Once that's happened, no amount of subsequent gain can recover
the signal at that frequency, not even the infinite gain that a pole
provides.

> Note that the second block scheme of yours is an expanded
> representation of the total system, not the actual system.

I'm not sure what you mean.  The two block diagrams are the *exact* same
system, and both explicitly implement:

    y[n] = y[n-1] + x[n] - x[n-L]

> You can multiply any number by 1, no matter how the factor 1 is
> represented, provided the representation is exact.
>
> Any transfer function H(z) can be expanded in a similar way:
>
>     H(z) = 1*H(z) = (1-z)/(1-z)*H(z)
>
> (* means multiplication) and we have the same "paradox" you
> found above.

I don't disagree.  But conceptually, how do we resolve the paradox?

Imagine another example, a channel equaliser.  If the (noiseless)
channel contains a spectral null (e^{j.theta} - z) for some theta, what
equaliser would we use to recover a component at theta?  I don't think
that 1/(e^{j.theta} - z) would be very useful, even though
mathematically it cancels the channel response.  So why does it work
above?

--
Oli
Reply by ●June 7, 2006
Oli Filth skrev:
> [snip]
>
> Indeed, this is mathematically the case.  But if we implement the
> unit-circle zero first, then the output of that section contains a
> perfect spectral null.  If you put in a true DC signal, the output
> will be zero.  Once that's happened, no amount of subsequent gain can
> recover the signal at that frequency, not even the infinite gain that
> a pole provides.

There are two different, although formally equivalent, representations
and two different implementations.  See below.

> > Note that the second block scheme of yours is an expanded
> > representation of the total system, not the actual system.
>
> I'm not sure what you mean.  The two block diagrams are the *exact*
> same system, and both explicitly implement:
>
>     y[n] = y[n-1] + x[n] - x[n-L]

No.  Your pole-zero circuit implements this difference equation.
The moving-sum filter can also be implemented as

    y[n] = x[n] + x[n-1] + ... + x[n-(L-1)].

This one has no problems with DC.  The one above has, along with the
possibly imperfect pole-zero cancellation.

> > You can multiply any number by 1, no matter how the factor 1 is
> > represented, provided the representation is exact.
> >
> > Any transfer function H(z) can be expanded in a similar way:
> >
> >     H(z) = 1*H(z) = (1-z)/(1-z)*H(z)
> >
> > (* means multiplication) and we have the same "paradox" you
> > found above.
>
> I don't disagree.  But conceptually, how do we resolve the paradox?

By factoring out the poles and zeros that cancel before we implement
what remains.  Instead of multiplying by (1-z)/(1-z), multiply by 1.

> Imagine another example, a channel equaliser.  If the (noiseless)
> channel contains a spectral null (e^{j.theta} - z) for some theta,
> what equaliser would we use to recover a component at theta?  I don't
> think that 1/(e^{j.theta} - z) would be very useful, even though
> mathematically it cancels the channel response.  So why does it work
> above?

These are two different problems.  With the equalizer, the system only
contains the zero.  The pole has to be introduced from somewhere, which
turns out to be all but impossible in practice.  With the moving-sum
filter, the pole and zero are introduced by analytical manipulations.
Big difference.

Rune
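The two difference equations being contrasted here can be compared
numerically (an illustrative sketch; L = 5 and the random integer test
signal are arbitrary).  With exact integer arithmetic and zero initial
state, they agree sample for sample:

```python
import random

# L-term moving sum two ways: direct FIR vs. the recursive form
# y[n] = y[n-1] + x[n] - x[n-L], on an integer test signal so that
# the arithmetic is exact.
L = 5
N = 50
random.seed(0)
x = [random.randint(-10, 10) for _ in range(N)]

# y[n] = x[n] + x[n-1] + ... + x[n-(L-1)]
fir = [sum(x[n - k] for k in range(L) if n - k >= 0) for n in range(N)]

rec, y_prev = [], 0
for n in range(N):
    x_L = x[n - L] if n - L >= 0 else 0
    y_prev = y_prev + x[n] - x_L
    rec.append(y_prev)

print(fir == rec)   # True
```

With floating-point data and a long run, rounding in the accumulator can
make the recursive form drift away from the direct sum, which is one
practical reason to prefer integer arithmetic here.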
Reply by ●June 7, 2006
Rune Allnor said the following on 07/06/2006 22:19:
> [snip]
>
> No.  Your pole-zero circuit implements this difference equation.
> The moving-sum filter can also be implemented as
>
>     y[n] = x[n] + x[n-1] + ... + x[n-(L-1)].
>
> This one has no problems with DC.  The one above has, along with the
> possibly imperfect pole-zero cancellation.

But the recursive implementation of the moving-sum is a commonly-used
optimisation, not just an abstract mathematical manipulation that I've
dreamed up!  Is the answer to this "conundrum", therefore, that in
practice, this optimisation is actually not equivalent to the
feedforward-only implementation?

--
Oli
Reply by ●June 7, 2006
Oli Filth skrev:
> [snip]
>
> But the recursive implementation of the moving-sum is a commonly-used
> optimisation, not just an abstract mathematical manipulation that
> I've dreamed up!

I'm an academic, I don't get my hands dirty with DSP.  I take your word
for it.

> Is the answer to this "conundrum", therefore, that in practice, this
> optimisation is actually not equivalent to the feedforward-only
> implementation?

Consider these two simplified block schemes, L odd:

A) x[n] -> 1 - z^{-L}     -> v[n] -> 1/(1 - z^{-1}) -> y[n]

B) x[n] -> 1/(1 - z^{-1}) -> v[n] -> 1 - z^{-L}     -> y[n]

In case A, the intermediate signal v[n] must necessarily have a null at
DC.  Hence, y[n] has a null at DC, since no linear system can add a
frequency component that was not present in the input signal.

In case B, v[n] is unstable since the ROC does not include the unit
circle.  Hence the system is ill-posed, no matter what happens after the
pole block.

In conclusion: the recursive formulation does have a null at DC, whereas
the naive moving-sum filter does not.  Whether the discrepancy at DC is
relevant at all depends on the signal.  If you deal with zero-mean
signals, it doesn't matter.

Rune
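Case B above can be illustrated numerically (a sketch; L = 8 and a unit
DC input are arbitrary choices).  With the integrator first, the
intermediate signal grows without bound for a DC input, even though the
final comb output stays finite:

```python
# Case B: integrator 1/(1 - z^-1) first, then comb 1 - z^-L.
# For a DC input the intermediate signal v[n] grows linearly forever.
L = 8
N = 100
x = [1.0] * N

v, acc = [], 0.0
for n in range(N):
    acc += x[n]          # v[n] = v[n-1] + x[n]: a running sum
    v.append(acc)

# y[n] = v[n] - v[n-L]: the comb keeps the *output* bounded
y = [v[n] - (v[n - L] if n - L >= 0 else 0.0) for n in range(N)]

print(v[-1])   # 100.0: unbounded intermediate signal
print(y[-1])   # 8.0:   bounded output, equal to the DC gain L
```

This is why a practical recursive moving sum always places the comb
before the integrator, never the other way around.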
Reply by ●June 7, 2006
Oli Filth wrote:
> [snip]
>
> Indeed, this is mathematically the case.  But if we implement the
> unit-circle zero first, then the output of that section contains a
> perfect spectral null.  If you put in a true DC signal, the output
> will be zero.  Once that's happened, no amount of subsequent gain can
> recover the signal at that frequency, not even the infinite gain that
> a pole provides.

Any time you do 0 * infinity you have to stop thinking stepwise and
start taking limits as some x -> 0 (or x -> infinity).

In this case, what happens in practice is that the unit-circle section
puts out a transient that will always integrate to the true DC value, so
the DC value is preserved (at least mathematically).

Note that this type of filter will work in practice _provided_ that you
use bit-exact math, you initialize all your variables correctly, and
nothing gets smacked by a stray gamma ray or other transient effects.
So this is something that I might expect to see implemented with integer
arithmetic, but never with floating point.

> > Note that the second block scheme of yours is an expanded
> > representation of the total system, not the actual system.
>
> I'm not sure what you mean.  The two block diagrams are the *exact*
> same system, and both explicitly implement:
>
>     y[n] = y[n-1] + x[n] - x[n-L]

Correct.

-- snip --

> I don't disagree.  But conceptually, how do we resolve the paradox?
> Imagine another example, a channel equaliser.  If the (noiseless)
> channel contains a spectral null (e^{j.theta} - z) for some theta,
> what equaliser would we use to recover a component at theta?  I don't
> think that 1/(e^{j.theta} - z) would be very useful, even though
> mathematically it cancels the channel response.  So why does it work
> above?

Pole-zero cancellations work if the mathematics is exact.  It's easy to
make a moving average filter be bit-accurate because there are no
multiplies.  Doing this with any arbitrary omega is not possible; I
doubt that it is possible with _any_ set of non-unity gains in an IIR.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Posting from Google?  See http://cfaj.freeshell.org/google/
"Applied Control Theory for Embedded Systems" came out in April.
See details at http://www.wescottdesign.com/actfes/actfes.html
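The transient argument above can be made concrete with a sketch (L = 8
and a step input are arbitrary choices): with the comb 1 - z^-L first, a
DC input switched on at n = 0 produces only a finite transient at the
comb output, and the integrator sums exactly that transient back to the
true DC value and holds it:

```python
# Comb first: the DC input survives only as a length-L turn-on
# transient at the comb output; the integrator accumulates it.
L = 8
N = 60
x = [1.0] * N                        # DC switched on at n = 0

comb = [x[n] - (x[n - L] if n - L >= 0 else 0.0) for n in range(N)]

integ, acc = [], 0.0
for c in comb:
    acc += c                         # 1/(1 - z^-1): running sum
    integ.append(acc)

print(comb[:12])   # L ones, then silence
print(integ[-1])   # 8.0: the transient integrates to the DC gain L
```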
Reply by ●June 7, 2006
Tim Wescott said the following on 08/06/2006 00:55:
> [snip]
>
> Any time you do 0 * infinity you have to stop thinking stepwise and
> start taking limits as some x -> 0 (or x -> infinity).
>
> In this case, what happens in practice is that the unit-circle
> section puts out a transient that will always integrate to the true
> DC value, so the DC value is preserved (at least mathematically).

Indeed, l'Hopital's rule (or similar) demonstrates this.  But it's kind
of similar to, say, sinc(x) = sin(x)/x.  In the limit as x -> 0, this
becomes 1.  However, if you implement the sin(x) first, and then do the
1/x separately, you get NaN!

Back to the filter: this transient would occur only when the input is
first applied, i.e. it's effectively a response to a step input.  But I
guess that's alright in practice.  Theoretically, if we take a true (for
all time) DC input, there is no transient at the feedforward output; its
output is DC zero.  Of course, this is an abstract scenario only.

> Pole-zero cancellations work if the mathematics is exact.  It's easy
> to make a moving average filter be bit-accurate because there are no
> multiplies.

Are you implying that with a hypothetical unquantised system, it would
be possible to recover a signal from a perfect spectral null?

--
Oli
Reply by ●June 7, 2006
Oli Filth wrote:
> [snip]
>
> Indeed, l'Hopital's rule (or similar) demonstrates this.  But it's
> kind of similar to, say, sinc(x) = sin(x)/x.  In the limit as x -> 0,
> this becomes 1.  However, if you implement the sin(x) first, and then
> do the 1/x separately, you get NaN!

Actually, if you expand the pertinent sinusoid in the integral into a
Taylor series, _then_ integrate, you get

              inf
    sinc(x) = Sum (-1)^n x^(2n) / (2n+1)!
              n=0

and there's no problem with 0/0 at all.

> Back to the filter: this transient would occur only when the input is
> first applied, i.e. it's effectively a response to a step input.  But
> I guess that's alright in practice.  Theoretically, if we take a true
> (for all time) DC input, there is no transient at the feedforward
> output; its output is DC zero.  Of course, this is an abstract
> scenario only.

OK, it only works if you use the one-sided z transform.  What this
_really_ means is that you have to observe the correct initialization,
as I mentioned above.  If you assume that your system starts at time
t = 0 with the sum and all the elements in your vector set to zero, then
your output will ramp up to the correct value in N steps.

> Are you implying that with a hypothetical unquantised system, it
> would be possible to recover a signal from a perfect spectral null?

Assuming you initialize things correctly, yes.

You can see this if you follow a perfect notch with a perfect resonant
filter -- feeding a sinusoid at 'that' frequency into the notch will get
you a transient followed by silence -- the perfect resonator will take
that transient and turn it into the input sinusoid.

The key here is the transient, and the fact that you have to assume some
turn-on instant.  The turn-on instant guarantees that the input signal
won't be 'pure', thus generating a transient.  The transient will just
magically (because the math guarantees it) be exactly the right thing to
induce the 'blocked' behavior to happen.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
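The notch-then-resonator experiment is easiest to run exactly at DC,
where no multiplies are needed (an illustrative sketch; the constant 3.0
and length 30 are arbitrary): the notch 1 - z^-1 annihilates the
constant in steady state, leaving a one-sample turn-on transient, and
the "resonator" 1/(1 - z^-1) rebuilds the constant from that transient
exactly:

```python
# Perfect notch at DC followed by a perfect resonator (integrator),
# zero initial conditions throughout.
N = 30
x = [3.0] * N                        # constant input switched on at n = 0

notch = [x[n] - (x[n - 1] if n >= 1 else 0.0) for n in range(N)]
# notch output: one transient sample (3.0), then silence

y, acc = [], 0.0
for v in notch:
    acc += v                         # resonator at DC: running sum
    y.append(acc)

print(y == x)   # True: the "blocked" signal is recovered
```

The recovery is exact here only because the arithmetic is exact and the
initial state is zero, precisely the conditions named in the post above.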
Reply by ●June 7, 2006
Rune Allnor wrote:
As happens all too frequently, I'm confused. I assume that we agree that
a transversal moving-sum filter is just the common FIR structure with
all coefficients equal to unity. The recursive form I know uses a delay,
an adder, and a subtracter:
        +------+
 ---+-->| z^-L |------+
    |   +------+      | -
    |                 v
    +------------+--->0------->
Although that is mathematically the same as
        +------+
 ---+-->| z^-L |------+
    |   +------+      | -
    |                 v
    +---------------->0----------------+----->
                      ^                |
                      |   +------+     |
                      +---| z^-1 |<----+
                          +------+
why look for trouble? Maybe if you built the filter the more elaborate
way, you would find it.
Jerry
--
Engineering is the art of making what you want from things you can get.