Reply by George Bush April 9, 2005
Once upon a time in this newsgroup (if I remember correctly), someone 
suggested that computing a running median was a better estimator of noise 
power.  

In article <BE65D48D.57D2%rbj@audioimagination.com>, robert bristow-johnson 
<rbj@audioimagination.com> wrote:
>in article 42401180.15043359@news.sf.sbcglobal.net, Rick Lyons at
>r.lyons@_BOGUS_ieee.org wrote on 03/22/2005 07:50:
>
>> On Mon, 21 Mar 2005 01:46:55 -0500, robert bristow-johnson
>> <rbj@audioimagination.com> wrote:
>...
>>> i have never done this particular "moving variance" (i have
>>> done LPFed power, so i guess that's about the same thing), but i've done
>>> this LPF in fixed-point 56K code many a time.
>>
>> I'm looking at Fig 1 & FIG 2 as merely two lowpass IIR filters.
>> I'm not thinking about any sort of "moving variance".
>
>cool, maybe i should change the subject line.
>
>>> FIG 1. is better.  much better.
>>
>> Ah, but "better" in what way?  Why is it "better"?
>
>no overflow.  since this is a simple LPF and the gain is less than 0 dB for
>all frequencies, then the output should never exceed the input and, if you
>do your filter right, neither should any internal state.
>
>really, Rick, it's just the Direct Form 1 vs. Direct Form 2 argument we have
>here periodically.  even though it often is not canonical (but it is in this
>case), i am a partisan for the DF1.
>
>i'm a little bit 56K-centric here, but the principles should be the same in
>other cases, if you put the double-precision accumulator and quantizations
>at the same place in the signal path.
>
>> I tried some MATLAB modeling (attempting to quantize
>> all the sequences in the figures) and it looked to me
>> like FIG 2 provides a y(n) output having a higher SQNR
>> than if I used FIG 1.
>
>why would you have *any* SQNR (other than +inf) in MATLAB?
>
>> The effect was most pronounced when x(n) was low in amplitude.
>> Of course, I don't know if I modeled this stuff properly.
>>
>>> (that summer should be a double precision
>>> summer with a good quantizer after that for the delayed feedback state.
>>> dunno if you need y[n] to be double wide or not.)
>>
>> Darn, I'm sure puzzled.  How did the subject of
>> a "quantizer" creep in here?
>
>i guess through the same door as "SQNR".  you will have a quantizer
>somewhere.  i'm just saying that you should add the p*y[n-1] to (1-p)*x[n]
>(as double precision) before you quantize them.
>
>>> Rick, did you read that stuff about unbiasing the variance that you
>>> calculate from the mean and mean square by multiplying by (1+p)/(2*p) if
>>> using this IIR averager or N/(N-1) for the CIC averager?
>>
>> Well, I went through all the posts in this thread.
>> Because Jim Shima's "leaky integrator"
>>
>>   m[n+1] = m[n] + 1/(N+1)*(x[n+1] - m[n])
>>
>>          = N/(N+1)*m[n] + 1/(N+1)*x[n+1]
>>
>> does not, as far as I can tell, compute a "moving
>> average" I didn't concentrate on his algorithm.
>
>it's a sorta-kinda moving-average.  it's just that the weighting factors
>ain't all 1/N.  but they add to 1.
>
>my point was that when you use a moving average filter to estimate mean and
>mean-square, and then you use those two values to compute an estimate of
>variance, you have to scale that estimate up a teeny-weeny bit to have the
>best estimate.
Reply by robert bristow-johnson March 24, 2005
in article 4241f14f.116507375@news.sf.sbcglobal.net, Rick Lyons at
r.lyons@_BOGUS_ieee.org wrote on 03/23/2005 18:19:

>> The numbers developed in Fig. 2 are larger than in Fig 1 by a factor of
>> 1/(1-p). With p close to unity, many more bits will be needed to contain
>> them without overflow. Of course, the increased bit count also reduces
>> the information lost by dividing prematurely. My point was that reducing
>> the truncation error comes at a price. Is that price worth paying when
>> seeking an estimate of an average?
>
> If the x(n) input signal is riding on a DC bias of value D,
> the output of the adder (summer) in Fig. 2 will have
> (eventually) an average value equal to D/(1-p) which
> may be a large number, depending on the value of p.
>
> That problem does not exist in Fig. 1.
and if, in the DF1 case (Fig 1), you add p*y[n-1] to (1-p)*x[n] at full
precision before truncation, i can't see how it loses to the DF2 in
truncation error.  this is the case where i see DF1 as decidedly superior
to DF2.  if all of the intermediate products (states and input times coefs)
are truncated before adding, then DF1 doesn't look so good, but that does
not have to be the case.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
Reply by Rick Lyons March 23, 2005
On Tue, 22 Mar 2005 09:41:08 -0500, Jerry Avins <jya@ieee.org> wrote:

>Rick Lyons wrote:
>> On Mon, 21 Mar 2005 01:46:55 -0500, robert bristow-johnson
>> <rbj@audioimagination.com> wrote:
>>
>>>in article 35SdnWAbpqYEBqDfRVn-qA@rcn.net, Jerry Avins at jya@ieee.org wrote
>>>on 03/20/2005 10:49:
>>>
>>>>Rick Lyons wrote:
>>>>
>>>>...
>>>>
>>>>>x(n)--->(*)------>(+)-----------+----> y(n)
>>>>>         ^         ^            |
>>>>>         |         |            v
>>>>>       [1-p]       |         [z^-1]     FIG 1.
>>>>>                   |            |
>>>>>                   +----(*)-----+
>>>>>                         ^
>>>>>                         |
>>>>>                         p
>>>>
>>>> ...
>>>>
>>>>>x(n)------->(+)-----------+------->(*)---> y(n)
>>>>>             ^            |         ^
>>>>>             |            v         |
>>>>>             |         [z^-1]     [1-p]
>>>>>             |            |
>>>>>             +----(*)-----+
>>>>>                   ^                FIG 2.
>>>>>                   |
>>>>>                   p
>>>>
>>>>What about overflow?
>
>  ...
>
>> I wasn't able to glean any information
>> from Jerry's three-word question.
>
>  ...
>
>The numbers developed in Fig. 2 are larger than in Fig 1 by a factor of
>1/(1-p). With p close to unity, many more bits will be needed to contain
>them without overflow. Of course, the increased bit count also reduces
>the information lost by dividing prematurely. My point was that reducing
>the truncation error comes at a price. Is that price worth paying when
>seeking an estimate of an average?
>
>That's rather much to summarize as "What about overflow", but I hope you
>know what I mean now.
>
>Jerry
Hi,

yes, I now see your point (very clearly).

If the x(n) input signal is riding on a DC bias of value D, the output
of the adder (summer) in Fig. 2 will have (eventually) an average value
equal to D/(1-p), which may be a large number, depending on the value
of p.  That problem does not exist in Fig. 1.

Very good!  Thanks Jerry & Robert B-J.

Ha, ... now do you see why I come to you guys when I have a DSP
question?

See Ya',
[-Rick-]
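[The D/(1-p) growth described above is easy to check numerically.  A
quick sketch in Python; the values p = 0.99 and D = 0.5 are illustrative
picks, not from the thread:]

```python
# Fig. 2's adder state w[n] = D + p*w[n-1] climbs toward D/(1-p) for a
# DC input D.  Floats tolerate that here, but a fixed-point register
# sized for the input range would overflow long before it settles.
p, D = 0.99, 0.5

w = 0.0
for _ in range(5000):
    w = D + p * w          # the Fig. 2 adder/state update

print(w)                   # settles at D/(1-p) = 50.0
```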
Reply by robert bristow-johnson March 22, 2005
in article 42401180.15043359@news.sf.sbcglobal.net, Rick Lyons at
r.lyons@_BOGUS_ieee.org wrote on 03/22/2005 07:50:

> On Mon, 21 Mar 2005 01:46:55 -0500, robert bristow-johnson
> <rbj@audioimagination.com> wrote:
...
>> i have never done this particular "moving variance" (i have
>> done LPFed power, so i guess that's about the same thing), but i've done
>> this LPF in fixed-point 56K code many a time.
>
> I'm looking at Fig 1 & FIG 2 as merely two lowpass IIR filters.
> I'm not thinking about any sort of "moving variance".
cool, maybe i should change the subject line.
>> FIG 1. is better.  much better.
>
> Ah, but "better" in what way?  Why is it "better"?
no overflow.  since this is a simple LPF and the gain is less than 0 dB
for all frequencies, then the output should never exceed the input and,
if you do your filter right, neither should any internal state.

really, Rick, it's just the Direct Form 1 vs. Direct Form 2 argument we
have here periodically.  even though it often is not canonical (but it
is in this case), i am a partisan for the DF1.

i'm a little bit 56K-centric here, but the principles should be the same
in other cases, if you put the double-precision accumulator and
quantizations at the same place in the signal path.
> I tried some MATLAB modeling (attempting to quantize
> all the sequences in the figures) and it looked to me
> like FIG 2 provides a y(n) output having a higher SQNR
> than if I used FIG 1.
why would you have *any* SQNR (other than +inf) in MATLAB?
> The effect was most pronounced when x(n) was low in amplitude.
> Of course, I don't know if I modeled this stuff properly.
>
>> (that summer should be a double precision
>> summer with a good quantizer after that for the delayed feedback state.
>> dunno if you need y[n] to be double wide or not.)
>
> Darn, I'm sure puzzled.  How did the subject of
> a "quantizer" creep in here?
i guess through the same door as "SQNR". you will have a quantizer somewhere. i'm just saying that you should add the p*y[n-1] to (1-p)*x[n] (as double precision) before you quantize them.
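[The "add at full precision, then quantize" placement above can be
sketched like this.  Python floats stand in for a DSP's double-wide
accumulator; the 15-bit rounding grid and the function names are
illustrative assumptions, not 56K code:]

```python
def q15(v):
    """Round to a 15-bit fractional grid (stand-in for fixed-point storage)."""
    return round(v * 32768.0) / 32768.0

def df1_one_quantizer(x, p):
    """Form p*y[n-1] + (1-p)*x[n] at full precision, quantize once per
    sample -- the quantizer placement described in the post."""
    y, out = 0.0, []
    for xn in x:
        y = q15(p * y + (1.0 - p) * xn)
        out.append(y)
    return out

def df1_quantize_products(x, p):
    """Quantize each product before the add: more roundoff noise gets
    recirculated through the feedback loop."""
    y, out = 0.0, []
    for xn in x:
        y = q15(q15(p * y) + q15((1.0 - p) * xn))
        out.append(y)
    return out
```

Comparing either version against an unquantized float reference tends to
show the single-quantizer form tracking closer, and the gap grows as p
approaches 1 (the feedback amplifies whatever roundoff is injected).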
>> Rick, did you read that stuff about unbiasing the variance that you
>> calculate from the mean and mean square by multiplying by (1+p)/(2*p) if
>> using this IIR averager or N/(N-1) for the CIC averager?
>
> Well, I went through all the posts in this thread.
> Because Jim Shima's "leaky integrator"
>
>   m[n+1] = m[n] + 1/(N+1)*(x[n+1] - m[n])
>
>          = N/(N+1)*m[n] + 1/(N+1)*x[n+1]
>
> does not, as far as I can tell, compute a "moving
> average" I didn't concentrate on his algorithm.
it's a sorta-kinda moving-average.  it's just that the weighting factors
ain't all 1/N.  but they add to 1.

my point was that when you use a moving average filter to estimate mean
and mean-square, and then you use those two values to compute an
estimate of variance, you have to scale that estimate up a teeny-weeny
bit to have the best estimate.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
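[The (1+p)/(2*p) unbiasing factor mentioned above can be exercised in a
few lines.  A Python sketch; the function name and test signal are
illustrative, and the IIR states start at zero, so a run much longer
than the averager's time constant is needed to wash out the startup
transient:]

```python
def iir_variance(x, p):
    """Estimate variance from IIR (exponential) estimates of the mean
    and the mean-square, then unbias by (1+p)/(2*p)."""
    m = ms = 0.0
    for xn in x:
        m  = p * m  + (1.0 - p) * xn        # running mean estimate
        ms = p * ms + (1.0 - p) * xn * xn   # running mean-square estimate
    return (ms - m * m) * (1.0 + p) / (2.0 * p)
```

Feeding it long unit-variance noise returns an estimate near 1.0.  Note
the correction factor (1+p)/(2*p) tends to 1 as p approaches 1, so for
long averagers it is indeed only a teeny-weeny nudge.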
Reply by Jerry Avins March 22, 2005
Rick Lyons wrote:
> On Mon, 21 Mar 2005 01:46:55 -0500, robert bristow-johnson
> <rbj@audioimagination.com> wrote:
>
>>in article 35SdnWAbpqYEBqDfRVn-qA@rcn.net, Jerry Avins at jya@ieee.org wrote
>>on 03/20/2005 10:49:
>>
>>>Rick Lyons wrote:
>>>
>>>...
>>>
>>>>x(n)--->(*)------>(+)-----------+----> y(n)
>>>>         ^         ^            |
>>>>         |         |            v
>>>>       [1-p]       |         [z^-1]     FIG 1.
>>>>                   |            |
>>>>                   +----(*)-----+
>>>>                         ^
>>>>                         |
>>>>                         p
>>>
>>> ...
>>>
>>>>x(n)------->(+)-----------+------->(*)---> y(n)
>>>>             ^            |         ^
>>>>             |            v         |
>>>>             |         [z^-1]     [1-p]
>>>>             |            |
>>>>             +----(*)-----+
>>>>                   ^                FIG 2.
>>>>                   |
>>>>                   p
>>>
>>>What about overflow?
...
> I wasn't able to glean any information
> from Jerry's three-word question.
  ...

The numbers developed in Fig. 2 are larger than in Fig 1 by a factor of
1/(1-p).  With p close to unity, many more bits will be needed to
contain them without overflow.  Of course, the increased bit count also
reduces the information lost by dividing prematurely.  My point was that
reducing the truncation error comes at a price.  Is that price worth
paying when seeking an estimate of an average?

That's rather much to summarize as "What about overflow", but I hope you
know what I mean now.

Jerry

-- 
Engineering is the art of making what you want from things you can get.
Reply by Rick Lyons March 22, 2005
On Mon, 21 Mar 2005 01:46:55 -0500, robert bristow-johnson
<rbj@audioimagination.com> wrote:

>in article 35SdnWAbpqYEBqDfRVn-qA@rcn.net, Jerry Avins at jya@ieee.org wrote
>on 03/20/2005 10:49:
>
>> Rick Lyons wrote:
>>
>> ...
>>
>>> x(n)--->(*)------>(+)-----------+----> y(n)
>>>          ^         ^            |
>>>          |         |            v
>>>        [1-p]       |         [z^-1]     FIG 1.
>>>                    |            |
>>>                    +----(*)-----+
>>>                          ^
>>>                          |
>>>                          p
>>
>> ...
>>
>>> x(n)------->(+)-----------+------->(*)---> y(n)
>>>              ^            |         ^
>>>              |            v         |
>>>              |         [z^-1]     [1-p]
>>>              |            |
>>>              +----(*)-----+
>>>                    ^                FIG 2.
>>>                    |
>>>                    p
>>
>> What about overflow?
Hi Robert,
>i'm with Jerry.
I wasn't able to glean any information from Jerry's three-word question.
I wasn't thinking about any overflow possibility in either of my Figures
(filters).

All I was thinking was: if the x(n) input samples have some given
(known) signal-to-quantization noise ratio [SQNR], and I multiply x(n)
by a number less than one, then am I not degrading the SQNR of the input
sequence as the first computational step in FIG. 1?
>i have never done this particular "moving variance" (i have
>done LPFed power, so i guess that's about the same thing), but i've done
>this LPF in fixed-point 56K code many a time.
I'm looking at Fig 1 & FIG 2 as merely two lowpass IIR filters. I'm not thinking about any sort of "moving variance".
>FIG 1. is better. much better.
Ah, but "better" in what way? Why is it "better"? I tried some MATLAB modeling (attempting to quantize all the sequences in the figures) and it looked to me like FIG 2 provides a y(n) output having a higher SQNR than if I used FIG 1. The effect was most pronounced when x(n) was low in amplitude. Of course, I don't know if I modeled this stuff properly.
>(that summer should be a double precision
>summer with a good quantizer after that for the delayed feedback state.
>dunno if you need y[n] to be double wide or not.)
Darn, I'm sure puzzled. How did the subject of a "quantizer" creep in here?
>Rick, did you read that stuff about unbiasing the variance that you
>calculate from the mean and mean square by multiplying by (1+p)/(2*p) if
>using this IIR averager or N/(N-1) for the CIC averager?
Well, I went through all the posts in this thread.  Because Jim Shima's
"leaky integrator"

   m[n+1] = m[n] + 1/(N+1)*(x[n+1] - m[n])

          = N/(N+1)*m[n] + 1/(N+1)*x[n+1]

does not, as far as I can tell, compute a "moving average", I didn't
concentrate on his algorithm.

What I was originally looking for was a way to compute a "moving"
N-point variance.  Clay's suggestion of the "raw score method" answered
my question.  John Hadstate's post explained how to derive that method.
However, that "raw score method" requires two CIC filters with their
inherent delay lines.

John Sampson's April 2001 post gave a scheme to compute a "cumulative
variance" that eliminates the memory requirements needed to implement
CIC filter delay lines.  But Sampson's technique requires more
computations than the "raw score method".

So what I've learned (I think) is that we can perform running variances,
but we have to "trade off" the unpleasant need for either much memory or
many computations.  Pick your poison.

See Ya,
[-Rick-]
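[The "raw score method" trade-off Rick describes can be sketched as
follows.  Python; the deque plays the role of the CIC delay-line memory,
and the N/(N-1) factor gives the unbiased (sample-variance) form:]

```python
from collections import deque

def moving_variance(x, N):
    """Raw-score moving variance over an N-sample window:
       var = (N*sum(x^2) - (sum x)^2) / (N*(N-1)).
    The window buffer is the memory cost mentioned in the post;
    the per-sample work is just a few adds and multiplies."""
    win = deque()
    sx = sxx = 0.0
    out = []
    for xn in x:
        if len(win) == N:          # slide the window: drop the oldest sample
            old = win.popleft()
            sx -= old
            sxx -= old * old
        win.append(xn)
        sx += xn
        sxx += xn * xn
        if len(win) == N:          # emit once the window is full
            out.append((N * sxx - sx * sx) / (N * (N - 1.0)))
    return out
```

For example, moving_variance([1, 2, 3, 4, 5], 4) gives the sample
variance of [1,2,3,4] and then of [2,3,4,5] (both 5/3).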
Reply by robert bristow-johnson March 21, 2005
in article 35SdnWAbpqYEBqDfRVn-qA@rcn.net, Jerry Avins at jya@ieee.org wrote
on 03/20/2005 10:49:

> Rick Lyons wrote:
>
> ...
>
>> x(n)--->(*)------>(+)-----------+----> y(n)
>>          ^         ^            |
>>          |         |            v
>>        [1-p]       |         [z^-1]     FIG 1.
>>                    |            |
>>                    +----(*)-----+
>>                          ^
>>                          |
>>                          p
>
> ...
>
>> x(n)------->(+)-----------+------->(*)---> y(n)
>>              ^            |         ^
>>              |            v         |
>>              |         [z^-1]     [1-p]
>>              |            |
>>              +----(*)-----+
>>                    ^                FIG 2.
>>                    |
>>                    p
>
> What about overflow?
i'm with Jerry.  i have never done this particular "moving variance" (i
have done LPFed power, so i guess that's about the same thing), but i've
done this LPF in fixed-point 56K code many a time.

FIG 1. is better.  much better.  (that summer should be a double
precision summer with a good quantizer after that for the delayed
feedback state.  dunno if you need y[n] to be double wide or not.)

Rick, did you read that stuff about unbiasing the variance that you
calculate from the mean and mean square by multiplying by (1+p)/(2*p) if
using this IIR averager or N/(N-1) for the CIC averager?

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
Reply by John E. Hadstate March 20, 2005
"Rick Lyons" <r.lyons@_BOGUS_ieee.org> wrote in message
news:423d6a71.48872734@news.sf.sbcglobal.net...
> Ya' know Jim, I was thinking.  When I see block diagrams of
> that exponential averager in the literature, I always see
> the input sequence multiplied by (1-p) [my alpha] before
> the feedback loop as shown below.
>
> x(n)--->(*)------>(+)-----------+----> y(n)
>          ^         ^            |
>          |         |            v
>        [1-p]       |         [z^-1]     FIG 1.
>                    |            |
>                    +----(*)-----+
>                          ^
>                          |
>                          p
>
> It seems to me that multiplying x(n) by a number less than
> one [(1-p)] reduces the input signal-to-quantization noise
> ratio if we use a fixed-point number system.  If that's
> true, maybe it's better to use the following FIG 2
> block diagram:
>
> x(n)------->(+)-----------+------->(*)---> y(n)
>              ^            |         ^
>              |            v         |
>              |         [z^-1]     [1-p]
>              |            |
>              +----(*)-----+
>                    ^                FIG 2.
>                    |
>                    p
>
> I wonder if anyone has modeled (in fixed-pt math) those
> two versions of an exponential averager, and knows which
> version should be used.
> (I don't have an easy way to model "fixed-point" math.)
>
I once wrote a program (IBM S/360 Assembler and FORTRAN) that would
evaluate the quantization noise (and round-off noise) in every possible
permutation of the sections of any IIR filter that had been reduced to a
cascade of second-order and/or first-order canonic sections.  The
theoretical basis for the noise computation was provided by my advisor,
Dr. John N. Gowdy (still at Clemson University), with heavy reliance on
work by E. I. Jury.

A paper summarizing our conclusions, including the program description,
was written up and presented by Dr. Gowdy.  I don't have a copy of the
paper, but he might, or you might be able to get it from the IEEE
archives.

  Gowdy, J. N., and Hadstate, J. E., "Design of Optimum Configurations
  of Digital Filters," Proceedings of the 1973 Southeastcon, Louisville,
  KY, May 1973.

I noticed that another of his students did some later work in this area
(around 1976).  http://www.ece.clemson.edu/speech/summary.htm

My memory is not what it used to be, but I believe that the gist of our
research concluded that you should put the sections with poles closest
to the unit circle earlier (or maybe later) in the cascade, and you
should put as much as possible of the constant gain at the front of the
cascade (to raise the signal as much as possible above the internally
generated noise without inducing overflow) and then distribute the rest
of the gain among the downstream sections.
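[The ordering heuristic half-remembered above can be sketched
mechanically.  Python; representing sections as [b0, b1, b2, a0, a1, a2]
rows is an assumption, and since the post is unsure whether the high-Q
sections go first or last, the ascending order below is purely
illustrative:]

```python
import cmath

def pole_radius(a0, a1, a2):
    """Largest pole magnitude of a0*z^2 + a1*z + a2 (quadratic formula)."""
    if a2 == 0.0:                  # first-order section: single pole at -a1/a0
        return abs(a1 / a0)
    d = cmath.sqrt(a1 * a1 - 4.0 * a0 * a2)
    return max(abs((-a1 + d) / (2.0 * a0)),
               abs((-a1 - d) / (2.0 * a0)))

def order_sections(sections):
    """Sort cascade sections by how close their poles sit to the unit
    circle, smallest radius first (reverse for the other ordering)."""
    return sorted(sections, key=lambda s: pole_radius(s[3], s[4], s[5]))
```

Distributing the constant gain (front-loading it, per the post) would be
a separate step; this only handles the section ordering.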
Reply by Jerry Avins March 20, 2005
Rick Lyons wrote:

   ...

> x(n)--->(*)------>(+)-----------+----> y(n)
>          ^         ^            |
>          |         |            v
>        [1-p]       |         [z^-1]     FIG 1.
>                    |            |
>                    +----(*)-----+
>                          ^
>                          |
>                          p
...
> x(n)------->(+)-----------+------->(*)---> y(n)
>              ^            |         ^
>              |            v         |
>              |         [z^-1]     [1-p]
>              |            |
>              +----(*)-----+
>                    ^                FIG 2.
>                    |
>                    p
Rick,

What about overflow?

Jerry

-- 
Engineering is the art of making what you want from things you can get.
Reply by Rick Lyons March 20, 2005
On Tue, 15 Mar 2005 13:59:38 -0500, Jim Thomas <jthomas@bittware.com>
wrote:

>robert bristow-johnson wrote:
>> in article 113dq4c29g9k066@corp.supernews.com, Jim Thomas at
>> jthomas@bittware.com wrote on 03/15/2005 08:52:
>>> p = (N+1)/(N-1)
>>>
>> are you sure?  i think it's the other way around?  p = (N-1)/(N+1)
>
>Of course you're right.  But I haven't had much luck getting this to
>compute a moving average (and thus, a moving variance).  Is this correct?
>
>y[n] = p*y[n-1] + (1-p)x[n]
>
Hi Jim,

just saw your post.  To add 2 cents to Robert B-J's words: your

   y[n] = p*y[n-1] + (1-p)x[n]

difference equation is, as you said, an exponential averager [also
called a "leaky integrator"], when p is some real number less than one.
I'll bet you already knew that.

It's the IIR filter (used for noise reduction) that I described in
Section 8.5 of the 1st edition of my book [Section 11.5 of the 2nd
edition].  Please note: my alpha is equal to your above (1-p).

Ya' know Jim, I was thinking.  When I see block diagrams of that
exponential averager in the literature, I always see the input sequence
multiplied by (1-p) [my alpha] before the feedback loop as shown below.

x(n)--->(*)------>(+)-----------+----> y(n)
         ^         ^            |
         |         |            v
       [1-p]       |         [z^-1]     FIG 1.
                   |            |
                   +----(*)-----+
                         ^
                         |
                         p

It seems to me that multiplying x(n) by a number less than one [(1-p)]
reduces the input signal-to-quantization noise ratio if we use a
fixed-point number system.  If that's true, maybe it's better to use the
following FIG 2 block diagram:

x(n)------->(+)-----------+------->(*)---> y(n)
             ^            |         ^
             |            v         |
             |         [z^-1]     [1-p]
             |            |
             +----(*)-----+
                   ^                FIG 2.
                   |
                   p

I wonder if anyone has modeled (in fixed-pt math) those two versions of
an exponential averager, and knows which version should be used.
(I don't have an easy way to model "fixed-point" math.)

See Ya,
[-Rick-]
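[For what it's worth, the two figures are easy to model crudely in
software.  A Python sketch; rounding to a 15-bit fractional grid is a
stand-in for "fixed-point math", not a bit-exact model of any DSP:]

```python
def q15(v):
    """Round to a 2**-15 grid -- a crude stand-in for fixed-point storage."""
    return round(v * 32768.0) / 32768.0

def fig1(x, p):
    """FIG 1: x(n) scaled by (1-p) before the loop.  No value in the
    signal path ever exceeds the input range."""
    y, out = 0.0, []
    for xn in x:
        y = q15((1.0 - p) * xn + p * y)
        out.append(y)
    return out

def fig2(x, p):
    """FIG 2: feedback loop first, (1-p) at the output.  The adder state
    heads toward D/(1-p) for a DC input of D -- floats let that slide
    here, but a fixed-point register scaled for the input would not."""
    w, out = 0.0, []
    for xn in x:
        w = q15(xn + p * w)
        out.append(q15((1.0 - p) * w))
    return out
```

Both settle to the input's DC value on a constant input; what differs is
where quantization error enters and how large the internal state grows.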