DSPRelated.com
Forums

The slippery slope of sliding Goertzels

Started by Steve Underwood November 16, 2005
Nearly 30 years after first reading about the sliding-window Goertzel 
algorithm, I finally have cause to try implementing it. :-)

My understanding is you just add one step to the recursive part of the 
usual Goertzel algorithm - you just subtract the input sample from one 
block length back. I tried that, and it basically works. However, there 
are various grubby little details which bother me.

I remember there was some discussion in this group a couple of years 
back when Rick Lyons and Eric Jacobsen wrote an article on this subject, 
and someone was querying the details. However, checking that discussion 
it seemed to fade out without really answering anything.

Because the structure of the Goertzel algorithm is IIR and tuned for 
resonance, it seems a lot like an oscillator. It feels like there should 
be a danger of the output level wandering, in the way that uncontrolled 
IIR oscillators tend to. However, in trying the algorithm with single 
precision floating arithmetic, and certainly losing some bits along the 
way, it always seems stable. I'm not clear if something I'm missing 
makes it unconditionally stable, or if I am simply not prodding it hard 
enough. The block processing of the standard Goertzel algorithm prevents 
this being an issue there.

If the non-sliding Goertzel algorithm has a block length of N, you clear 
the registers, push N samples of signal through the filter, then a zero, 
then the result is available. It seems with the sliding version I also 
need to push a zero through before calculating the non-recursive half of 
the filter. Can someone confirm that is correct?
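For concreteness, here is a minimal Python sketch of the sliding form as I 
understand it: a comb (subtracting the input from N samples back) feeding the 
usual Goertzel recursion, with the non-recursive half evaluated at every 
sample instead of pushing a final zero. The function name and the per-sample 
output stage are my own choices, not a canonical formulation, and the output 
matches the DFT bin only up to a phase factor:

```python
import cmath
import math
from collections import deque

def sliding_goertzel(samples, k, n):
    """Sliding Goertzel for DFT bin k with block length n.

    Sketch: a comb (x[m] - x[m-n]) feeding the usual Goertzel
    resonator, with the non-recursive half evaluated at every
    sample.  Yields one value per input; the first n outputs
    are warm-up, and the result matches the DFT bin only up to
    a phase factor.
    """
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    cfac = cmath.exp(-1j * w)         # e^{-jw} for the output stage
    delay = deque([0.0] * n)          # holds x[m-n]
    s1 = s2 = 0.0                     # resonator state
    for x in samples:
        v = x - delay.popleft()       # comb: x[m] - x[m-n]
        delay.append(x)
        s0 = v + coeff * s1 - s2      # Goertzel recursion
        yield s0 - cfac * s1          # non-recursive half
        s2, s1 = s1, s0
```

For a cosine exactly on bin k the output magnitude settles to N/2 once the 
delay line has filled.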

The non-sliding Goertzel algorithm only gives optimal results when the 
block length is a whole number of cycles of the filter's centre 
frequency. However, the filter still works well for other block lengths. 
You just get some ripple in the block to block output for a stationary 
input signal. This means if you can be a little tolerant about the 
output level, you can tweak the bandwidth to taste. Typically, this 
seems to be what people do.

With the sliding Goertzel algorithm, it seems that choosing N to be 
anything other than an exact whole number of cycles of the filter's 
centre frequency makes the results fall apart completely. Is there a way 
to tame this behaviour, so the bandwidth of the filter can be tweaked in 
a more flexible manner? It worries me that if things are this sensitive, 
then even when the block length is a whole number of cycles of the 
filter's centre frequency, the tiniest error in the frequency-dependent 
coefficient
could have bad consequences. It seems to work OK in the cases I have 
tried, but can that be guaranteed?

Any help greatly appreciated.

Steve
On Wed, 16 Nov 2005 20:25:52 +0800, Steve Underwood <steveu@dis.org>
wrote:

>Nearly 30 years after first reading about the sliding-window Goertzel
>algorithm, I finally have cause to try implementing it. :-)
>
(snipped)
>
>With the sliding Goertzel algorithm, it seems that choosing N to be
>anything other than an exact whole number of cycles of the filter's
>centre frequency makes the results fall apart completely. Is there a way
>to tame this behaviour, so the bandwidth of the filter can be tweaked in
>a more flexible manner? It worries me that if things are this sensitive,
>then even when the block length is a multiple of the filter's centre
>frequency, the tiniest error in the frequency dependent coefficient
>could have bad consequences. It seems to work OK in the cases I have
>tried, but can that be guaranteed?
>
>Any help greatly appreciated.
>
>Steve
Hi Steve,

I'm gettin' ready to take off on a business trip tomorrow so I can't 
spend the time to dig through my MATLAB code to help answer all your 
sensible questions. I'll send ya a copy of the two articles that 
contained Goertzel & sliding Goertzel info. Maybe they'll be of some 
use to you.

[-Rick-]
On Wed, 16 Nov 2005 20:25:52 +0800, Steve Underwood <steveu@dis.org>
wrote:

  (snipped)
>
>Any help greatly appreciated.
>
>Steve
Steve, I tried to E-mail you, but the <steveu@dis.org> address didn't work. Gotta run. [-Rick-]
Steve Underwood wrote:
> Nearly 30 years after first reading about the sliding-window Goertzel
> algorithm, I finally have cause to try implementing it. :-)
>
> My understanding is you just add one step to the recursive part of the
> usual Goertzel algorithm - you just subtract the input sample from one
> block length back. I tried that, and it basically works. However, there
> are various grubby little details which bother me.
>
> I remember there was some discussion in this group a couple of years
> back when Rick Lyons and Eric Jacobsen wrote an article on this subject,
> and someone was querying the details. However, checking that discussion
> it seemed to fade out without really answering anything.
>
> Because the structure of the Goertzel algorithm is IIR and tuned for
> resonance, it seems a lot like an oscillator. It feels like there should
> be a danger of the output level wandering, in the way that uncontrolled
> IIR oscillators tend to. However, in trying the algorithm with single
> precision floating arithmetic, and certainly losing some bits along the
> way, it always seems stable. I'm not clear if something I'm missing
> makes it unconditionally stable, or if I am simply not prodding it hard
> enough. The block processing of the standard Goertzel algorithm prevents
> this being an issue there.
I haven't applied the Goertzel specifically, but I _have_ applied lots 
of IIR filters. The Goertzel isn't an oscillator, really; rather it's 
an IIR filter whose poles are exactly on the stability boundary. As 
such _any_ gain deviation in any of the multiplies will cause the poles 
to deviate from this boundary, which will cause the response to either 
decay or grow. In addition there will be quantization noise, which will 
either add to or subtract from the data, most likely in unpredictable 
ways.
>
> If the non-sliding Goertzel algorithm has a block length of N, you clear
> the registers, push N samples of signal through the filter, then a zero,
> then the result is available. It seems with the sliding version I also
> need to push a zero through before calculating the non-recursive half of
> the filter. Can someone confirm that is correct?
I always see the Goertzel expressed as

   y_n = 2 cos(th) y_{n-1} - y_{n-2} + x_n.                      (1)

This not only makes the algorithm more sensitive to gain and data 
quantization than necessary, it also makes it a bit obscure to pull out 
the "actual" value of the sine wave if it happens to be at a null when 
you finish your data. For resonant filters I prefer to use a structure 
like

   [ y1_n ]   [ cos(th)  sin(th) ] [ y1_{n-1} ]   [ 1 ]
   [      ] = [                  ] [          ] + [   ] x_n.     (2)
   [ y2_n ]   [-sin(th)  cos(th) ] [ y2_{n-1} ]   [ 0 ]

For a sinusoid at x_n = cos(th * n) the y1 and y2 will contain in-phase 
and quadrature parts of the response, respectively. So the strength and 
phase of the incoming signal can be "read" directly from y = y1 + i y2 
(or the other way around -- I'm doing this from memory, it's your job to 
check the math).

An additional advantage (and the reason I use this for resonant filters) 
is that the amount of precision needed to attain a given accuracy will 
drop by roughly a factor of 2. This means that if you needed double 
precision to implement (1) adequately you only need single precision to 
implement (2). Unless your processor has such large gonads that it takes 
double-precision floating point in stride this can be a big advantage.
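A sketch of the coupled structure (2) in Python, written from the description 
above (the function name is mine, and you should check the signs against your 
own derivation):

```python
import math

def coupled_resonator(samples, th):
    """Coupled-form resonant filter per equation (2).

    The state (y1, y2) is rotated by th each step and the input
    is added to y1 only, so for x_n = cos(th*n) the two states
    carry the in-phase and quadrature response.
    """
    c, s = math.cos(th), math.sin(th)
    y1 = y2 = 0.0
    for x in samples:
        # simultaneous update: both right-hand sides use the old state
        y1, y2 = c * y1 + s * y2 + x, -s * y1 + c * y2
        yield y1, y2
```

A handy property of this form: its impulse response works out to 
(y1_n, y2_n) = (cos(th*n), -sin(th*n)), which makes it straightforward to 
work out what an old input contributes to the present state.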
>
> The non-sliding Goertzel algorithm only gives optimal results when the
> block length is a whole number of cycles of the filter's centre
> frequency. However, the filter still works well for other block lengths.
> You just get some ripple in the block to block output for a stationary
> input signal. This means if you can be a little tolerant about the
> output level, you can tweak the bandwidth to taste. Typically, this
> seems to be what people do.
>
> With the sliding Goertzel algorithm, it seems that choosing N to be
> anything other than an exact whole number of cycles of the filter's
> centre frequency makes the results fall apart completely. Is there a way
> to tame this behaviour, so the bandwidth of the filter can be tweaked in
> a more flexible manner?
With either form you could predict the value of the states resulting 
from an input some number of samples earlier. With the technique you 
state and equation (1) you would use an even number of cycles, which 
would guarantee that for nd = even number of cycles y_{n-1} would be 
equal to (x_{n-nd} + y_{n-nd-1})/(2 cos(th)), so subtracting out 
x_{n-nd} would give you the state before you added the thing in in the 
first place. Similar results would hold for (2), with only y1_n being 
affected by x_{n-nd}.

If you wanted to set the bandwidth to some non-integer number of cycles 
then you would have to modify both y_n and y_{n-1} in the case of 
equation (1), or both y1_n and y2_n in the case of equation (2). Just 
calculate the response of y_n and y_{n-1} at n = nd for an impulse at 
n = 0; that's your gains for subtracting out the effect. For equation 
(2) these gains will be (I think, check) cos(th * nd) and sin(th * nd).
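Here's a rough Python sketch of that subtraction applied to the coupled form 
(2), for an arbitrary integer window length nd. By my working the gains that 
fall out are cos(th*nd) and -sin(th*nd) -- close to the from-memory values 
above, but check the sign of the second one yourself:

```python
import math
from collections import deque

def sliding_coupled(samples, th, nd):
    """Sliding window of nd samples on the coupled form (2).

    Each step, the contribution of the input from nd samples back
    is subtracted from the states.  That old input appears in the
    states scaled by (cos(th*nd), -sin(th*nd)) by my working --
    check the sign of the second gain against your own math.
    """
    c, s = math.cos(th), math.sin(th)
    g1, g2 = math.cos(th * nd), -math.sin(th * nd)
    delay = deque([0.0] * nd)
    y1 = y2 = 0.0
    for x in samples:
        old = delay.popleft()         # input from nd samples back
        delay.append(x)
        y1, y2 = c * y1 + s * y2 + x, -s * y1 + c * y2
        y1 -= g1 * old                # remove the old input's
        y2 -= g2 * old                # rotated contribution
        yield y1, y2
```

In exact arithmetic the states then equal the windowed sums 
y1 = sum x_m cos(th*(n-m)), y2 = -sum x_m sin(th*(n-m)) over the last nd 
samples, for any th, so the bandwidth and centre frequency can be chosen 
independently.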
> It worries me that if things are this sensitive,
> then even when the block length is a multiple of the filter's centre
> frequency, the tiniest error in the frequency dependent coefficient
> could have bad consequences. It seems to work OK in the cases I have
> tried, but can that be guaranteed?
>
I don't think it can be guaranteed, no. In fact I _do_ think that "no" 
can be guaranteed. Because the results of sine and cosine operations 
are irrational numbers there will always be dangling bits, so at some 
point the error will build up. Unless you can guarantee that your 
system will only run for some definite maximum amount of time before 
the filters are cleared, you cannot guarantee that the inevitable error 
build-up won't mess things up.

I think you could sidestep many of the precision issues by multiplying 
the incoming signal by e^(i th n) and doing a running average on the 
resulting complex number. You'll have some precision issues in the 
computation of e^(i th n) = cos(th n) + i sin(th n), but if you store 
the _result_ to subtract out of your accumulator, and if you use 
fixed-point math, you'll be guaranteed a bit-for-bit match.

If you must use a sliding-Goertzel-like algorithm I suggest you add 
some damping. Use the transfer function

                    whatever
   H(z) = -------------------------
          z^2 - 2 d cos(th) z + d^2

with d ever so slightly less than 1. I'd suggest you use the form of 
(2) but that's your choice. For the subtraction you'll definitely need 
to calculate what to take away from your states; at best you can 
arrange this to be zero on an even sample boundary and just subtract 
some d^nd * x_{n-nd}.

Using a damped form will give you a filter that is not a perfect sinc 
function centered on your frequency of interest; it'll be close to a 
sinc with a little bit of oddball phase shift and imperfect nulls -- 
but if you're just looking for a bandpass filter it'll do.
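A sketch of the leaky version in Python: the same coupled structure with the 
state scaled by d each step and d^nd times the (rotated) old input subtracted, 
so coefficient and rounding errors decay instead of accumulating. The function 
name and the default d are mine and purely illustrative:

```python
import math
from collections import deque

def leaky_sliding(samples, th, nd, d=0.9999):
    """Damped ("leaky") sliding resonator in the coupled form.

    Same idea as the undamped sliding filter, but the state is
    scaled by d (slightly less than 1) each step and d**nd times
    the old input is subtracted, so errors die away rather than
    building up.  The default d is only illustrative.
    """
    c, s = d * math.cos(th), d * math.sin(th)
    g = d ** nd
    g1, g2 = g * math.cos(th * nd), -g * math.sin(th * nd)
    delay = deque([0.0] * nd)
    y1 = y2 = 0.0
    for x in samples:
        old = delay.popleft()         # input from nd samples back
        delay.append(x)
        y1, y2 = c * y1 + s * y2 + x, -s * y1 + c * y2
        y1 -= g1 * old                # subtract d**nd times the old
        y2 -= g2 * old                # input's rotated contribution
        yield y1, y2
```

With d = 1 this reduces to the exact sliding window; with d slightly less than 
1 the window becomes exponentially tapered, which is the imperfect-sinc shape 
described above.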
> Any help greatly appreciated.
>
> Steve
Sorry I didn't respond to this earlier -- I was busy last week when I 
saw this & put it aside "for later". I didn't get to it until Rick's 
replies brought it back to the front on my newsreader.

One final comment: Whenever I see someone propose a sliding average I 
have to ask "is that really the low-pass filter you want to implement?" 
So I'll ask the equivalent question here: Is this really the bandpass 
filter you want to implement?

You should be able to implement a 4th-order IIR bandpass filter with 
only a bit more processor time and much less memory than a sliding 
Goertzel, without the worries about the metastable filter core running 
away with your accuracy. True, the filter will have a different shape, 
most particularly in that it won't have the nulls that a sliding 
Goertzel does, but do you really need the shape?

Similarly, you should be able to implement a FIR bandpass filter. With 
a DSP it should even be without a great sacrifice of processor time 
unless it's quite long -- in which case you can demodulate, decimate, 
and filter with something shorter.

If it were me I'd look to alternatives to make sure that the sliding 
Goertzel was right for my application, and even then I'd strongly 
consider using a leaky sliding Goertzel to keep things safe.

-- 
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com