Correcting an Important Goertzel Filter Misconception

Rick Lyons, July 6, 2015

Recently I was on the Signal Processing Stack Exchange web site (a question and answer site for DSP people) and I read a posted question regarding Goertzel filters [1]. One of the subscribers posted a reply to the question by pointing interested readers to a Wikipedia web page discussing Goertzel filters [2]. I noticed the Wiki web site stated that a Goertzel filter:

"...is marginally stable and vulnerable to
numerical error accumulation when computed using
low-precision arithmetic and long input sequences."

That statement can often be misinterpreted. Because of that possible stability misconception, and the fact the literature of the Goertzel algorithm doesn't discuss Goertzel filter stability, I decided to write this blog.


Goertzel Filter Descriptions
Typical DSP textbook descriptions of Goertzel filters begin by discussing the network shown in Figure 1(a) that can be used to compute N-point discrete Fourier transform (DFT) X(k) spectral samples [3]–[5].


Figure 1 - Recursive network used to compute a single
DFT spectral sample: (a) implementation;
(b) z-plane pole.

The Figure 1(a) complex resonator's z-domain transfer function is

$$ H_R(z) = {1\over {1 - e^{j2 \pi k/N} z^{-1}}}\tag{1} $$

which, with infinite-precision arithmetic, has a single pole located on the z-plane's unit circle as shown in Figure 1(b).
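To make the Figure 1(a) recursion concrete, here is a minimal sketch (not from the original article) that computes a single DFT bin with the complex resonator y(n) = x(n) + e^(j2πk/N)·y(n-1), running the recursion over the N input samples plus one final zero-valued input, and compares the result against the direct DFT sum. The values of N, k, and the test input are arbitrary choices for illustration.

#define _USE_MATH_DEFINES
#include <math.h>
#include <stdio.h>
#include <complex>
#include <vector>

int main() {
    const int N = 8;   // DFT size (example value)
    const int k = 2;   // bin index (example value)
    std::vector<double> x = {1, 2, 0, -1, 3, 0.5, -2, 1};   // arbitrary test input

    const std::complex<double> W = std::polar(1.0, 2*M_PI*k/N);   // pole e^{j2*pi*k/N}

    // Figure 1(a) recursion: N input samples plus one final zero-valued sample
    std::complex<double> y = 0.0;
    for (int n = 0; n < N; n++) y = x[n] + W*y;
    y = W*y;   // zero-input iteration; y now equals X(k)

    // Direct DFT sum for comparison: X(k) = sum over n of x(n)*e^{-j2*pi*k*n/N}
    std::complex<double> Xk = 0.0;
    for (int n = 0; n < N; n++) Xk += x[n]*std::polar(1.0, -2*M_PI*k*n/N);

    printf("resonator:  %f %+fj\n", y.real(), y.imag());
    printf("direct DFT: %f %+fj\n", Xk.real(), Xk.imag());
    return 0;
}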

While the Figure 1(a) resonator can be used to compute individual N-point DFT spectral samples, it has a major flaw. When finite-precision arithmetic (as in our digital implementations) is used, the network's z-plane pole does not lie exactly on the unit circle. This means the resonator can be unstable and a computed DFT sample value will be incorrect. In the DSP literature the resonator is referred to as "marginally stable."

We show this in Figure 2 for various z-plane poles, in the frequency range of $ 0 \le 2\pi k/N \le \pi /2 $ radians (zero to one quarter of the input sample rate in Hz), when 5-bit binary arithmetic is used.

Figure 2 - First quadrant z-plane pole locations of a
complex resonator using 5-bit arithmetic.

DSP textbook descriptions of Goertzel filters don't discuss the complex resonator's unstable operation. That's because they move on to improve the computational efficiency of the complex resonator by multiplying Eq. (1)'s numerator and denominator by $ (1 - e^{-j2 \pi k/N} z^{-1}) $ as:

$$ H_G(z) = {1\over {1 - e^{j2 \pi k/N} z^{-1}}} \cdot {{1 - e^{-j2 \pi k/N} z^{-1}} \over  {1 - e^{-j2 \pi k/N} z^{-1}} } \tag{2}$$

$$ \qquad \qquad = {{1 - e^{-j2 \pi k/N} z^{-1}} \over {(1 - e^{j2 \pi k/N} z^{-1})(1 - e^{-j2 \pi k/N} z^{-1})}} \tag{3}$$

$$ \qquad \qquad =  {{1 - e^{-j2 \pi k/N} z^{-1}} \over {1 - 2\cos(2 \pi k/N)z^{-1} + z^{-2}}} \tag{4}$$

which is the z-domain transfer function of the standard Goertzel filter. The Eq. (4) Goertzel filter's block diagram, shown in Figure 3(a), has two conjugate poles and a single pole-canceling zero on the z-plane as shown in Figure 3(b).

Figure 3 - Standard Goertzel filter: (a) filter
implementation; (b) z-plane poles and zero.
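As a concrete illustration of Eq. (4) and Figure 3(a), here is a minimal sketch (not from the original article) of a Goertzel filter computing one DFT bin. The efficiency gain over the complex resonator is that the per-sample feedback v(n) = x(n) + 2cos(2πk/N)·v(n-1) - v(n-2) uses only real arithmetic; the complex numerator (the pole-canceling zero) is applied just once at the end. The values of N, k, and the test input are arbitrary example choices.

#define _USE_MATH_DEFINES
#include <math.h>
#include <stdio.h>
#include <vector>

int main() {
    const int N = 8;   // DFT size (example value)
    const int k = 2;   // bin index (example value)
    std::vector<double> x = {1, 2, 0, -1, 3, 0.5, -2, 1};   // arbitrary test input

    const double w    = 2*M_PI*k/N;
    const double coef = 2*cos(w);

    double v1 = 0, v2 = 0;            // v(n-1) and v(n-2)
    for (int n = 0; n < N; n++) {     // real-only feedback, once per input sample
        double v = x[n] + coef*v1 - v2;
        v2 = v1;
        v1 = v;
    }
    double vN = coef*v1 - v2;         // one final zero-input iteration gives v(N)

    // Apply the FIR zero once: X(k) = v(N) - e^{-jw}*v(N-1)
    double re = vN - cos(w)*v1;
    double im =      sin(w)*v1;
    printf("X(%d) = %f %+fj\n", k, re, im);
    return 0;
}

This produces the same X(k) as the complex resonator sketch above, with a single real multiply per input sample in the feedback loop.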

The Goertzel Filter Misconception
The potential misconception, regarding finite-precision arithmetic, is the following:

Misconception:
Because the first ratio in Eq. (2), which is only marginally
stable, times a unity-valued ratio equals Eq. (4), then the
implementation of Eq. (4) must also be only marginally stable.

I'm familiar with this misunderstanding because I was painfully guilty of it myself years ago -- and this is the misconception stated on the Goertzel Wikipedia web site [2].

The Stability of a Goertzel Filter
As it turns out, a Goertzel filter, described by Eq. (4), is guaranteed stable! The filter's poles always lie on the z-plane's unit circle. (Proof of that statement is given in Appendix A.)

So, we can say:

The poles of the Figure 3(a) Goertzel filter always lie
exactly on the z-plane's unit circle making the filter
guaranteed stable regardless of the numerical precision
of its coefficients.

We show this Goertzel filter stability behavior in Figure 4 for various first-quadrant z-plane poles when 5-bit binary filter coefficients are used.

Figure 4 - First quadrant z-plane pole locations of a
Goertzel filter using 5-bit arithmetic.
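That claim is easy to check numerically. The following sketch (not from the original article, and assuming a simple rounding quantizer for the coefficient) quantizes alpha = cos(2πk/N) to 5 bits, as in Figure 4, forms the poles from Eq. (A-4) in Appendix A, and prints their magnitudes, which come out as 1 to within double-precision round-off no matter how coarsely alpha is quantized, so long as |alpha| ≤ 1.

#define _USE_MATH_DEFINES
#include <math.h>
#include <stdio.h>

int main() {
    const int N = 64, bits = 5;                    // example values
    const double step = 1.0/(1 << (bits - 1));     // assumed 5-bit fractional quantization step

    for (int k = 0; k <= N/4; k++) {               // first-quadrant poles, 0 <= 2*pi*k/N <= pi/2
        double alpha  = cos(2*M_PI*k/N);
        double alphaQ = floor(alpha/step + 0.5)*step;   // quantized coefficient
        double re = alphaQ;                             // pole = alphaQ +/- j*sqrt(1 - alphaQ^2), Eq. (A-4)
        double im = sqrt(1 - alphaQ*alphaQ);
        printf("k=%2d  alphaQ=% .5f  |pole|=%.15f\n", k, alphaQ, sqrt(re*re + im*im));
    }
    return 0;
}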

The Limitations of the Goertzel Filter
In finishing this blog I'll mention two important characteristics of a Goertzel filter.

(1) While the Goertzel filter is guaranteed stable, quantized coefficients (a finite number of binary bits) limit the locations where the filter's z-plane poles can be placed on the unit circle. A small number of binary bits used to represent the filter's coefficients limits the precision with which we can specify the filter's resonant frequency.

(2) Quantized coefficients make it difficult to build Goertzel filters that have low resonant frequencies. The lower the desired resonant frequency, the larger the number of coefficient binary bits required (see the short numerical illustration following Figure 6).

We illustrate both of those characteristics in Figure 5 and Figure 6 for 5- and 8-bit coefficients.

Figure 5 - First quadrant z-plane pole locations of a
Goertzel filter using 5-bit arithmetic.

Figure 6 - First quadrant z-plane pole locations of a
Goertzel filter using 8-bit arithmetic.
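Here is a short numerical illustration of point (2) above (not from the original article): if the coefficient 2cos(2πk/N) is stored in a B-bit fixed-point format with an assumed quantization step of 2/2^(B-1), the resonant frequencies the filter can actually hit are acos(coef/2)/(2π) cycles per sample. Listing the first few achievable nonzero frequencies shows how coarse the spacing is near zero frequency, and how it tightens when more coefficient bits are used.

#define _USE_MATH_DEFINES
#include <math.h>
#include <stdio.h>

int main() {
    const int bitsList[] = {5, 8};   // coefficient word widths, matching Figures 5 and 6
    for (int b = 0; b < 2; b++) {
        int bits = bitsList[b];
        double step = 2.0/(1 << (bits - 1));   // assumed quantization step for 2*cos(w) in [-2, 2]
        printf("%d-bit coefficients, smallest achievable nonzero resonant frequencies (cycles/sample):\n", bits);
        for (int i = 1; i <= 4; i++) {
            double coefQ = 2.0 - i*step;       // quantized coefficient values just below 2*cos(0) = 2
            printf("  %.6f\n", acos(coefQ/2)/(2*M_PI));
        }
    }
    return 0;
}

Under this assumed format, the lowest nonzero resonant frequency available with 5-bit coefficients is roughly 0.056 of the sample rate, while 8-bit coefficients reach down to about 0.02.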


Conclusion
Here we showed that Goertzel filters will never be unstable due to z-plane pole placement; their poles always lie exactly on the unit circle regardless of coefficient quantization. However, as pointed out by Olli Niemitalo in the comments below, another form of instability is possible with Goertzel filters. A Goertzel filter can become unstable, for very large values of N, due to the finite-precision arithmetic used in the first feedback accumulator (adder) in Figure 3(a) that produces the v(n) sequence. So the user must take care to ensure that, for large N (in the tens of thousands for 32-bit floating-point numbers), all binary register word widths can accommodate all necessary arithmetic operations.

APPENDIX A – Proof of Goertzel Filter Guaranteed Stability
Goertzel filters, described by Eq. (4), are guaranteed stable. Proof of that statement was shown to me by the skilled DSP engineer Prof. Pavel Rajmic (Brno University of Technology, Czech Republic). Rajmic's straightforward proof of Goertzel filter stability goes like this:

For mathematical convenience, multiply the Goertzel filter's Eq. (4) numerator and denominator by $z^2$ as:

$$  {{1 - e^{-j2 \pi k/N} z^{-1}} \over {1 - 2\cos(2 \pi k/N)z^{-1} + z^{-2}}} \cdot {{z^2} \over {z^2}}  $$

$$  \qquad = {{z^2 - e^{-j2 \pi k/N} z} \over {z^2 - 2\cos(2 \pi k / N)z + 1}}\tag{A-1} $$

If we substitute $ \alpha = \cos(2 \pi k/N) $ into Eq. (A-1), then Eq. (A-1)'s denominator becomes:

$$ z^2 - 2 \alpha z + 1\tag{A-2} $$

The roots of this polynomial are the z-plane locations of the Goertzel filter's poles.

The roots of the polynomial are:

$$ z = {{2 \alpha \pm \sqrt{4 \alpha ^2 - 4}} \over 2} = {{2 \alpha \pm 2\sqrt{\alpha ^2 - 1}} \over 2} = {\alpha \pm \sqrt{\alpha^2 - 1}} \tag{A-3} $$

Because $ \alpha $ is in the range $ -1 \le \alpha \le 1 $, then $ \alpha^2 \le 1 $ and we can rewrite expression (A-3) as:

$$  z = \alpha \pm \sqrt{-(1 - \alpha^2)} = \alpha \pm j \sqrt{1 - \alpha^2}\tag{A-4} $$

The magnitudes of the two complex conjugate poles are:

$$ \lvert z \rvert = \sqrt{\alpha^2 + (1 - \alpha^2)} = \sqrt{1} = 1 \tag{A-5}$$

which is what I set out to show. So, a Goertzel filter's poles always lie exactly on the z-plane's unit circle.


References

[1] Signal Processing Stack Exchange, http://dsp.stackexchange.com/

[2] Goertzel algorithm, https://en.wikipedia.org/wiki/Goertzel_algorithm

[3] S. Mitra, "Digital Signal Processing", Fourth Edition, McGraw Hill Publishing, 2011, pp. 618-620.

[4] J. Proakis and D. Manolakis, "Digital Signal Processing-Principles, Algorithms, and Applications", Third Edition, Prentice Hall, Upper Saddle River, New Jersey, 1996, pp. 480-481.

[5] A. Oppenheim, R. Schafer, and J. Buck, "Discrete-Time Signal Processing", Second Edition, Prentice Hall, Upper Saddle River, New Jersey, 1999, pp. 633-634.


Comment by Aaron45, July 21, 2015
Excellent article! This has definitely sharpened my understanding of Goertzel stability. But I still ponder what can be said about "long input sequences". Are they only of concern in the context of the coefficient quantization issues described above? Looking at this thread: http://www.dsprelated.com/showthread/comp.dsp/2367-3.php, there are clearly some relative accuracy issues.
Comment by Rick Lyons, November 11, 2015
Hi Aaron,
I just now saw your Comment. I don't understand your question of "what can be said of long sequences?" If you can rephrase your question I'll do my best to answer it. I took a look at the thread you referenced in your URL address. There's no way I have the time to plow through the details of that long thread.
Regards,
[-Rick-]
Comment by Aaron45, December 4, 2015
No problem, Rick. It took me a while to see your response. Since then it looks like a fruitful discussion took place. I think Olli was on to what I was asking. Since then, I've also read a lot more about the lack of numerical stability of Goertzel and long input sequences, taking the Gentleman article (http://comjnl.oxfordjournals.org/content/12/2/160.full.pdf) as a base and following citations.

I suspect that most people wouldn't be terribly concerned from a practical standpoint given modest precision and input lengths. But in my case I was dealing with very long input sequences (up to 1e9 samples!) For a while I was looking at pre-decimating with a CIC+FIR as a mitigation. But after further consideration, I realized that the Reinsch modification would always work in my application. The reason is that for very large input sequences, my frequency is well away from pi/2 (where Goertzel performs numerically better than Reinsch). When the frequency is closer to pi/2, the sample size is much smaller, where rounding error accumulation is negligible.
Comment by Olli Niemitalo, November 11, 2015
The Goertzel filter is not numerically stable. It is not unstable due to quantization of coefficients, as you have shown. It is unstable because of accumulation of rounding errors. Let's say the resonator's coefficient for the previous output has a bit depth of 10 bits. Multiplication of an N bit number by an M bit number gives an N+M bit number. So by the 10th iteration the resonator's output already needs a bit depth of about 100 bits. We can't store that, so we begin to throw away bits and accumulate rounding error.

By the way, I noticed some saving opportunities in the Goertzel filter: https://dsp.stackexchange.com/questions/26971/special-case-peak-notch-filter-for-exactly-1-32-sampling-rate
Comment by Rick Lyons, November 11, 2015
Hi Olli.
I want to understand what you're saying here. Can you tell me, what does the phrase "resonator's coefficient for the previous output" mean? I'm trying to figure out what that phrase means relative to my above Figure 3(a).
Regards,
[-Rick-]
Comment by Olli Niemitalo, November 12, 2015
Hi Rick! I mean the coefficient 2cos(2πk/N) of the recursive part of the Goertzel filter: out[t] = in[t] + 2cos(2πk/N)*out[t-1] - out[t-2].
Comment by Rick Lyons, November 13, 2015
Hi Olli,
Ah, OK. You're bringing up both a good and an important implementation topic here. Yes, multiplication of a 10 bit number by a 10 bit number can produce a 20-bit number. But after the 10th Goertzel iteration the accumulated v(n) result may not require a full 100-bit word depth. Accumulator register overflow is not guaranteed. The required v(n) accumulator word depth depends on the magnitudes of the numerical values being summed. When implementing the Goertzel algorithm using fixed-point numbers, the user must do one of two things. (1), make sure the v(n) accumulator can accommodate the number of iterations, N, and the magnitudes of the input signal samples. Or (2), truncate the v(n) accumulator output after each accumulation. The following web page discusses the truncation scheme.
http://www.embedded.com/design/configurable-systems/4008873/2/PRODUCT-HOW-TO-Efficient-Fixed-Point-Implementation-of-the-Goertzel-Algorithm-on-a-Blackfin-DSP
Comment by Olli Niemitalo, November 13, 2015
I did not mean overflow due to input values. Even if a fixed-point implementation of the Goertzel algorithm is fed just a unit impulse followed by zeros, the bit string of the fractional part of v(n) will grow in length every iteration, by about the length of the bit string of the coefficient. So (1) is hardly realistic. I tested a 32-bit floating point implementation using the round-to-nearest mode, k=1, N=50, and got an evolution of the magnitude of y(n) that looks a lot like a random walk. That is because the rounding error is effectively a pseudorandom sequence of ups and downs. Eventually it enters a loop in its finite states. See http://imgur.com/yMQl2Ex Horizontal axis: iteration count, vertical axis: magnitude of y(n).
Comment by Rick Lyons, November 19, 2015
Hi Olli, OK, I'm back on duty. Your plotted output curve is interesting. Can you tell me, what was the input signal you used for your testing? Also, I don't know what "round-to-nearest mode" means with regard to floating point numbers.
[-Rick-]
Comment by Olli Niemitalo, November 24, 2015
Hi Rick, the input signal was just a unit impulse followed by zeros. Floating point rounding mode determines how a result that cannot be exactly represented is rounded. Possible modes for IEEE binary floating point are round to nearest (ties to even), round towards negative infinity, round towards zero, round towards positive infinity. See: https://en.wikipedia.org/wiki/IEEE_floating_point#Rounding_rules

I will paste my code below. It can be compiled with G++, a C++ compiler. If you direct the program's output to a file output.txt, you can plot it in Octave using x = csvread("output.txt"); plot(x(:, 1), x(:, 2));

#define _USE_MATH_DEFINES
#include <math.h>
#include <stdio.h>
#include <fenv.h>

int main() {
    int k = 1;
    int N = 50;
    int numSamples = 3000000;
    fesetround(FE_TONEAREST); // FE_TONEAREST, FE_UPWARD, FE_DOWNWARD, or FE_TOWARDZERO
    float coef = 2*cos(2*M_PI*k/N);
    float outHistory1 = 0, outHistory2 = 0; // v(n-1) and v(n-2)
    float input = 1; // First input value: a unit impulse followed by zeros
    for (int count = 0; count < numSamples; count++) {
        float out = input + coef*outHistory1 - outHistory2; // Goertzel feedback recursion for v(n)
        outHistory2 = outHistory1;
        outHistory1 = out;
        input = 0;
        // Apply the filter's zero to form the complex output y(n), then print its magnitude
        double re = outHistory1 - cos(2*M_PI*k/N)*outHistory2;
        double im = sin(2*M_PI*k/N)*outHistory2;
        printf("%d,%f\n", count, sqrt(re*re + im*im));
    }
    return 0;
}
Comment by Olli Niemitalo, November 24, 2015
Missing includes for the above: math.h, stdio.h, fenv.h
Comment by Rick Lyons, November 27, 2015
Hi Olli,
Perhaps, if you wish, we could continue our conversation using private e-mails.
I'm at R_dot_Lyons_at_ieee_dot_org. It's up to you.
Comment by nilbhat, March 15, 2023

Hi Rick, thanks for the article. It's quite insightful. Can you point me to literature which explains why Figure 1(a) is a true representation of a single-bin DFT? I am not able to comprehend how it is equivalent to a single-bin DFT.

Comment by Rick Lyons, March 16, 2023

Hello nilbhat.

Good for you for asking a good question.

Let’s look at an example of an N = 6 point DFT and focus on the second (k = 1) DFT bin X(1). Now we ask, What is the contribution of the second input sample, x(1), to the final desired X(1) DFT bin value? The standard 6-point DFT equation tells us that the X(1) bin output will comprise six terms added together, right? (Six complex numbers). And in those six terms the contribution of the x(1) input sample will be x(1) times e^-j*2*pi*1/6. (The ‘1’ in the exponent is for k = 1.) OK, keep this in mind.

Looking at this blog’s recursive Figure 1(a) network, think about applying a six-sample input sequence to the network’s input. Next, apply a single zero-valued sample to the network’s input. Now ask yourself, “How many times did the x(1) input sample circulate through the feedback loop and what is the contribution of the x(1) input sample to the final desired y(n) output sample?”

The x(1) input sample circulated through the feedback loop five times and the contribution of the x(1) input sample, to the final y(n) output sample, will be x(1) times e^j*2*pi*1*5/6.

Well, that x(1) times e^j*2*pi*5/6 is exactly equal to the x(1) times e^-j*2*pi*1/6 that we obtained from the standard 6-point DFT equation. So the Figure 1(a) network does indeed compute correct DFT bin output samples.

I hope what I’ve written makes sense. nilbhat, if you have any more questions please let me know.


Comment by nilbhat, March 21, 2023

Hi Rick, thanks for the earlier answer. It makes perfect sense.

Another question I have is: for the Goertzel filter to work, does "k" have to be an integer in equations 1 and 4?

Comment by Rick Lyons, March 22, 2023

Hello nilbhat.

The answer is "No". See the following web page:

https://www.dsprelated.com/showarticle/495.php




Comment by fharris, April 4, 2023

Hello all, the single pole resonator is always imperfect because the pole is located at exp(j*2*pi*f0/fs) = cos(2*pi*f0/fs) + j*sin(2*pi*f0/fs). The real and imaginary parts (cos(theta) + j*sin(theta)) would satisfy cos^2 + sin^2 = 1, a point on the unit circle. The problem is that cos and sin for arbitrary theta are transcendental numbers that require infinite precision to represent. Hence the point is either inside or outside the unit circle but, except for the cardinal directions, is never on the unit circle, and the circulating ordered pairs from the resonator are seen to be either increasing or decreasing for long sequences. If we add an AGC we can overcome the change in amplitude. The AGC can be embedded in a resonator that uses the CORDIC with its growth per iteration suppressed by the AGC. Amazingly good performance. Look at the attached paper.

sdr_2006_DDS_2.pdf 


fred h
