DSPRelated.com
Forums

selective fft

Started by amara vati September 21, 2004
"Clay Turner" <physics@bellsouth.net> wrote...
> The method I proposed is equiamplitude whereas Reinsch's method is not. And
> I believe this in addition to the orthogonality at low frequency is what
> helps minimize the errors. I predict other structures would be better near
> either 1/4 or 1/2 of the sampling rate.
It would be nice to have an "optimal" Goertzel-like method (or a systematic way to come up with such a method) that for *any* given frequency will achieve O(sqrt(N)) mean error. One option, of course, is to just use the naive DFT formula (which has O(sqrt(N)) mean error if the twiddle factors are accurate), but with an accurate trig. recurrence instead of precomputed twiddle factors. There are trig. recurrences that have O(sqrt(log N)) mean error and require only O(log N) storage, and I suspect they would do the trick.

I tried using the naive DFT formula with Singleton's trig. recurrence (which requires O(1) storage), but the results (for test 2, see earlier) were a bit weird -- the errors jumped back and forth between the O(N) line and the O(sqrt(N)) line for different N's, and I couldn't detect the pattern; maybe there is some fix, though.

In any case, such algorithms (implemented straightforwardly) are much more expensive than Goertzel's algorithm: 6/4 or 6/6 mults/adds per input, vs. 1/2 mults/adds per input for Goertzel.

Cordially,
Steven G. Johnson

PS. You can do even better than O(sqrt(N)) error with the naive DFT, and in fact should be able to get the FFT's O(sqrt(log N)) mean error, by using e.g. cascade summation (with negligible added computational cost and O(log N) storage). However, this loses the nice property of going through the data in order, which is important for many applications. There might instead be some Kahan-like summation trick to get similar accuracy, but of course this adds more computational cost per input.
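For anyone following along, here is a rough sketch (my own, not Steven's code) of the two approaches being compared: Goertzel's second-order recurrence, which costs one real multiply and two real adds per input sample for a single bin, versus the naive single-bin DFT with a twiddle factor computed per sample. The test signal and bin index are arbitrary choices for illustration.

// goertzel_vs_naive.cpp -- illustrative sketch only
#include <complex>
#include <cmath>
#include <cstdio>
#include <vector>

// Single DFT bin X[k] via Goertzel's second-order recurrence:
//   s[n] = x[n] + 2cos(w)*s[n-1] - s[n-2],  X[k] = e^{+jw}*s[N-1] - s[N-2].
std::complex<double> goertzel_bin(const std::vector<double>& x, int k)
{
    const double PI = std::acos(-1.0);
    const int N = (int)x.size();
    const double w = 2.0 * PI * k / N;
    const double coeff = 2.0 * std::cos(w);
    double s1 = 0.0, s2 = 0.0;            // s[n-1], s[n-2]
    for (double xn : x) {                 // one multiply, two adds per input sample
        const double s0 = xn + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return std::polar(1.0, w) * s1 - s2;
}

// Naive single-bin DFT: X[k] = sum_n x[n] e^{-j 2 pi k n / N}.
std::complex<double> naive_bin(const std::vector<double>& x, int k)
{
    const double PI = std::acos(-1.0);
    const int N = (int)x.size();
    std::complex<double> acc(0.0, 0.0);
    for (int n = 0; n < N; ++n)
        acc += x[n] * std::polar(1.0, -2.0 * PI * k * n / N);
    return acc;
}

int main()
{
    const double PI = std::acos(-1.0);
    const int N = 1024, k = 37;           // arbitrary test case
    std::vector<double> x(N);
    for (int n = 0; n < N; ++n)           // tone at bin k plus a small ramp
        x[n] = std::cos(2.0 * PI * k * n / N) + 1e-3 * n;
    const auto g = goertzel_bin(x, k);
    const auto d = naive_bin(x, k);
    std::printf("Goertzel: %g %+gi\n", g.real(), g.imag());
    std::printf("Naive   : %g %+gi\n", d.real(), d.imag());
    std::printf("|diff|  : %g\n", std::abs(g - d));
}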
>"Clay Turner" <physics@bellsouth.net> wrote... >> I predict other structures would be better >> near either 1/4 or 1/2 of the sampling rate. > >It would be nice to have an "optimal" Goertzel-like method (or a >systematic way to come up with such a method) that for *any* given >frequency will achieve O(sqrt(N)) mean error. >
OK, I realize this is a major zombie thread, but I wanted to post some results from some experiments on this topic. If this has already been done, please let me know.

What I did was create a general-purpose application to plug in new resonators (specifically the ones from here: http://www.claysturner.com/dsp/ResonatorTable.pdf). You can plug in resonators in matrix form (for non-staggered update) and execute without having to modify the general algorithm specified here: http://www.claysturner.com/dsp/digital_resonators.pdf. I also implemented the traditional forms of Reinsch as staggered update (a sketch of the Reinsch(0) form follows the example output below).

Besides curiosity, I am motivated by a need to avoid unacceptable errors in another application I'm developing. I don't know that I found anything terribly promising, but I was surprised by some things. Maybe they're bugs in my code. If anyone's interested, I published it here: https://bitbucket.org/hexamer/testgoertzelresonators

A binary is available here: https://bitbucket.org/hexamer/testgoertzelresonators/downloads/TestGoertzelResonators.exe

It is coded to run on a machine with AVX2 (Haswell). If you don't configure FMA3, it may work on pre-AVX2 machines.

1) With the exception of Reinsch(pi), all resonators show poor error growth (looks like N^2) near pi.

2) The matrix form of Reinsch that Clay Turner published in that paper seems to have poor error performance near 0, and "reasonable" (i.e. not N^2) error performance near pi/2, which is contrary to all other publications, including results in this thread. Perhaps because it's a parallel, not staggered, form?

3) The standard staggered forms of Reinsch don't seem to have N^2 error performance near pi/2 as the literature would suggest. Admittedly I haven't tried to fit the data yet.

4) In fact, I haven't yet seen any of the resonators exhibit what looks like N^2 error growth near pi/2.

5) Use of FMA3 does bring a modest performance benefit to Goertzel, but surprisingly seems to have no benefit to numerical error.

6) The Coupled resonator produces normal errors up to pi/2, but then just starts producing strange results from pi/2 to pi.

Example output:
>TestGoertzelResonators.exe --resType=6 --minN=8 --maxN=25 --fine=30
Resonator Type 6 is Reinsch(0)

N,      log(err) theta ~ 0,  log(err) theta ~ pi/2,  log(err) theta ~ pi
256,    -14.1091,            -13.5427,               -12.0575
512,    -13.7318,            -13.019,                -11.4822
1024,   -13.9116,            -12.6285,               -11.2495
2048,   -12.5309,            -11.9725,               -10.5061
4096,   -12.8532,            -11.7525,               -10.3899
8192,   -11.8121,            -11.6945,               -8.96993
16384,  -12.913,             -11.1648,               -8.31467
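For reference, here is a minimal sketch (my own, not taken from the linked repository, and possibly differing in detail from the forms in the papers) of what I mean by the Reinsch(0) staggered update: the Goertzel update s[n] = x[n] + 2cos(w)*s[n-1] - s[n-2] rewritten in terms of the difference u[n] = s[n] - s[n-1], so the coefficient c = 2(cos(w) - 1) = -4 sin^2(w/2) can be computed without cancellation when w is near 0. The finalization is the same as for plain Goertzel.

// reinsch0_bin.cpp -- sketch only, not code from the repository above
#include <complex>
#include <cmath>
#include <cstdio>
#include <vector>

// Reinsch-style staggered update for frequencies near 0:
//   u[n] = u[n-1] + c*s[n-1] + x[n],  s[n] = s[n-1] + u[n],
// which is algebraically equivalent to s[n] = x[n] + 2cos(w)*s[n-1] - s[n-2].
std::complex<double> reinsch0_bin(const std::vector<double>& x, int k)
{
    const double PI = std::acos(-1.0);
    const int N = (int)x.size();
    const double w = 2.0 * PI * k / N;
    const double sh = std::sin(0.5 * w);
    const double c = -4.0 * sh * sh;      // = 2(cos w - 1), no cancellation for small w
    double u = 0.0, s = 0.0;              // u[n-1], s[n-1]
    for (double xn : x) {                 // staggered update
        u += c * s + xn;
        s += u;
    }
    // Same finalization as plain Goertzel: X[k] = e^{+jw}*s[N-1] - s[N-2],
    // where s[N-2] is recovered as s[N-1] - u[N-1].
    return std::polar(1.0, w) * s - (s - u);
}

int main()
{
    const double PI = std::acos(-1.0);
    const int N = 4096, k = 1;            // a bin with theta near 0
    std::vector<double> x(N);
    for (int n = 0; n < N; ++n)
        x[n] = std::sin(2.0 * PI * k * n / N);
    const auto X = reinsch0_bin(x, k);
    // For a pure sine at bin k, expect X[k] close to -j*N/2.
    std::printf("X[%d] = %g %+gi\n", k, X.real(), X.imag());
}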
On Tuesday, 21 September 2004 11:00:31 UTC+5:30, amara vati  wrote:
> which is the best fft method if we want the spectrum only for a selective
> set of points, say only 10-20% of the points in the entire spectrum.
>
> amar
The FFT is highly optimized, with a total cost of O(N log N). Computing the DFT directly at a set of M points costs at least O(N*M) (with some scaling constant). I once tried various algorithms, including Goertzel's, and all of them were slower than the FFT. One way to reduce the time is to reduce the FFT size (from N to, say, N/2) if the indices at which you want the transform are 'special' -- for example, all even, all odd, or all multiples of 4, etc. (see the sketch below).
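As a small illustration (my own sketch, not from the posters above) of the "all even" case: the even-indexed bins of an N-point DFT are exactly the (N/2)-point DFT of the folded sequence x[n] + x[n + N/2], so a half-size FFT suffices. The code below uses a naive O(M^2) DFT only to verify the identity; the test data are arbitrary.

// even_bins_fold.cpp -- sketch of the "special indices" trick
#include <algorithm>
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

using cd = std::complex<double>;

// Plain O(M^2) DFT, used here only to check the identity; in practice
// the half-size transform would be done with an FFT.
std::vector<cd> dft(const std::vector<double>& x)
{
    const double PI = std::acos(-1.0);
    const int M = (int)x.size();
    std::vector<cd> X(M);
    for (int k = 0; k < M; ++k)
        for (int n = 0; n < M; ++n)
            X[k] += x[n] * std::polar(1.0, -2.0 * PI * k * n / M);
    return X;
}

int main()
{
    const int N = 64;
    std::vector<double> x(N);
    for (int n = 0; n < N; ++n)                 // arbitrary test data
        x[n] = std::cos(0.3 * n) + 0.5 * std::sin(1.7 * n);

    // Fold: the even bins of the N-point DFT of x equal the
    // (N/2)-point DFT of x[n] + x[n + N/2].
    std::vector<double> folded(N / 2);
    for (int n = 0; n < N / 2; ++n)
        folded[n] = x[n] + x[n + N / 2];

    const auto Xfull = dft(x);                  // all N bins
    const auto Xhalf = dft(folded);             // only the N/2 even bins

    double maxerr = 0.0;
    for (int k = 0; k < N / 2; ++k)
        maxerr = std::max(maxerr, std::abs(Xfull[2 * k] - Xhalf[k]));
    std::printf("max |X[2k] - Xfolded[k]| = %g\n", maxerr);
}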