
Show me the numbers

Started by Cedron May 28, 2015
>>Whether the resulting bin value functions (with window function included)
>>are as readily invertible, I simply don't know.  You are welcome to try
>>it.
>>
>>--
>>
>>r b-j   rbj@audioimagination.com
>>
>>"Imagination is more important than knowledge."
>
>Thanks for your input.
>
>Ced
>---------------------------------------
>Posted through http://www.DSPRelated.com

Here is a candidate window function:

w_n = 2 [ cos( Pi/N ) - cos( (n+1/2) 2Pi/N ) ]
    = 4 sin( (n+1) Pi/N ) sin( n Pi/N )

Ced
---------------------------------------
Posted through http://www.DSPRelated.com
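
A quick numerical check of the identity above (a standalone sketch, not
part of the original post):

// Verify that the two forms of the candidate window agree:
//   w_n = 2*( cos(Pi/N) - cos((n + 1/2)*2*Pi/N) )
//       = 4*sin( (n+1)*Pi/N )*sin( n*Pi/N )
#include <cstdio>
#include <cmath>
using namespace std;

int main()
{
   const long N = 32;
   const double pi = 3.14159265358979324;
   double max_diff = 0.0;
   for (long n = 0; n < N; n++)
   {
      double w1 = 2.0*( cos(pi/N) - cos((n + 0.5)*2.0*pi/N) );
      double w2 = 4.0*sin((n + 1)*pi/N)*sin(n*pi/N);
      if (fabs(w1 - w2) > max_diff) max_diff = fabs(w1 - w2);
   }
   printf("max |w1 - w2| over n = 0..N-1: %g\n", max_diff);   // round-off level, ~1e-15
   return 0;
}
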
>On Saturday, May 30, 2015 at 4:54:59 AM UTC-7, Randy Yates wrote:
>...
>> How do you classify an
>> algorithm that gives you "exact" results when no noise is present and a
>> good estimate when it is?
>...
>> Randy Yates
>> Digital Signal Labs
>> http://www.digitalsignallabs.com
>
>"Robust" comes to mind.
>
>Dale B. Dalrymple
That's not what "Robust" means at all.  At least not how I learned it.
Robust means tolerant to perturbations without moving to a different
equilibrium point.  In this context it means insensitive to noise.  Julien
gave an example of an estimator that was exact in the noiseless case, but
not robust at all with noise.

I would think that "unbiased" is closest, but that implies (as I
understand it) that the error distribution remains centered on the
solution as the noise level increases.  Perhaps "baseline unbiased" or
"inherently unbiased" are accurate.

I think Randy asked a very good question, and I would like to know the
answer too.  That's why I repeated it.  This is one of those jargon
things.  Is there a specific term that defines this concept?

Come on, estimation theory experts, please give us an answer.

Ced
---------------------------------------
Posted through http://www.DSPRelated.com
[...snip...]

>In other words, when the peak
>magnitude of the DFT is the wrong bin, the estimators don't have much
>chance of getting things right.
>
I'd like to point out that this doesn't apply to my equation.  It can be
applied on any three-bin set.  Of course, applying it at the peak, where
the SNR is highest, is best.  Given enough noise, it won't work very well
either.  As a practical matter, the argument of the acos call should be
checked to be within the valid range or you will get a NaN (see the short
range-check sketch after this post).

[...snip...]
>
>Awesome job on the paper and simulations.
>
Totally agree.
>
>Eric Jacobsen
>Anchor Hill Communications
>http://www.anchorhill.com
Ced
---------------------------------------
Posted through http://www.DSPRelated.com
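
A minimal sketch of the range check mentioned above (a standalone
illustration, not Cedron's code; the value fed to acos is just a stand-in
for whatever the three-bin formula produces):

// Clamp the acos() argument to its valid domain [-1, 1] so that heavy
// noise cannot push it slightly out of range and produce a NaN.
#include <cstdio>
#include <cmath>
using namespace std;

double safe_acos(double x)
{
   if (x >  1.0) x =  1.0;   // noise can nudge the value just above +1
   if (x < -1.0) x = -1.0;   // ... or just below -1
   return acos(x);
}

int main()
{
   printf("%f\n", safe_acos( 1.0000003));   // prints 0.000000 instead of nan
   printf("%f\n", safe_acos(-1.0000003));   // prints 3.141593 instead of nan
   return 0;
}
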
On Mon, 1 Jun 2015 18:22:29 +0000 (UTC), spope33@speedymail.org (Steve
Pope) wrote:

>Eric Jacobsen <eric.jacobsen@ieee.org> wrote:
>
>>Regarding why the estimators diverge at ~5dB, this is probably due to
>>what is known as the "threshold effect", which is where the magnitude
>>maximizer of the DFT starts to be influenced by noise rather than the
>>peak location of the signal energy.  In other words, when the peak
>>magnitude of the DFT is the wrong bin, the estimators don't have much
>>chance of getting things right.
>
>One could, in simulation, conduct an experiment in which the
>maximizer-selection is genie-aided (that is, it uses side information
>to be always correct even in low SNR) and isolate whether this is the
>cause.
Yes.  It's a well-known effect, and doing the hard selection as you
suggest is sometimes indicated when showing results in a paper, in order
to avoid confusion.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
Cedron <103185@DSPRelated> wrote:

>R_n is defined in (773-4).
I don't see it defined there.  (And ultimately you only use
R1 in your result.)

Steve
>Cedron <103185@DSPRelated> wrote:
>
>>R_n is defined in (773-4).
>
>I don't see it defined there.  (And ultimately you only use
>R1 in your result.)
>
>Steve
It is part of the equations carried forward from the other article.

beta_n = n 2Pi/N            (3)

R_n = exp( -i beta_n )      (4)

R stands for "Root of Unity"; they are the stars of the show from the very
first blog article.  I kept the "one" subscript, even though it wasn't
necessary within this article, to be consistent with the other articles.

I will slip in a change in the narrative to clarify this for sure.  I
appreciate the suggestions.

Side question:  What is the proper name for an estimator that gives an
exact answer in the no noise (non-stochastic) case?

Ced
---------------------------------------
Posted through http://www.DSPRelated.com
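
A small standalone sketch (not from the blog articles) just to make
equations (3) and (4) concrete in code:

// beta_n = n*2*Pi/N and R_n = exp(-i*beta_n): the N-th roots of unity,
// listed here for N = 8.
#include <cstdio>
#include <cmath>
#include <complex>
using namespace std;

int main()
{
   const long N = 8;
   const double twopi = 6.2831853071795865;
   for (long n = 0; n < N; n++)
   {
      double beta_n = n*twopi/N;                                 // equation (3)
      complex<double> R_n = exp(complex<double>(0.0, -beta_n));  // equation (4)
      printf("R_%ld = %+9.6f %+9.6f i\n", n, R_n.real(), R_n.imag());
   }
   return 0;
}
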
On Mon, 01 Jun 2015 16:25:29 -0500, "Cedron" <103185@DSPRelated>
wrote:

>>Cedron <103185@DSPRelated> wrote:
>>
>>>R_n is defined in (773-4).
>>
>>I don't see it defined there.  (And ultimately you only use
>>R1 in your result.)
>>
>>Steve
>
>It is part of the equations carried forward from the other article.
>
>beta_n = n 2Pi/N            (3)
>
>R_n = exp( -i beta_n )      (4)
>
>R stands for "Root of Unity"; they are the stars of the show from the very
>first blog article.  I kept the "one" subscript, even though it wasn't
>necessary within this article, to be consistent with the other articles.
>
>I will slip in a change in the narrative to clarify this for sure.  I
>appreciate the suggestions.
>
>Side question:  What is the proper name for an estimator that gives an
>exact answer in the no noise (non-stochastic) case?
"Estimator." The no-noise or no-distortion or no-interference case is an unrealistic edge case for real applications. So, a practical estimator is always an estimator in the real word, especially when using a small number of frequency-domain samples computed from a finite-time-length vector.
>Ced
>---------------------------------------
>Posted through http://www.DSPRelated.com
Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com

On Monday, June 1, 2015 at 5:44:14 PM UTC-4, Eric Jacobsen wrote:
> On Mon, 01 Jun 2015 16:25:29 -0500, "Cedron" <103185@DSPRelated>
> wrote:
>
> >>Cedron <103185@DSPRelated> wrote:
> >>
> >>>R_n is defined in (773-4).
> >>
> >>I don't see it defined there.  (And ultimately you only use
> >>R1 in your result.)
> >>
> >>Steve
> >
> >It is part of the equations carried forward from the other article.
> >
> >beta_n = n 2Pi/N            (3)
> >
> >R_n = exp( -i beta_n )      (4)
> >
> >R stands for "Root of Unity"; they are the stars of the show from the very
> >first blog article.  I kept the "one" subscript, even though it wasn't
> >necessary within this article, to be consistent with the other articles.
> >
> >I will slip in a change in the narrative to clarify this for sure.  I
> >appreciate the suggestions.
> >
> >Side question:  What is the proper name for an estimator that gives an
> >exact answer in the no noise (non-stochastic) case?
>
> "Estimator."  The no-noise or no-distortion or no-interference case
> is an unrealistic edge case for real applications.  So, a practical
> estimator is always an estimator in the real world, especially when
> using a small number of frequency-domain samples computed from a
> finite-time-length vector.
>
> >Ced
> >---------------------------------------
> >Posted through http://www.DSPRelated.com
>
> Eric Jacobsen
> Anchor Hill Communications
> http://www.anchorhill.com
Cedron, I've mentioned some links to McEachern's Gaussian Ratio method
in a previous post, so I thought I'd add some numbers.  I programmed an
example - then I remembered that I have a tutorial (and derivation)
regarding the method at:

http://fftguru.com/fftguru.com.tutorial4.pdf

By using a Gaussian window on the data, you're creating a series of
Gaussian filters in the frequency domain.  Those filters have a Gaussian
mathematical description.  By using the natural log of the FFT's magnitude
outputs, the frequency of an input sinusoid that exists between any two
consecutive bins can be solved for.  So it's 'exact' in the same sense of
the word that you have been using.  The difference from your approach is
that the window being used is Gaussian instead of rectangular, and the
McEachern method uses two bins instead of three.  And although Julian
mentions truncating the tails of the Gaussian, I have never had to do that
with either the McEachern method, or the three bin method used by Gasior
at CERN.  And the method being used by Julian appears to be quite
different than either of them.

Here's a program that I cooked up last night.  It runs under the Dev-C++
compiler.  I hope it prints out properly (Courier New font):

// for comp.dsp: Program to estimate frequency and amplitude of a
// real sinusoid based on McEachern's Gaussian Ratio method, as modified
// by Kevin McGee at fftguru.com - 1) generate input sinusoid, 2) compute
// Gaussian window, 3) multiply data record by Gaussian window,
// 4) take fft, 5) compute amplitude squared, 6) compute natural logs,
// 7) estimate frequency and amplitude.

#include <cstdio>
#include <cstdlib>
#include <iostream>
#include <cmath>
using namespace std;

void fft_recur(long, double* r, double* i);   // prototype declaration

int main (int nNumberofArgs, char* pszArgs[ ])   // arguments for Dev C++ I/O
{
   const long N = 32;
   double sample_rate = 32., r[N] = {0.}, i[N] = {0.}, w[N] = {0.};
   long n;
   double t, twopi = 6.2831853071795865;
   double F, fbin, amp, power, c;

   // generate input sinusoid [STEP 1]
   for (n = 0; n < N; n++)
   {
      t = twopi*n/sample_rate;
      r[n] = 1.*cos(7.5*t);
   } // end for

   // compute Gaussian window [STEP 2]
   double alpha = 6.0;
   for (n = -N/2; n < N/2; n++)
   {
      w[n + N/2] = exp(-2.*(n*alpha/N)*(n*alpha/N));
   } // end for

   // multiply input data by window points [STEP 3]
   for (n = 0; n < N; n++)
   {
      r[n] = w[n]*r[n];   // we're overwriting r[n] with the windowed r[n]
   } // end for

   // take fft [STEP 4]
   fft_recur(N, r, i);   // note: fft is not scaled

   // compute amplitude squared [STEP 5]
   for (n = 0; n < N/2+1; n++)
   {
      r[n] = r[n]*r[n] + i[n]*i[n];   // r[n] contains amplitude squared
   } // end for

   // compute natural log of FFT amplitude outputs [STEP 6]
   for (n = 0; n < N/2+1; n++)
   {
      r[n] = .5*log( r[n] );   // r[n] is now the natural log of the amplitude
   } // end for               //   of FFT bin 'n': .5*log(A squared) = log(A)

   // compute frequency and amplitude estimates [STEP 7]
   cout << "\n\nfrequency and amplitude estimation results\n\n";
   c = 8.*(alpha/twopi)*(alpha/twopi);
   for (n = 2; n < N/2-2; n++)
   {
      fbin = n + .5 + .5*c*(r[n+1] - r[n]);
      F = (sample_rate/N)*fbin;
      amp = sqrt(c)*exp(r[n] + ((fbin - n)*(fbin - n))/c)*(2.0*sqrt(twopi/2.)/N);
      printf("%2d\t%15.12f\t%15.12f\n", n, F, amp);
   } // end for

   system ("PAUSE");
   return 0;
} // end main

//******************** fft_recur ***********************
void fft_recur(long N, double *r, double *i)
{
   long h, i1, j = 0, k, i2 = N/2, l1, l2 = 1;
   double c = -1.0, s = 0.0, t1, t2, u1, u2;

   for (h = 0; h < N-2; h++)   // *** bit reverse starts here ***
   {
      if (h < j)
      {
         t1 = r[h]; r[h] = r[j]; r[j] = t1;
         t2 = i[h]; i[h] = i[j]; i[j] = t2;
      }
      k = i2;
      while (k <= j)
      {
         j = j - k;
         k = k/2;
      }
      j = j + k;
   } //****** bit reverse done ******

   for (k = 1; k < N; k = k*2)
   {
      l1 = l2;
      l2 = l2*2;
      u1 = 1.0;
      u2 = 0.0;
      for (j = 0; j < l1; j++)
      {
         for (h = j; h < N; h = h + l2)
         {
            i1 = h + l1;
            t2 = (r[i1] - i[i1])*u2;
            t1 = t2 + r[i1]*(u1 - u2);
            t2 = t2 + i[i1]*(u1 + u2);
            r[i1] = r[h] - t1;
            i[i1] = i[h] - t2;
            r[h] = r[h] + t1;
            i[h] = i[h] + t2;
         } // end for over h
         t1 = u1*c - u2*s;
         u2 = u1*s + u2*c;
         u1 = t1;
         //x = u1 - u2;   y = u1 + u2;
      } // end for over j
      s = -sqrt((1.0 - c)/2.0);
      c = sqrt((1.0 + c)/2.0);
   } // end for over k
} // end fft *******************************************

The program prints out the frequency and amplitude estimates based on
adjacent FFT output bins.  In the above, N is 32, sample rate is 32, the
Gaussian beamwidth is 6.0, and the input frequency is 7.5, so the output
estimates of bin 7 are the correct ones (the bin 7 estimate is calculated
using the natural log of FFT output bins 7 and 8).  Here are the numbers
for the zero noise case (bin number, frequency estimate, amplitude
estimate):

 7      7.500000000560  1.000000000016

Not too bad.

The zero or low noise case is actually useful if you have a strong
interfering sinusoid masking the presence of a weaker one.  In that case,
you can estimate the strong one and subtract it out of your overall
signal, after which you can estimate the weaker one.  Of course, you also
have to have a phase estimate of the stronger tone in order to do that,
but that's usually not a problem if you have a good frequency and
amplitude estimate.  But I'm getting ahead of myself - first, some notes
on the above program.

I've been working with the Gaussian ratio technique ever since I first saw
it published in Electronic Design News back in 1994.  It is indeed an
'exact' technique.  There are equations that describe what a Gaussian
filter looks like in the frequency domain as a function of frequency.
Because a sinusoid existing between two adjacent FFT bins contributes to
both of them, you can 'exactly' solve for its frequency, in terms of the
sample rate, FFT size and Gaussian beamwidth.

And note that the beamwidth is a variable - it can be changed depending on
the signal(s) and requirements that you're working with.  For instance, in
the above program, I use a beamwidth of 6.0, which I would expect to work
extremely well over a certain frequency range.  I'd use a lower number if
I wanted less accuracy, but a broader range.  Or, I might estimate using
two different beamwidths, one for each of two ranges.  There are a lot of
possibilities, given the number of variables under control of the user
(e.g. beamwidth, sample rate, N).

I have a variation of the above technique that I derived based on two
arbitrarily spaced DFT bins.  It's really useful in some situations.

Kevin McGee
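
For readers who want to see where the STEP 7 formula in the program above
comes from, here is a condensed sketch of the usual log-parabola argument
(a summary in the program's notation, not a quote from the linked
tutorial).  The Gaussian window exp(-2*(n*alpha/N)^2) has a Gaussian-shaped
transform, so for a tone at fractional bin position fbin the log magnitude
of the windowed DFT is, ideally,

ln|X[k]| = C - (k - fbin)^2 / c,        with  c = 8*(alpha/2Pi)^2

Differencing two adjacent bins removes the unknown constant C:

ln|X[n+1]| - ln|X[n]| = [ (n - fbin)^2 - (n + 1 - fbin)^2 ] / c
                      = ( 2*fbin - 2*n - 1 ) / c

Solving for fbin gives

fbin = n + 1/2 + (c/2) * ( ln|X[n+1]| - ln|X[n]| )

which is exactly the line  fbin = n+.5+.5*c*(r[n+1]-r[n]);  in STEP 7,
since after STEP 6 the array r[] holds the natural logs of the bin
magnitudes.
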
kevinjmcee  <kevinjmcgee@netscape.net> wrote:

>Cedron, I've mentioned some links to McEachern's Gaussian Ratio method
>in a previous post, so I thought I'd add some numbers.  I programmed an
>example - then I remembered that I have a tutorial (and derivation)
>regarding the method at:
>
>http://fftguru.com/fftguru.com.tutorial4.pdf
>
>By using a Gaussian window on the data, you're creating a series of
>Gaussian filters in the frequency domain.  Those filters have a Gaussian
>mathematical description.  By using the natural log of the FFT's
>magnitude outputs, the frequency of an input sinusoid that exists
>between any two consecutive bins can be solved for.  So it's 'exact' in
>the same sense of the word that you have been using.  The difference
>from your approach is that the window being used is Gaussian instead of
>rectangular, and the McEachern method uses two bins instead of three.
>And although Julian mentions truncating the tails of the Gaussian, I
>have never had to do that with either the McEachern method, or the three
>bin method used by Gasior at CERN.
How can you apply a Gaussian window to a block of data without
truncating the tails of the window?

(Periodic extension perhaps?  And so when do you truncate that?)

S.

On Monday, June 1, 2015 at 11:58:57 PM UTC-4, Steve Pope wrote:
> kevinjmcee <kevinjmcgee@netscape.net> wrote:
>
> >Cedron, I've mentioned some links to McEachern's Gaussian Ratio method
> >in a previous post, so I thought I'd add some numbers.  I programmed an
> >example - then I remembered that I have a tutorial (and derivation)
> >regarding the method at:
> >
> >http://fftguru.com/fftguru.com.tutorial4.pdf
> >
> >By using a Gaussian window on the data, you're creating a series of
> >Gaussian filters in the frequency domain.  Those filters have a Gaussian
> >mathematical description.  By using the natural log of the FFT's
> >magnitude outputs, the frequency of an input sinusoid that exists
> >between any two consecutive bins can be solved for.  So it's 'exact' in
> >the same sense of the word that you have been using.  The difference
> >from your approach is that the window being used is Gaussian instead of
> >rectangular, and the McEachern method uses two bins instead of three.
> >And although Julian mentions truncating the tails of the Gaussian, I
> >have never had to do that with either the McEachern method, or the three
> >bin method used by Gasior at CERN.
>
> How can you apply a Gaussian window to a block of data without
> truncating the tails of the window?
>
> (Periodic extension perhaps?  And so when do you truncate that?)
>
> S.
My mistake - I'm not truncating in the sense that I'm setting coefficients
to zero at either end of the window.  And the window coefficients do
indeed go from non-zero to non-zero, so, yes, it is actually truncated at
the ends.

But I have used Gaussian windows that were of such narrow beamwidth that,
for all practical purposes, they went to zero at the ends (when the
coefficient was too small to be represented by the number system being
used).

Kevin McGee
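
A tiny standalone sketch (not from the post) of how fast those end
coefficients shrink with the beamwidth: for the window
w[n] = exp(-2*(n*alpha/N)^2) used in the program earlier, the value at
n = +/- N/2 is exp(-alpha^2/2), independent of N.

// Print the Gaussian window's end coefficient exp(-alpha^2/2) for a few
// beamwidths; for alpha = 6 it is already about 1.5e-8.
#include <cstdio>
#include <cmath>
using namespace std;

int main()
{
   for (double alpha = 4.0; alpha <= 12.0; alpha += 2.0)
   {
      printf("alpha = %4.1f   end coefficient = %g\n",
             alpha, exp(-alpha*alpha/2.0));
   }
   return 0;
}
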