Here are some more test results. The test conditions are similar to the ones presented in the "Show me the numbers" thread, with the key difference being that these tests use a complex signal while those used a real signal. Candan's formulas were developed for the complex case; the formula presented in my blog article was developed for the real case. Although the real and complex cases are similar, they are not identical, so a formula which is exact in one will not be exact in the other. This comparison includes my real-case formula (inexact in this complex context), Candan's 2013 revision, and two exact complex-case formulas derived by me and not published: one is a 2 bin version, the other a 3 bin version, and both are exact in the noiseless case. Ten sample points were used, with 10,000 runs per row, and all formulas shared the same DFT bins. The chart shows error values. Because of the high quality of the results, all values have been multiplied by a thousand. For each formula, the left column is the average and the right column is the standard deviation.
Target Noise Level = 0.000

Freq    Dawg Real        Dawg 2 Bin       Dawg 3 Bin       Candan 2013
----    -------------    -------------    -------------    -------------
3.0      0.000  0.000     0.000  0.000     0.000  0.000     0.000  0.000
3.1      0.000  0.000     0.000  0.000     0.000  0.000     0.000  0.000
3.2      0.000  0.000     0.000  0.000     0.000  0.000     0.000  0.000
3.3      0.000  0.000     0.000  0.000    -0.000  0.000     0.000  0.000
3.4      0.000  0.000    -0.000  0.000    -0.000  0.000    -0.000  0.000
3.5     -0.000  0.000     0.000  0.000     0.000  0.000     0.000  0.000
3.6      0.000  0.000     0.000  0.000     0.000  0.000     0.000  0.000
3.7      0.000  0.000     0.000  0.000     0.000  0.000     0.000  0.000
3.8     -0.000  0.000    -0.000  0.000    -0.000  0.000    -0.000  0.000
3.9     -0.000  0.000    -0.000  0.000    -0.000  0.000    -0.000  0.000

Target Noise Level = 0.001

Freq    Dawg Real        Dawg 2 Bin       Dawg 3 Bin       Candan 2013
----    -------------    -------------    -------------    -------------
3.0     -0.002  0.160    -0.002  0.225    -0.001  0.167    -0.001  0.167
3.1     -0.005  0.170    -0.005  0.186    -0.005  0.171    -0.005  0.171
3.2      0.002  0.186     0.002  0.161     0.002  0.181     0.002  0.181
3.3      0.001  0.211     0.001  0.141     0.001  0.198     0.001  0.198
3.4      0.001  0.248    -0.001  0.131    -0.000  0.223    -0.000  0.223
3.5     -0.003  0.294    -0.002  0.127    -0.003  0.252    -0.003  0.252
3.6      0.003  0.351    -0.001  0.130     0.003  0.290     0.003  0.290
3.7      0.005  0.438    -0.000  0.142     0.004  0.343     0.004  0.343
3.8     -0.006  0.548    -0.000  0.161    -0.005  0.413    -0.005  0.413
3.9      0.007  0.707     0.001  0.188     0.003  0.498     0.003  0.498

Target Noise Level = 0.010

Freq    Dawg Real        Dawg 2 Bin       Dawg 3 Bin       Candan 2013
----    -------------    -------------    -------------    -------------
3.0     -0.009  1.601     0.000  2.235    -0.010  1.673    -0.010  1.673
3.1     -0.021  1.689    -0.013  1.869    -0.025  1.705    -0.025  1.705
3.2      0.017  1.840     0.004  1.613     0.008  1.794     0.008  1.794
3.3      0.001  2.125    -0.006  1.418    -0.015  1.982    -0.015  1.982
3.4      0.065  2.486     0.009  1.312     0.051  2.204     0.050  2.205
3.5      0.044  2.931     0.017  1.277     0.050  2.518     0.049  2.518
3.6     -0.007  3.567    -0.006  1.320    -0.003  2.924    -0.004  2.924
3.7      0.036  4.377    -0.018  1.428     0.020  3.452     0.019  3.452
3.8      0.064  5.481     0.003  1.616     0.014  4.112     0.013  4.112
3.9     -0.153  6.997    -0.039  1.890    -0.118  4.981    -0.120  4.981

Target Noise Level = 0.100

Freq    Dawg Real        Dawg 2 Bin       Dawg 3 Bin       Candan 2013
----    -------------    -------------    -------------    -------------
3.0      0.098 15.910    -0.074 22.467    -0.053 16.715    -0.053 16.714
3.1     -0.048 16.886    -0.063 18.868    -0.034 17.026    -0.036 17.025
3.2      0.111 18.636    -0.073 16.047     0.102 18.202     0.095 18.201
3.3      0.001 21.007    -0.116 14.315    -0.173 19.660    -0.185 19.659
3.4      0.196 24.869    -0.084 13.211     0.079 22.247     0.060 22.245
3.5      0.466 29.700     0.129 12.745     0.242 25.526     0.211 25.522
3.6     -0.235 35.678     0.075 12.866    -0.371 29.350    -0.424 29.346
3.7      1.416 44.395     0.179 14.354     0.449 34.600     0.365 34.593
3.8      1.188 55.484     0.280 16.155     0.305 41.388     0.167 41.372
3.9      2.433 70.928     0.024 18.579     0.194 49.663    -0.036 49.636

Here is a summary of the formulas. The derivation of my 3 Bin Real can be found in my blog article titled "Exact Frequency Formula for a Pure Real Tone in a DFT". Note the specificity of the term "Real Tone" in the title. My two "new" complex formulas will likely be the subject of a future blog article. I actually derived them a while ago.

Candan's derivation can be found in his paper titled "Analysis and Further Improvement of Fine Resolution Frequency Estimation Method From Three DFT Samples". I used the version of his formulas found in Jacobsen's paper titled "A Brief Examination of Current and a Proposed Fine Frequency Estimator Using Three DFT Samples". I simplified the formula by cancelling a factor of Pi/N when combining equations (2) and (3).
First Root of Unity

  r = e^( -i 2Pi/N )

Shared Vectors

  W = ( -1, 1 + r, -r )
  Z = ( Z_{k-1}, Z_k, Z_{k+1} )

Cedron 3 Bin Real

  b_k = cos( k 2Pi/N )
  WB  = ( -b_{k-1}, (1+r) b_k, r b_{k+1} )
  f   = (N/2Pi) acos[ (WB*Z)/WZ ]

Cedron 3 Bin Complex

  q = ( -r, 1 + r, -1 )Z / WZ
  f = k + (N/2Pi) atan2[ Imag( q ), Real( q ) ]

Cedron 2 Bin Complex

  q = ( -Z_k + Z_{k+1} ) / ( -Z_k + r Z_{k+1} )
  f = k + (N/2Pi) atan2[ Imag( q ), Real( q ) ]

Candan 2013

  q = ( 1, 0, -1 )Z / ( -1, 2, -1 )Z
  f = k + atan( Real( q ) tan(Pi/N) )

There are some interesting patterns in the data.

First, it appears that my 3 Bin Complex and Candan's may be mathematically identical. I have not proved this yet. Any differences in their results could be attributed to precision limitations.

Second, the 2 bin formula seems to outperform the 3 bin version in the region between the bins. Near the bins, the 3 bin formula seems to work better. In these tests, the 3 bin set is always anchored on bin 3. In practice, the values above 3.5 would be replaced by anchoring the 3 bin set on bin 4. From Jacobsen's data, it appears he did this. So when comparing the 2 Bin and the 3 Bin formulas, only the values between 3.0 and 3.5 are relevant.

Third, I am a little surprised, and very pleased, at how well the formula for the real case did in the complex case. This is consistent with the results obtained independently by Jacobsen and with Julien's excellent paper. The differences between the two cases are going to be most pronounced near the DC and Nyquist bins.

Fourth, the standard deviation swamps the averages. All these estimators appear to be unbiased.

Fifth, as in the real valued signal case, the standard deviations seem to be proportional to the RMS of the noise. The omitted columns from the last report are the RMS of the signal, the average RMS of the noise, and the standard deviation of the RMS of the noise. The RMS of the signal is 1.000, the RMS of the noise is near the target, and the std. dev. is fairly low with 10,000 runs.
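For anyone who wants to play along at home, here is a minimal NumPy sketch of the two complex formulas above. The helper names are mine, the tone is noiseless, and N = 10 with the bins anchored on k = 3 to match the tables:

```python
import numpy as np

def complex_tone_bins(f, N, k):
    # DFT bins k-1, k, k+1 of a noiseless complex tone, f in cycles per frame
    n = np.arange(N)
    Z = np.fft.fft(np.exp(2j * np.pi * f * n / N))
    return Z[k - 1], Z[k], Z[k + 1]

def cedron_2bin_complex(Zk, Zkp1, k, N):
    # q = ( -Z_k + Z_{k+1} ) / ( -Z_k + r Z_{k+1} ),  f = k + (N/2Pi) arg(q)
    r = np.exp(-2j * np.pi / N)
    q = (-Zk + Zkp1) / (-Zk + r * Zkp1)
    return k + (N / (2 * np.pi)) * np.arctan2(q.imag, q.real)

def cedron_3bin_complex(Zkm1, Zk, Zkp1, k, N):
    # q = ( -r, 1+r, -1 )Z / ( -1, 1+r, -r )Z,  f = k + (N/2Pi) arg(q)
    r = np.exp(-2j * np.pi / N)
    q = (-r * Zkm1 + (1 + r) * Zk - Zkp1) / (-Zkm1 + (1 + r) * Zk - r * Zkp1)
    return k + (N / (2 * np.pi)) * np.arctan2(q.imag, q.real)

N, k = 10, 3
results = {}
for f in [3.0, 3.1, 3.3, 3.5, 3.9]:
    Zm, Z0, Zp = complex_tone_bins(f, N, k)
    results[f] = (cedron_2bin_complex(Z0, Zp, k, N),
                  cedron_3bin_complex(Zm, Z0, Zp, k, N))
    print(f, results[f])   # both estimates recover f to machine precision
```

Both estimators recover the test frequencies to machine precision, consistent with the zero rows of the noiseless table.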
The noise is the same random function as before, applied in two dimensions, over a square and not a circle. So, it is still a crappy noise model, but it did the job.

As with my frequency formula for the real case, if anyone has seen either of my two complex case formulas elsewhere, I would appreciate being told about it. Julien has graciously agreed to include the new formulas in his analysis.

Ced
---------------------------------------
Posted through http://www.DSPRelated.com
Show me some more numbers
Started by ●June 4, 2015
Reply by ●June 5, 2015
Crickets?

> f = k + (N/2Pi) atan2[ Imag( q ), Real( q ) ]

  f = k + (N/2Pi) ln( q ) / i

I wanted to give the alternative form of this equation for both the 2 Bin and 3 Bin Complex formulas.

> Candan 2013
>
>   q = ( 1, 0, -1 )Z / ( -1, 2, -1 )Z
>   f = k + atan( Real( q ) tan(Pi/N) )

Nobody caught the mistake in this formula. I got it right in the test code, but wrong in the write-up. It should be:

  f = k + (N/Pi) atan( Real( q ) tan(Pi/N) )

After wading through a bunch of gobbledygook, I was able to prove that the Candan 2013 formula is indeed exact in the complex case. I was able to do it without using any series solutions. Since exact means exact, Candan 2013 and my 3 Bin Complex formula are indeed mathematically equivalent, which explains the closely matching results.

Ced
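Both points are easy to check numerically. A quick NumPy sketch, assuming a noiseless complex tone so that |q| = 1 and the log form applies:

```python
import numpy as np

N, f, k = 10, 3.7, 3
n = np.arange(N)
Z = np.fft.fft(np.exp(2j * np.pi * f * n / N))   # noiseless complex tone
r = np.exp(-2j * np.pi / N)

# 2 Bin Complex: atan2 form versus the log form. In the noiseless case q
# lies on the unit circle, so ln(q)/i is purely real and equals arg(q).
q = (-Z[k] + Z[k + 1]) / (-Z[k] + r * Z[k + 1])
f_atan2 = k + (N / (2 * np.pi)) * np.arctan2(q.imag, q.real)
f_log = k + (N / (2 * np.pi)) * (np.log(q) / 1j)   # real part is the estimate

# Candan 2013 with the corrected N/Pi factor
qc = (Z[k - 1] - Z[k + 1]) / (-Z[k - 1] + 2 * Z[k] - Z[k + 1])
f_candan = k + (N / np.pi) * np.arctan(qc.real * np.tan(np.pi / N))

print(f_atan2, f_log.real, f_candan)   # all three give 3.7
```

With noise present |q| drifts off the unit circle, so the log form picks up a small imaginary part; taking only its real part recovers the atan2 estimate.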
Reply by ●June 5, 2015
Here is another run of the same test using a real valued signal with real valued noise. This time the results weren't so good, so the numbers are multiplied by 100 instead of 1000.

Target Noise Level = 0.000

Freq    Dawg Real        Dawg 2 Bin       Dawg 3 Bin       Candan 2013
----    -------------    -------------    -------------    -------------
3.0      0.000  0.000     0.000  0.000     0.000  0.000     0.000  0.000
3.1      0.000  0.000    -0.854  0.000     0.956  0.000     0.956  0.000
3.2      0.000  0.000    -4.580  0.000     0.722  0.000     0.721  0.000
3.3      0.000  0.000    -8.282  0.000    -1.010  0.000    -1.013  0.000
3.4      0.000  0.000    -9.031  0.000    -3.333  0.000    -3.335  0.000
3.5      0.000  0.000    -5.807  0.000    -4.646  0.000    -4.646  0.000
3.6      0.000  0.000    -0.365  0.000    -3.828  0.000    -3.832  0.000
3.7     -0.000  0.000     4.139  0.000    -1.303  0.000    -1.315  0.000
3.8     -0.000  0.000     5.469  0.000     1.114  0.000     1.105  0.000
3.9     -0.000  0.000     3.455  0.000     1.656  0.000     1.655  0.000

Target Noise Level = 0.001

Freq    Dawg Real        Dawg 2 Bin       Dawg 3 Bin       Candan 2013
----    -------------    -------------    -------------    -------------
3.0      0.000  0.023     0.001  0.032     0.000  0.024     0.000  0.024
3.1      0.000  0.024    -0.855  0.027     0.956  0.024     0.956  0.024
3.2      0.000  0.026    -4.580  0.024     0.722  0.026     0.721  0.026
3.3     -0.000  0.030    -8.282  0.022    -1.010  0.028    -1.013  0.028
3.4     -0.000  0.035    -9.031  0.020    -3.333  0.031    -3.335  0.031
3.5     -0.000  0.042    -5.807  0.019    -4.646  0.035    -4.646  0.035
3.6     -0.000  0.051    -0.365  0.019    -3.828  0.041    -3.832  0.041
3.7     -0.001  0.062     4.139  0.021    -1.303  0.049    -1.315  0.049
3.8      0.000  0.078     5.469  0.023     1.113  0.059     1.105  0.059
3.9     -0.002  0.098     3.454  0.027     1.655  0.071     1.653  0.071

Target Noise Level = 0.010

Freq    Dawg Real        Dawg 2 Bin       Dawg 3 Bin       Candan 2013
----    -------------    -------------    -------------    -------------
3.0     -0.000  0.225     0.003  0.320     0.000  0.236     0.000  0.236
3.1     -0.001  0.239    -0.852  0.267     0.955  0.242     0.955  0.242
3.2     -0.003  0.263    -4.582  0.240     0.720  0.256     0.719  0.256
3.3      0.004  0.306    -8.279  0.219    -1.005  0.280    -1.008  0.280
3.4     -0.004  0.357    -9.031  0.198    -3.339  0.310    -3.341  0.310
3.5     -0.005  0.414    -5.809  0.186    -4.652  0.346    -4.652  0.346
3.6     -0.006  0.507    -0.365  0.191    -3.833  0.408    -3.838  0.408
3.7     -0.002  0.609     4.136  0.205    -1.303  0.479    -1.315  0.479
3.8      0.010  0.770     5.473  0.234     1.118  0.586     1.109  0.586
3.9      0.017  0.979     3.455  0.270     1.665  0.709     1.663  0.709

Target Noise Level = 0.100

Freq    Dawg Real        Dawg 2 Bin       Dawg 3 Bin       Candan 2013
----    -------------    -------------    -------------    -------------
3.0      0.007  2.254     0.020  3.188     0.000  2.360     0.000  2.360
3.1     -0.002  2.385    -0.852  2.690     0.940  2.410     0.940  2.409
3.2     -0.008  2.645    -4.592  2.390     0.707  2.564     0.704  2.564
3.3     -0.027  3.039    -8.306  2.171    -1.048  2.779    -1.053  2.779
3.4      0.064  3.572    -8.997  2.001    -3.294  3.078    -3.299  3.078
3.5      0.052  4.246    -5.805  1.865    -4.639  3.498    -4.644  3.497
3.6      0.044  4.990    -0.374  1.893    -3.824  4.012    -3.838  4.011
3.7      0.021  6.172     4.098  2.052    -1.356  4.863    -1.385  4.860
3.8      0.269  7.865     5.478  2.356     1.168  5.941     1.130  5.936
3.9      0.245 10.190     3.460  2.739     1.609  7.167     1.560  7.157

Notice that the formulas developed for a complex valued signal do not do very well at all with a real valued signal. Also notice that the way the noise distribution is transformed through the formulas stays qualitatively the same. That is, for the three bin cases it increases as it moves away from the centered configuration, and for the two bin case it is lowest when the frequency is halfway between the two bins.

More comprehensive noise testing of these formulas and more can be found at Julien's site:

http://www.tsdconseil.fr/log/scriptscilab/festim/index-en.html

He has also posted his SciLab code and instructions for using the code he developed for the comparisons.

Ced
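A quick sketch of where those fixed offsets in the zero-noise table come from: applying a complex-case formula (Candan 2013, corrected form) to a noiseless real tone leaves the bias caused by the negative-frequency image. The zero-phase cosine below is my own choice of test signal, but the size of the offset lines up with the -4.646 entry (values x100) in the zero-noise table:

```python
import numpy as np

N, f, k = 10, 3.5, 3
n = np.arange(N)
Z = np.fft.fft(np.cos(2 * np.pi * f * n / N))   # real tone, zero phase, no noise

# Candan 2013 (corrected form), which is exact for a complex tone
q = (Z[k - 1] - Z[k + 1]) / (-Z[k - 1] + 2 * Z[k] - Z[k + 1])
f_est = k + (N / np.pi) * np.arctan(q.real * np.tan(np.pi / N))

bias = f_est - f
print(f_est, bias)   # about 3.4535, a bias of roughly -0.046 bins
```

The error here is deterministic, not noise: the second spectral peak of the real tone leaks into bins 2 through 4 and skews the estimate.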
Reply by ●June 6, 2015
I might be a bit late, but for what it's worth here is a derivation of a 2 bin real tone formula:

http://vicanek.de/dsp/FreqFromTwoBins.pdf

It is similar in spirit to Cedron's 3 bin formula; however, I tried to streamline the math. I haven't tested it with noise present.
Reply by ●June 6, 2015
On Friday, June 5, 2015 at 2:27:27 PM UTC-7, Cedron wrote:

> Here is another run of the same test using a real valued signal with real
> valued noise. This time the results weren't so good so the numbers are
> multiplied by 100 instead of 1000.
> ...
> Notice that the formulas developed for a complex valued signal do not do
> very well at all with a real valued signal.

The accuracy of this observation will vary from the 3rd bin of 10 to, say, the 30th bin of 100. Controlling the regions where these differences occur is one of the reasons to use non-rectangular windows. Table 1 in

J.C. Burgess: Digital spectrum analysis of periodic signals
J. Acoust. Soc. Am., Vol. 58, No. 3, September 1975

gives some examples of the size and range of the effect for different transform sizes.

> Also notice that the way the
> noise distribution is transformed through the formulas stays qualitatively
> the same. That is, for the three bin cases it increases as it moves away
> from the centered configuration, and for the two bin it is lowest when the
> frequency is halfway between the two bins.
> ...
> Ced

If you don't fix the "anchoring" you might at least plot for frequencies from 2.5 to 3.5 so that half your table entries aren't crap. Calculation over a symmetric region also makes it easier to see if the underlying assumptions are justified.

It is useful when you wish to make comparisons, to use the same noise signal in every case. Then equivalent algorithms will give equivalent results to numerical accuracy. It would also save noise calculation time if you upgrade to a realistic noise generator with greater computational load.

Dale B. Dalrymple
Reply by ●June 6, 2015
dbd <d.dalrymple@sbcglobal.net> wrote:

> It is useful when you wish to make comparisons, to use the same noise
> signal in every case. Then equivalent algorithms will give equivalent
> results to numerical accuracy. It would also save noise calculation time
> if you upgrade to a realistic noise generator with greater computational
> load.

This is very important. Similarly, when sweeping over an SNR range in a simulation, one should use the same noise signal, added in but at different amplitudes, to compare SNR's.

You could in theory use different noise signals across algorithms and SNR ranges and get accurate results eventually, but they will converge much much more slowly.

Steve
Reply by ●June 6, 2015
> On Friday, June 5, 2015 at 2:27:27 PM UTC-7, Cedron wrote:
>> Here is another run of the same test using a real valued signal with real
>> valued noise. This time the results weren't so good so the numbers are
>> multiplied by 100 instead of 1000.
>> ...
>> Notice that the formulas developed for a complex valued signal do not do
>> very well at all with a real valued signal.
>
> The accuracy of this observation will vary from the 3rd bin of 10 to, say,
> the 30th bin of 100. Controlling the regions where these differences occur
> is one of the reasons to use non-rectangular windows. Table 1 in
>
> J.C. Burgess: Digital spectrum analysis of periodic signals
> J. Acoust. Soc. Am., Vol. 58, No. 3, September 1975
>
> gives some examples of the size and range of the effect for different
> transform sizes.

We hashed this issue out pretty well in the Matlab beginner thread. If the sample count (and bin count) were upped to a hundred, the correct bin for the signal would still be three. It is correct that a complex signal will be more accurate at bin 30 for a 30.5 cycles per frame signal than at bin 3 for a 3.5 cycles per frame signal. In either case, the unwindowed results of a real signal formula are going to be better than the windowed or unwindowed results of a complex signal formula.

> If you don't fix the "anchoring" you might at least plot for frequencies
> from 2.5 to 3.5 so that half your table entries aren't crap. Calculation
> over a symmetric region also makes it easier to see if the underlying
> assumptions are justified.

I don't consider seeing how the function behaves just outside its intended range as crap. I think it is useful.

When comparing a two bin formula against a three bin formula, considering only the intended range, one of them is going to have to be re-anchored. I'll do that just for you in this next run.

I am evaluating Martin Vicanek's formula from the link he provided in this thread. It looks good so far. It's a mighty fine day outside weatherwise, so this won't get done till this evening at the earliest.

I will do as you wish; would you prefer 2.5 to 3.5 or 3.0 to 3.9?

> It is useful when you wish to make comparisons, to use the same noise
> signal in every case. Then equivalent algorithms will give equivalent
> results to numerical accuracy. It would also save noise calculation time
> if you upgrade to a realistic noise generator with greater computational
> load.
>
> Dale B. Dalrymple

All the formulas use the same DFT bins calculated for the same signal for each run. I thought I said that pretty clearly. I have proven that my 3 bin complex formula and Candan's 2013 are mathematically equivalent, so any differences you see are due to numerical precision limitations.

I don't get your last sentence. How would a greater computational load save calculation time? The program runs in a few seconds, so making it take a little longer is not a big deal. I think the noise generation I have is adequate for the job. I am happy to let others, as Julien did, make their own comparisons.

I am still looking to see if my three bin complex or two bin complex formulas have been done before. At this point I am pretty confident in saying that my three bin real signal formula was done first by me.

The bottom line in these two runs in this thread is: if you have a complex signal, use the complex signal formulas, and if you have a real signal, use the real signal formulas. You can draw the same conclusion from Julien's analysis.

Ced
Reply by ●June 6, 2015
> dbd <d.dalrymple@sbcglobal.net> wrote:
>
>> It is useful when you wish to make comparisons, to use the same noise
>> signal in every case. Then equivalent algorithms will give equivalent
>> results to numerical accuracy. It would also save noise calculation time
>> if you upgrade to a realistic noise generator with greater computational
>> load.
>
> This is very important. Similarly, when sweeping over an SNR
> range in a simulation, one should use the same noise signal,
> added in but at different amplitudes, to compare SNR's.
>
> You could in theory use different noise signals across algorithms and
> SNR ranges and get accurate results eventually, but they will
> converge much much more slowly.
>
> Steve

Alright, I misinterpreted what Dale said. You want me to use the same set of noise additions for every frequency and every noise level.

I'm not sure it's that important. I'm doing 10,000 runs per row, so the results should be pretty close to the theoretical statistical values. Multiple runs confirm this, as the numbers don't vary much.

Standard deviation and RMS are the same when your average is zero. So the proportionality between the noise levels and the resulting standard deviations makes perfect sense. There are enough runs to see that this relationship holds even if the values don't jump by exactly a factor of 10 between each noise level.

The formulas are all working from the same signals, so the side by side comparisons are still valid.

Ced
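The zero-mean identity mentioned above is easy to demonstrate numerically. A toy check, unrelated to the estimators themselves:

```python
import numpy as np

rng = np.random.default_rng(42)
e = rng.standard_normal(100_000)
e = e - e.mean()   # force the error sample to have exactly zero mean

rms = np.sqrt(np.mean(e ** 2))   # root mean square of the errors
std = np.std(e)                  # sqrt(mean((e - mean)^2)), population form

print(rms, std)   # identical when the mean is zero
```

With a nonzero mean the two quantities split apart: rms^2 = std^2 + mean^2, which is why unbiased estimators let you read RMS error straight off the standard deviation column.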
Reply by ●June 6, 2015
Cedron <103185@DSPRelated> wrote:

> Pope wrote:
>
>> dbd <d.dalrymple@sbcglobal.net> wrote:
>>
>>> It is useful when you wish to make comparisons, to use the same noise
>>> signal in every case. Then equivalent algorithms will give equivalent
>>> results to numerical accuracy. It would also save noise calculation
>>> time if you upgrade to a realistic noise generator with greater
>>> computational load.
>>
>> This is very important. Similarly, when sweeping over an SNR
>> range in a simulation, one should use the same noise signal,
>> added in but at different amplitudes, to compare SNR's.
>>
>> You could in theory use different noise signals across algorithms and
>> SNR ranges and get accurate results eventually, but they will
>> converge much much more slowly.
>
> Alright, I misinterpreted what Dale said. You want me to use the same set
> of noise additions for every frequency for every noise level.

Well... were we working on a project together, that is what I would want. :)

> I'm not sure it's that important. I'm doing 10,000 runs per row, so the
> results should be pretty close to the theoretical statistical value.
> Multiple runs confirm this as the numbers don't vary much.

If you run it until it converges and all you are interested in is a statistically accurate result, then you're fine. See below for other possible cases of interest.

> Standard deviation and RMS are the same when your average is zero. So the
> proportionality between the noise levels and the resulting standard
> deviations makes perfect sense.
>
> The formulas are all working from the same signals so the side by side
> comparisons are still valid.

A run size of 10,000 might be, in most situations, good for finding the SNR operating point at which an algorithm has a 0.1% error rate, that is, an estimate of the SNR at which the algorithm fails to make an adequate estimate 0.1% of the time. (Where "adequate" means the downstream system using the estimate functions, as opposed to does not function.)

Now, 0.1% would be a reasonable spec for the marginal contribution of a frequency estimator to an overall 1% packet-error rate spec in a receiver (this is a typical spec for a wireless device). With the run size of 10,000 and a 0.1% error rate, you have 10 errors out of 10,000 simulated estimates in the run, and you can state with some confidence that the SNR at which this occurs is your operating point per the above spec. (I hope you are still following me, I know I ramble sometimes.)

Now suppose that in addition to testing whether your algorithm meets spec, you are comparing two algorithms. Suppose the Cedron algorithm, at a single SNR near the operating point, exhibited 10 errors out of 10,000, and a second proposed algorithm exhibited 12 errors. Suppose for the sake of argument that on the 9,988 datapoints for which the second algorithm obtained the correct answer, the Cedron algorithm also got the correct answer. [*]

So we now have 12 datapoints at which the second algorithm failed, and on those 12 the Cedron algorithm failed only 10 times, getting the other 2 correct. Can we assert that Cedron outperforms the second algorithm? Well, if both algorithms were presented with identical noise patterns, you can assert this. Whereas if the two algorithms were presented with different noise patterns, the assertion is much weaker -- the 12 vs. 10 failure counts could just be a random effect of the different noise patterns.

Furthermore, had you used identical noise, you could analyze in detail the two events in which Cedron succeeded and the second algorithm failed, and perhaps obtain more insight into why Cedron is better. With non-identical noise, you can't even perform this analysis.

So, in my view, for the purposes of asserting the type of things you're trying to assert, you would be on much much firmer ground applying the same noise patterns to competing algorithms.

Applying the same noise pattern at every SNR, for similar reasons, gets you an accurate curve faster. The curve will more closely intersect your actual operating point with a shorter run size.

Steve

[*] This "sake of argument" assumption is not true in general, but for those cases similar arguments can be constructed.
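The variance argument above can be illustrated with a toy pair of estimators; the mean/median pair below is a stand-in for two competing algorithms, not one of the thread's frequency formulas:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials, sigma = 10, 10_000, 0.1

diff_common, diff_indep = [], []
for _ in range(trials):
    # Common noise: both estimators see the identical realization.
    y = 1.0 + sigma * rng.standard_normal(N)
    diff_common.append(y.mean() - np.median(y))

    # Independent noise: each estimator sees its own realization.
    ya = 1.0 + sigma * rng.standard_normal(N)
    yb = 1.0 + sigma * rng.standard_normal(N)
    diff_indep.append(ya.mean() - np.median(yb))

print(np.std(diff_common), np.std(diff_indep))
```

Because the two estimates are positively correlated under shared noise, the paired difference has a much smaller spread, so a given run size can resolve a much smaller performance gap between the two algorithms.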
Reply by ●June 6, 2015
On Saturday, June 6, 2015 at 12:05:21 PM UTC-7, Cedron wrote:
...
> We hashed this issue out pretty well in the Matlab beginner thread. If
> the sample count (and bin count) were upped to a hundred, the correct bin
> for the signal would still be three. It is correct that a complex signal
> will be more accurate at bin 30 for a 30.5 cycles per frame signal than
> bin 3 for a 3.5 cycle per frame signal.

For the same signal duration, increasing the sample frequency and transform size will separate the two components of the real signal and reduce the "self interference" of a real signal with a complex estimation algorithm. Windowing can accomplish the same result. Note that the most accurate estimators will be designed for the window applied.

A transform size of 10 is rarely used in instrumentation. Values in the hundreds are common. Your choice makes a real signal an unrealistic error source for people who might mistakenly apply your tables to how things work in the real world.

> In either case, the unwindowed results of a real signal formula is going
> to be better than the windowed or not version of a complex signal
> formula.

This statement does not take into account the general cases where transform sizes are chosen based on signal characteristics beyond just frequency, where there is interference as well as noise, windows are chosen to cope with interference, and algorithms are designed to match the windows. You are still a long ways from the real world.

You persist in making strong statements about regions you have not yet explored. When that leads to false claims, don't be surprised if people talk.

>> If you don't fix the "anchoring" you might at least plot for frequencies
>> from 2.5 to 3.5 so that half your table entries aren't crap. Calculation
>> over a symmetric region also makes it easier to see if the underlying
>> assumptions are justified.
>
> I don't consider seeing how the function behaves just outside its intended
> range as crap.

Crap is claiming to implement an algorithm as published and then posting the results of something else.

> I think it is useful. When comparing a two bin against a
> three bin, considering only the intended range, one of them is going to
> have to be re-anchored. I'll do that just for you in this next run.

Please make your tables correct for everyone.

> I am evaluating Martin Vicanek's formula from the link he provided in this
> post. It looks good so far. It's a mighty fine day outside weatherwise
> so this won't get done till this evening at the earliest.
>
> I will do as you wish, would you prefer 2.5 to 3.5 or 3.0 to 3.9?

It doesn't matter which. It does matter that it is done consistently with your description.
...
> Ced

Dale B. Dalrymple