On Thursday, February 16, 2017 at 12:35:58 PM UTC-8, Cedron wrote:
> [...snip...]
>
> >>If they want the best, people should be using my formula.
> ...
This is the first obvious fallacy in the post.
> ...
> "Best" is something that can only be determined on a measurement of one
> dimension.
That's two. As you proceed to detail, there are many dimensions, all of which can be considered. That this is comp.dsp says that the current context involves digital computation, and you have serious misconceptions about the value of these dimensions in generating solutions by digital computation.
>There are many dimensions that a frequency formula can be
> evaluated on regardless of application. They include:
>
> * Robustness. Resistance to noise and other signals in the DFT.
>
> * Accuracy.
>
> * Precision.
>
> * Execution Footprint.
> A. Execution Speed
> B. Memory requirements
>
> * Comprehensibility of the math
I have never seen an operational requirement or system design request for proposal that included this.
>
> * Jacobsen's perception of computational complexity
I think this is a point of great misunderstanding in the attempted communication here. I think there are very different interpretations of computational complexity and how to weight it. I expect Eric and any other practicing engineer to consider more than just calculation counts and accuracy.
>
> It is true that which solution is best is application dependent, which is
> then based on weightings of criteria like those I just listed.
Application is another problematic word here. To a practicing engineer, the application is not just the algorithm, but also the implementation environment. Is the system to be designed from scratch? Will the accuracy fit in the available processors' speeds and bit widths? Will it fit in available RAM and ROM? Will it require a different math library? These are a few of the concerns in DSP engineering.
> ...
>
> So which common misconceptions are you still clinging to?
>
> * An "exact" computation of frequency based on FTs is not possible
In comp.dsp we deal with discrete Fourier transforms, not Fourier transforms. Fourier transforms can only be symbolically manipulated. For the DFT of actual signals, in theory or practice, we first have to sample, which cannot be done exactly. After that, even symbolic manipulation cannot be exact, nor can practical implementations.
>
> * The Fourier Transform is itself an estimator
If you want to talk DFT, you'll have to give a definition of "estimator" so we can see if it is relevant in comp.dsp.
>
> * There is a "time-frequency uncertainty principle"
>
There isn't just one, there are many. I don't use one. Can you find the definition of one that anyone has used in comp.dsp lately?
> * Frequencies can't be determined for tones with wavelengths longer than
> the sample duration
>
> * Two tones with frequencies closer than the bin spacing can't be resolved
> using a DFT
>
I'm not aware of anyone suggesting these last two around here, and they would have been talked about, and not particularly favorably. I think these have to be considered two more "alternate" fallacies.
> So, if you mean by "don't respond well" that I refuse to accept your
> proclamations just based on your say so, well that's quite true.
>
> Please feel free to list what you consider my misconceptions. I try to
> back all my proclamations with both theory and numerical examples so it
> makes it difficult for them to be misconceptions.
I've tried to do so here. In particular, with regard to the lack of relevance of your proclamations to the context of comp.dsp.
>
> [...snip...]
>
> >>Again, I want to make the point that if a little bit more calculation in
> >>the frequency equation allows you to reduce the size of the DFT then a
> >>relatively huge amount of calculations is saved.
In real world signal processing instrumentation the size of the DFTs calculated is selected based on signal characteristics like number of signal components and their separations, noise levels, non-stationarity and the characteristics of the channels that signals have propagated through (SONAR, RADAR, Comms). These aren't important in a theoretical signal processing context, but this is comp.dsp where these things are important.
> ...
> >>Finally, closed form is not synonymous with exact. These are the exact
> >>solutions I am aware of:
> >...
>
> The meaning of mathematically exact is pretty well known. The meaning of
> closed form is a little more nebulous, but it is still pretty well
> understood. Saying that any recent innovations don't really matter is
> rather offhandedly disparaging, in my opinion.
I think comp.dsp is about digital signal processing solutions. It has been for decades, and that's not "off hand".
In comp.dsp both "mathematically exact" and "closed form" have zero weight in selecting "solutions". They are merely quaint starting points that this community quickly abandons to meet the constraints of the real world where we seek solutions.
> ...
Dale B. Dalrymple
Reply by Cedron●February 16, 2017
[...snip...]
>>If they want the best, people should be using my formula. I refer you to
>>Figure 4 in Julien's comparison paper and the data below. Mine beats
>>Candan's in every sense except number of computations where it is roughly
>>comparable.
>
>First, "best" depends on the requirements of a particular application.
>What's "best" in one application won't be "best" in another.
>
"Best" is something that can only be determined on a measurement of one
dimension. There are many dimensions that a frequency formula can be
evaluated on regardless of application. They include:
* Robustness. Resistance to noise and other signals in the DFT.
* Accuracy.
* Precision.
* Execution Footprint.
A. Execution Speed
B. Memory requirements
* Comprehensibility of the math
* Jacobsen's perception of computational complexity
It is true that which solution is best is application dependent, which is
then based on weightings of criteria like those I just listed.
>Clearly you like yours, and clearly others may not agree and this has
>been pretty clear in the past. I don't see any need to rehash any of
>that.
>
What does "liking" it have to do with its worth in doing the job? The
bilinear matrix form does have a certain appeal in that it shows that the
formula works by taking a correctly weighted average of cosine values to
achieve the cosine of the unknown frequency. What makes it special though
is that it is the first formula that can claim to be exact in the no noise
pure tone case.
[...snip...]
>>If one doesn't mind a little approximation, which shouldn't matter in most
>>practical applications, a LUT solution for arccos is going to be faster
>>than a complex division. I'm not sure if an acos call might not be faster
>>either, but I don't feel like testing it.
>
>Again, every application and implementation has its own constraints
>and requirements, so "shouldn't matter" isn't useful in that context.
>
If you don't see the rich irony of your questioning the applicability of
approximations in real life solutions, I feel sorry for you.
[...snip...]
>>
>>I don't think of a frequency calculation as an interpolation. That
>>concept comes from thinking of the discrete case as a Dirac delta version
>>of the continuous case. I know that's how DSP is mostly taught, but it is
>>also a source of many misconceptions, such as leakage being defined by the
>>sinc function.
>
>It's been clear that some of us think you've been living with a number
>of misconceptions regarding some related issues. It's also been
>clear that you don't respond well to the discussions, so I'll not
>rehash them.
>
Oh, let's do hash some out.
So which common misconceptions are you still clinging to?
* An "exact" computation of frequency based on FTs is not possible
* The Fourier Transform is itself an estimator
* There is a "time-frequency uncertainty principle"
* Frequencies can't be determined for tones with wavelengths longer than
the sample duration
* Two tones with frequencies closer than the bin spacing can't be resolved
using a DFT
So, if you mean by "don't respond well" that I refuse to accept your
proclamations just based on your say so, well that's quite true.
Please feel free to list what you consider my misconceptions. I try to
back all my proclamations with both theory and numerical examples so it
makes it difficult for them to be misconceptions.
[...snip...]
>>Again, I want to make the point that if a little bit more calculation in
>>the frequency equation allows you to reduce the size of the DFT then a
>>relatively huge amount of calculations is saved.
The sound of crickets? This little observation makes all your arguments
about "computational complexity" moot, and you don't say a thing?
>
>>Finally, closed form is not synonymous with exact. These are the exact
>>solutions I am aware of:
>
>> Mine: 2 bin real, 3 bin real, 2 bin complex, 3 bin complex (actually I
>>have n bin versions of both real and complex)
>>
>> Martin Vicanek: 2 bin real
>>
>> Michael Plet: two versions of 2 bin real
>>
>> Candan 2013: 3 bin complex, but you would know it was exact from his
>>derivation. It is mathematically equivalent to my 3 bin complex which
>>preceded his.
>
>No, Candan preceded you by quite a bit.
>
Publishing it perhaps. I was never as interested in the complex signal
case as the real signal case. My interest in DSP came from the math I did
stemming from my hobby of recording local music and writing my own
recording program. I have had my formulas for nearly ten years.
There was actually a great deal of progress made this past weekend. I
developed a new 2 bin formula based on a new approach. It slightly
outperforms Vicanek's 2 bin formula in the single tone white noise testing
I have been doing. Vicanek derived a 3 bin formula which does better than
my original 3 bin formula. A short while later, I extended my new 2 bin
formula to a 3 bin formula, which once again does slightly better than his.
I have not tested these formulas with the presence of other tones, or
different noise conditions, so which one is "best overall", at least in
terms of robustness and accuracy, is still an open question.
Vicanek has updated his derivation paper at the link given earlier in this
thread to include the 3 bin case. I think it is an excellent piece of
analysis and deserves much broader recognition.
>>Do you know of any others that you can add to the list?
>
>We probably don't have the same idea about what you mean, so the answer may
>be yes or no, but it doesn't really matter, anyway.
>
The meaning of mathematically exact is pretty well known. The meaning of
closed form is a little more nebulous, but it is still pretty well
understood. Saying that any recent innovations don't really matter is
rather offhandedly disparaging, in my opinion.
So you said you knew of newer closed form formulas, please provide them.
I'll tell you if they are exact or not. In your own words they will be
either "niche applications or corner cases". They can't be "basic
optimizations" because those "don't seem to have changed much for a
while." If you knew of any that preceded Julien's paper, I'm sure you
would have mentioned them to Julien so he could include them in his
comparison. Vicanek's exact 2 bin solution is newer than that.
One of the major things you seem to miss when you blur the distinction
between having exact equations and approximations is that math is
qualitative as well as quantitative. These two aspects can also be thought
of in terms of being theoretical and applied. A mathematical equation is
a descriptive statement. When an equation is exact, it says something
about the nature of what is being described. When it is an approximation,
it is merely describing its behavior.
In the case of a frequency formula for a pure tone in a DFT, I know of
only one pathway to get to an exact answer and know that you have an exact
answer. First you have to develop equations for what the bin values are
for a pure tone. Then you solve a system of those equations for the
frequency. As far as I know, Vicanek, Plet, and I are the only
people to have done so. Each of us derived a different form of the bin
value equations, but they are all correct. From these, each of us has
derived one or more exact frequency formulas.
So please do list the closed form equations you mentioned you knew about.
One sure way to rule them out as exact is if they fail to have zero
error in the noiseless case like your estimator and Candan's do in the
numerical examples I have provided in this thread. To be fair to Candan,
his formula was derived for the complex signal case which is not what I am
testing here.
Ced
---------------------------------------
Posted through http://www.DSPRelated.com
Reply by ●February 10, 2017
On Fri, 10 Feb 2017 11:51:12 -0600, "Cedron" <103185@DSPRelated>
wrote:
>>>
>>>Really?
>>>
>>>Considering that P, S, Q, R, P*S, Q*R, and N/(2*PI) can all be
>>>precomputed, the actual evaluation requires fewer computations, and is
>>>less complicated, than your estimator.
>>
>>My estimator isn't really the benchmark these days unless one is
>>really looking for low computational complexity or especially only
>>cares about low computational complexity for low SNR cases. Candan's
>>estimator is still what most people use as a research reference, and
>>it's still pretty hard to beat in the broad sense.
>>
>
>Your estimator isn't as low in computational complexity as you imply. Yours
>(Candan's and mine as well) requires a complex division. A complex
>division takes 8 multiplies, 1 divide, and 3 sums. Plet's (and my 2 bin
>real) uses a real division and that's a big advantage.
>
>If they want the best, people should be using my formula. I refer you to
>Figure 4 in Julien's comparison paper and the data below. Mine beats
>Candan's in every sense except number of computations where it is roughly
>comparable.
First, "best" depends on the requirements of a particular application.
What's "best" in one application won't be "best" in another.
Clearly you like yours, and clearly others may not agree and this has
been pretty clear in the past. I don't see any need to rehash any of
that.
>For low SNR cases it is obvious that doing extra calculations for extra
>precision is meaningless.
>
>
>>The arccos is the computational hurdle, but clearly not
>>insurmountable. Even in a LUT it may be undesirable in constrained
>>applications, though. Requirements often get in the way.
>>
>
>If one doesn't mind a little approximation, which shouldn't matter in most
>practical applications, a LUT solution for arccos is going to be faster
>than a complex division. I'm not sure if an acos call might not be faster
>either, but I don't feel like testing it.
Again, every application and implementation has its own constraints
and requirements, so "shouldn't matter" isn't useful in that context.
>>>In addition, since it is an exact solution, not an approximation which
>>>does better with larger N, in any given application a DFT with a smaller N
>>>can probably be used greatly reducing the number of overall calculations
>>>that have to be made.
>>
>>Remember that many people don't think "exact" is relevant to
>>interpolations made from estimates. The real issue is just whether
>>the performance vs computational complexity tradeoff works for a
>>particular application space of interest. No single estimator does
>>all things for all people. I was just asking what Michael thinks the
>>advantage of this particular estimator might be. Several existing
>>estimators these days are derived from closed-form derivations, so
>>that's not really a distinction.
>>
>
>I don't think of a frequency calculation as an interpolation. That
>concept comes from thinking of the discrete case as a Dirac delta version
>of the continuous case. I know that's how DSP is mostly taught, but it is
>also a source of many misconceptions, such as leakage being defined by the
>sinc function.
It's been clear that some of us think you've been living with a number
of misconceptions regarding some related issues. It's also been
clear that you don't respond well to the discussions, so I'll not
rehash them.
>Again, I want to make the point that if a little bit more calculation in
>the frequency equation allows you to reduce the size of the DFT then a
>relatively huge amount of calculations is saved.
>Finally, closed form is not synonymous with exact. These are the exact
>solutions I am aware of:
> Mine: 2 bin real, 3 bin real, 2 bin complex, 3 bin complex (actually I
>have n bin versions of both real and complex)
>
> Martin Vicanek: 2 bin real
>
> Michael Plet: two versions of 2 bin real
>
> Candan 2013: 3 bin complex, but you would know it was exact from his
>derivation. It is mathematically equivalent to my 3 bin complex which
>preceded his.
No, Candan preceded you by quite a bit.
>Do you know of any others that you can add to the list?
We probably don't have the same idea about what you mean, so the answer may
be yes or no, but it doesn't really matter, anyway.
---
This email has been checked for viruses by Avast antivirus software.
https://www.avast.com/antivirus
Reply by Cedron●February 10, 2017
>
> Candan 2013: 3 bin complex, but you would know it was exact from his
>derivation.
I should proofread more carefully. That should say "wouldn't know it".
Ced
---------------------------------------
Posted through http://www.DSPRelated.com
Reply by Cedron●February 10, 2017
>>
>>Really?
>>
>>Considering that P, S, Q, R, P*S, Q*R, and N/(2*PI) can all be
>>precomputed, the actual evaluation requires fewer computations, and is
>>less complicated, than your estimator.
>
>My estimator isn't really the benchmark these days unless one is
>really looking for low computational complexity or especially only
>cares about low computational complexity for low SNR cases. Candan's
>estimator is still what most people use as a research reference, and
>it's still pretty hard to beat in the broad sense.
>
Your estimator isn't as low in computational complexity as you imply. Yours
(Candan's and mine as well) requires a complex division. A complex
division takes 8 multiplies, 1 divide, and 3 sums. Plet's (and my 2 bin
real) uses a real division and that's a big advantage.
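To make the operation count explicit, here is a minimal sketch of the textbook complex division being counted (the function name and layout are illustrative, not from any post in the thread):

```python
# Textbook complex division (a+bi)/(c+di) using only real arithmetic:
#   denom = c^2 + d^2            -> 2 multiplies, 1 add
#   inv   = 1/denom              -> 1 divide
#   re    = (a*c + b*d) * inv    -> 3 multiplies, 1 add
#   im    = (b*c - a*d) * inv    -> 3 multiplies, 1 add
# Total: 8 multiplies, 1 divide, 3 sums, matching the count above.

def complex_div(a, b, c, d):
    """Return (re, im) of (a+bi)/(c+di)."""
    denom = c * c + d * d
    inv = 1.0 / denom
    re = (a * c + b * d) * inv
    im = (b * c - a * d) * inv
    return re, im

# Cross-check against Python's built-in complex division.
z = complex(*complex_div(1.0, 2.0, 3.0, 4.0))
assert abs(z - (1 + 2j) / (3 + 4j)) < 1e-12
```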
If they want the best, people should be using my formula. I refer you to
Figure 4 in Julien's comparison paper and the data below. Mine beats
Candan's in every sense except number of computations where it is roughly
comparable.
For low SNR cases it is obvious that doing extra calculations for extra
precision is meaningless.
>The arccos is the computational hurdle, but clearly not
>insurmountable. Even in a LUT it may be undesirable in constrained
>applications, though. Requirements often get in the way.
>
If one doesn't mind a little approximation, which shouldn't matter in most
practical applications, a LUT solution for arccos is going to be faster
than a complex division. I'm not sure if an acos call might not be faster
either, but I don't feel like testing it.
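A LUT arccos of the kind mentioned here might be sketched as follows (the table size of 1024 and the linear interpolation are my illustrative assumptions, not a design from this thread):

```python
import math

TABLE_SIZE = 1024  # illustrative; size it to the application's error budget
_ACOS_TABLE = [math.acos(-1.0 + 2.0 * i / TABLE_SIZE)
               for i in range(TABLE_SIZE + 1)]

def acos_lut(x):
    """Approximate arccos(x) for x in [-1, 1] by table lookup with
    linear interpolation between adjacent entries."""
    t = (x + 1.0) * 0.5 * TABLE_SIZE   # map [-1, 1] onto [0, TABLE_SIZE]
    i = min(int(t), TABLE_SIZE - 1)
    frac = t - i
    return _ACOS_TABLE[i] + frac * (_ACOS_TABLE[i + 1] - _ACOS_TABLE[i])
```

Note that accuracy degrades near x = +/-1 where the slope of arccos blows up; a serious implementation would use a denser or non-uniform table there.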
>>In addition, since it is an exact solution, not an approximation which
>>does better with larger N, in any given application a DFT with a smaller N
>>can probably be used greatly reducing the number of overall calculations
>>that have to be made.
>
>Remember that many people don't think "exact" is relevant to
>interpolations made from estimates. The real issue is just whether
>the performance vs computational complexity tradeoff works for a
>particular application space of interest. No single estimator does
>all things for all people. I was just asking what Michael thinks the
>advantage of this particular estimator might be. Several existing
>estimators these days are derived from closed-form derivations, so
>that's not really a distinction.
>
I don't think of a frequency calculation as an interpolation. That
concept comes from thinking of the discrete case as a Dirac delta version
of the continuous case. I know that's how DSP is mostly taught, but it is
also a source of many misconceptions, such as leakage being defined by the
sinc function.
Again, I want to make the point that if a little bit more calculation in
the frequency equation allows you to reduce the size of the DFT then a
relatively huge amount of calculations is saved.
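As a rough illustration of that trade (assuming an FFT cost proportional to N*log2(N); the function and the numbers are illustrative only):

```python
import math

def fft_cost(N):
    # Operations proportional to N*log2(N) for a radix-2 FFT
    # (constant factors omitted; this is only for comparison).
    return N * math.log2(N)

# Halving the DFT size from 1024 to 512 points:
saved = fft_cost(1024) - fft_cost(512)
# The savings dwarf the handful of extra operations a more
# accurate frequency formula costs per estimate.
```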
Finally, closed form is not synonymous with exact. These are the exact
solutions I am aware of:
Mine: 2 bin real, 3 bin real, 2 bin complex, 3 bin complex (actually I
have n bin versions of both real and complex)
Martin Vicanek: 2 bin real
Michael Plet: two versions of 2 bin real
Candan 2013: 3 bin complex, but you would know it was exact from his
derivation. It is mathematically equivalent to my 3 bin complex which
preceded his.
Do you know of any others that you can add to the list?
Ced
===========================================================
The sample count is 100
and the run size is 100
The phase value is 0.785398
Errors are shown at 100x actual value
Target Noise Level = 0.000
Freq Candan2013 CD3Bin NewMP2Bin
---- ------------- ------------- -------------
3.0 0.000 0.000 0.000 0.000 -0.000 0.000
3.1 -0.165 0.000 -0.000 0.000 -0.000 0.000
3.2 -0.509 0.000 0.000 0.000 -0.000 0.000
3.3 -0.711 0.000 0.000 0.000 -0.000 0.000
3.4 -0.532 0.000 -0.000 0.000 0.000 0.000
3.5 -0.001 0.000 0.000 0.000 -0.000 0.000
3.6 -0.336 0.000 -0.000 0.000 0.000 0.000
3.7 -0.437 0.000 0.000 0.000 -0.000 0.000
3.8 -0.303 0.000 0.000 0.000 -0.000 0.000
3.9 -0.096 0.000 0.000 0.000 0.000 0.000
Target Noise Level = 0.001
Freq Candan2013 CD3Bin NewMP2Bin
---- ------------- ------------- -------------
3.0 0.001 0.010 0.001 0.011 0.003 0.018
3.1 -0.166 0.010 0.000 0.010 0.000 0.012
3.2 -0.508 0.011 0.001 0.010 -0.000 0.008
3.3 -0.710 0.010 0.001 0.009 0.001 0.008
3.4 -0.534 0.013 -0.002 0.012 -0.001 0.009
3.5 -0.004 0.018 -0.002 0.015 -0.001 0.011
3.6 -0.337 0.013 -0.001 0.015 0.002 0.015
3.7 -0.438 0.011 -0.001 0.012 -0.004 0.066
3.8 -0.305 0.011 -0.001 0.012 -0.005 0.068
3.9 -0.095 0.011 0.000 0.011 -0.002 0.028
Target Noise Level = 0.010
Freq Candan2013 CD3Bin NewMP2Bin
---- ------------- ------------- -------------
3.0 0.005 0.105 0.004 0.109 -0.002 0.183
3.1 -0.173 0.103 -0.009 0.098 -0.007 0.104
3.2 -0.493 0.117 0.015 0.109 0.008 0.098
3.3 -0.708 0.120 0.003 0.106 0.004 0.084
3.4 -0.543 0.126 -0.011 0.110 -0.005 0.092
3.5 -0.023 0.166 -0.019 0.144 -0.014 0.113
3.6 -0.323 0.134 0.014 0.152 -0.020 0.173
3.7 -0.421 0.124 0.019 0.135 0.051 0.601
3.8 -0.302 0.118 0.002 0.128 0.027 0.733
3.9 -0.093 0.103 0.005 0.106 -0.050 0.336
Target Noise Level = 0.100
Freq Candan2013 CD3Bin NewMP2Bin
---- ------------- ------------- -------------
3.0 0.218 1.009 0.227 1.017 0.178 1.715
3.1 -0.047 1.161 0.093 1.123 0.027 1.263
3.2 -0.584 1.177 -0.089 1.074 -0.127 0.894
3.3 -0.649 1.288 0.035 1.145 -0.048 0.801
3.4 -0.554 1.441 -0.048 1.272 -0.067 0.976
3.5 -0.187 1.769 -0.137 1.506 0.021 1.160
3.6 -0.241 1.190 0.105 1.344 -0.233 1.755
3.7 -0.445 1.252 -0.031 1.377 -0.467 6.959
3.8 -0.314 1.063 -0.019 1.159 1.331 7.047
3.9 -0.104 0.943 -0.026 0.998 0.657 2.764
---------------------------------------
Posted through http://www.DSPRelated.com
Reply by ●February 10, 2017
On Fri, 10 Feb 2017 09:51:24 -0600, "Cedron" <103185@DSPRelated>
wrote:
>>>
>>>Now let
>>>
>>>P=Sin(2*PI*j/N), Q=Sin(2*PI*k/N), R=Cos(2*PI*j/N) and S=Cos(2*PI*k/N)
>>>
>>>Then the normalized frequency is estimated by:
>>>
>>>Freq=(N/(2*PI))*Arccos((Im[k]*P*S-Im[j]*Q*R)/(Im[k]*P-Im[j]*Q))
>>>
>>>where Im[] is the imaginary part of X[].
>>>
>>>
>>>Michael Plet
>>
>>Is there any advantage to this method? It still seems very high in
>>computational complexity to me.
>>
>>
>>
>
>Really?
>
>Considering that P, S, Q, R, P*S, Q*R, and N/(2*PI) can all be
>precomputed, the actual evaluation requires fewer computations, and is
>less complicated, than your estimator.
My estimator isn't really the benchmark these days unless one is
really looking for low computational complexity or especially only
cares about low computational complexity for low SNR cases. Candan's
estimator is still what most people use as a research reference, and
it's still pretty hard to beat in the broad sense.
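For readers who have not seen it, a commonly cited bias-corrected three-bin form of the Candan estimator looks roughly like the sketch below. This is background reproduced from memory, not taken from the thread; verify the exact variant and correction constant against Candan's papers before relying on it.

```python
import cmath
import math

def candan_estimate(x, k):
    """Estimate the frequency (in bins) of a complex tone from DFT
    bins k-1, k, k+1 of the length-N sequence x, using the
    tan(pi/N)/(pi/N) bias-corrected three-bin form."""
    N = len(x)
    def dft_bin(m):
        # Direct DFT of bin m (an FFT would be used in practice).
        return sum(x[n] * cmath.exp(-2j * math.pi * m * n / N)
                   for n in range(N))
    Xm, X0, Xp = dft_bin(k - 1), dft_bin(k), dft_bin(k + 1)
    delta = (math.tan(math.pi / N) / (math.pi / N)) * \
            ((Xm - Xp) / (2 * X0 - Xm - Xp)).real
    return k + delta

# Complex tone at 3.3 bins in a 64-point frame; the peak bin is k = 3.
N, f = 64, 3.3
x = [cmath.exp(2j * math.pi * f * n / N) for n in range(N)]
est = candan_estimate(x, 3)
```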
The arccos is the computational hurdle, but clearly not
insurmountable. Even in a LUT it may be undesirable in constrained
applications, though. Requirements often get in the way.
>In addition, since it is an exact solution, not an approximation which
>does better with larger N, in any given application a DFT with a smaller N
>can probably be used greatly reducing the number of overall calculations
>that have to be made.
Remember that many people don't think "exact" is relevant to
interpolations made from estimates. The real issue is just whether
the performance vs computational complexity tradeoff works for a
particular application space of interest. No single estimator does
all things for all people. I was just asking what Michael thinks the
advantage of this particular estimator might be. Several existing
estimators these days are derived from closed-form derivations, so
that's not really a distinction.
Reply by Cedron●February 10, 2017
>>
>>Now let
>>
>>P=Sin(2*PI*j/N), Q=Sin(2*PI*k/N), R=Cos(2*PI*j/N) and S=Cos(2*PI*k/N)
>>
>>Then the normalized frequency is estimated by:
>>
>>Freq=(N/(2*PI))*Arccos((Im[k]*P*S-Im[j]*Q*R)/(Im[k]*P-Im[j]*Q))
>>
>>where Im[] is the imaginary part of X[].
>>
>>
>>Michael Plet
>
>Is there any advantage to this method? It still seems very high in
>computational complexity to me.
>
>
>
Really?
Considering that P, S, Q, R, P*S, Q*R, and N/(2*PI) can all be
precomputed, the actual evaluation requires fewer computations, and is
less complicated, than your estimator.
In addition, since it is an exact solution, not an approximation which
does better with larger N, in any given application a DFT with a smaller N
can probably be used greatly reducing the number of overall calculations
that have to be made.
Ced
---------------------------------------
Posted through http://www.DSPRelated.com
Reply by Michael Plet●February 10, 2017
On Fri, 10 Feb 2017 15:08:40 GMT, eric.jacobsen@ieee.org wrote:
>On Thu, 09 Feb 2017 21:16:17 +0100, Michael Plet <me@home.com> wrote:
>
>>On Mon, 06 Feb 2017 19:17:27 +0100, Michael Plet <me@home.com> wrote:
>>
>>>Hi Group
>>>
>>>I have derived this estimator. The limited tests I have done show
>>>high accuracy.
>>>
>>>
>>>Let k be the index of the DFT bin with the largest magnitude.
>>>Let j be the index of the DFT bin with the largest magnitude
>>>neighboring bin k.
>>>That is j=k-1 or j=k+1.
>>>
>>>Now let P=Tan(PI*k/N) and Q=Tan(PI*j/N)
>>>
>>>Then the normalized frequency is estimated by:
>>>
>>>Freq=(N/PI)*Arctan(Sqr(P*Q*(Im[k]*P-Im[j]*Q)/(Im[k]*Q-Im[j]*P)))
>>>
>>>where Sqr is Square root and Im[] is the imaginary part of X[].
>>>
>>>
>>>I was wondering if anyone has a test suite, so it would be possible to
>>>compare with other estimators?
>>>
>>>
>>>Regards,
>>>Michael Plet
>>
>>
>>Using trigonometrical identities I have managed to simplify my formula
>>(losing squares and square root).
>>
>>The new formulation is:
>>
>>
>>Let k be the index of the DFT bin with the largest magnitude.
>>Let j be the index of the DFT bin with the largest magnitude
>>neighboring bin k.
>>That is j=k-1 or j=k+1.
>>
>>Now let
>>
>>P=Sin(2*PI*j/N), Q=Sin(2*PI*k/N), R=Cos(2*PI*j/N) and S=Cos(2*PI*k/N)
>>
>>Then the normalized frequency is estimated by:
>>
>>Freq=(N/(2*PI))*Arccos((Im[k]*P*S-Im[j]*Q*R)/(Im[k]*P-Im[j]*Q))
>>
>>where Im[] is the imaginary part of X[].
>>
>>
>>Michael Plet
>
>Is there any advantage to this method? It still seems very high in
>computational complexity to me.
>
>
>
The only advantage is that it gives the exact frequency in the
noiseless case and performs well in noisy cases.
Reply by ●February 10, 2017
On Thu, 09 Feb 2017 21:16:17 +0100, Michael Plet <me@home.com> wrote:
>On Mon, 06 Feb 2017 19:17:27 +0100, Michael Plet <me@home.com> wrote:
>
>>Hi Group
>>
>>I have derived this estimator. The limited tests I have done show
>>high accuracy.
>>
>>
>>Let k be the index of the DFT bin with the largest magnitude.
>>Let j be the index of the DFT bin with the largest magnitude
>>neighboring bin k.
>>That is j=k-1 or j=k+1.
>>
>>Now let P=Tan(PI*k/N) and Q=Tan(PI*j/N)
>>
>>Then the normalized frequency is estimated by:
>>
>>Freq=(N/PI)*Arctan(Sqr(P*Q*(Im[k]*P-Im[j]*Q)/(Im[k]*Q-Im[j]*P)))
>>
>>where Sqr is Square root and Im[] is the imaginary part of X[].
>>
>>
>>I was wondering if anyone has a test suite, so it would be possible to
>>compare with other estimators?
>>
>>
>>Regards,
>>Michael Plet
>
>
>Using trigonometrical identities I have managed to simplify my formula
>(losing squares and square root).
>
>The new formulation is:
>
>
>Let k be the index of the DFT bin with the largest magnitude.
>Let j be the index of the DFT bin with the largest magnitude
>neighboring bin k.
>That is j=k-1 or j=k+1.
>
>Now let
>
>P=Sin(2*PI*j/N), Q=Sin(2*PI*k/N), R=Cos(2*PI*j/N) and S=Cos(2*PI*k/N)
>
>Then the normalized frequency is estimated by:
>
>Freq=(N/(2*PI))*Arccos((Im[k]*P*S-Im[j]*Q*R)/(Im[k]*P-Im[j]*Q))
>
>where Im[] is the imaginary part of X[].
>
>
>Michael Plet
Is there any advantage to this method? It still seems very high in
computational complexity to me.
Reply by Michael Plet●February 9, 2017
On Mon, 06 Feb 2017 19:17:27 +0100, Michael Plet <me@home.com> wrote:
>Hi Group
>
>I have derived this estimator. The limited tests I have done show
>high accuracy.
>
>
>Let k be the index of the DFT bin with the largest magnitude.
>Let j be the index of the DFT bin with the largest magnitude
>neighboring bin k.
>That is j=k-1 or j=k+1.
>
>Now let P=Tan(PI*k/N) and Q=Tan(PI*j/N)
>
>Then the normalized frequency is estimated by:
>
>Freq=(N/PI)*Arctan(Sqr(P*Q*(Im[k]*P-Im[j]*Q)/(Im[k]*Q-Im[j]*P)))
>
>where Sqr is Square root and Im[] is the imaginary part of X[].
>
>
>I was wondering if anyone has a test suite, so it would be possible to
>compare with other estimators?
>
>
>Regards,
>Michael Plet
Using trigonometrical identities I have managed to simplify my formula
(losing squares and square root).
The new formulation is:
Let k be the index of the DFT bin with the largest magnitude.
Let j be the index of the DFT bin with the largest magnitude
neighboring bin k.
That is j=k-1 or j=k+1.
Now let
P=Sin(2*PI*j/N), Q=Sin(2*PI*k/N), R=Cos(2*PI*j/N) and S=Cos(2*PI*k/N)
Then the normalized frequency is estimated by:
Freq=(N/(2*PI))*Arccos((Im[k]*P*S-Im[j]*Q*R)/(Im[k]*P-Im[j]*Q))
where Im[] is the imaginary part of X[].
Michael Plet
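For concreteness, the final formulation above can be sketched in a few lines of Python (direct DFT summation for clarity; the function name is mine, and the peak bin k and neighbor j are chosen by hand here rather than by the magnitude search described):

```python
import cmath
import math

def plet_estimate(x, k, j):
    """Michael Plet's 2-bin estimator as stated above: recover the
    normalized frequency from the imaginary parts of DFT bins k and j."""
    N = len(x)
    def im_bin(m):
        # Imaginary part of DFT bin m (direct summation for clarity).
        return sum(x[n] * cmath.exp(-2j * math.pi * m * n / N)
                   for n in range(N)).imag
    P, Q = math.sin(2 * math.pi * j / N), math.sin(2 * math.pi * k / N)
    R, S = math.cos(2 * math.pi * j / N), math.cos(2 * math.pi * k / N)
    Ik, Ij = im_bin(k), im_bin(j)
    return (N / (2 * math.pi)) * math.acos(
        (Ik * P * S - Ij * Q * R) / (Ik * P - Ij * Q))

# Noiseless real tone at 3.3 bins with an arbitrary phase:
# the estimator recovers the frequency to floating-point precision.
N, f, phi = 100, 3.3, 0.785398
x = [math.cos(2 * math.pi * f * n / N + phi) for n in range(N)]
est = plet_estimate(x, k=3, j=4)
```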