DSPRelated.com
Forums

Show me some more numbers

Started by Cedron June 4, 2015
Cedron <103185@DSPRelated> wrote:

>>Then replace "error rate" by "average error" (i.e. your first column of data).

>>>So it is just a matter of tightening up the variance. There are just as many expected cases that need to be increased as those that need to be decreased. So by "cooking" the numbers you can expect values that will be more similar to making a larger set of runs.

>>I don't think so.

>By variance here I meant the variance of the average error values. In your own words, cooking the noise will make the average error values more optimistic, meaning closer to zero. This is what a larger set of runs is expected to do as well.
Just because two series both trend towards zero does not mean they have the same statistics. In this case, your series has the wrong statistics.
>>By rescaling each individual noise pattern to have the same sigma, you are destroying your results.
>"Modifying" is not "destroying".
I'm sticking with "destroying" in this instance.

In many situations with AWGN noise, it is a fraction of outlying cases with large variance that dominates performance. If you specifically rescale these cases downward, you are destroying your data. (I tried to get at this point earlier, with my examples where 10 or 12 noise patterns out of 10,000 might dominate performance. This is a very real-world effect in addition to being an expected theoretical effect.)
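A minimal numpy sketch of the effect being described here (toy pattern counts and thresholds of my choosing, not the actual simulator): with short patterns the per-pattern sample sigma varies noticeably, and per-pattern rescaling forces that spread to zero, removing exactly the high-power tail cases.

    import numpy as np

    rng = np.random.default_rng(0)
    n_patterns, n = 10_000, 16

    # Unit-sigma AWGN patterns; each short pattern's *sample* sigma
    # fluctuates around 1, so a few patterns carry much more power
    # than the average one.
    noise = rng.standard_normal((n_patterns, n))
    sigmas = noise.std(axis=1)
    print(f"raw: sigma min={sigmas.min():.3f} max={sigmas.max():.3f}, "
          f"patterns with sigma > 1.5: {np.sum(sigmas > 1.5)}")

    # Per-pattern rescaling ("cooking"): force every sample sigma to 1.
    cooked = noise / sigmas[:, None]
    print(f"cooked: sigma min={cooked.std(axis=1).min():.3f} "
          f"max={cooked.std(axis=1).max():.3f}")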
>>The purpose of creating an ensemble of 1,000 (or 10,000, or as many as is necessary) noise patterns is so that you can see how the statistics of added white Gaussian noise affect system performance.
>Unfortunately, the precision issue messes with this.
If true you then need to work on your simulator implementation.
>Well, I never claimed AWGN, all I claimed was "near Gaussian".
That's okay to claim this, but it does not prove that any "near-Gaussian" method is valid. It's not so difficult to do it right. It's way less of an intellectual challenge than you faced when deriving your formulae in the first place, so I cannot understand your resistance on this one.

Steve
>Cedron <103185@DSPRelated> wrote:

>>>Then replace "error rate" by "average error" (i.e. your first column of data).

>>>>So it is just a matter of tightening up the variance. There are just as many expected cases that need to be increased as those that need to be decreased. So by "cooking" the numbers you can expect values that will be more similar to making a larger set of runs.

>>>I don't think so.

>>By variance here I meant the variance of the average error values. In your own words, cooking the noise will make the average error values more optimistic, meaning closer to zero. This is what a larger set of runs is expected to do as well.

>Just because two series both trend towards zero does not mean they have the same statistics.
"More similar" is not a claim of "sameness".
>In this case, your series has the wrong statistics.
I find your assertion of "wrong" disturbing. The strongest words you should use are "distorted compared to AWGN." Even so, adding a "somewhat" in front would be appropriate.
>>>By rescaling each individual noise pattern to have the same sigma, you are destroying your results.

>>"Modifying" is not "destroying".

>I'm sticking with "destroying" in this instance.

>In many situations with AWGN noise, it is a fraction of outlying cases with large variance that dominates performance. If you specifically rescale these cases downward, you are destroying your data. (I tried to get at this point earlier, with my examples where 10 or 12 noise patterns out of 10,000 might dominate performance. This is a very real-world effect in addition to being an expected theoretical effect.)
You can expect extreme cases to be tempered somewhat, but not as much as you imply. If we were looking for outliers your point might be relevant, but since the only thing being measured is average and standard deviation (of the transformed noise), the effect will be minimal.
>>>The purpose of creating an ensemble of 1,000 (or 10,000, or as many as is necessary) noise patterns is so that you can see how the statistics of added white Gaussian noise affect system performance.
AWGN is a mathematical model with nice statistical properties. The main difference between the near Gaussian model I used and a true Gaussian distribution is that the near one is bounded. To me, this is more likely to be representative of real life noise. I would expect that near Gaussian is closer to AWGN than real life noise is, and that real life noise is more similar to near Gaussian than to AWGN.
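Cedron does not spell out his generator here, but one classic bounded, near-Gaussian construction is the sum of twelve uniforms (Irwin-Hall); a sketch under that assumption:

    import numpy as np

    rng = np.random.default_rng(0)

    def near_gaussian(n, rng):
        # Sum of 12 uniform(0,1) draws minus 6: mean 0, variance 1,
        # and hard-bounded to [-6, 6] by construction -- "near
        # Gaussian" via the central limit theorem.
        return rng.random((n, 12)).sum(axis=1) - 6.0

    x = near_gaussian(100_000, rng)
    print(f"mean={x.mean():+.4f} std={x.std():.4f} "
          f"min={x.min():.2f} max={x.max():.2f}")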
>>Unfortunately, the precision issue messes with this.

>If true you then need to work on your simulator implementation.
This is a warning to anyone who thinks that double precision floating point will be correct for large numbers of runs of any test type. If it hadn't been for the fact that my three bin complex formula is equivalent to Candan's 2013 estimator, I would not have seen this or suspected it.
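Whatever the exact mechanism in these runs, accumulation error is one generic way double precision degrades over large run counts; a small illustrative sketch (not the simulator's code) comparing a naive running sum with compensated summation:

    import math
    from itertools import repeat

    # Ten million additions of 0.1: the naive running total drifts
    # away from the correctly rounded sum that math.fsum computes.
    N = 10_000_000
    naive = 0.0
    for _ in range(N):
        naive += 0.1

    exact = math.fsum(repeat(0.1, N))
    print(f"naive: {naive!r}")
    print(f"fsum : {exact!r}")
    print(f"drift: {naive - exact:.3e}")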
>>Well, I never claimed AWGN, all I claimed was "near Gaussian".

>That's okay to claim this, but it does not prove that any "near-Gaussian" method is valid.
Your conflating of "correct, valid, right" with "conventional, standard" is really bothersome to me. It's like you're an uber-conformist, and you insist everyone else should be too. If you asked me what the square root of nine was and I said negative three, my answer would be "correct, valid, and right", but not "conventional nor standard".
>It's not so difficult to do it right. It's way less of an intellectual challenge than you faced when deriving your formulae in the first place, so I cannot understand your resistance on this one.

>Steve
It's not a matter of resistance. I came up with a test, explained it, and showed the results. What is hard to understand is why you insist that it is not good enough and must be done to a certain convention or it is unacceptable. The conclusions that can be drawn from either scenario are not going to differ.

I shouldn't have had to do any testing at all. The reason I did it was to counter Jacobsen's arrogance in the Matlab Beginner thread, when he said he had seen formulas that claim to be exact come and go and fall apart when presented with noise, and thus could not be bothered to examine mine. This was even after I told him that his estimator is an approximation of my formula, so similar behavior when exposed to noise should be expected.

Ced

---------------------------------------
Posted through http://www.DSPRelated.com
Hi, Cedron

  It seems to me that a similar frequency estimation problem was
considered by the so-called high resolution spectrum estimation methods,
especially for the case of a single frequency signal. You could google
Pisarenko, MUSIC or the eigenvector based spectrum estimation. The
Pisarenko frequency estimation is given by

  w = acos((r2 + sqrt(r2^2 + 8*r1^2)) / (4*r1));

where r1 and r2 are the values of the signal's autocorrelation function at lags 1 and 2.
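A minimal numpy sketch of this estimator, assuming a real single tone and biased sample autocorrelations at lags 1 and 2 (the formula is exact only for exact autocorrelation values, so treat the sketch as illustrative):

    import numpy as np

    def pisarenko_freq(x):
        # Lag-1 and lag-2 sample autocorrelations of the signal.
        n = len(x)
        r1 = np.dot(x[:-1], x[1:]) / (n - 1)
        r2 = np.dot(x[:-2], x[2:]) / (n - 2)
        # Pisarenko single-tone estimate of the radian frequency.
        return np.arccos((r2 + np.sqrt(r2**2 + 8.0 * r1**2)) / (4.0 * r1))

    # Quick check on a noiseless tone between DFT bins.
    n = 64
    w0 = 2.0 * np.pi * 5.3 / n
    x = np.cos(w0 * np.arange(n) + 0.7)
    print(pisarenko_freq(x), w0)  # estimate should land near w0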

  It would be interesting to see how your approach compares against the
Pisarenko approach.

Kathy

---------------------------------------
Posted through http://www.DSPRelated.com
Cedron <103185@DSPRelated> wrote:

> Pope wrote,
>>In this case, your series has the wrong statistics.
>I find your assertion of "wrong" disturbing. The strongest words you should use are "distorted compared to AWGN." Even so, adding a "somewhat" in front would be appropriate.
>>>>By rescaling each individual noise pattern to have the same sigma, you are destroying your results.
>>>"Modifying" is not "destroying".
>>I'm sticking with "destroying" in this instance.
>>In many situations with AWGN noise, it is a fraction of outlying cases with large variance that dominates performance. If you specifically rescale these cases downward, you are destroying your data. (I tried to get at this point earlier, with my examples where 10 or 12 noise patterns out of 10,000 might dominate performance. This is a very real-world effect in addition to being an expected theoretical effect.)
>You can expect extreme cases to be tempered somewhat, but not as much as you imply. If we were looking for outliers your point might be relevant, but since the only thing being measured is average and standard deviation (of the transformed noise), the effect will be minimal.
(Tangentially, "average" (mean) frequency error is a valid reduced statistic, but many would be more interested in the RMS frequency error, which is going to be more influenced by infrequent yet large error values.) But more important, to me anyway, is simulating AWGN noise to a reasonble precision; other types of noise are valid, but in almost every instance AWGN is the single most interesting case, because AWGN is "worst case" noise -- it impairs the achievable results more than any other type of noise of the same noise power. It also arises in nature by any of a large number of mechanisms (e.g. receiver equivalent input noise). So it's the first case most investigators will look at, and it is usually the case you characterize before looking at more exotic forms of noise.
>AWGN is a mathematical model with nice statistical properties. The main difference between the near Gaussian model I used and a true Gaussian distribution is that the near one is bounded.
That is okay, perhaps depending on details (all simulated AWGN generators will be bounded by something).

It's really the per-pattern noise-rescaling idea that bothers me, since it likely significantly affects the impairments, but I think you've said you haven't actually done this in your sims; it's just something you propose as improving the interpretation of certain results ... I disagree strongly, but no big deal.
>>>Unfortunately, the precision issue messes with this.
>>If true you then need to work on your simulator implementation.
>Your conflating of "correct, valid, right" with "conventional, standard" is really bothersome to me. It's like you're an uber-conformist, and you insist everyone else should be too.
I believe conventions are very useful for creating a baseline from which investigations can proceed. The convention of characterizing first against AWGN, before jumping into variants, is supported by a lot of theory, a lot of practical considerations, *and* in addition to that a lot of convention. It's not just an isolated, randomly-chosen convention. I think/hope if you appreciated this you would not see me as an "uber-conformist" but more as a comm engineer with normal sensibilities about how best to proceed with simulation experiments.

Steve
>Hi, Cedron
>
>  It seems to me that a similar frequency estimation problem was considered by the so-called high resolution spectrum estimation methods, especially for the case of a single frequency signal. You could google Pisarenko, MUSIC or the eigenvector based spectrum estimation. The Pisarenko frequency estimation is given by
>
>  w = acos((r2 + sqrt(r2^2 + 8*r1^2)) / (4*r1));
>
>where r1 and r2 are the values of the signal's autocorrelation function at lags 1 and 2.
>
>  It would be interesting to see how your approach compares against the Pisarenko approach.
>
>Kathy
>
>---------------------------------------
>Posted through http://www.DSPRelated.com
Hi Kathy,

I am not familiar with the Pisarenko method so I can't answer your question myself. What I did notice was that the equation you gave is clearly the root of a quadratic equation. So I did a little researching.

First, I read the Wikipedia article and found that it was for complex signals. Then I found this paper from 2002:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.78.5602&rep=rep1&type=pdf

The thing about this paper was that it tested against real valued signals. Then I found this paper from 2007 with one of the same authors:

http://www.ee.cityu.edu.hk/~hcso/eusipco07_3.pdf

"Cross-multiplying (3) and (4), we obtain the following approximate equation:

2r1 cos^2(w) - r2 cos(w) - r1 = 0    (5)"

Applying the quadratic formula to (5) with c = cos(w) gives c = (r2 +/- sqrt(r2^2 + 8*r1^2))/(4*r1), and the root with the + sign is exactly the expression you gave. There is the quadratic equation, but notice the keyword "approximate". Again, they are studying a real signal. They also come up with an improved formula that according to their figures performs much better.

I have come up with 3 bin DFT formulas for both the real and complex case. They are both exact in the noiseless case and do quite well in the presence of noise. A very nice paper produced by Julien Arzi with comparisons of my formulas and a slew of others against both real and complex signals can be found at this site:

http://www.tsdconseil.fr/log/scriptscilab/festim/index-en.html

Follow the link on the right side labelled "Comparison of different frequency estimation algorithms (pdf)". The charts in Julien's analysis are similar to the ones in the papers I cited above, except the vertical scales are given in powers of ten rather than dB.

The derivation of my real valued signal formula can be found at:

http://www.dsprelated.com/showarticle/773.php

I have not published the derivations of my 2 bin or 3 bin complex valued signal formulas. I have not revealed my 2 bin real valued signal formula yet. I will do so when Martin Vicanek finishes his work. His paper can be found at:

http://vicanek.de/dsp/FreqFromTwoBins.pdf

Don't pay any serious attention to my results posted in this or the "Show me the numbers" thread. They are meant to be indicative, not comprehensive, i.e. only one frequency range with a fixed phase value, and as you can tell by my discussion with Steve Pope, the noise model is quite suspect.

Thanks for bringing up the Pisarenko method. Maybe Julien will add it to his analysis and you can get a direct answer to your question.

Ced

---------------------------------------
Posted through http://www.DSPRelated.com
[...snip...]

> >(Tangentially, "average" (mean) frequency error is a valid reduced >statistic, but many would be more interested in the RMS frequency error,
>which is going to be more influenced by infrequent yet large error
values.)
>
Well, since the (arithmetic) average (aka mean) is so small compared to the standard deviation in these runs, its deviation from zero is hardly significant and the standard deviation is a very good approximation of the RMS. I reported the standard deviation rather than the RMS because it is independent of the mean.
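This rests on the identity RMS^2 = mean^2 + sigma^2 (with the population standard deviation), so the two statistics coincide whenever the mean is negligible; a quick numerical check with made-up error samples:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical frequency-error samples: tiny bias, much larger spread.
    err = rng.normal(loc=1e-4, scale=1e-2, size=100_000)

    mean = err.mean()
    std = err.std()                    # population standard deviation
    rms = np.sqrt(np.mean(err ** 2))

    print(f"mean={mean:+.6f} std={std:.6f} rms={rms:.6f}")
    print(f"sqrt(mean^2 + std^2) = {np.hypot(mean, std):.6f}")  # equals rms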
>But more important, to me anyway, is simulating AWGN noise to a reasonable precision; other types of noise are valid, but in almost every instance AWGN is the single most interesting case, because AWGN is "worst case" noise -- it impairs the achievable results more than any other type of noise of the same noise power. It also arises in nature by any of a large number of mechanisms (e.g. receiver equivalent input noise). So it's the first case most investigators will look at, and it is usually the case you characterize before looking at more exotic forms of noise.
I think my near Gaussian approach is actually a pretty close approximation of AWGN.
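One way to quantify "pretty close", assuming the sum-of-twelve-uniforms generator sketched earlier (the actual generator may differ): the bulk of the distribution matches a unit normal well, while the disagreement concentrates in the tails discussed above.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Bounded near-Gaussian samples (sum of 12 uniforms minus 6).
    x = rng.random((100_000, 12)).sum(axis=1) - 6.0

    # Kolmogorov-Smirnov distance to a unit normal: bulk agreement.
    d, _ = stats.kstest(x, "norm")
    print(f"KS distance to N(0,1): D = {d:.5f}")

    # Tail disagreement: a unit normal predicts ~6.3e-5 beyond 4 sigma.
    print("fraction beyond 4 sigma:", np.mean(np.abs(x) > 4.0))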
>>AWGN is a mathematical model with nice statistical properties. The main difference between the near Gaussian model I used and a true Gaussian distribution is that the near one is bounded.

>That is okay, perhaps depending on details (all simulated AWGN generators will be bounded by something).
Well, at least by the precision of the variable holding it. ;-)
>It's really the per-pattern noise-rescaling idea that bothers me, since it likely significantly affects the impairments, but I think you've said you haven't actually done this in your sims; it's just something you propose as improving the interpretation of certain results ... I disagree strongly, but no big deal.
I did try it. Didn't change much.
>I believe conventions are very useful for creating a baseline from which investigations can proceed. The convention of characterizing first against AWGN, before jumping into variants, is supported by a lot of theory, a lot of practical considerations, *and* in addition to that a lot of convention. It's not just an isolated, randomly-chosen convention. I think/hope if you appreciated this you would not see me as an "uber-conformist" but more as a comm engineer with normal sensibilities about how best to proceed with simulation experiments.

>Steve
I am the veteran of many "Standards vs Diversity" discussions in many different contexts. You cross a bright line when you say something that is non-standard is wrong. Conventions are rarely baseless, but if you don't understand the basis of a convention and follow it blindly, well then, that's as bad as blindly following policy without understanding the rationale behind it.

I don't have to sell you on the benefits of standardization. How about the down side? Standards are a security risk. Standards are an impediment to innovation.

I'm a mathematician who leans towards the side of non-conformist. More room for creativity there.

Ced

---------------------------------------
Posted through http://www.DSPRelated.com
Cedron <103185@DSPRelated> wrote:

> Pope wrote,
>>But more important, to me anyway, is simulating AWGN noise to a reasonable precision; other types of noise are valid, but in almost every instance AWGN is the single most interesting case, because AWGN is "worst case" noise -- it impairs the achievable results more than any other type of noise of the same noise power. It also arises in nature by any of a large number of mechanisms (e.g. receiver equivalent input noise). So it's the first case most investigators will look at, and it is usually the case you characterize before looking at more exotic forms of noise.
>I think my near Gaussian approach is actually a pretty close approximation of AWGN.
>>It's really the per-pattern noise-rescaling idea that bothers me, since it likely significantly affects the impairments, but I think you've said you haven't actually done this in your sims; it's just something you propose as improving the interpretation of certain results ... I disagree strongly, but no big deal.
>I did try it. Didn't change much.
Okay, but this could be another artifact of too-short runsizes.
>>I believe conventions are very useful for creating a baseline from which investigations can proceed. The convention of characterizing first against AWGN, before jumping into variants, is supported by a lot of theory, a lot of practical considerations, *and* in addition to that a lot of convention. It's not just an isolated, randomly-chosen convention. I think/hope if you appreciated this you would not see me as an "uber-conformist" but more as a comm engineer with normal sensibilities about how best to proceed with simulation experiments.
>I am the veteran of many "Standards vs Diversity" discussions in many different contexts. You cross a bright line when you say something that is non-standard is wrong. Conventions are rarely baseless, but if you
(Standards != conventions)
>don't understand the basis of a convention and follow it blindly, well then, that's as bad as blindly following policy without understanding the rationale behind it.
>I don't have to sell you on the benefits of standardization. How about the down side? Standards are a security risk. Standards are an impediment to innovation.
>I'm a mathematician who leans towards the side of non-conformist. More room for creativity there.
This seems like rather a lot of dancing around rather than admitting to even a slight possibility that you are not promoting the most sound experimental approach.

You have done a great job in deriving your formulae. You did a pretty reasonable job demonstrating their performance. It's your philosophy of science of which I am unconvinced,

Steve
[...snip...]
>>I am the veteran of many "Standards vs Diversity" discussions in many different contexts. You cross a bright line when you say something that is non-standard is wrong. Conventions are rarely baseless, but if you

>(Standards != conventions)
This statement got me thinking. I should have been saying "common conventions" when I used the word conventions. Standards are codified conventions, thus a subset of common conventions.

So, you also cross a bright line when you say something that is unconventional is wrong just because it is unconventional.

[...snip...]
>This seems like rather a lot of dancing around rather than admitting to even a slight possibility that you are not promoting the most sound experimental approach.
Calling it "adequate" is hardly claiming it is the most sound simulation approach. I have not been promoting it, I have been defending it. There is more than a slight possibility that it is not the soundest approach. However, I don't think a more sound approach would draw any different conclusions.
>You have done a great job in deriving your formulae. You did a pretty reasonable job demonstrating their performance. It's your philosophy of science of which I am unconvinced,

>Steve
Thanks for the compliment. I am still waiting for somebody to step up and say they are familiar with "the gospel" and my formulas aren't in there.

I'll accept "pretty reasonable" as a synonym for "adequate", so no disagreement there. I'll also point out that this isn't Science, it's applied mathematics.

Ced

---------------------------------------
Posted through http://www.DSPRelated.com
Cedron <103185@DSPRelated> replies to my post,

>>(Standards != conventions)
>This statement got me thinking. I should have been saying "common conventions" when I used the word conventions. Standards are codified conventions, thus a subset of common conventions.
Okay
>So, you also cross a bright line when you say something that is unconventional is wrong just because it is unconventional.
I did not say that downplaying the importance of applying AWGN to a system sim is wrong because it is unconventional; it is wrong because it is undershooting the technical goals of doing the simulation in the first place. It is a convention, yes, but it is a convention because the science of the situation indicates that it is important.

You seem to be recasting this as me blindly following a convention; that is basically a content-free argument on your part that could be used against any situation where somebody is following a consensus point of view, whether what they are doing makes scientific sense or not. So, yes, while it's important to battle consensus science sometimes, this is not one of those times.

An analogy: let's suppose you need to measure the distance between two points on a plane, and your only tool is a yardstick. I suggest you iterate by laying the yardstick end-to-end and counting until you have measured the distance in yards. Let's say the answer is 50 yards.

You could say I have "crossed a bright line" by suggesting you must count all 50 repetitions and not just, say, 42. Or that some of the "yards" should only count as three-quarters of a yard. This would be analogous to saying that some of the AWGN datapoints in a sim could be left out, or re-scaled. Is it valid to say counting all 50 yards is consensus science and therefore not to be trusted?

Hey, good discussion.

Steve
[...snip...]
>I did not say that downplaying the importance of applying AWGN to a system sim is wrong because it is unconventional; it is wrong because it is undershooting the technical goals of doing the simulation in the first place.
The purpose of these simulations was to do a side by side comparison of the formulas in the presence of noise. There were no other technical goals, as in comparing the results to any other evaluation. For this goal, knowing the noise type is less important since none of the comparison criteria depends on a solid knowledge of the noise.
>It is a convention, yes, but it is a convention because the science of the situation indicates that it is important. You seem to be recasting this as me blindly following a convention; that is basically a content-free argument on your part that could be used against any situation where somebody is following a consensus point of view, whether what they are doing makes scientific sense or not.
The screech of the uber-conformist: "That's how everybody does it." Just kidding.

I am not saying, nor have I said, that following a convention is wrong. What I am saying is that judging an approach that doesn't follow convention as wrong just because it doesn't follow convention is itself wrong. I am also saying, just as with policy, that you should understand the rationale for the convention to know why it is good to follow. In your case, in our discussion, you have endeavoured to do that; I am not trying to cast you as blindly following common convention. What I am faulting you for is your harsh terminology for techniques that don't follow convention.
>So, yes, while it's important to battle consensus science sometimes, this is not one of those times.
Anybody want to talk about Anthropogenic Global Warming?
>An analogy: let's suppose you need to measure the distance between two points on a plane, and your only tool is a yardstick. I suggest you iterate by laying the yardstick end-to-end and counting until you have measured the distance in yards. Let's say the answer is 50 yards.

>You could say I have "crossed a bright line" by suggesting you must count all 50 repetitions and not just, say, 42. Or that some of the "yards" should only count as three-quarters of a yard.
False analogy. Suppose I used the yardstick to lay out a 3-4-5 triangle. Then I completed the rectangle so that one corner was on the first point to be measured and the long side of the rectangle pointed right at the other, far-away point. I then put my yardstick on the corner perpendicular to the first point, aim it at the away point, and mark the crossing point on the other side of the rectangle. Similar triangles then give the distance to the away point.

Foul, you scream, that is not what I suggested. It is wrong!
>This would be analogous to saying that some of the AWGN datapoints in a sim could be left out, or re-scaled. Is it valid to say counting all 50 yards is consensus science and therefore not to be trusted?
Now, you are once again inferring that I said the consensus is not to be trusted. On the rescaling issue, what I said, and I stand by it though I haven't actually proved it, is that the results of a smaller run with the normalization rescaling will be closer to the unnormalized results of a longer run than the unnormalized results of a smaller run will be. If you want to numerically contest that, it's your turn to do some coding.
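For anyone taking up that coding challenge, here is one possible harness, with a toy stand-in "error" statistic of my own choosing rather than either poster's actual simulator: compare how close a short normalized run and a short raw run each land to a long raw reference run. The outcome for this toy statistic should not be read as settling the argument for the real estimators.

    import numpy as np

    rng = np.random.default_rng(2)
    N = 16                      # samples per noise pattern
    SHORT, LONG, TRIALS = 1_000, 100_000, 200

    def result(patterns, normalize):
        # Toy stand-in statistic: RMS over patterns of each pattern's
        # sample mean (a proxy for an estimator's noise-induced error).
        if normalize:
            patterns = patterns / patterns.std(axis=1, keepdims=True)
        return np.sqrt(np.mean(patterns.mean(axis=1) ** 2))

    ref = result(rng.standard_normal((LONG, N)), normalize=False)

    d_norm, d_raw = [], []
    for _ in range(TRIALS):
        x = rng.standard_normal((SHORT, N))
        d_norm.append(abs(result(x, True) - ref))
        d_raw.append(abs(result(x, False) - ref))

    print(f"long raw reference:            {ref:.5f}")
    print(f"mean |short normalized - ref|: {np.mean(d_norm):.5f}")
    print(f"mean |short raw - ref|:        {np.mean(d_raw):.5f}")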
>Hey, good discussion.
Yep, I hope others are enjoying it too.
>Steve
Ced

---------------------------------------
Posted through http://www.DSPRelated.com