Show me some more numbers

Started by June 4, 2015
```[...snip...]
>
>That first sentence makes no sense. You can test your generator and your
>data however you want. That's any algorithm tester's responsibility.
>That's why it is common to use well tested and documented generators.
>That's also why seeded generators are used so that anyone else can
>regenerate the data to test if they wish.

Here is how the PRG is seeded:

//--- Seed the random number generator

srand( (int) time( NULL ) );

Here is the routine:

//============================================================================
double PseudoGaussian( double argNoiseLevel )
{
        if( argNoiseLevel == 0.0 ) return 0.0;

//--- Sum of ten centered uniforms approximates a Gaussian (CLT)

        double theStretch = 11.23; // 7.94;
        double theShift   = theStretch * 0.5;

        double theResize  = theStretch / (double) RAND_MAX;

        double theSum = 0.0;

        for( int t = 0; t < 10; t++ )
        {
                theSum += (double) rand() * theResize - theShift;
        }

        return argNoiseLevel * theSum * 0.10;
}
//============================================================================

Feel free to test away.  If you want my source code for the signal
testing, provide an email address and I will send it to you.

>
>When generating independent finite sequences of AWGN there is an
>expected variance in mean known as the 'expected error in mean'. There
>is an expected error in variance known as the 'expected error in
>variance'. A generator that produces independent sequences with a
>consistently larger or consistently smaller mean than the expected error
>in mean is broken and should be fixed or replaced and retested. A
>generator that produces independent sequences with consistently larger
>or consistently smaller error in variance than the expected error in
>variance is broken and should be fixed or replaced and retested.
>
See what I said in my response to Steve Pope.

>Mean and variance are not the only statistics of a generator that can
>be used to evaluate the correctness of a generator of AWGN. There are
>similar expected errors in skew and kurtosis for example.
>

Fancy words for the third and fourth moments.  Do you know what term
follows in this sequence: Position, velocity, acceleration, ?????.

You can measure your distribution in umpteen ways, but the mean and RMS
are how it is specified.  All I claimed was "near Gaussian", not AWGN; the
routine fits that description, does it not?

>Real implementations of algorithms operate on finite data sets that do
>not have an expected mean of zero or expected error in variance of zero
>and are not properly tested by cooked data.
>
>If you think smaller expected error is needed, get it by fixing your
>broken generator, if that is the problem, or increasing the size of the
>data set, not cooking the data.
>
>Dale B. Dalrymple

I made a comparison test for the various formulas; nitpicking about
whether the noise was USDA prime or not is a side issue.  If you really
want to talk about the quality of PRGs, go talk to the encryption folks.

I have provided a 3 Bin Real signal formula, and its derivation.  I have
provided 3 Bin and 2 Bin formulas for a Complex signal, without providing
their derivations.  I have derived a 2 Bin Real signal formula that I have
not provided, nor its derivation.  All these formulas are exact in the
noiseless case.

It turns out my 3 Bin Complex signal formula is equivalent to Candan's
2013.  I wouldn't have known that without these tests.  As a result I now
have an alternative derivation of Candan's 2013 formula that I think is
much cleaner and neater than his.

So I ask again, of the three formulas I have provided, have you, oh
experienced one, ever seen them before?

I know it really irks you for me to say that window functions don't help
with frequency determination, so I have asked you to provide a
window/formula combination that can beat my results.  So far, nothing.
The Gaussian window/parabolic formula is not exact in the discrete case
and has already fallen short in Julien's, and my, testing.  You'll need to
provide another one.

I derived my 2 Bin Real signal formula in response to Martin Vicanek's
paper.  My solution is singular, his is a one parameter family, thus more
comprehensive.  In his paper he says: "Clearly, we can use this freedom to
our advantage and make an optimum choice of e.g. a and b for best signal
to noise ratio."  I have corresponded with him and am trying to help him
solve this.  He is very sharp.  I am also trying to figure out where my
solution is on his continuum in useful terms.  Neither are trivial
problems.  It may turn out that mine is the optimum, or he might find a
better answer.  I'm rooting for the latter.  All are exact in the
noiseless case.

Finally, I am wondering where you stand on the discussion Steve Pope and I
are having.  Is it better to use one set of noise patterns for all the run
trials, or should each one get its own?

Ced
---------------------------------------
Posted through http://www.DSPRelated.com
```
```Cedron <103185@DSPRelated> wrote:

>In this test, for what I am trying to show, I don't really think it
>matters.  Our discussion on whether to use a single set of noise patterns
>for all runs or give each run a fresh pattern is independent of the
>particular noise model used, agreed?

Yes, we were discussing that, but more recently you introduced the idea
of "re-centering" and "re-scaling" the noise so I was responding

Steve
```
```>Cedron <103185@DSPRelated> wrote:
>
>>In this test, for what I am trying to show, I don't really think it
>>matters.  Our discussion on whether to use a single set of noise
>>patterns for all runs or give each run a fresh pattern is independent
>>of the particular noise model used, agreed?
>
>Yes, we were discussing that, but more recently you introduced the idea
>of "re-centering" and "re-scaling" the noise so I was responding
>
>
>Steve

Yes, I did within the context of the discussion.  At issue was how to
differentiate the effects of the noise from the effects of the formulas in
the results.  Centering and rescaling would help with that at the cost of
being less realistic.

I do appreciate your commentary.  I also appreciate you discussing the
real issues involved rather than appealing to convention or authority.

Ced
```
```Cedron <103185@DSPRelated> wrote:

>>Cedron <103185@DSPRelated> wrote:

>>>Our discussion on whether to use a single set of noise patterns
>>>for all runs or give each run a fresh pattern is independent of the
>>>particular noise model used, agreed?

>>Yes, we were discussing that, but more recently you introduced the idea
>>of "re-centering" and "re-scaling" the noise so I was responding

>Yes, I did within the context of the discussion.  At issue was how to
>differentiate the effects of the noise from the effects of the formulas in
>the results.

Okay

>Centering and rescaling would help with that at the cost of
>being less realistic.

I think that's a pretty bad direction to go in.

Separately from that question, and back to your original point,
consider the following pair of experiments:

1) Evaluate by simulation one frequency-estimation algorithm at one
frequency/phase input to the DFT at one SNR, say SNR1.  Use a set of
runsize (say runsize = 10,000) noise patterns from a N(0,1) generator.
Reduce the sim results to a bias/standard deviation pair.  Observe that
the results have converged because increasing the runsize (with
additional noise patterns, say to 50,000 total) does not materially
affect the reduced results.

2) Now evaluate the same frequency-estimation algorithm at the same
frequency/phase input to the DFT at a different SNR, say SNR2.

Do you use the same 10,000 noise patterns, or a different 10,000?

If you use the same patterns, starting with the same 10,000 patterns
as in case 1), and confirm that this has converged by increasing the
runsize to 50,000 (while still using the same additional 40,000 patterns
as used in case 1), you then know that any unexpected results are the
result of algorithm behavior.

Whereas if you instead use a different 10,000 patterns, and see unexpected
reduced results at that point, you do not yet know for certain whether the
unexpected results are due to the algorithm behavior or due to having
changed the noise patterns.  You are more dependent on increasing
the runsize to be able to make this distinction.

So I say you have a better ability to distinguish algorithm behavior from
the effects of possibly pathological noise patterns if you do not
change the noise patterns between experiment 1) and experiment 2).

However, opinions on this are bound to vary, and could depend on the
surrounding scenario and requirements, so I am not preaching this
as gospel; it just seems logical and scientific to me.

(For similar reasons, and for repeatability, you want to use seeded noise,
and not randomized noise, as mentioned by other contributors to this
thread.)

Steve
```
```>Cedron <103185@DSPRelated> wrote:
>
>>>Cedron <103185@DSPRelated> wrote:
>
>>>>Our discussion on whether to use a single set of noise patterns
>>>>for all runs or give each run a fresh pattern is independent of the
>>>>particular noise model used, agreed?
>
>>>Yes, we were discussing that, but more recently you introduced the
>>>idea of "re-centering" and "re-scaling" the noise so I was responding
>
>>Yes, I did within the context of the discussion.  At issue was how to
>>differentiate the effects of the noise from the effects of the
>>formulas in the results.
>
>Okay
>
>>Centering and rescaling would help with that at the cost of
>>being less realistic.
>
>I think that's a pretty bad direction to go in.
>
I wouldn't call it good or bad.  What you are in essence doing is
shortcutting the use of a much larger runsize.  The purpose of a larger
runsize is to get the distributions closer to the ideal.

>Separately from that question, and back to your original point,
>consider the following pair of experiments:
>
>1) Evaluate by simulation one frequency-estimation algorithm at one
>frequency/phase input to the DFT at one SNR, say SNR1.  Use a set of
>runsize (say runsize = 10,000) noise patterns from a N(0,1) generator.
>Reduce the sim results to a bias/standard deviation pair.  Observe that
>the results have converged because increasing the runsize (with
>additional noise patterns, say to 50,000 total) does not materially
>affect the reduced results.
>
In my response, I'm going to ignore the precision limitation of large
runsizes I seem to have found, not that it is insignificant, but it is not
germane to the principles being discussed.

Now you have stepped off the presumption that one set of results is
sufficient.  My recommendation was if you were going to do 50,000 runs, do
five 10,000 runs so you have five sets of results to compare.  You can
calculate the overall average and standard deviation from the five sets.

>2) Now evaluate the same frequency-estimation algorithm at the same
>frequency/phase input to the DFT at a different SNR, say SNR2.
>
>Do you use the same 10,000 noise patterns, or a different 10,000?
>
>If you use the same patterns, starting with the same 10,000 patterns
>as in case 1) and confirm that this has converged by increasing the
>runsize to 50,000 (while still using the same additional 40,000 patterns
>as used in case 1) you then know that any unexpected results are the
>result of algorithm behavior.
>
I think there is an underlying assumption of linearity in this argument.
I have been careful to say that the increase in the standard deviations
seems to be roughly proportional (linear) to the RMS of the noise.  This
comes from the analytical view where

VB(Z+E)/V(Z+E) = VBZ/VZ + (Misc terms)E + H.O.T.

Where H.O.T. = Higher Order Terms.

At some point, when E gets larger, the H.O.T. terms become significant.

>Whereas if you instead use a different 10,000 patterns, and see
>unexpected reduced results at that point, you do not yet know for
>certain that the unexpected results are due to the algorithm behavior,
>or due to having changed the noise patterns.  You are more dependent on
>increasing the runsize to be able to make this distinction.
>
>So I say you have a better ability to distinguish algorithm behavior
>from the effects of possibly pathological noise patterns if you do not
>change the noise patterns between experiment 1) and experiment 2).
>
I see what you are saying, but we are also discussing whether the same
pattern should be used row by row as well.

If you look at the bias of the 2 Bin Complex formula applied to a real
signal, it is fairly complicated.  Now, it shows up in the noiseless case,
so you know it is not due to the noise.  Suppose though that somehow your
formula reacted to noise in a biased way.  If so, having multiple sets of
noise as I suggested would show that bias, and that bias would appear no
matter what noise patterns were thrown against it.  However, if the bias
appears under the circumstance you describe at SNR1, and then reappears at
SNR2, you still can't distinguish whether it was the noise or the formula
creating it.  If the bias appears with totally different noise sets, you
can conclude, but not be certain, that it was due to the formula.

>However, opinions on this are bound to vary, and could depend on the
>surrounding scenario and requirements, so I am not preaching this
>as gospel; just that is seems logical and scientific to me.
>
Being open-minded seems to be a quality lacking among many of the posters
here; there is plenty of gospel preached.  For what I have been trying to
show about the behavior of the formulas, I think either method would
suffice.  I will code it your way and post the results, and we can take
the discussion from there.

>(For similar reasons, and for repeatability, you want to use seeded
>noise, and not randomized noise, as mentioned by other contributors to
>this thread.)
>
>Steve

I posted my code in my response to that other contributor.  I do indeed
use a seeded PRG.

You are kind to call him a contributor, I see him just as a heckler.  He
lost a considerable amount of credibility when he said: "For the same
signal duration, increasing the sample frequency and transform size will
separate the two components of the real signal and reduce the "self
interference" of a real signal with a complex estimation algorithm."

Ced
```
```Cedron <103185@DSPRelated> wrote:

>>>Centering and rescaling would help with that at the cost of
>>>being less realistic.

>>I think that's a pretty bad direction to go in.

>I wouldn't call it good or bad.  What you are in essence doing is
>shortcutting the use of a much larger runsize.  The purpose of a larger
>runsize is to get the distributions closer to the ideal.

But it's then no longer N(0,1) noise.  That to me is a very big deal.

>>consider the following pair of experiments:

>>1) Evaluate by simulation one frequency-estimation algorithm at one
>>frequency/phase input to the DFT at one SNR, say SNR1.  Use a set of
>>runsize (say runsize = 10,000) noise patterns from a N(0,1) generator.
>>Reduce the sim results to a bias/standard deviation pair.  Observe that
>>the results have converged because increasing the runsize (with
>>additional noise patterns, say to 50,000 total) does not materially
>>affect the reduced results.

>In my response, I'm going to ignore the precision limitation of large
>runsizes I seem to have found, not that it is insignificant, but it is not
>germane to the principles being discussed.

Agreed.

>Now you have stepped off the presumption that one set of results is
>sufficient.  My recommendation was if you were going to do 50,000 runs, do
>five 10,000 runs so you have five sets of results to compare.  You can
>calculate the overall average and standard deviation from the five sets.

This is equivalent, or almost equivalent.  In any case, for anything
other than a quick evaluation, there needs to be a rational way
of concluding that your runsize is large enough to get to
statistically accurate results.  So I think we're in agreement here.

>>2) Now evaluate the same frequency-estimation algorithm at the same
>>frequency/phase input to the DFT at a different SNR, say SNR2.

>>Do you use the same 10,000 noise patterns, or a different 10,000?

>>If you use the same patterns, starting with the same 10,000 patterns
>>as in case 1) and confirm that this has converged by increasing the
>>runsize to 50,000 (while still using the same additional 40,000 patterns
>>as used in case 1) you then know that any unexpected results are the
>>result of algorithm behavior.

>I think there is an underlying assumption of linearity in this argument.

Almost, sort of. There's an assumption that "unexpected results" need to be
investigated.  As you suggest, it might be "unexpected" if the trend
vs. SNR of the standard deviation in the reduced results does not track
the RMS level of the noise as you change SNR.  But this is algorithm-
dependent, even investigation-dependent, so in the above I did not
specifically define this as what is "unexpected".

But at a higher level, I am suggesting the hypothesis is of the form:

"Does the unexpected behavior [however that is defined] result form
an insufficiency in the algorithm, or does it result from
outliers in the noise patterns?"

This is, I think, an attempt to formalize (slightly) the type
of distinctions you and I are discussing.

>I have been careful to say that the increase in the standard deviations
>seems to be roughly proportional (linear) to the RMS of the noise.  This
>comes from the analytical view where
>
>VB(Z+E)/V(Z+E) = VBZ/VZ + (Misc terms)E + H.O.T.
>
>Where H.O.T = Higher Order Terms.

>At some point, when E gets larger, the H.O.T. terms become significant.

Yes, this is a very good level of detail for understanding how
it is behaving.

>>[snip]
>>So I say you have a better ability to distinguish algorithm behavior
>>from the effects of possibly pathological noise patterns if you do not
>>change the noise patterns between experiment 1) and experiment 2).

> I see what you are saying, but we are also discussing whether the same
> pattern should be used row by row as well.

Yes, we've discussed both.

>If you look at the bias of the 2 Bin Complex formula applied to a real
>signal, it is fairly complicated.  Now it shows up in the noiseless case
>so you know it is not due to the noise.  Suppose though that somehow your
>formula reacted to noise in a biased way.  If so, having multiple sets of
>noise as I suggested would show that bias, and that bias would appear no
>matter what noise patterns were thrown against it.  However, if the bias
>appears under the circumstance you describe at SNR1, and then reappears at
>SNR2, you still can't distinguish whether it was the noise or the formula
>creating it.  If the bias appears with totally different noise sets, you
>can conclude, but not be certain, that it was due to the formula.

I see what you are saying also.

>>However, opinions on this are bound to vary, and could depend on the
>>surrounding scenario and requirements, so I am not preaching this
>>as gospel; it just seems logical and scientific to me.

>Being open minded seems to be a quality lacking among many of the posters
>here, there is plenty of gospel preached.  For what I have been trying to
>show about the behavior of the formulas, I think either method would
>suffice.  I will code it your way and post the results, we can take the
>discussion from there.
>
>>(For similar reasons, and for repeatability, you want to use seeded
>>noise, and not randomized noise, as mentioned by other contributors to
>>this thread.)

>I posted my code in my response to that other contributor.  I do indeed
>use a seeded PRG.

Good, I'll take a look at your code when I have a chance.

Steve
```
```On Thursday, June 11, 2015 at 9:06:35 AM UTC-7, Cedron wrote:
...
> Being open minded seems to be a quality lacking among many of the posters
> here, there is plenty of gospel preached.  For what I have been trying to
> show about the behavior of the formulas, I think either method would
> suffice.  I will code it your way and post the results, we can take the
> discussion from there.

Everyone has a right to an opinion. Everyone has the opportunity for their opinions to be knowledgeable. Some people think that their opinions on topics they are not knowledgeable about are as significant as their opinions on topics they are well practiced in. People who disagree with that notion are then mistaken for being closed-minded.

There has been a lot of "gospel" practiced by those here. People have pointed you to some of that practice, but you have mistaken it for haystacks.

>
> >(For similar reasons, and for repeatability, you want to use seeded
> noise,
> >and not randomized noise, as mentioned by other contributors to this
> >
> >Steve
>
> I posted my code in my response to that other contributor.  I do indeed
> use a seeded PRG.

So I take it that you believe that the code you posted is responsive to the issues for which generator seeding is used.  The code that performs the seeding is:

-------------------------------------------
Here is how the PRG is seeded:
//--- Seed the random number generator

srand( (int) time( NULL ) );
-------------------------------------------

>
> You are kind to call him a contributor, I see him just as a heckler.  He
> lost a considerable amount of credibility when he said: "For the same
> signal duration, increasing the sample frequency and transform size will
> separate the two components of the real signal and reduce the "self
> interference" of a real signal with a complex estimation algorithm."
>
> Ced

Yes, that statement of mine was in error. The "sample frequency and" should be removed. Fortunately you were able to correct it and to understand the intent of the sentence and the point of the intended change.

In your noise generation you used "time(NULL)" as a seed. This serves to make the actual seed value used impossible to determine by both the original coder and everyone else. That makes it impossible for anyone to repeat the run, which is the purpose of the seeding. Was this your intent? Was this a simple lapse, a failure to understand noise generators or a failure to comprehend the basic principles of testing algorithms with noise? How do you think hiding the seed value contributes to the discussion?

Dale B. Dalrymple
```
```dbd  <d.dalrymple@sbcglobal.net> wrote:

>On Thursday, June 11, 2015 at 9:06:35 AM UTC-7, Cedron wrote:

>> Pope wrote

>>> (For similar reasons, and for repeatability, you want to use seeded
>>> noise, and not randomized noise, as mentioned by other contributors

>> I posted my code in my response to that other contributor.  I do indeed
>> use a seeded PRG.

>So I take it that you believe that the code you posted is responsive to
>the issues for which generator seeding is used.  The code that performs
>the seeding is:
>
>-------------------------------------------
>Here is how the PRG is seeded:
>//--- Seed the random number generator
>
>         srand( (int) time( NULL ) );
>-------------------------------------------

In my terminology, this is "randomized", and, I think we all agree,
inappropriate for most purposes in comm system performance simulations
including the simulations at hand.

The "randomize" terminology dates way back from BASIC.  I do not know if
anyone uses the term in this way anymore.

There are times, when implementing a comm system, you actually
do want to randomize some parameter.  A good example is the
random backoff in a CSMA algorithm.  If you use a deterministic
algorithm to generate the backoffs, there is the possibility
of two nodes being by chance at the same point in the deterministic
sequence, and always colliding from then on.

I was faced with this situation once when designing a packet
radio.  I didn't have a really good source of randomized values for
the CSMA backoffs, but I ultimately utilized a combination of the counter
updated by the timer interrupt, and the most recent return address on
the stack, and so far as I was able to determine, this was sufficient.

More critical are cryptographic requirements for random keys.  The
first few versions of PGP tried to randomize but the methods were
not sufficient, such that for the first five years or so of PGP's
existence it was easily cracked despite the strength of the
underlying crypto algorithms themselves.  Oops.

Steve
```
```>
>There has been a lot of "gospel" practiced by those here. People have
>pointed you to some of that practice, but you have mistaken it for
>haystacks.
>
Forests and trees.

Let's step up a level.  I derived and presented three frequency
determination formulas.  On the most difficult one, I have written a blog
article detailing its derivation.

It is my opinion that the real significance of these formulas is
theoretical.  It seems to be your opinion, and that of many here, that
theoretical isn't very important; it's practicality that matters.  So I
wrote a testing program to demonstrate to you "real worlders" that the
formulas work, and work well.

Now you are just quibbling about whether the demonstration program was up
to your testing conventions.  Guess what?  It doesn't matter.  Either my
formulas exist in "the gospel" or they don't.  As I have explained, I have
searched for them without finding them.  I have asked experts and been
directed to literature that, although covering the same topic, doesn't
include them.

So you have seen my formulas.  Have you seen them elsewhere?  Never mind
how I tested them.  Julien and Jacobsen have both tested them
independently and their results are similar to mine.  You have not
questioned their testing methodology, I presume then it meets your
standards.  The formulas work well, they ought to be in "the gospel".

I have asked you, specifically as an advocate of window functions, to
provide one that outperforms my formulas.  You have not.  Until you can, I
don't see any point in jumping through your hoops.

I asked your opinion on the issue that Steve Pope and I are discussing.
You have not given it; no big deal, it isn't an obligation.  But since you
profess such expertise on proper testing, I would think it would be an
easy one to answer.

>
>So I take it that you believe that the code you posted is responsive to
>the issues for which generator seeding is used.  The code that performs
>the seeding is:
>
>-------------------------------------------
>Here is how the PRG is seeded:
>//--- Seed the random number generator
>
>         srand( (int) time( NULL ) );
>-------------------------------------------
>

The PRG is a single seeded generator.  The seeding code could easily be
replaced with an input value or hard coded.  As per my discussion with
Steve Pope, I prefer many different patterns, thus I test with it as
stated.  You don't have my testing code anyway, so repeatability is a moot
issue.  Guess what?  The results don't change much run after run.

>>
>> You are kind to call him a contributor, I see him just as a heckler.
>> He lost a considerable amount of credibility when he said: "For the
>> same signal duration, increasing the sample frequency and transform
>> size will separate the two components of the real signal and reduce
>> the "self interference" of a real signal with a complex estimation
>> algorithm."
>>
>> Ced
>
>Yes, that statement of mine was in error. The "sample frequency and"
>should be removed. Fortunately you were able to correct it and to
>understand the intent of the sentence and the point of the intended
>change.
>

It still wouldn't be correct.  The phrase that needs to be removed is "For
the same signal duration".  In order to increase resolution, you need to
increase the duration.  This is independent of the sample count.  On the
other hand, the sample frequency, which for a fixed duration corresponds
to the sample count, is inextricably linked to the transform size.

>In your noise generation you used "time(NULL)" as a seed. This serves
>to make the actual seed value used impossible to determine by both the
>original coder and everyone else. That makes it impossible for anyone
>to repeat the run, which is the purpose of the seeding. Was this your
>intent? Was this a simple lapse, a failure to understand noise
>generators or a failure to comprehend the basic principles of testing
>algorithms with noise? How do you think hiding the seed value
>contributes to the discussion?
>
>Dale B. Dalrymple

The encryption folks would laugh at that statement.  Again, all this talk
about proper randomization, Steve's and my discussion, repeatability, and
proper testing is beside the point.

As far as I can tell I have come up with a major innovation.  Rather than
trying to shoot it down, you should be exploring it.  That's what I mean
by being closed-minded: not open to the possibility that an improved
method exists.  Many of the people here, in which I include you, seem to
behave as if their doctrine needs to be defended rather than seeking out
new innovations.  This is more akin to how a religious organization
behaves.

I stand by my heckler characterization.  I'm tired of it.

Ced
```
```>dbd  <d.dalrymple@sbcglobal.net> wrote:
>
>>On Thursday, June 11, 2015 at 9:06:35 AM UTC-7, Cedron wrote:
>
>>> Pope wrote
>
>>>> (For similar reasons, and for repeatability, you want to use seeded
>>>> noise, and not randomized noise, as mentioned by other contributors
>
>>> I posted my code in my response to that other contributor.  I do
>>> indeed use a seeded PRG.
>
>>So I take it that you believe that the code you posted is responsive to
>>the issues for which generator seeding is used.  The code that performs
>>the seeding is:
>>
>>-------------------------------------------
>>Here is how the PRG is seeded:
>>//--- Seed the random number generator
>>
>>         srand( (int) time( NULL ) );
>>-------------------------------------------
>
>In my terminology, this is "randomized", and I think we all agree,
>inappropriate for most purposes in comm system performance simulations
>including the simulations at hand.
>
>The "randomize" terminology dates way back from BASIC.  I do not know if
>anyone uses the term in this way anymore.
>
>There are times, when implementing a comm system, you actually
>do want to randomize some parameter.  A good example is the
>random backoff in a CSMA algorithm.  If you use a deterministic
>algorithm to generate the backoffs, there is the possibility
>of two nodes being by chance at the same point in the deterministic
>sequence, and always colliding from then on.
>
>I was faced with this situation once when designing a packet
>radio.  I didn't have a really good source of randomized values for
>the CSMA backoffs, but I ultimately utilized a combination of the counter
>updated by the timer interrupt, and the most recent return address on
>the stack, and so far as I was able to determine, this was sufficient.
>
>More critical are cryptographic requirements for random keys.  The
>first few versions of PGP tried to randomize but the methods were
>not sufficient, such that for the first five years or so of PGP's
>existence it was easily cracked despite the strength of the
>underlying crypto algorithms themselves.  Oops.
>
>
>Steve

Just like the precision limitation, I don't think the seeding of the PRG,
or even the noise type, is germane to the issue of whether the same noise
pattern should be used for each row/level of the test to better determine
the behavior of the formulas.

There are many tricks for using computer state variables to reseed PRGs on
the fly so any encryption based on them can be cracked.  You have
mentioned some of them.  This is a different topic suitable for a
different newsgroup.

Sorry, I don't have the program recoded yet for pattern reuse.

Ced
```