Reply by Raymond Toy January 19, 2006
>>>>> "Gordon" == Gordon Sande <g.sande@worldnet.att.net> writes:
Gordon> Ask google about Box-Muller transformation. Turns uniforms into
Gordon> Gaussians exactly. There are a couple of related transforms that
Gordon> are also possible. Then there are the various schemes put forward
Gordon> by Marsaglia.

I thought I'd add a few of the other relatively simple methods for
generating Gaussians.

1. Box-Muller using trig functions, as already given.

2. Box-Muller using a rejection technique (the polar method):

   a. u1, u2 = uniform random variables on [-1, 1]
   b. w = u1^2 + u2^2
   c. Repeat a-b until 0 < w < 1.
   d. s = sqrt(-2*log(w)/w)
   e. x1 = s*u1, x2 = s*u2

   This is faster than 1 if your trig functions are slow.

3. Ahrens' NA algorithm:

   a. u = uniform [0, 1]
   b. B = 0 if u < 1/2, otherwise B = 1
   c. Generate an exponential variate E. (One method is -log(u2), u2 being
      uniform [0, 1], but there are many others.)
   d. S = E + E
   e. Generate a Cauchy variate C. (One method is tan(pi*(u2 - 1/2)), where
      u2 is uniform [0, 1], but it's possible to use the bits of u above
      for this.)
   f. X = sqrt(S/(1 + C*C))
   g. Y = C*X
   h. If B = 0, return X, otherwise return -X. On the next call, we can
      just return Y.

4. Ratio of uniforms. Knuth describes this:

   a. u = uniform [0, 1], u2 = uniform [0, 1]
   b. x = sqrt(8/e)*(u2 - 0.5)/u
   c. xs = x^2
   d. If xs <= 5 - 4*exp(1/4)*u, return x (quick acceptance).
   e. If xs >= 1.4 + 4*exp(-1.35)/u, reject and go back to a.
   f. If xs <= -4*log(u), return x; otherwise go back to a.

5. Marsaglia's Ziggurat method. This is a bit complex, so I won't
   describe it here.

Of these, I've found Marsaglia's method to be the fastest, but it does
require storage of 127 numbers for a table.

Ray
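For concreteness, here is a minimal sketch of method 2 (the polar/rejection
form) in Python; the function name and the use of the standard random module
are just illustrative choices, and nothing here is tuned for speed:

import math
import random

def gauss_polar():
    """Polar (rejection) variant of Box-Muller: returns two independent
    standard normal deviates without calling any trig functions."""
    while True:
        u1 = random.uniform(-1.0, 1.0)
        u2 = random.uniform(-1.0, 1.0)
        w = u1 * u1 + u2 * u2
        if 0.0 < w < 1.0:   # keep only points strictly inside the unit circle
            break
    s = math.sqrt(-2.0 * math.log(w) / w)
    return s * u1, s * u2

Both returned values are usable, so in practice one would cache the second
and hand it back on the next call.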
Reply by Jerry Avins January 18, 2006
Gordon Sande wrote:
> On 2006-01-18 16:31:48 -0400, Jerry Avins <jya@ieee.org> said:
>
>> Gordon Sande wrote:
>>
>> ...
>>
>>> Ask google about Box-Muller transformation. Turns uniforms into
>>> Gaussians exactly. There are a couple of related transforms that
>>> are also possible. Then there are the various schemes put forward
>>> by Marsaglia.
>>
>> Thanks for contributing to my education.
>>
>> I think "Gaussian exactly" is a bit strong, though.
>
> The Box-Muller transformation is exact, unlike the piecewise
> polynomial that arises from summing 12 uniforms and subtracting 6.
> The latter's density is zero for absolute values above 6.
>
>> Few computers can exactly represent the tails. :-)
>
> If the "uniforms" are not exact because they are discrete then the
> resulting "Gaussians" will be discrete as well. The discreteness is
> not caused by the transformation. If you choose to keep only the
> Gaussian tails then you are doing an experiment that is very prone to
> expose the discreteness of the uniforms. If you keep the middle of the
> Gaussians then you will not notice the discreteness. The issue is quite
> general, as it also applies to exponential variables that might be used
> for low rate Poisson processes with a long time between events. Etc. Etc.
>
> If you want to chase the Gaussian tails then you had better get some
> multiple precision software and start with a very high precision
> uniform such as the Mersenne Twister or one of the other high order
> recurrence generators.
Of course you're right. I tend to bridle at the term "exact". Someone who
asked for a hole in the exact center of something, refusing to give a
tolerance because any error is too large, sensitized me. (Told that the
best my setup could do was .2 mil, he asked what a mil was. When told, he
settled for 1 mil, the precision of my three-jaw chuck.)

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by Gordon Sande January 18, 2006
On 2006-01-18 16:31:48 -0400, Jerry Avins <jya@ieee.org> said:

> Gordon Sande wrote:
>
> ...
>
>> Ask google about Box-Muller transformation. Turns uniforms into
>> Gaussians exactly. There are a couple of related transforms that
>> are also possible. Then there are the various schemes put forward
>> by Marsaglia.
>
> Thanks for contributing to my education.
> I think "Gaussian exactly" is a bit strong, though.
The Box-Muller transformation is exact, unlike the piecewise polynomial that
arises from summing 12 uniforms and subtracting 6. The latter's density is
zero for absolute values above 6.
> Few computers can exactly represent the tails. :-)
If the "uniforms" are not exact because they are discrete then the
resulting "Gaussians" will be discrete as well. The discreteness is not
caused by the transformation. If you choose to keep only the Gaussian
tails then you are doing an experiment that is very prone to expose the
discreteness of the uniforms. If you keep the middle of the Gaussians
then you will not notice the discreteness. The issue is quite general, as
it also applies to exponential variables that might be used for low rate
Poisson processes with a long time between events. Etc. Etc.

If you want to chase the Gaussian tails then you had better get some
multiple precision software and start with a very high precision uniform
such as the Mersenne Twister or one of the other high order recurrence
generators.
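To put a rough number on that: a 32-bit uniform can get no closer to zero
than 2^-32, which caps how far Box-Muller can ever reach into the tail. A
quick check in Python:

import math

# Smallest nonzero value of a 32-bit uniform, and the largest magnitude
# sqrt(-2 ln u) can produce from it.
u_min = 2.0 ** -32
print(math.sqrt(-2.0 * math.log(u_min)))   # roughly 6.66 standard deviations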
Reply by Jerry Avins January 18, 2006
Naebad wrote:
> "Jerry Avins" <jya@ieee.org> wrote in message
> news:wdqdnQYS9OboxVPeRVn-qg@rcn.net...
>
>> Naebad wrote:
>>
>> What is the DC level of a single average? Assuming that the RNG is
>> itself zero mean, you can omit the subtraction step.
>>
>> Jerry
>> --
>> Engineering is the art of making what you want from things you can get.
>
> I think it's 6 but it's so long since I did this...
It depends on the range of the averaged random variables. If they run from
-1 to 1, then the mean is zero; if from 0 to 1, then it's half the number
averaged.

Jerry
--
Engineering is the art of making what you want from things you can get.
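A quick numerical check of that DC level (a sketch using Python's random
module; the sample count is arbitrary):

import random

# Empirical mean of a sum of 12 uniforms, for both ranges discussed here.
N = 100000
mean_01 = sum(sum(random.random() for _ in range(12)) for _ in range(N)) / N
mean_pm1 = sum(sum(random.uniform(-1, 1) for _ in range(12)) for _ in range(N)) / N
print(mean_01)    # close to 6 (12 * 0.5), hence the "subtract 6" step
print(mean_pm1)   # close to 0, so no subtraction is needed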
Reply by Jerry Avins January 18, 2006
Gordon Sande wrote:

   ...

> Ask google about Box-Muller transformation. Turns uniforms into
> Gaussians exactly. There are a couple of related transforms that
> are also possible. Then there are the various schemes put forward
> by Marsaglia.
Thanks for contributing to my education. I think "Gaussian exactly" is a
bit strong, though. Few computers can exactly represent the tails. :-)

From http://mathworld.wolfram.com/Box-MullerTransformation.html:

  Eric W. Weisstein, Box-Muller Transformation. A transformation which
  transforms from a two-dimensional continuous uniform distribution to a
  two-dimensional bivariate normal distribution (or complex normal
  distribution). If x_1 and x_2 are uniformly and independently
  distributed between 0 and 1, then z_1 and z_2 as defined below have a
  normal distribution with mean mu = 0 and variance sigma^2 = 1.

    z_1 = sqrt(-2 ln x_1) cos(2 pi x_2)
    z_2 = sqrt(-2 ln x_1) sin(2 pi x_2)

It seems that it needs floating point, two calls to random(), a log, a
square root, and a sine (or cosine). In some cases, half a dozen or so
calls to an integer random() and as many adds may be preferable.
Whatever, it's real neat. I'll remember it.

Jerry
--
Engineering is the art of making what you want from things you can get.
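Those two lines translate almost directly into code. A minimal sketch in
Python (the function name and the use of the standard random module are
illustrative choices; x_1 is shifted away from zero so the log stays finite):

import math
import random

def box_muller():
    """Classic Box-Muller: two uniforms in, two standard normal deviates out."""
    x1 = 1.0 - random.random()   # in (0, 1], keeps the log finite
    x2 = random.random()
    r = math.sqrt(-2.0 * math.log(x1))
    return r * math.cos(2.0 * math.pi * x2), r * math.sin(2.0 * math.pi * x2)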
Reply by Randy Yates January 18, 2006
y = randn(1);

(The "1" means "1x1" array, i.e., a scalar).

--RY

Reply by Naebad January 18, 2006
"Jerry Avins" <jya@ieee.org> wrote in message
news:wdqdnQYS9OboxVPeRVn-qg@rcn.net...
> Naebad wrote:
> What is the DC level of a single average? Assuming that the RNG is
> itself zero mean, you can omit the subtraction step.
>
> Jerry
> --
> Engineering is the art of making what you want from things you can get.
I think it's 6 but it's so long since I did this...

Naebad
Reply by Vladimir Vassilevsky January 18, 2006

allanherriman@hotmail.com wrote:


> Start with two independent rectangular on [0,1] rvs, R1 and R2.
>
> Calculate
>   Z1 = sqrt(-2 ln (R1)) cos(2 pi R2)
>   Z2 = sqrt(-2 ln (R1)) sin(2 pi R2)
>
> Z1 and Z2 are gaussian rvs, with zero mean and sd of 1.
>
> This of course uses nasty logs and trig. However, on a modern FPU,
> they should be pretty quick, perhaps much faster than generating and
> summing 12 good rvs.
>
> Accuracy in the tails still sucks somewhat. This is determined by the
> precision of R1. The 'tails' come from really small R1 values (since
> sin/cos are limited to +/-1, the log is the only thing that can
> generate large numbers, and this happens when its argument is close to
> zero). More bits in R1 help.
>
> The last time I played with these things, I found that 32 bits wasn't
> anywhere near enough. I think I might have been doing BER simulations
> on a modem, and the tails mattered a lot!
I fully agree with your notes on the tails of the generated Gaussian
distribution, especially if it is critical to have good statistics in the
range beyond 4x the standard deviation. Double precision is required for
R1, and a good 64-bit random number generator is required as well.

Indeed, it is faster to generate two Gaussian numbers using single
precision floats and rand(), and add those to get a better distribution.
Single precision works well up to 3x SD, so the sum should be good enough
for up to 6x SD.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
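A sketch of that idea in Python; random.gauss here merely stands in for the
cheap single-precision source, and the 1/sqrt(2) rescaling is an extra step
(not mentioned above) to keep the result at unit variance:

import math
import random

def gauss_sum_of_two():
    """Add two independent standard normals and rescale by 1/sqrt(2) so the
    result keeps unit variance (the raw sum would have variance 2)."""
    g1 = random.gauss(0.0, 1.0)   # stand-in for the first (cheap) generator
    g2 = random.gauss(0.0, 1.0)   # stand-in for the second
    return (g1 + g2) / math.sqrt(2.0)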
Reply by Randy Yates January 18, 2006
allanherriman@hotmail.com writes:
> [...]
> Gordon Sande wrote:
>> Turns uniforms into Gaussians exactly.
>
> I argue that your statement is only true for infinite precision
> arithmetic. Practical precisions may result in a poor approximation.
> I suggest that the user works out what precision they actually need.
Ayup. B-M "only" requires ln(), sqrt(), and sin/cos. What a fixed-point
nightmare! Summing a few uniforms is MUCH easier (and probably just as
accurate or more so for a given computation time).

--
%  Randy Yates                  % "Bird, on the wing,
%% Fuquay-Varina, NC            %  goes floating by
%%% 919-577-9882                %  but there's a teardrop in his eye..."
%%%% <yates@ieee.org>           % 'One Summer Dream', *Face The Music*, ELO
http://home.earthlink.net/~yatescr
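A sketch of the sum-of-uniforms idea kept entirely in integer arithmetic
(the function name, word width, and term count are arbitrary illustrative
choices; final scaling to the desired units is left to the application):

import random

def gauss_int(bits=16, n=12):
    """Approximate Gaussian using integer adds only: sum n uniform integers
    in [0, 2**bits) and subtract the nominal DC offset n * 2**(bits - 1).
    With n = 12 the variance is roughly 2**(2 * bits)."""
    total = sum(random.getrandbits(bits) for _ in range(n))
    return total - n * (1 << (bits - 1))

The tails stop dead at +/- 6 sigma, but there are no logs, roots, or trig
calls anywhere, which is the point being made here.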
Reply by January 18, 2006
Gordon Sande wrote:
> On 2006-01-18 12:15:43 -0400, Jerry Avins <jya@ieee.org> said:
>
> > Gordon Sande wrote:
> >> On 2006-01-18 11:03:27 -0400, Jerry Avins <jya@ieee.org> said:
> >>
> >>> Andor wrote:
> >>>
> >>>> Naebad wrote:
> >>>>
> >>>>>> can any one please suggest me how to generate gaussian white noise
> >>>>>> with zero mean and variance 1 in matlab?
> >>>>>>
> >>>>>> thanks.
> >>>>>
> >>>>> If you have uniform distributed noise you can take say 12 samples ...
> >>>>
> >>>> There's that number again. There must be something about it that I
> >>>> don't know about :-).
> >>>
> >>> Six works pretty well too. 13 is better; you have to stop somewhere.
> >>
> >> The variance of the uniform is 1/12 so 12 leads to a variance of 1
> >> for the sum. Adequate as an excuse but a bit weak as a reason.
> >> The resulting approximate Gaussian should only be used until you
> >> have 10 minutes to do it better.
> >
> > Ten minutes and the necessary smarts. What's a better way?
>
> Ask google about Box-Muller transformation. Turns uniforms into
> Gaussians exactly. There are a couple of related transforms that
> are also possible. Then there are the various schemes put forward
> by Marsaglia.
Ah yes. Box-Muller was the name I was trying to remember.
> Turns uniforms into Gaussians exactly.
I argue that your statement is only true for infinite precision
arithmetic. Practical precisions may result in a poor approximation. I
suggest that the user works out what precision they actually need.

Regards,
Allan