On Mon, 22 Sep 2008 16:51:54 -0700, Jerry Avins wrote:
> On Sep 22, 7:25 pm, spop...@speedymail.org (Steve Pope) wrote:
>> Greg Berchin <gberc...@comicast.net> wrote:
>>
>> >On Mon, 22 Sep 2008 20:44:46 +0000 (UTC), spop...@speedymail.org
>> >>Whereas the values of the discrete Fourier transform of a white noise
>> >>signal have a normal distribution.
>> >I'm not doubting the statements from you and Vladimir, but I've never
>> >seen the derivation of this. Where did you find this info?
>>
>> Let's see... the DFT is a linear operation, that is to say, any given
>> output of a DFT is a linear combination of the inputs to the DFT, so if
>> the inputs are all normal, then any output is normal, since the sum of
>> normal variables is normal.
>>
>> This applies to both the real part of an output, and the imaginary part
>> of an output. They would each be normal.
>
> Noise needn't be normally distributed in order to be white. Why would
> its parts be Gaussian?
>
> Jerry
Good point. But I'll bet that most samples, after "fft-izing", will look
Gaussian. If they don't, you could use a longer sample (for stationary
noise). Only pathological cases (like the most accurate models of
atmospheric noise at LF and MF) would have the requisite infinite
variance to make this expedient less than speedy.
--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com
Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
Reply by Tim Wescott ● September 23, 2008
On Mon, 22 Sep 2008 19:16:44 -0400, Greg Berchin wrote:
> On Mon, 22 Sep 2008 20:44:46 +0000 (UTC), spope33@speedymail.org (Steve
> Pope) wrote:
>
>>Whereas the values of the discrete Fourier transform of a white noise
>>signal have a normal distribution.
>
> I'm not doubting the statements from you and Vladimir, but I've never
> seen the derivation of this. Where did you find this info?
>
> Greg
Well, each bin of the FFT is a weighted sum of the original samples --
only the weights have a special relationship to the weights of the other
bins.
So, the weighted sum of a series of iid Gaussian variables is...
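Tim's "weighted sum" argument is easy to check numerically. Below is a
minimal stdlib-Python sketch (the transform length, bin index, trial count,
and seed are arbitrary choices, not from the thread): it estimates the mean
and variance of the real part of one DFT bin of iid Gaussian noise, which
linearity predicts to be Gaussian, zero-mean, with variance N*sigma^2/2.

```python
import cmath
import math
import random

random.seed(1)

N = 64        # DFT length (arbitrary for this sketch)
k = 5         # an arbitrary bin index (not 0 or N/2)
trials = 4000
sigma = 1.0   # standard deviation of the time-domain samples

# Each trial: draw N iid Gaussian samples and compute bin k of the DFT
# directly as the weighted sum  X[k] = sum_n x[n] * exp(-2j*pi*k*n/N).
twiddle = [cmath.exp(-2j * math.pi * k * n / N) for n in range(N)]
re_parts = []
for _ in range(trials):
    x = [random.gauss(0.0, sigma) for _ in range(N)]
    Xk = sum(xn * w for xn, w in zip(x, twiddle))
    re_parts.append(Xk.real)

# A sum of independent Gaussians is Gaussian, so Re{X[k]} should be
# zero-mean with variance N*sigma^2/2 (the cosine weights carry half
# the energy; the sine weights carry the other half).
mean = sum(re_parts) / trials
var = sum((v - mean) ** 2 for v in re_parts) / trials
print(mean, var, N * sigma ** 2 / 2)
```

This only checks the first two moments, of course, but that is the part of
the claim the linearity argument pins down exactly.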
Reply by Tim Wescott ● September 23, 2008
On Mon, 22 Sep 2008 07:31:56 -0700, Greg Berchin wrote:
> Vladimir Vassilevsky wrote:
>
>> This is not right. That way you are generating a sum of synchronously
>> phase-manipulated signals with phase breaks at the same point. There
>> will be artifacts at the edges of the subsequent FFT blocks, and you
>> will see it in the frequency domain, too. You have to make the
>> magnitude random in addition to the random phase.
>
> I think that your description of the problem is right but your solution
> is wrong. The resulting "white" noise will be pseudorandom with period
> equal to the FFT size. But using a random FFT magnitude won't change
> that. The Fourier Transform of white noise has constant magnitude and
> random phase.
>
> Greg
Nuh uh. The _expected_ value of the Fourier transform of white noise has
constant magnitude and random phase, but not the _actual_ value.
To choose a simple example, the FFT of a series of zero-mean, iid
Gaussian samples will have bins whose real and imaginary parts are all
iid Gaussian variables. But they will _not_ have constant magnitude.
This concept extends to the continuous-time Fourier transform, but then
if you choose to start with truly white noise you have to wrestle with
all the infinities.
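A quick way to see Tim's distinction between the _expected_ and the
_actual_ value: transform a single realization of Gaussian noise and look
at the spread of the per-bin magnitudes. A stdlib-Python sketch (the size
and seed are arbitrary choices):

```python
import cmath
import math
import random

random.seed(2)

N = 256  # one realization of N iid Gaussian samples
x = [random.gauss(0.0, 1.0) for _ in range(N)]

# Naive O(N^2) DFT of this single noise realization (stdlib only).
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

# The per-bin magnitudes of one realization scatter widely; only their
# expected value is flat across frequency.
mags = [abs(Xk) for Xk in X]
spread = max(mags) / min(mags)
print(spread)
```

The ratio of the largest to the smallest bin magnitude comes out nowhere
near 1, which is the whole point: flat is a property of the average, not
of any one draw.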
Reply by emre ● September 23, 2008
>recommend other signal processing texts, such as Rich Lyons' for starters,
I meant Richard Lyons... Sorry.
Reply by emre ● September 23, 2008
>If you would like to catch up on these things, I believe you could use
>Schaum's Outlines, on Probability, Random Variables, and Random Processes,
>and Digital Signal Processing. They are both affordable and easy to
>follow.
Actually, I haven't read the latter one above, but people seem to be happy
with Schaum's Outline of Signal Processing (not Digital; there is no review
of that one at Amazon). Depending on the time you can commit, I would also
recommend other signal processing texts, such as Rich Lyons' for starters,
and possibly Oppenheim's after that...
As far as the topic of Probability is concerned, Schaum's Outline may
indeed be a good starting point. You might later choose to read H. Stark's
"Probability, Random Processes, and Estimation Theory".
Emre
Reply by emre ● September 23, 2008
>Sorry but I don't use Matlab (I code all in C) and my mathematics
>background is a bit weak. What do you mean by "sigma" in the context
>of "sigma/sqrt(2)"? Does it have anything to do with the summation
>operator?
sigma (the lowercase Greek letter, not the capital sigma of the summation
operator) is usually used to denote the standard deviation (the square root
of the variance) of a normal (Gaussian) random variable.
>Also, if you generate the real and imaginary parts separately,
>wouldn't the distribution of the samples in the complex plane be a
>square?
Not quite. The support would be a square if the real and imaginary parts
were uniform; with independent Gaussian parts of equal variance the
distribution is "circularly symmetric": its density depends only on the
distance from the origin, not on the angle. It is interesting, but it is
the case that real and imaginary parts of the noise turn out to be
independent in many cases, most prominently for thermal noise.
If you are trying to simulate thermal noise in your measurement device,
then the Gaussian distribution is likely the right thing to use. If your
original signal is real, then you have to take care when generating
directly in the frequency domain, because the FT of a real signal has
conjugate symmetry: you should generate half of the samples (say, those
corresponding to positive frequency indices) and set the other half
according to the conjugate symmetry. You can refer to any good signal
processing book for this property.
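Emre's conjugate-symmetry recipe, sketched in stdlib Python (the transform
size and the naive inverse DFT are illustrative choices, not from the
thread): generate the positive-frequency half, mirror it, and verify that
the inverse transform comes out real up to round-off.

```python
import cmath
import math
import random

random.seed(3)

N = 16  # small even transform size for the sketch
g = lambda: random.gauss(0.0, 1.0)

# Generate only the positive-frequency bins, then enforce conjugate
# symmetry X[N-k] = conj(X[k]); bins 0 (DC) and N/2 (Nyquist) must be
# purely real for the time-domain signal to come out real.
half = [complex(g(), g()) for _ in range(N // 2 - 1)]   # k = 1 .. N/2-1
X = [complex(g(), 0.0)]                                  # k = 0 (DC)
X += half
X.append(complex(g(), 0.0))                              # k = N/2 (Nyquist)
X += [z.conjugate() for z in reversed(half)]             # k = N/2+1 .. N-1

# Naive inverse DFT: x[n] = (1/N) * sum_k X[k] * exp(+2j*pi*k*n/N)
x = [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
     for n in range(N)]

residual = max(abs(v.imag) for v in x)
print(residual)  # at round-off level: the synthesized signal is real
```

In a real implementation you would use an FFT library's real inverse
transform, which takes the half-spectrum directly and enforces the
symmetry for you.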
If you would like to catch up on these things, I believe you could use
Schaum's Outlines, on Probability, Random Variables, and Random Processes,
and Digital Signal Processing. They are both affordable and easy to
follow.
Hope this helps.
Emre
Reply by emre ● September 23, 2008
>If it is not constant magnitude, then I think you have a dilemma:
>at what frequencies would it have above-average magnitude?
>Logically there can't be any.
As Jerry just pointed out, if the real and imaginary parts are independent
Gaussian (with the same variance), then the magnitude is a Rayleigh-distributed
random variable. It is not constant. The precise statement is that the
average (expected value) of the magnitude is constant over frequency.
This is the characteristic of white noise. I believe this is what you
really meant.
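This can be checked with a short stdlib-Python sketch (the sample count and
seed are arbitrary choices): the magnitude of a complex value with
independent, equal-variance Gaussian real and imaginary parts is Rayleigh,
with mean sigma*sqrt(pi/2) rather than a constant value per draw.

```python
import math
import random

random.seed(4)

sigma = 1.0
trials = 20000

# Magnitude of a complex number whose real and imaginary parts are
# iid Gaussian with standard deviation sigma.
mags = [math.hypot(random.gauss(0.0, sigma), random.gauss(0.0, sigma))
        for _ in range(trials)]
mean_mag = sum(mags) / trials

# Rayleigh distribution: mean = sigma * sqrt(pi/2) ~= 1.2533 * sigma.
print(mean_mag, sigma * math.sqrt(math.pi / 2))
```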
Emre
Reply by Jerry Avins ● September 23, 2008
On Sep 23, 7:34 am, Greg Berchin <gberc...@sentientscience.com> wrote:
> emre wrote:
> > Are you saying that a one-to-one
> > transformation, that is the Fourier transform (FT), of a random signal is
> > not random? But this is a contradiction, since there is only one function
> > (scaled delta) that is the (inverse) FT of a constant function.
>
> There's a bit of apples and oranges comparison here. The real part
> and the imaginary part of the FT of a random signal may very well be
> zero-mean random (but not necessarily normally distributed). But the
> OP asked about magnitude and phase. The magnitude cannot be zero
> mean, since it is everywhere nonnegative, and its distribution will
> depend upon the distribution exhibited by the real and imaginary
> parts. The phase can be zero mean, but it is unlikely that it will
> exhibit any distribution other than uniform.
If both the real and imaginary parts are Gaussian, the magnitude will
be Rayleigh.
Jerry
Reply by Greg Berchin ● September 23, 2008
emre wrote:
> Are you saying that a one-to-one
> transformation, that is the Fourier transform (FT), of a random signal is
> not random? But this is a contradiction, since there is only one function
> (scaled delta) that is the (inverse) FT of a constant function.
There's a bit of apples and oranges comparison here. The real part
and the imaginary part of the FT of a random signal may very well be
zero-mean random (but not necessarily normally distributed). But the
OP asked about magnitude and phase. The magnitude cannot be zero
mean, since it is everywhere nonnegative, and its distribution will
depend upon the distribution exhibited by the real and imaginary
parts. The phase can be zero mean, but it is unlikely that it will
exhibit any distribution other than uniform.
Greg
Reply by Michel Rouzic ● September 23, 2008
emre wrote:
> As others have pointed out, you should not fix the magnitude. Try generating
> real and imaginary parts separately, and summing them as follows:
> x = sigma/sqrt(2) * ( randn(n,1) + i * randn(n,1) );
> in Matlab language. You can use this to characterize white noise in the
> "frequency domain" as you wish. (The variance of each element in the
> sequence x is sigma^2, make sure you use the correct value. In
> communications sigma^2/2 is used often for noise variance, and this can be
> confusing.)
>
> Emre
Sorry but I don't use Matlab (I code all in C) and my mathematics
background is a bit weak. What do you mean by "sigma" in the context
of "sigma/sqrt(2)"? Does it have anything to do with the summation
operator?
Also, if you generate the real and imaginary parts separately,
wouldn't the distribution of the samples in the complex plane be a
square?
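For readers who, like the poster, don't use Matlab: here is the same
recipe in stdlib Python (random.gauss plays the role of randn; the length
and sigma value are arbitrary choices). Porting it to C means drawing two
standard normals per sample, e.g. via the Box-Muller transform, and
scaling each by sigma/sqrt(2).

```python
import math
import random

random.seed(5)

n = 1024      # arbitrary length for the sketch
sigma = 2.0   # total standard deviation per complex sample

# Stdlib equivalent of  x = sigma/sqrt(2) * (randn(n,1) + i*randn(n,1)):
# each complex sample has variance sigma^2, split evenly between the
# real and imaginary parts.
scale = sigma / math.sqrt(2.0)
x = [complex(scale * random.gauss(0.0, 1.0), scale * random.gauss(0.0, 1.0))
     for _ in range(n)]

# Empirical check: for zero-mean noise, E|x|^2 should be close to sigma^2.
var = sum(abs(v) ** 2 for v in x) / n
print(var, sigma ** 2)
```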