
FFT, sampling rates and noise bandwidth.

Started by kyle October 23, 2005
Ok :)

I'm trying to simulate a DAQ system.  We have a fixed number of samples to
make from a signal source and we're trying to measure the amplitude of a
synchronously sampled sinusoidal signal.  I'm doing this with an FFT.

In my simulation I assume a noise power per hertz (N0) and scale it by
half the sampling rate, which assumes a perfect rectangular filter at the
Nyquist frequency (Fs/2); this gives me a total noise power.  I then take
the square root of this power and use it to scale a Gaussian random
variable (randn in Matlab).

The signal is mixed in additively as a constant magnitude sinusoid.  It is
synchronously generated with the sample number and is centre-cell.  (So
really it's only defined in terms of digital frequency, and scales with
the sampling rate).

The FFT magnitude is normalized by 1/(sqrt(N)*sqrt(Fs/2)), which lets you
read the noise as a Volts/root-Hz measurement.

Now the problem I have is with the following observations.

- Regardless of the sampling rate, the same number of samples are made for
each 'run' and FFT (it's a 65536-long signal)
- The sampling rate is only actually used to scale the variance of the
noise generator and then to normalize the FFT.
- The signal input into the time series is identical for every run.
- As the sampling rate increases, the noise bandwidth increases and the
noise power is supposed to scale accordingly. 
- However the end result is that the peak representing the injected
sinusoid drops relative to a fixed noise floor (which stays constant).

(I guess if I didn't normalize my FFT for Volts/root-Hz and just plotted
it scaled by 1/sqrt(N) I'd see the signal level remain constant and the
noise floor increase).

Now this makes sense based on what I've described: the only thing that
changes as I increase the sampling rate in the simulation is sigma^2, as
defined by N0*Fs/2.

My problem is understanding how this relates to the real world.  If we had
a real DAQ system that had a perfect (or at least very good) AA-filter at
Fs/2 and we increased the sampling rate (while appropriately scaling the
frequency of the sinusoid), would the same effect be noticed?  My
physical-world assumption is that the noise floor in Volts/root-Hz should
remain at a fixed level regardless of how fast we sample (so long as the
AA filter works properly) and the signal input into the system is
definitely always the same voltage.  So the FFT should remain identical as
the sampling rate increases.


I hope that all made sense.  Can anyone help me resolve this conflict?

Is there something wrong with the assumption for generating digital noise?


My code looks like this (Matlab):

..
t = 0:65535;                     % N = 65536 samples
N0 = 1;                          % noise power density, Watts/Hertz
Fs = 30000;                      % sampling rate, Hz
NoisePower = N0*Fs/2;            % noise power in our bandwidth of interest (0 to Fs/2)
NoiseSignal = randn(1,65536)*sqrt(NoisePower);  % white Gaussian noise, sigma^2 = N0*Fs/2
Signal = sin(t*2*pi*10/65536);   % centre-cell sinusoid, digital frequency of 10/65536
CombinedSignal = Signal + NoiseSignal;
..
X = fft(CombinedSignal);
X = abs(X) / (sqrt(Fs/2)*sqrt(65536));  % normalize to Volts/root-Hz
..
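For looking at the result, a couple of extra lines along these lines (a
sketch, not part of the script above; it reuses the Fs and X just defined)
put the normalized spectrum against a frequency axis in hertz:

f = (0:65535)*Fs/65536;          % frequency axis in Hz, bin spacing Fs/N
plot(f(1:32768), X(1:32768));    % one-sided view, DC up to just below Fs/2
xlabel('Frequency (Hz)'); ylabel('Amplitude (V/rtHz)');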

Cheers :)



		
kyle wrote:
> [...]
>
> Now the problem I have is with the following observations.
>
> - Regardless of the sampling rate, the same number of samples are made for
> each 'run' and FFT (it's a 65536-long signal)
What makes this an "observation"? I see it as a decision you made.
> - The sampling rate is only actually used to scale the variance of the
> noise generator and then to normalize the FFT.
It seems from the description that your noise is intended to simulate truncation error. Why not simply truncate?
> - The signal input into the time series is identical for every run.
So you're not sampling the same frequency every time?
> - As the sampling rate increases, the noise bandwidth increases and the
> noise power is supposed to scale accordingly.
Until you normalize it.
> - However the end result is that the peak representing the injected
> sinusoid drops relative to a fixed noise floor (which stays constant).
>
> (I guess if I didn't normalize my FFT for Volts/root-Hz and just plotted
> it scaled by 1/sqrt(N) I'd see the signal level remain constant and the
> noise floor increase).
Right. So what's the question?
> [...]
>
> My problem is understanding how this relates to the real world.  If we had
> a real DAQ system that had a perfect (or at least very good) AA-filter at
> Fs/2 and we increased the sampling rate (while appropriately scaling the
> frequency of the sinusoid), would the same effect be noticed?  My
> physical-world assumption is that the noise floor in Volts/root-Hz should
> remain at a fixed level regardless of how fast we sample (so long as the
> AA filter works properly) and the signal input into the system is
> definitely always the same voltage.  So the FFT should remain identical as
> the sampling rate increases.
Are you changing the filter along with the signal frequency?
> I hope that all made sense.  Can anyone help me resolve this conflict?
>
> Is there something wrong with the assumption for generating digital noise?
I don't really know what you expect the generated noise to represent.
> My code looks like this (Matlab):
...

Jerry
--
Engineering is the art of making what you want from things you can get.
>> - The sampling rate is only actually used to scale the variance of the
>> noise generator and then to normalize the FFT.
>
> It seems from the description that your noise is intended to simulate
> truncation error. Why not simply truncate?
I actually do that too. The added noise is to simulate white noise from
the system itself.

What I'm trying to simulate is a system with a given noise floor (quoted
in Volts/root-hertz or Watts/hertz) that is always shaped by an AA-filter
to occupy the full sampling bandwidth. Added to this noise is a sinusoidal
signal of fixed voltage that is always at the same digital frequency (so
as we increase the sampling rate, the frequency of this signal scales).

I used the method I described before to generate this noise and mix it in
with the signal.

My understanding of noise tells me that regardless of the sampling rate,
as long as you are using an effective AA filter you should always measure
the same noise power per hertz (i.e. the noise floor remains unchanged
relative to the signal strength in the spectrum, whether power or
amplitude).

What my simulation tells me is that as you increase the sampling rate, the
noise floor rises (or, since I'm normalizing for the sampling rate, the
signal strength seems to fall).

So either my understanding is flawed, my code is wrong, or both hehe :)
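A minimal sketch of that comparison (my own illustration, not the original
script; the amplitude A = 100 and the pair of rates are arbitrary choices
made so the peak stands clear of the floor):

N = 65536;  N0 = 1;  t = 0:N-1;  A = 100;     % A chosen large so the peak clears the floor
Signal = A*sin(t*2*pi*10/N);                  % same centre-cell sinusoid (bin 10) each run
for Fs = [30000 60000]                        % two sampling rates to compare
    NoiseSignal = randn(1,N)*sqrt(N0*Fs/2);   % sigma^2 = N0*Fs/2
    X = abs(fft(Signal + NoiseSignal)) / (sqrt(Fs/2)*sqrt(N));  % Volts/root-Hz
    fprintf('Fs = %5d: peak = %.2f, floor ~ %.2f V/rtHz\n', ...
            Fs, X(11), median(X));            % X(11) is bin 10; median approximates the floor
end

With these numbers the floor sits roughly at sqrt(N0) = 1 at both rates,
while the peak drops by about sqrt(2) when Fs doubles - exactly the
behaviour described above.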
kyle wrote:
> [...]
>
> What my simulation tells me is that as you increase the sampling rate, the
> noise floor rises (or, since I'm normalizing for the sampling rate, the
> signal strength seems to fall).
>
> So either my understanding is flawed, my code is wrong, or both hehe :)
You're changing too many variables at once. When you collect N samples of
M cycles of sine wave every run, the time of the run is proportional to
the period of the sine. The noise you measure is integrated over that
time. Your change goes the other way, so something else is going on.

Jerry
--
Engineering is the art of making what you want from things you can get.
>> So either my understanding is flawed, my code is wrong, or both hehe :)
>
> You're changing too many variables at once. When you collect N samples
> of M cycles of sine wave every run, the time of the run is proportional
> to the period of the sine. The noise you measure is integrated over that
> time. Your change goes the other way, so something else is going on.
Well, I got back to the lab today and tested the reality of my simulation
by sampling a band-limited noise source, added to a synchronous
centre-cell sine wave, with an HP spectrum analyzer at two different
sampling rates. The HP normalizes in the same fashion as my simulation and
it showed the same result - signal strength drops as you increase the
sampling rate (above a constant noise floor, normalized for
Volts/root-hertz).

However, doubling the sampling rate while also doubling the resolution
(which means doubling the sampling TIME) got back the original signal
strength.

This makes sense, as you increase the SNR by taking more points in the
FFT.

I still don't have a nice mathematical understanding why - but I'm happy
enough.  Thanks for your replies though! :)
kyle <kyleblay@blerk.org> wrote:
> [...]
>
> However, doubling the sampling rate while also doubling the resolution
> (which means doubling the sampling TIME) got back the original signal
> strength.
>
> This makes sense, as you increase the SNR by taking more points in the
> FFT.
>
> I still don't have a nice mathematical understanding why - but I'm happy
> enough.  Thanks for your replies though!
The power of a sine wave in the Fourier domain is represented by a Dirac
delta; if you scale the result as a PSD it will be smeared by the bin
width. It's a common mistake - such deterministic signals don't have a
"power spectral density", they have power ;)
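As a quick check of that bin-width point (a sketch, reusing the 65536-point,
Fs = 30000 numbers from the code earlier in the thread, unit amplitude and
no noise): the sine's power all lands in one bin of width Fs/N, so the
per-root-hertz normalization turns a fixed-amplitude line into a reading
that moves with Fs.

N = 65536;  Fs = 30000;  t = 0:N-1;
X = abs(fft(sin(t*2*pi*10/N))) / (sqrt(Fs/2)*sqrt(N));  % noiseless centre-cell sine
X(11)            % measured "density" at the peak, about 1.045
sqrt(N/(2*Fs))   % prediction from spreading the line over one Fs/N-wide bin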
kyle wrote:
> [...]
>
> However, doubling the sampling rate while also doubling the resolution
> (which means doubling the sampling TIME) got back the original signal
> strength.
What are you calling resolution? Doubling the sample rate and collecting
twice as much data requires the same time as before. If you also doubled
the time, you'd have four times as much data.

...

Jerry
--
Engineering is the art of making what you want from things you can get.
>> However, doubling the sampling rate while also doubling the resolution
>> (which means doubling the sampling TIME) got back the original signal
>> strength.
>
> What are you calling resolution? Doubling the sample rate and collecting
> twice as much data requires the same time as before. If you also doubled
> the time, you'd have four times as much data.
The resolution is the length of the record (in samples) - probably not
precise, but I'm just borrowing the term from the HP spectrum analyzer ;-)
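For reference, a quick numeric check of the doubling result reported a few
messages up (a sketch, assuming a centre-cell sine of amplitude A = 1):
the FFT puts N*A/2 into the signal bin, so the per-root-hertz
normalization gives a peak reading of A*sqrt(N/(2*Fs)), which depends only
on the record length N/Fs.

A = 1;
peak = @(N, Fs) (N*A/2) / (sqrt(N)*sqrt(Fs/2));  % = A*sqrt(N/(2*Fs)) in V/rtHz
peak(65536, 30000)     % baseline reading
peak(65536, 60000)     % double Fs only: reading drops by sqrt(2)
peak(131072, 60000)    % double Fs and the record length: back to the baseline

Doubling Fs alone drops the reading by sqrt(2); doubling both Fs and the
number of samples restores it, while the noise floor stays at sqrt(N0)
throughout.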