> Hello,
> I was recently working with a DSP engineer who was making an argument to
> me that if I didn't oversample (above Nyquist) that I would be "losing
> information," and I still fail to understand exactly what information I
> would be losing. The context of our particular application is that we are
> dwelling on a signal for a fixed period of time (a design constraint), but
> we can sample at whatever rate we want. In my mind, higher sample rates
> loosely translate into more expensive A/Ds and higher processing overhead.
> I suggested we sample just above Nyquist and perform our 1024 point FFT.
> He said he would prefer to sample at 3x Nyquist. Okay, more samples are
> better, I guess, but I don't know how they give more information other
> than adding length to the FFT. The dwell time sets the "bin width" and
> increasing N, by decreasing Ts to support a fixed dwell, does nothing more
> than increase the number of bins in the FFT. The window function used
> dictates, to some degree the energy split or "straddling loss" between
> bins if the frequency of the sampled signal fails to fall directly on an
> FFT bin, but I can pick whatever window I want without regard to the
> sample rate. We are doing a complex FFT, and phase information is
> important in this particular application, but I fail to see that there is
> any loss of phase info, regardless of the window function or sample rate.
> I have been working with DFTs/FFTs for a number of years, although I am an
> analyst, not a DSP Engineer. Could someone tell me what fundamental
> concept I am missing?
When dealing with perfect samples of a perfectly bandlimited
signal, sampling above the Nyquist rate is invertible, which means
that no information is lost. In the real world, however, there
are effects due to the finite slope of any bandlimiting filter:
it can lose information from the signal of interest through
pass-band ripple, and/or admit alias information through a
non-zero stop-band. The phase of a sharp filter also changes
more rapidly near the transition band, which may amplify any
phase noise. A higher sampling rate not only allows a flatter
filter in both magnitude and phase response but, even if the
filter isn't changed, contaminates the signal data with less
alias noise, because less of the stop band folds into the sampled
band. The more noise you add to a set of samples, the less
information they can carry.
Also (though I'm not sure of this), quantization errors, either in
the sampling or inside the FFT, might have a greater effect on
signals near the Nyquist frequency of a given sampling rate than
on those farther below it. Again, adding error reduces the
information-carrying capacity.
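The stop-band folding described above is easy to demonstrate numerically. A minimal numpy sketch (all frequencies are illustrative): at a 1 kHz sample rate, a tone left in the stop band at 700 Hz produces exactly the same sample sequence as a legitimate in-band 300 Hz tone, so any stop-band leakage lands directly on top of real signal.

```python
import numpy as np

fs = 1000.0          # sample rate, Hz (illustrative)
n = np.arange(64)
t = n / fs

# A tone at 700 Hz lies above Nyquist (500 Hz) and folds to 1000 - 700 = 300 Hz.
above_nyquist = np.cos(2 * np.pi * 700.0 * t)
alias = np.cos(2 * np.pi * 300.0 * t)

# The two sampled sequences are numerically identical: once sampled,
# the 700 Hz tone is indistinguishable from a 300 Hz tone.
print(np.allclose(above_nyquist, alias))  # True
```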
IMHO. YMMV.
--
rhn A.T nicholson d.0.t C-o-M
Reply by ● April 5, 2006
Two additional thoughts.
If there is noise in the ADC, then sampling at three times the rate
means the noise can be averaged over three samples rather than just
one. This might improve the SNR a little.
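A quick simulation of that averaging effect (the signal level, noise level, and rates are illustrative, not from any particular ADC): sampling 3x faster and averaging each group of three samples cuts the noise power by roughly a factor of three, because the three noise contributions are independent.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3 * 100000                   # samples taken at the 3x rate
sigma = 0.1                      # ADC noise standard deviation (illustrative)

# A constant signal level plus independent ADC noise on every sample.
noisy = 1.0 + sigma * rng.standard_normal(N)

# Decimate by averaging each group of three consecutive samples.
averaged = noisy.reshape(-1, 3).mean(axis=1)

# Noise power drops by roughly a factor of 3 (about 4.8 dB of SNR).
print(noisy.var(), averaged.var())   # ~0.01 vs ~0.0033
```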
A long time ago, Crystal (now Cirrus Logic) shipped two different
versions of their audio delta-sigma ADCs. One version had a flat
response to 20 kHz when sampling at 44.1 kHz, but allowed a couple of
kHz of aliasing. The other had a corner frequency of about 18 kHz
and no significant aliasing.
I imagine that the first one was intended to make the response curves
look good in product reviews, while the second was intended for more
serious use.
John
Reply by Jerry Avins ● April 5, 2006
JohnReno wrote:
> ... (original post snipped)
John,
I suspect that your colleague's explanation is a rationalization of what
he has learned from experience. "Lose information" doesn't provide a
basis for needing to sample faster than twice the highest frequency of
interest, but the need is there nonetheless.
Any frequencies in the signal higher than half the sampling frequency
will corrupt the samples with aliases. It is not practically possible to
remove them without also removing part of the signal of interest. Either
way, information is lost; not because of the sampling process, but
because of what we need to do to get a clean signal or the consequences
of not getting one. Raising the sampling frequency allows the use of
filters that don't affect the band of interest. If that's what your
colleague means by retaining more information, then I concur.
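How much that filter relief is worth can be sketched with the standard Kaiser-window FIR length estimate, N ≈ (A − 7.95) / (2.285 Δω). The numbers below are illustrative, not from the original poster's system: with a fixed 10 kHz band of interest and 80 dB of stop-band rejection, sampling just above Nyquist demands a far longer anti-alias (or post-sampling) filter than sampling at roughly 3x.

```python
import numpy as np

atten_db = 80.0             # desired stop-band attenuation, dB (illustrative)
f_pass = 10e3               # highest frequency of interest, Hz (illustrative)

taps_needed = {}
for fs in (22e3, 66e3):     # sampling just above Nyquist vs. roughly 3x
    f_stop = fs / 2.0       # everything above the new Nyquist must be gone
    delta_w = 2 * np.pi * (f_stop - f_pass) / fs   # transition width, rad/sample
    # Kaiser-window FIR length estimate: N ~= (A - 7.95) / (2.285 * delta_w)
    taps_needed[fs] = int(np.ceil((atten_db - 7.95) / (2.285 * delta_w)))
print(taps_needed)          # the near-Nyquist case needs a much longer filter
```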
Because a practical sampling frequency is always above the Nyquist
frequency, there is another effect of critical sampling that we mostly
ignore because it's never evident. It is not possible to know the
amplitude of a signal at exactly half the sampling frequency. While it
doesn't alias, we can't reconstruct it. The mathematics of sampling
shows that we can reconstruct any frequency below that, no matter how
slightly below, with perfect accuracy. The real world doesn't have hard
edges like that. In fact, the mathematics is based on an unchanging
signal and an infinite observation time. With real-world signals, the
closer a frequency is to the Nyquist limit, the longer it takes to
determine its amplitude. With changing signals -- the only interesting
kind, after all -- reasonably prompt data analysis requires some headroom.
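The ambiguity at exactly half the sampling frequency can be shown in a couple of lines (the amplitudes and phases here are arbitrary examples). Since cos(πn + φ) = cos(φ)·(−1)^n, only the product A·cos(φ) survives sampling, so different amplitude/phase pairs yield identical samples:

```python
import numpy as np

n = np.arange(32)

# Two different tones at exactly fs/2 (discrete frequency pi rad/sample):
# amplitude 1.0 at zero phase, and amplitude 2.0 at a 60-degree phase offset.
x1 = 1.0 * np.cos(np.pi * n + 0.0)
x2 = 2.0 * np.cos(np.pi * n + np.pi / 3)

# 1.0*cos(0) = 1.0 and 2.0*cos(pi/3) = 1.0, so the sample sequences
# coincide and the true amplitude cannot be recovered.
print(np.allclose(x1, x2))  # True
```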
Your colleague's choice of 3x may be appropriate in some applications,
but much less is needed in many (CDs sample at 44.1 kHz and claim to
reproduce 20 kHz) and more in others (5x and even 10x can make it
easier to design stable servos).
Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by Andor ● April 5, 2006
John wrote:
...
> He said he would prefer to sample at 3x Nyquist. Okay, more samples are
> better, I guess, but I don't know how they give more information other
> than adding length to the FFT. The dwell time sets the "bin width" and
> increasing N, by decreasing Ts to support a fixed dwell, does nothing more
> than increase the number of bins in the FFT. The window function used
> dictates, to some degree the energy split or "straddling loss" between
> bins if the frequency of the sampled signal fails to fall directly on an
> FFT bin, but I can pick whatever window I want without regard to the
> sample rate. We are doing a complex FFT, and phase information is
> important in this particular application, but I fail to see that there is
> any loss of phase info, regardless of the window function or sample rate.
You have essentially done the whole analysis above to show that no
additional "information" is revealed by inspecting the DFT output of
an oversampled signal. Any other questions? :-)
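The bin-width part of that analysis can be checked directly. A small numpy sketch with an illustrative 0.5 s dwell: holding the dwell fixed while tripling the sample rate triples N but leaves the bin spacing fs/N = 1/dwell unchanged.

```python
import numpy as np

dwell = 0.5                   # fixed observation time, seconds (illustrative)
bin_widths = {}
for fs in (2048.0, 6144.0):   # just above Nyquist vs. 3x that rate
    N = int(fs * dwell)       # 1024 samples vs. 3072 samples
    freqs = np.fft.fftfreq(N, d=1.0 / fs)
    bin_widths[fs] = freqs[1] - freqs[0]

# The spacing is fs/N = 1/dwell = 2 Hz in both cases: the extra samples
# only add bins at higher frequencies; they do not narrow existing ones.
print(bin_widths)
```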
krishna_sun82 wrote:
> Nyquist rate simply specifies the minimum rate at which the signal has to
> be sampled, so that it can be reconstructed successfully. If your sampling
> rate is much more than the Nyquist rate, more will be the information you
> are getting.
Krishna,
do we agree that "successful reconstruction" means that the signal is,
well, successfully reconstructed? In that case it would be interesting
to hear from you how the additional "information" that you supposedly get
by oversampling can be used to, um, even more successfully reconstruct
the signal?
Regards,
Andor
Reply by krishna_sun82 ● April 5, 2006
> ... (original post snipped)
The Nyquist rate simply specifies the minimum rate at which the signal
has to be sampled so that it can be reconstructed successfully. If your
sampling rate is well above the Nyquist rate, you will be getting more
information. How much more depends on the nature of the signal and how
much entropy it carries. In fact, keeping the signal analog preserves
all the information, and by making it digital we are actually impairing
the information available. Then why do we go for digital signals? Simply
because samples are easier and faster to process with modern digital
computers than analog signals are with analog circuits.
So what your colleague says is true.
- Krishna
Reply by JohnReno ● April 5, 2006
Hello,
I was recently working with a DSP engineer who was making an argument to
me that if I didn't oversample (above Nyquist) that I would be "losing
information," and I still fail to understand exactly what information I
would be losing. The context of our particular application is that we are
dwelling on a signal for a fixed period of time (a design constraint), but
we can sample at whatever rate we want. In my mind, higher sample rates
loosely translate into more expensive A/Ds and higher processing overhead.
I suggested we sample just above Nyquist and perform our 1024 point FFT.
He said he would prefer to sample at 3x Nyquist. Okay, more samples are
better, I guess, but I don't know how they give more information other
than adding length to the FFT. The dwell time sets the "bin width" and
increasing N, by decreasing Ts to support a fixed dwell, does nothing more
than increase the number of bins in the FFT. The window function used
dictates, to some degree the energy split or "straddling loss" between
bins if the frequency of the sampled signal fails to fall directly on an
FFT bin, but I can pick whatever window I want without regard to the
sample rate. We are doing a complex FFT, and phase information is
important in this particular application, but I fail to see that there is
any loss of phase info, regardless of the window function or sample rate.
I have been working with DFTs/FFTs for a number of years, although I am an
analyst, not a DSP Engineer. Could someone tell me what fundamental
concept I am missing?
Thanks,
John