On Tue, 16 Apr 2013 05:22:43 -0500, SRB wrote:
<< snip >>
>>Next, noise in real systems isn't just quantization noise. When there is
>>wideband background noise at levels noticeable compared to the
>>quantization noise, increasing bandwidth by oversampling can increase
>>the noise in the signal bandwidth.
>
> Thank you for pointing this out. This makes me realize that my above
> assumption about the effect of oversampling, filtering and decimation on
> SNR only applies if the dominant source of noise is quantisation noise.
>
Actually your statement misses reality a bit.
And this whole subject is full of complications, subtleties, and
opportunities to mislead yourself.
To rant a bit: as signal processing engineers, we're taught to think in
the frequency domain. Thinking in the frequency domain means that we
are, implicitly at least, using Fourier transform math to describe and
attempt to solve our problems. This is exactly correct when we are
dealing with exactly linear, time-invariant systems. It can be made to
be exactly correct with systems that are time-varying in known ways. It
can be used to some benefit with systems that are nonlinear -- but only
if we are careful.
The whole issue of analyzing noise in an ADC involves nonlinearities,
both from quantization and from the transfer curve of the ADC not being
perfect, and it involves sampling from continuous to discrete time which
is a time-varying process. So it basically throws all possible
complications at using the frequency domain as a basis for analysis.
Which is a really long justification for a short recommendation: don't
hesitate to do some time-domain analysis of what's going on. Some of the
issues that you're dealing with here that seemingly involve a lot of High
Math, mysterious smoke, and mirrors that can only be manufactured in a
factory that sacrifices virgins can be cut right through to complete
sensibility if you just think about them in the time domain.
Specifically, consider the question of whether SNR performance is enhanced
by oversampling: which noise sources have effects that can be mitigated by
oversampling, and which don't.
Consider an ADC. We say "it has quantization noise" -- but it doesn't.
An ADC quantizes, yes. But quantization by itself is a deterministic
process; it's not random noise at all. It is nonlinear, however, and
frequency domain analysis can't deal with that directly. Quantization
noise is a fiction that we create to take a nonlinear effect into
account when we are using frequency domain analysis on our system. We do
this because frequency domain analysis cannot handle nonlinear components
-- but it can handle injected signals.
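The "deterministic, not noise" point is easy to demonstrate in a few lines. Here's a minimal sketch, assuming an idealized mid-tread quantizer (my own toy model, not any particular ADC): feed it the same input twice and you get the same "noise" both times, and the error is a pure function of the signal.

```python
def quantize(x, lsb=1.0):
    """Idealized mid-tread quantizer: round the input to the nearest LSB."""
    return lsb * round(x / lsb)

# Same input in, same error out -- there is no randomness anywhere:
e1 = quantize(0.3) - 0.3
e2 = quantize(0.3) - 0.3
repeatable = (e1 == e2)   # True: quantization "noise" is deterministic

# The error is entirely a function of the input signal, a sawtooth in x:
errors = [quantize(x / 10) - x / 10 for x in range(-20, 21)]
```

Nothing in that error sequence is random; calling it "noise" is purely a modeling convenience for frequency-domain analysis.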
Because quantization is really a nonlinear effect, it is dependent on the
signal -- this is one of the "it depends" things that I was talking
about. What it depends on, primarily, is how fast the signal is varying,
how large the signal is, how much noise there is ahead of the
quantization, and (to some extent) whether the signal is synchronized to
the sampling.
If you are measuring a slowly-varying signal with an ADC that is
dominated by quantization, then oversampling won't do you any good at all
-- you'll just be measuring the same damn thing over and over again, then
averaging the heck out of it.
If you are measuring a tiny signal with an ADC that is dominated by
quantization, then oversampling won't do any good either -- all you'll
ever see on the output of your ADC will be the average input value, not
the tiny signal you want to see.
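Both of those failure modes are a few lines of simulation away. A sketch of the slowly-varying (here, DC) case, again assuming an idealized noiseless quantizer -- the 0.3 LSB input and 10,000-sample count are just illustrative numbers:

```python
import statistics

def quantize(x, lsb=1.0):
    """Idealized mid-tread quantizer: round the input to the nearest LSB."""
    return lsb * round(x / lsb)

# A DC input sitting 0.3 LSB above zero, no noise ahead of the quantizer.
true_value = 0.3
samples = [quantize(true_value) for _ in range(10_000)]

# Every sample is the same damn thing, so averaging recovers nothing:
avg = statistics.mean(samples)   # 0.0, not 0.3
```

No amount of oversampling moves that average off of zero; the information about the 0.3 LSB offset never makes it through the quantizer.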
If you are measuring a signal that spans several LSB's of the ADC, then
the quantization effect will be spread out -- here, you may find that
it does, indeed, show up as "white". To the extent that you are sampling
slower than the stair-step effect of the quantization, oversampling will
help.
If you are measuring a signal that is accompanied by noise or other
signals, and if the unwanted signals and/or noise is large enough, then
the quantization error will be whitened, and you can treat your
quantization noise as white. Again, to the extent that you are sampling
slower than the stair-step effect of the quantization, oversampling will
help.
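The whitening effect is just as easy to check. Here is the same DC sketch with roughly 1 LSB RMS of Gaussian noise added ahead of the quantizer (the noise level, seed, and sample count are my illustrative choices): the noise randomizes which side of the quantizer step each sample lands on, and suddenly averaging works.

```python
import random
import statistics

random.seed(0)  # repeatable run

def quantize(x, lsb=1.0):
    """Idealized mid-tread quantizer: round the input to the nearest LSB."""
    return lsb * round(x / lsb)

true_value = 0.3    # still only 0.3 LSB of DC input
noise_rms = 1.0     # broadband noise ahead of the quantizer, in LSBs

samples = [quantize(true_value + random.gauss(0.0, noise_rms))
           for _ in range(100_000)]

# The noise dithers the input across the quantizer steps, so the
# average of the quantized outputs converges on the sub-LSB input:
avg = statistics.mean(samples)   # close to 0.3
```

This is exactly the "noise large enough to whiten the quantization error" regime: the individual samples are noisier than before, but their average is far more informative.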
Many so-called "high-speed" ADCs have considerable noise in their analog
front ends -- often far in excess of other noise sources in your
circuit. ADC designers often set the number of bits in the ADC such that
the RMS noise is several LSB's -- this is a fault if you want to take one
measurement and trust it, but it is a benefit if you want to oversample,
because the ADC itself is providing noise that is great enough in
magnitude to swamp out the quantization noise. At this point, you _can_
improve SNR with oversampling.
Again, this can be easier to see in the time domain, or at least by
mixing time and frequency domains: with broadband noise in the ADC front
end (or the preceding circuits), each ADC sample, no matter when it is
taken, will have a fixed amount of noise added in. Staying in the time
domain, you can then assert that the more samples you take from the ADC
and average, the more you'll beat the noise down. Switching to the
frequency domain, you can assert (correctly) that the total noise _power_
from quantization and random noise is constant, but increasing the
sampling rate will automatically reduce the noise _density_, so filtering
to a fixed bandwidth will improve your noise performance.
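That time-domain assertion -- average N samples of uncorrelated noise and beat the RMS noise down by sqrt(N) -- can be checked directly. A sketch, with Gaussian front-end noise and a 16x oversampling ratio chosen purely for illustration:

```python
import math
import random
import statistics

random.seed(1)  # repeatable run

def rms(xs):
    """Root-mean-square of a sequence."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

noise_rms = 1.0     # broadband front-end noise, in LSBs
oversample = 16     # samples taken per output point
trials = 2_000

# One raw sample per output point:
raw = [random.gauss(0.0, noise_rms) for _ in range(trials)]

# Sixteen samples averaged per output point (oversample, then decimate):
averaged = [statistics.mean(random.gauss(0.0, noise_rms)
                            for _ in range(oversample))
            for _ in range(trials)]

ratio = rms(raw) / rms(averaged)   # about sqrt(16) = 4
```

The measured ratio comes out near 4, which is the time-domain face of the frequency-domain statement: constant total noise power, lower noise density, so filtering to a fixed bandwidth keeps less of it.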
The two noise sources that you _can't_ improve by oversampling are
bandlimited noise in your front-end, such as the noise that rides into
your antenna with your signal (assuming radio), and any noise (generally
quantization noise) that you insert in the process of your computations.
The former you have to deal with at the circuits, antenna design, or
systems level; the latter you have to deal with by designing for
sufficient data path widths.
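The in-band case is the extreme opposite of the sketch above. If the noise is band-limited to the signal bandwidth, then within one output period it barely changes -- in the limiting case (my simplification here), every oversampled point sees the very same noise value, and averaging buys exactly nothing:

```python
import math
import random
import statistics

random.seed(2)  # repeatable run

def rms(xs):
    """Root-mean-square of a sequence."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

oversample = 16
trials = 2_000

raw = []
averaged = []
for _ in range(trials):
    n = random.gauss(0.0, 1.0)   # one in-band noise value per output period
    group = [n] * oversample     # fully correlated: all 16 samples see it
    raw.append(n)
    averaged.append(statistics.mean(group))

ratio = rms(raw) / rms(averaged)   # 1.0: oversampling gains nothing
```

Averaging only helps against noise that decorrelates between samples; noise that rides in-band with the signal passes straight through the oversample-filter-decimate chain.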
--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?
Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com