# Oversampling w/ Drop interpolation

Started by Dennis M on August 17, 2004
```I had a question posed to me today that got me thinking.  Here was the
question:

"If the signal is oversampled by 10 x fs  and then only every 10'th
data point is used for the final data set (without any filtering of
the rest of the data points)  will there be any difference in the
noise from data that is sampled at only fs?  The result is the same
number of points."

My answer was yes, that noise would still be reduced.  Even though the
noise would be aliased back into the frequencies of interest, the
overall noise power would be reduced because drop sampling attenuates
the higher frequencies.

Was my answer correct and complete?

- Dennis
```
```Dennis M wrote:
> I had a question posed to me today that got me thinking.  Here was the
> question:
>
> "If the signal is oversampled by 10 x fs  and then only every 10'th
> data point is used for the final data set (without any filtering of
> the rest of the data points)  will there be any difference in the
> noise from data that is sampled at only fs?  The result is the same
> number of points."
>
> My answer was yes, that noise would still be reduced.  Even though the
> noise would be aliased back into the frequencies of interest, the
> overall noise power would be reduced because drop sampling attenuates
> the higher frequencies.
>
> Was my answer correct and complete?
>
> - Dennis

No.

OUP
```
```Dennis M wrote:

> I had a question posed to me today that got me thinking.  Here was the
> question:
>
> "If the signal is oversampled by 10 x fs  and then only every 10'th
> data point is used for the final data set (without any filtering of
> the rest of the data points)  will there be any difference in the
> noise from data that is sampled at only fs?  The result is the same
> number of points."
>
> My answer was yes, that noise would still be reduced.  Even though the
> noise would be aliased back into the frequencies of interest, the
> overall noise power would be reduced because drop sampling attenuates
> the higher frequencies.
>
> Was my answer correct and complete?
>
> - Dennis

It depends. If the anti-alias filter's cut-off matched the higher
sampling rate, then you need to finish the filtering job before
decimating. (That's sometimes a reasonable way to design a system.) If
the anti-alias filter has a cut-off suitable for the lower rate, then
there is no difference between the decimated sample set and one taken
through the same filter at the lower rate.

Imagine two samplings of the same signal with identical ADCs, one at 1x,
the other at 10x, with one of the 10x samples coinciding with a 1x
sample. The coinciding 10x sample can't differ from the 1x sample in
any way. How could it be "better"?

Jerry
--
... the worst possible design that just meets the specification - almost
a definition of practical engineering.                     .. Chris Bore

```
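Jerry's coinciding-sample argument is easy to check numerically. A minimal sketch (the 100 Hz base rate and the 7 Hz test tone are illustrative assumptions, not from the thread): sample the same bandlimited signal at fs and at 10*fs, keep every tenth fast sample, and compare.

```python
import numpy as np

fs = 100.0                          # assumed base sampling rate, Hz
t1 = np.arange(100) / fs            # 1x sample instants
t10 = np.arange(1000) / (10 * fs)   # 10x sample instants; every 10th coincides

def signal(t):
    # a 7 Hz tone, safely below fs/2, standing in for "the signal"
    return np.sin(2 * np.pi * 7.0 * t)

x1 = signal(t1)        # sampled at fs
x10 = signal(t10)      # oversampled at 10*fs
dropped = x10[::10]    # keep only every 10th point

# The coinciding samples are the same numbers, so dropping the rest
# cannot make them "better".
print(np.allclose(x1, dropped))   # True
```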
```> Imagine two samplings of the same signal with identical ADCs, one at 1x,
> the other at 10x, with one of the 10x samples coinciding with a 1x
> sample. The coinciding 10x sample can't differ from the 1x sample in
> any way. How could it be "better"?

Good point - that makes sense.  It wouldn't be better.

But doesn't drop sampling have a LPF effect?  I thought it was like
convolving a rect() w/ a delta() time domain.  In the freq domain that
looks like a sinc().  The power spectrum would be sinc^2() and that
looks as if it attenuates higher frequencies.  With aliasing, those
higher frequency lobes would fold back into the normal spectrum but
wouldn't there still be some attenuation?  Maybe not since the aliased
"lobes" would add up to the same amount of energy...

- Dennis
```
```Dennis M wrote:
...
> But doesn't drop sampling have a LPF effect?

It's the other way around: you have to lowpass _before_ you drop
samples. The effect of just dropping samples is as follows:

If you drop k of every n samples (k < n, leaving the kept samples
uniformly spaced), you generate a new signal at sampling rate
(n-k)/n F_s, where F_s is your original sampling rate. The spectrum of
the new digital signal is mirrored around (n-k)/(2n) F_s, and periodic
with period (n-k)/n F_s. If your original signal contained components
above (n-k)/(2n) F_s, then these components will be aliased by drop
sampling.

Regards,
Andor

```
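Andor's description can be illustrated with broadband noise. In this sketch (white Gaussian noise and a drop factor of 10 are assumed for illustration), dropping samples leaves the per-sample noise power unchanged: the out-of-band noise folds back into band rather than being attenuated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                   # keep 1 of every n samples
noise = rng.standard_normal(100_000)     # broadband noise at the high rate

decimated = noise[::n]                   # "drop sampling", no filtering

# Aliasing rearranges the noise spectrum but conserves its power,
# so the variance per sample is essentially the same.
print(round(noise.var(), 2), round(decimated.var(), 2))
```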
```Dennis M wrote:

>>Imagine two samplings of the same signal with identical ADCs, one at 1x,
>>the other at 10x, with one of the 10x samples coinciding with a 1x
>>sample. The coinciding 10x sample can't differ from the 1x sample in
>>any way. How could it be "better"?
>
>
> Good point - that makes sense.  It wouldn't be better.
>
> But doesn't drop sampling have a LPF effect?

No. You agreed that the 1x sample train and every tenth sample -- the
proper tenth -- of the 10x sample train are identical. Furthermore, the
nine other sample trains derived from the 10x sampling are equally good.
(Proof left as an exercise.)

>                I thought it was like
> convolving a rect() w/ a delta() time domain.  In the freq domain that
> looks like a sinc().  The power spectrum would be sinc^2() and that
> looks as if it attenuates higher frequencies.  With aliasing, those
> higher frequency lobes would fold back into the normal spectrum but
> wouldn't there still be some attenuation?  Maybe not since the aliased
> "lobes" would add up to the same amount of energy...

Don't use math as an excuse not to think. (I use thinking as an excuse
not to do math. That way bites too!) Remember: we suppose that the
analog signal contains no energy above half the lower sampling
frequency, so decimating won't cause aliasing. Otherwise, we would need
to low-pass before decimating, melting down the whole argument.

I don't know where the flaw in your argument is because I can't guess
what preconceptions might have caused it. I can imagine you thinking
that those extra samples have to be good for something, and they can be.
If the 10x sampling is dithered a little (circuit noise might be
enough), you can average them instead of throwing nine away. There will
be extra bits in the sum, and some of them will be worth keeping. 16x
sampling gets you two extra bits of precision, say 10-bit results from
an 8-bit ADC. 256x gives you 4 extra bits; do you see the pattern?

Jerry
--
... the worst possible design that just meets the specification - almost
a definition of practical engineering.                     .. Chris Bore

```
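Jerry's averaging suggestion can be sketched in a few lines. The 8-bit step size, the DC level, and the uniform dither below are illustrative assumptions; the point is that the mean of 256 dithered readings lands much closer to the true value than a single reading can.

```python
import numpy as np

rng = np.random.default_rng(1)
lsb = 1.0 / 256            # step of an 8-bit converter with unit full scale
true_value = 0.3137        # a DC level that falls between quantizer steps

def quantize(x):
    # ideal rounding quantizer with step `lsb`
    return np.round(x / lsb) * lsb

single = quantize(true_value)   # one plain reading: error up to lsb/2

# 256 readings with ~1-LSB dither added: the dither spreads the input
# across adjacent codes, and the average recovers sub-LSB detail.
dither = rng.uniform(-lsb / 2, lsb / 2, size=256)
averaged = quantize(true_value + dither).mean()

print(abs(single - true_value), abs(averaged - true_value))
```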
```On Tue, 17 Aug 2004 13:57:56 -0400, Jerry Avins <jya@ieee.org> wrote:

>Dennis M wrote:
>
>> I had a question posed to me today that got me thinking.  Here was the
>> question:
>>
>> "If the signal is oversampled by 10 x fs  and then only every 10'th
>> data point is used for the final data set (without any filtering of
>> the rest of the data points)  will there be any difference in the
>> noise from data that is sampled at only fs?  The result is the same
>> number of points."
>>
>> My answer was yes, that noise would still be reduced.  Even though the
>> noise would be aliased back into the frequencies of interest, the
>> overall noise power would be reduced because drop sampling attenuates
>> the higher frequencies.
>>
>> Was my answer correct and complete?
>>
>> - Dennis
>
>It depends. If the anti-alias filter's cut-off matched the higher
>sampling rate, then you need to finish the filtering job before
>decimating. (That's sometimes a reasonable way to design a system.) If
>the anti-alias filter has a cut-off suitable for the lower rate, then
>there is no difference between the decimated sample set and one taken
>through the same filter at the lower rate.
>
>Imagine two samplings of the same signal with identical ADCs, one at 1x,
>the other at 10x, with one of the 10x samples coinciding with a 1x
>sample. The coinciding 10x sample can't differ from the 1x sample in
>any way. How could it be "better"?
>
>Jerry

Hi Jerry,
I wonder if this is a homework problem.

[-Rick-]

```
```Rick Lyons wrote:

> On Tue, 17 Aug 2004 13:57:56 -0400, Jerry Avins <jya@ieee.org> wrote:
>
>
>>Dennis M wrote:
>>
>>
>>>I had a question posed to me today that got me thinking.  Here was the
>>>question:
>>>
>>>"If the signal is oversampled by 10 x fs  and then only every 10'th
>>>data point is used for the final data set (without any filtering of
>>>the rest of the data points)  will there be any difference in the
>>>noise from data that is sampled at only fs?  The result is the same
>>>number of points."
>>>
>>>My answer was yes, that noise would still be reduced.  Even though the
>>>noise would be aliased back into the frequencies of interest, the
>>>overall noise power would be reduced because drop sampling attenuates
>>>the higher frequencies.
>>>
>>>Was my answer correct and complete?
>>>
>>>- Dennis
>>
>>It depends. If the anti-alias filter's cut-off matched the higher
>>sampling rate, then you need to finish the filtering job before
>>decimating. (That's sometimes a reasonable way to design a system.) If
>>the anti-alias filter has a cut-off suitable for the lower rate, then
>>there is no difference between the decimated sample set and one taken
>>through the same filter at the lower rate.
>>
>>Imagine two samplings of the same signal with identical ADCs, one at 1x,
>>the other at 10x, with one of the 10x samples coinciding with a 1x
>>sample. The coinciding 10x sample can't differ from the 1x sample in
>>any way. How could it be "better"?
>>
>>Jerry
>
>
> Hi Jerry,
>   I wonder if this is a homework problem.
>
> [-Rick-]

Rick,

It doesn't have that flavor to me. For one thing, the misconceptions
seem to have been independently arrived at. For another, few instructors
I know would be clever (or malicious) enough to pose such a question.

Jerry
--
... the worst possible design that just meets the specification - almost
a definition of practical engineering.                     .. Chris Bore

```
```"Dennis M" <dennis.merrill@thermo.com> wrote in message
> I had a question posed to me today that got me thinking.  Here was the
> question:
>
> "If the signal is oversampled by 10 x fs  and then only every 10'th
> data point is used for the final data set (without any filtering of
> the rest of the data points)  will there be any difference in the
> noise from data that is sampled at only fs?  The result is the same
> number of points."
>
> My answer was yes, that noise would still be reduced.  Even though the
> noise would be aliased back into the frequencies of interest, the
> overall noise power would be reduced because drop sampling attenuates
> the higher frequencies.
>
> Was my answer correct and complete?

Dennis,

Well, it would be good to clean up your question a bit:

"If the signal is oversampled by 10 x fs" ...
seems to imply that fs has been properly selected in the first place.  If
you had added the relation of fs to the underlying signal, it would have
helped.  e.g. fs=3*B where B is the signal bandwidth.  Then you could have
said "the signal is oversampled by 10x at 30*B."

"will there be any difference in the noise from data that is sampled at only
fs?"...
It may depend on what you mean by "THE noise".

Consider this:

If the signal of bandwidth B includes the noise also limited to bandwidth B,
then the extra samples are redundant - the spectra are identical except for
the repetition at the sampling frequency.

If the signal of bandwidth B has now been sampled and A/D converted, then
the "noise" will include the A/D quantization noise and the noise spectrum
will cover the entire range to fs.  Now, if you decimate without filtering
the noise spectra will alias onto one another and the noise would appear to
increase as a result.  Note that the resulting noise is the same as if you'd
sampled and A/D converted at the lower rate in the first place.

I'm not an expert in the method but I recall a scheme that would sample and
A/D at a high frequency and subsequently low pass or band pass the result
to match the underlying signal.  This process reduces the quantization noise
I guess.  I imagine it would have to include an increase in the word
length - otherwise, how could that be?  I've not thought about it.....

Let's see: you sample and A/D at some extra high frequency and some word
length L, introducing quantization noise.  The spectrum of the noise extends
to the sampling frequency and, for our purposes here, is flat.  The
underlying signal is at fs/30.
Now, we lowpass filter the sequence at fs in preparation for decimation by
10.
The lowpass filtering would appear to eliminate all of the high frequency
noise - thus reducing the noise energy.
Now, when we decimate we replicate the spectrum 10 times - which increases
the noise energy again (if one is using the original fs as the "bandwidth"
of comparison).
We note that the spectrum to fs/10 is now nearly filled with noise.

How is this different from having sampled at fs/10 in the first place?

Here's another way to look at the spectra:
If the spectrum, including noise, is limited to B then sampling at ever
higher frequencies only introduces zeros in the spectrum and no new energy.
This isn't "interesting".  So, if the sample rate is reduced, there's no
change in the interesting part (the nonzero part) of the spectrum.  You can
reduce the sample rate until the nonzero parts of the repeating spectra
approach but just don't overlap - which gets you to the Nyquist criterion.
In practice you don't go quite that far or even close in some cases.

However, if quantization noise extends the spectrum beyond B then the result
is obviously different.  This seems to be the case you're interested in.
"Drop sampling" / decimation doesn't reduce the higher frequencies at all.
It disguises them because they've been folded into lower frequencies - thus
increasing the lower frequency noise energy.

None of this discussion makes sense unless you include the inherent
quantization at each and every step.  If the word length is increased as
part of the lowpass filtering process then the quantization noise might be
reduced (in principle at least).  Is it?

Seems like a great candidate for doing some analysis or simulation - the
arm-waving approach here is useful to me in creating a framework on which to
pose more quantitative questions.

Fred

```
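Fred's closing suggestion — simulate it — can be started with a few lines. White noise stands in for broadband quantization noise, and a crude length-10 moving average stands in for a proper decimation filter (both are assumptions for illustration, not a full analysis).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10
noise = rng.standard_normal(200_000)   # stand-in for broadband quantization noise

# Path 1: plain drop sampling. The noise power per sample is unchanged;
# the out-of-band noise has merely been folded (aliased) into band.
dropped = noise[::n]

# Path 2: lowpass first (a simple length-n moving average), then drop.
# The out-of-band noise is removed before it can fold back.
filtered = np.convolve(noise, np.ones(n) / n, mode="same")
filtered_then_dropped = filtered[::n]

print(round(dropped.var(), 2), round(filtered_then_dropped.var(), 2))
# the filtered path carries roughly 1/n of the noise power
```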
```Fred Marshall wrote:

...

> I'm not an expert in the method but I recall a scheme that would sample and
> A/D at a high frequency and subsequently low pass or band pass the result
> to match the underlying signal.  This process reduces the quantization noise
> I guess.  I imagine it would have to include an increase in the word
> length - otherwise, how could that be?  I've not thought about it.....

...

Long ago, I had a data acquisition computer made by Analog Devices. It
was an 8086 machine that had "fast" 12-bit A-D conversions and slow
16-bit A-D conversions built into the motherboard. I had planned to run
the converters simultaneously, but I discovered to my chagrin that there
was only one ADC. It was 12 bits wide, but its specs -- especially
stability and differential linearity -- were 16 bits good. To get a
16-bit reading, the input was mixed with a pseudo-random sequence of
ones and zeros one LSB in amplitude, and the sum of 256 readings taken.
The 20-bit sum was rounded down to 16 bits. The precision increase goes with
the square root of the number of items averaged, a familiar result.

In the absence of noise and threshold uncertainty (in other words, in
theory but not in practice), you can replace the pseudo-random sequence
with a ramp that is zero at the first measurement and almost 1 LSB at
the last, all the while keeping the signal steady with the S&H. It's
easy to see that if the signal is 1/4 LSB higher than the threshold, 3/4
of the readings will be unaffected by the ramp, 1/4 will be on the next
step up, and the average of all of them will be spot on.

Jerry
--
... the worst possible design that just meets the specification - almost
a definition of practical engineering.                     .. Chris Bore

```
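Jerry's ramp thought experiment comes out exact in the idealized case. A sketch (amplitudes in units of one LSB, a floor quantizer, no noise — modeling assumptions, not details of the actual hardware):

```python
import numpy as np

N = 256
level = 5.25                    # held input: 1/4 LSB above the code-5 threshold
ramp = np.arange(N) / N         # 0 up to almost 1 LSB across the N readings

readings = np.floor(level + ramp)   # floor quantizer, step = 1 LSB
average = readings.mean()

# 3/4 of the readings stay at code 5, 1/4 are pushed up to code 6,
# and the average lands exactly on the held value.
print(average)   # 5.25
```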