Reply by jim August 23, 2004

Jerry Avins wrote:

> The sum is 4 bits larger. Two of those bits are meaningless. Consider
> that the 16 samples are slightly corrupted approximations of some true
> value.
As I said, I haven't been following real close. Do these 16 samples represent the signal at the same point in time (16 independent observations) or are they spread out at regular intervals of time? I was assuming the latter.
> Averaging a number of them can give a better estimate of that
> true value than any one of them. It turns out that the precision of the
> estimate increases at best with the square root of the number of
> independent elements averaged. "At best" because the enabling conditions
> aren't always met. There must be no bias, or "instrument error".
Was the discussion about using an average as a filter, or was there a possibility of using a more suitable filter? If we are confined to only using the average then I would agree with your analysis.
> Successive errors must not be correlated. The error distribution must be
> Gaussian or nearly so. (When thermal noise is too small, we add dither.
> In this oversampling case, the signal variation from the first sample to
> the 15th does the job PROVIDED THAT HIGH FREQUENCIES HAVE BEEN REMOVED.)
Again, that's true if the filter in question is just the average of the 16 samples.

-jim
Reply by Jerry Avins August 23, 2004
Fred Marshall wrote:

> "Jerry Avins" <jya@ieee.org> wrote in message > news:4129fe80$0$21757$61fed72c@news.rcn.com... > >>Fred Marshall wrote: >> >> >>>"Jerry Avins" <jya@ieee.org> wrote in message >>>news:4127aaa1$0$21752$61fed72c@news.rcn.com... >>> >> >>The mean of a large number of approximate but unbiased measurements >>converges to the "true" value. Oversampling and low-pass filtering acts >>as an averaging process. In the best of circumstances -- excellent >>differential linearity in the DAC, appropriate dither noise -- the >>precision increases with the square root of the number of elements >>averaged. Sixteen x oversampling and decimating then yields a fourfold >>increase in precision, or two extra bits. Caveat: the original analog >>signal must be bandlimited so that sampling at the final rate wouldn't >>cause aliasing. Otherwise, oversampling won't create redundant samples. >> > > > OK. Well, that's a better explanation than my arm-waving one. > The point I was trying to make is that you have to increase the word length > at the (digital) filter output relative to the filter input in order to > improve the noise. > > I understand the theory of large numbers but hadn't thought about it in this > context. So, I'm still a bit leery of that framework. Normally I think of > it in terms of unweighted averages - while here we'd be using a weighted > average (the filter). No matter. > > I rather like to think of the operations on the spectra. This is a bit > contrived but constructive approach: > > Oversample a bandlimited signal and quantize. > This yields white noise of some energy level. > Lowpass filter the sequence in a filter with infinite precision. > There will be less noise by virtue of the filtering - and the infinite > precision at least doesn't perturb that result. > Now quantize the result. > The new quantization will add noise back - so you don't want to add back as > much as you removed - or you will be right back where you started. > This implies a longer word length. > (Note there's no decimation - yet). > > Now, if you already know how many bits will be in the results then "infinite > precision" isn't really what you get anyway. And, that was central to your > discussion, right?
I guess so. Most simply put, wherever the extra bits come from -- an oracle, averaging, a wider DAC -- smaller quantization steps make less quantization noise. In the end, it's that simple.

Jerry
--
Engineering is the art of making what you want from things you can get.
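[Editor's note: a minimal numerical illustration of Jerry's summary, not taken from the thread. It checks that quantization noise power tracks the square of the step size (the familiar step^2/12 for a busy signal), so each extra bit of word length, however obtained, halves the noise voltage. The signal and word sizes are arbitrary assumptions.]

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, 1_000_000)   # "busy" signal exercising many code levels

for bits in (8, 10, 12):
    step = 2.0 / (1 << bits)             # quantizer step over [-1, 1)
    err = np.round(x / step) * step - x  # quantization error
    print("%2d bits: step %.2e, noise power %.2e (step^2/12 = %.2e)"
          % (bits, step, np.mean(err ** 2), step ** 2 / 12))
```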
Reply by Fred Marshall August 23, 2004
"Jerry Avins" <jya@ieee.org> wrote in message
news:4129fe80$0$21757$61fed72c@news.rcn.com...
> Fred Marshall wrote:
>
>> "Jerry Avins" <jya@ieee.org> wrote in message
>> news:4127aaa1$0$21752$61fed72c@news.rcn.com...
>
> The mean of a large number of approximate but unbiased measurements
> converges to the "true" value. Oversampling and low-pass filtering acts
> as an averaging process. In the best of circumstances -- excellent
> differential linearity in the DAC, appropriate dither noise -- the
> precision increases with the square root of the number of elements
> averaged. Sixteen x oversampling and decimating then yields a fourfold
> increase in precision, or two extra bits. Caveat: the original analog
> signal must be bandlimited so that sampling at the final rate wouldn't
> cause aliasing. Otherwise, oversampling won't create redundant samples.
OK. Well, that's a better explanation than my arm-waving one.
The point I was trying to make is that you have to increase the word length at the (digital) filter output relative to the filter input in order to improve the noise.

I understand the theory of large numbers but hadn't thought about it in this context. So, I'm still a bit leery of that framework. Normally I think of it in terms of unweighted averages - while here we'd be using a weighted average (the filter). No matter.

I rather like to think of the operations on the spectra. This is a bit contrived but constructive approach:

Oversample a bandlimited signal and quantize.
This yields white noise of some energy level.
Lowpass filter the sequence in a filter with infinite precision.
There will be less noise by virtue of the filtering - and the infinite precision at least doesn't perturb that result.
Now quantize the result.
The new quantization will add noise back - so you don't want to add back as much as you removed - or you will be right back where you started.
This implies a longer word length.
(Note there's no decimation - yet.)

Now, if you already know how many bits will be in the results then "infinite precision" isn't really what you get anyway. And, that was central to your discussion, right?

Fred
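[Editor's note: a rough sketch of the pipeline Fred outlines, under illustrative assumptions (an 8-bit dithered input quantizer and a simple 16-tap moving average standing in for the lowpass filter). It shows that requantizing the filter output back to the input word length adds back roughly the noise the filter removed, while keeping two extra output bits preserves most of the gain. This is not an implementation from the thread.]

```python
import numpy as np

rng = np.random.default_rng(2)
fs, osr, n = 16_000, 16, 1 << 14                 # base rate, oversampling ratio, length
t = np.arange(n * osr) / (fs * osr)
x = 0.8 * np.sin(2 * np.pi * 1000 * t)           # bandlimited test tone

step = 2.0 / 256                                  # 8-bit quantizer over [-1, 1)
q_in = np.round((x + rng.uniform(-step / 2, step / 2, x.size)) / step) * step

taps = np.ones(osr) / osr                         # crude boxcar lowpass ("the average")
y = np.convolve(q_in, taps, mode="same")          # filtered at full float precision

def noise_db(sig, ref):
    return 10 * np.log10(np.mean((sig - ref) ** 2))

ref = np.convolve(x, taps, mode="same")           # same filter applied to the clean signal
print("noise, filter output (full precision): %.1f dB" % noise_db(y, ref))
print("noise, requantized to 8 bits:          %.1f dB"
      % noise_db(np.round(y / step) * step, ref))
print("noise, requantized to 10 bits:         %.1f dB"
      % noise_db(np.round(y / (step / 4)) * (step / 4), ref))
```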
Reply by Jerry Avins August 23, 2004
jim wrote:

> Jerry Avins wrote:
>
>> Fred Marshall wrote:
>>
>>> "Jerry Avins" <jya@ieee.org> wrote in message
>>> news:4127aaa1$0$21752$61fed72c@news.rcn.com...
>>>
>>> ................................
>>>
>>>> With 16x oversampling, two extra bits of precision are justified. The
>>>> noise reduction is certainly at least equivalent. Since a total of four
>>>> are available, one might guess that the noise reduction is more. Since those
>>>> last two bits are random, I don't think so.
>>>
>>> Jerry,
>>>
>>> I don't follow this paragraph....
>>>
>>> Fred
>>
>> The mean of a large number of approximate but unbiased measurements
>> converges to the "true" value. Oversampling and low-pass filtering acts
>> as an averaging process. In the best of circumstances -- excellent
>> differential linearity in the DAC, appropriate dither noise -- the
>> precision increases with the square root of the number of elements
>> averaged. Sixteen x oversampling and decimating then yields a fourfold
>> increase in precision, or two extra bits. Caveat: the original analog
>> signal must be bandlimited so that sampling at the final rate wouldn't
>> cause aliasing. Otherwise, oversampling won't create redundant samples.
>
> Something not right here:
> If you are filtering (which should by design remove high frequency
> components) along with your decimation then why would the signal have to
> be bandlimited to the "final" rate to prevent aliasing.
It's not to prevent aliasing. Filtering after oversampling averages the samples, thereby potentially extending the precision. An alternative to filtering and decimating is adding 16 successive n-bit samples and rounding the n+4-bit sum to n+2 bits.
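[Editor's note: a small sketch of the bit arithmetic Jerry describes, with illustrative numbers (n = 8 here). Sixteen n-bit samples are summed, which needs n+4 bits, and the two least significant bits are then rounded away, leaving an (n+2)-bit result. Variable names are the editor's, not from the thread.]

```python
import numpy as np

def average_block(samples_nbit):
    """Sum sixteen n-bit samples (the sum needs n+4 bits), then round away
    the two least significant bits, leaving an (n+2)-bit result."""
    assert len(samples_nbit) == 16
    total = int(np.sum(samples_nbit))   # fits in n+4 bits
    return (total + 2) >> 2             # round to nearest, keep n+2 bits

# Example: sixteen 8-bit codes hovering around a "true" level of 100.3 LSBs
rng = np.random.default_rng(1)
block = np.round(100.3 + rng.uniform(-0.5, 0.5, 16)).astype(int)
print("8-bit samples: ", block)
print("10-bit result: ", average_block(block))   # about 401, i.e. 100.3 in quarter-LSB steps
```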
> And I think the bit precision would increase by 4 for 16x
> oversampling. As in, one increased bit per doubling of the sample rate.
The sum is 4 bits larger. Two of those bits are meaningless. Consider that the 16 samples are slightly corrupted approximations of some true value. Averaging a number of them can give a better estimate of that true value than any one of them. It turns out that the precision of the estimate increases at best with the square root of the number of independent elements averaged. "At best" because the enabling conditions aren't always met. There must be no bias, or "instrument error". Successive errors must not be correlated. The error distribution must be Gaussian or nearly so. (When thermal noise is too small, we add dither. In this oversampling case, the signal variation from the first sample to the 15th does the job PROVIDED THAT HIGH FREQUENCIES HAVE BEEN REMOVED.)
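[Editor's note: a quick Monte Carlo check of the square-root rule Jerry invokes, under assumed conditions (uniform dither of one LSB, unbiased and uncorrelated errors). Averaging 16 measurements cuts the RMS error by about a factor of four, i.e. two extra bits, not four. The constants are illustrative only.]

```python
import numpy as np

rng = np.random.default_rng(0)
step = 1.0            # quantization step (1 LSB)
true_value = 0.37     # arbitrary value between code levels
n_trials = 100_000
n_avg = 16

# Dither spreads the quantization error so successive errors are unbiased and uncorrelated.
dither = rng.uniform(-step / 2, step / 2, size=(n_trials, n_avg))
samples = np.round((true_value + dither) / step) * step

err_single = samples[:, 0] - true_value
err_mean = samples.mean(axis=1) - true_value

rms = lambda e: np.sqrt(np.mean(e ** 2))
print("RMS error, one sample:     %.3f LSB" % rms(err_single))
print("RMS error, 16-sample mean: %.3f LSB" % rms(err_mean))
print("improvement factor:        %.2f (sqrt(16) = 4 expected)"
      % (rms(err_single) / rms(err_mean)))
```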
> But I haven't been following the thread very closely, so maybe I
> misunderstand what is being discussed.
>
> -jim
--
Engineering is the art of making what you want from things you can get.
Reply by jim August 23, 2004

Jerry Avins wrote:
> Fred Marshall wrote:
>
>> "Jerry Avins" <jya@ieee.org> wrote in message
>> news:4127aaa1$0$21752$61fed72c@news.rcn.com...
>>
>> ................................
>>
>>> With 16x oversampling, two extra bits of precision are justified. The
>>> noise reduction is certainly at least equivalent. Since a total of four
>>> are available, one might guess that the noise reduction is more. Since those
>>> last two bits are random, I don't think so.
>>
>> Jerry,
>>
>> I don't follow this paragraph....
>>
>> Fred
>
> The mean of a large number of approximate but unbiased measurements
> converges to the "true" value. Oversampling and low-pass filtering acts
> as an averaging process. In the best of circumstances -- excellent
> differential linearity in the DAC, appropriate dither noise -- the
> precision increases with the square root of the number of elements
> averaged. Sixteen x oversampling and decimating then yields a fourfold
> increase in precision, or two extra bits. Caveat: the original analog
> signal must be bandlimited so that sampling at the final rate wouldn't
> cause aliasing. Otherwise, oversampling won't create redundant samples.
Something not right here:
If you are filtering (which should by design remove high frequency components) along with your decimation then why would the signal have to be bandlimited to the "final" rate to prevent aliasing.

And I think the bit precision would increase by 4 for 16x oversampling. As in, one increased bit per doubling of the sample rate.

But I haven't been following the thread very closely, so maybe I misunderstand what is being discussed.

-jim
Reply by Jerry Avins August 23, 2004
Fred Marshall wrote:

> "Jerry Avins" <jya@ieee.org> wrote in message > news:4127aaa1$0$21752$61fed72c@news.rcn.com... > > ................................ > >>With 16x oversampling, two extra bits of precision are justified. The >>noise reduction is certainly at least equivalent. Since a total of four >>available, one might guess that the noise reduction is more. Since those >>last two bits are random, I don't think so. > > > Jerry, > > I don't follow this paragraph.... > > Fred
The mean of a large number of approximate but unbiased measurements converges to the "true" value. Oversampling and low-pass filtering acts as an averaging process. In the best of circumstances -- excellent differential linearity in the DAC, appropriate dither noise -- the precision increases with the square root of the number of elements averaged. Sixteen x oversampling and decimating then yields a fourfold increase in precision, or two extra bits. Caveat: the original analog signal must be bandlimited so that sampling at the final rate wouldn't cause aliasing. Otherwise, oversampling won't create redundant samples.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by Fred Marshall August 23, 2004
"Jerry Avins" <jya@ieee.org> wrote in message
news:4127aaa1$0$21752$61fed72c@news.rcn.com...

................................
> With 16x oversampling, two extra bits of precision are justified. The
> noise reduction is certainly at least equivalent. Since a total of four
> are available, one might guess that the noise reduction is more. Since those
> last two bits are random, I don't think so.
Jerry,

I don't follow this paragraph....

Fred
Reply by Dennis M August 23, 2004
> made me laugh. Your words describe, perfectly,
> about 75% of my career!
Hee hee. Thank goodness for the other 25%! - Dennis
Reply by Jerry Avins August 21, 2004
Fred Marshall wrote:

   ...

> Thus one must conclude that the lowpass filter must output a longer word
> length - which is easy enough to imagine in that the filter adds multiple
> samples and would generally require a longer word length to fully represent
> the results. You can calculate how many bits at the output are necessary to
> not add any new "quantization". Only doing this can reduce the quantization
> noise - even if you drop some of the added LSBs from the calculation above -
> you can't drop *all* of them and get lower quantization noise.
Isn't that going at it the long way round? Taking advantage of the extra samples to derive a greater word length reduces the quantization noise because the quantization steps are smaller. Q.E.D.

With 16x oversampling, two extra bits of precision are justified. The noise reduction is certainly at least equivalent. Since a total of four are available, one might guess that the noise reduction is more. Since those last two bits are random, I don't think so.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by Fred Marshall August 21, 2004
"Dennis M" <dennis.merrill@thermo.com> wrote in message
news:da1c3548.0408201032.4be40a55@posting.google.com...
>> However, if quantization noise extends the spectrum beyond B then the result
>> is obviously different. This seems to be the case you're interested in.
>> "Drop sampling" / decimation doesn't reduce the higher frequencies at all.
>> It disguises them because they've been folded into lower frequencies - thus
>> increasing the lower frequency noise energy.
>
> Thanks for hanging with me on this one. There seems to be a real art
> to asking the question correctly and I think I've blown it. You seem
> to be onto what I was originally asking. I'm assuming bandlimited
> input B and fs = 2*B and oversampling rate fs` = K x fs (where K can
> even be non-integer, thus the term drop sample interpolation). With
> no prefiltering the noise power due to quantization noise would be
> the same - I see that now.
>
> But when that quantization noise gets folded back in due to aliasing,
> would the spectrum of the noise look the same as if it had simply been
> sampled at the lower frequency (i.e. not oversampled)? Not in my
> mind!
>
> I think you're right that I just need to sit down and run some Matlab on
> it. That will probably clear it up much quicker. I'll do that and
> hopefully I can put together a post at some later date that describes
> why this is not as simple of a problem as it might appear at first.
Dennis,

OK. Well, you threw folks for a loop with "drop sample interpolation". Sampling the original (continuous) data at a rate higher than 2*B gets you lots of samples with no interpolation whatsoever. If you're starting with sampled data and then interpolating or increasing the sample rate then I guess I understand what you're meaning. But usually one uses all the original samples in computing samples of an increased sample rate. So, nothing is dropped.

Theoretically you cannot sample at 2*B - you *must* sample at fs > 2*B. Practically fs > 2.5*B to 10*B depending on the application.

I don't know how one prefilters away the quantization noise. But perhaps I'm in a different context.... I mean that the quantization inherent with digital output from a sampling process results in quantization noise. It has nothing much to do with the (continuous) signal - so no prefiltering can change the noise inherent at the output. Now, if the "signal" is already sampled and quantized then that's a different matter - and, as I said before, filtering must also include something with regard to finer quantization.

If the noise is white - as would be the case for quantization noise - then folding the noise could lead to more white noise, I do believe. But "more" in this context probably yields the same noise as seen in the 2.5*B case. By this I mean: if you sample the same signal at two rates (and let's allow the rates to be related by an integer for now), let's say 2.5*B and 25*B:

1) We see that 10 repetitions of the 2.5*B spectrum fit exactly into the 25*B spectrum. We can view this repeated spectrum as a signal sampled at 25*B if we wish.

2) Zeroing the middle of the 10-times-repeated 2.5*B spectrum results in the original 25*B spectrum. This is lowpass filtering.

3) Once the lowpass filter is applied, we can reduce the sample rate by a factor of 10 by simple decimation. This is sample rate reduction / decimation preceded by a lowpass filter. We could have gone directly to this step starting with the original 25*B spectrum.

So, what if these both include quantization noise?

The 2.5*B spectrum will have quantization noise that is of lower energy because there are fewer samples - I think that's a fair statement. The 25*B spectrum will have quantization noise that is of higher energy because there are more samples - but it is spread out over a wider frequency range. In fact, I have to wonder if the spectral density of the noise isn't the same or nearly so.

Anyway, here we have this 25*B spectrum with white quantization noise added. If we lowpass filter the noisy data, we get rid of some of the noise energy in a perfect situation (not quantized). If we quantize the result and decimate - using the same quantization level as before - then the quantization noise *must* be the same as the noise in the original 2.5*B data - because the samples are the same (or nearly so).

Thus one must conclude that the lowpass filter must output a longer word length - which is easy enough to imagine in that the filter adds multiple samples and would generally require a longer word length to fully represent the results. You can calculate how many bits at the output are necessary to not add any new "quantization". Only doing this can reduce the quantization noise - even if you drop some of the added LSBs from the calculation above - you can't drop *all* of them and get lower quantization noise.

Fred
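[Editor's note: a rough numerical check of Fred's hunch about the noise spectral density, not taken from the thread. The per-sample quantization noise power is about the same at 2.5*B and 25*B (same step size), but at the higher rate it is spread over ten times the bandwidth, so only about a tenth of it lies below the base-rate Nyquist region. The 12-bit step, test tone, and dither are the editor's illustrative assumptions.]

```python
import numpy as np

rng = np.random.default_rng(3)
B = 1000.0
step = 2.0 / 4096          # 12-bit quantizer over [-1, 1)
n = 1 << 16

def quantization_noise_stats(fs):
    t = np.arange(n) / fs
    x = 0.8 * np.sin(2 * np.pi * 0.3 * B * t)          # in-band test tone
    dither = rng.uniform(-step / 2, step / 2, n)        # whitens the error
    noise = np.round((x + dither) / step) * step - x
    spec = np.abs(np.fft.rfft(noise)) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    in_band = spec[freqs <= 1.25 * B].sum() / spec.sum()
    return np.mean(noise ** 2), in_band

for fs in (2.5 * B, 25.0 * B):
    power, frac = quantization_noise_stats(fs)
    print("fs = %7.0f Hz: noise power %.2e, fraction below 1.25*B = %.2f"
          % (fs, power, frac))
```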