
ADC Interpolation Gain

Started by pacman101 February 24, 2011
Hello,

I've been reading a lot about increasing an ADC's effective resolution
(quantization gain) through oversampling and then applying an
interpolation filter.

I understand that every 4x of oversampling can increase the SNR by 6 dB,
or 1 bit, by lowering the quantization noise floor.

However, say my ADC is sampling two tones at 1 kHz and 2 kHz using a
theoretical 14-bit ADC.  Theoretically my SNR is 6 dB * 14 = 84 dB, which
means the SNR from the strongest tone down to the noise floor is 84 dB,
and the dynamic range is |P(1 kHz) - P(2 kHz)|.  If I perform
oversampling and interpolation, the SNR of each tone increases.
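(As a sanity check on that 84 dB figure, the usual rule of thumb for an
ideal N-bit converter driven by a full-scale sine is 6.02*N + 1.76 dB; a
quick check of the 14-bit case:)

    import math
    bits = 14
    snr_ideal = 6.02 * bits + 1.76   # ideal quantization-limited SNR, full-scale sine
    print(round(snr_ideal, 1))       # ~86.0 dB for 14 bits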

Now my question is, say the difference between the two tones is bigger
than my ADC SNR, which is about 87 dB.  If I do not oversample, then I
will not see the weaker tone.  However, if I oversample by another 4x,
then in theory I should be able to obtain 90 dB of SNR and see both
tones, and more oversampling should, in theory, let me resolve an even
larger difference between the two signals.  Is this realistic?

Also I found this reading an Analog data sheet:
"The signal-to-noise ratio (SNR) is the ratio of the rms signal amplitude
to the rms value of the sum of all spectral components except the first six
harmonics and dc. As the input level is decreased, SNR typically decreases
decibel-for-decibel in a linear fashion."

What do they mean by saying that when you lower the input level, the SNR
typically decreases?


Thanks
Jan
On Thu, 24 Feb 2011 10:39:09 -0600, pacman101 wrote:

> Hello,
>
> I've been reading a lot about increasing ADC quantization gain through
> oversampling and then applying an interpolation filter.
>
> I understand that for every 4x oversampling you can increase the SNR by
> 6 dB or 1 bit and that lowers the quantization noise floor.
>
> However, say my ADC is sampling two tones at 1 kHz and 2 kHz using a
> 14-bit theoretical ADC.  Theoretically my SNR is 6 dB * 14 = 84 dB.
> Which means the SNR from the highest tone to the noise floor is 84 dB,
> and the dynamic range is equal to the abs(Pw@ 1kHz - Pw@ 2kHz).  If I
> perform oversampling and interpolation the SNR of each tone increases.
>
> Now my question is, say the difference between the two tones is bigger
> than my ADC SNR, which is about 87 dB.  If I do not oversample then I
> will not see this lower tone.
Actually, for many tones you will see the lower tone if you look for it, particularly if the 'quiet' tone is of much higher frequency, or if the two tones aren't harmonically related. The 'quiet' tone will occasionally bump the reading of the 'loud' tone up or down one LSB, and this will come through in the result. With the simple harmonic relationship of 1kHz and 2kHz, I'm just not sure how much the above applies.
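A toy simulation makes this concrete; the frequencies, levels, and noise
amount below are arbitrary assumptions, just to show a tone sitting below
1 LSB still appearing in a long FFT once the loud tone and the
converter's own noise keep toggling the LSB:

    import numpy as np

    fs, n = 1_000_000, 1 << 18
    t = np.arange(n) / fs
    lsb = 2.0 / 2**14                                      # 14-bit converter spanning +/-1 V

    loud  = 0.9 * np.sin(2 * np.pi * 1.0e3 * t)            # near full-scale tone
    quiet = 0.25 * lsb * np.sin(2 * np.pi * 71.3e3 * t)    # tone well below 1 LSB
    noise = 0.5 * lsb * np.random.randn(n)                 # models the ADC's own noise

    x = np.round((loud + quiet + noise) / lsb) * lsb       # ideal mid-tread quantizer

    spec = 20 * np.log10(np.abs(np.fft.rfft(x * np.hanning(n))) + 1e-12)
    # A plot of 'spec' shows the 71.3 kHz line well above the per-bin noise
    # floor, even though its amplitude is only a quarter of an LSB.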
> However, If I oversample 4x more, then in
> theory I should be able to obtain 90 dB in SNR and see the two tones,
> and more oversampling in theory will allow me to increase the difference
> between the two signals.  Is this realistic?
If you oversample 4x more, filter and decimate, all you're doing is reducing the total spectrum, most of which is (you hope) noise. That increases the SNR.
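As a quick numerical check (the sample rates and 14-bit LSB here are just
assumptions), you can model the converter noise as white, low-pass it to
the band you actually keep, decimate, and watch the in-band noise power
drop by about 6 dB for a 4x rate reduction:

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    fs_out, osr, n = 48_000, 4, 1 << 16
    fs = fs_out * osr
    lsb = 2.0 / 2**14

    # Model the converter's total noise (quantization + thermal) as white.
    noise = (lsb / np.sqrt(12)) * rng.standard_normal(n)

    # Low-pass to the 24 kHz band of interest, then keep 1 of every 4 samples.
    noise_dec = signal.decimate(noise, osr, ftype='fir')

    drop_db = 10 * np.log10(np.var(noise) / np.var(noise_dec[100:-100]))
    # drop_db comes out close to 6 dB: same total noise at the ADC, but only
    # a quarter of it lands in the band we keep.  In general the gain is
    # about 10*log10(fs / (2 * bandwidth_kept)), i.e. ~3 dB per doubling of fs.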
> Also I found this reading an Analog data sheet: "The signal-to-noise
> ratio (SNR) is the ratio of the rms signal amplitude to the rms value of
> the sum of all spectral components except the first six harmonics and
> dc.  As the input level is decreased, SNR typically decreases
> decibel-for-decibel in a linear fashion."
>
> What do they mean when you lower the input level, SNR typically
> decreases?
They say that because the noise level is fixed to the RSS of all the
noise sources in the ADC.  So when you increase the signal level (up to
the point where distortion from clipping becomes objectionable) you also
increase SNR.  The SNR usually given is with the highest possible
unclipped input amplitude, to make the chip look as good as possible.

Something that will change your confusion level (either up or down :-))
is that with most ADCs, the conversion noise is white at the ADC output,
and has roughly the same magnitude, no matter how fast you sample the
ADC.  So _in the discrete time domain_ the noise always looks the same.

The difference is that in that same discrete time domain, when you sample
faster, all of your desired signals become narrower in bandwidth.  So you
can filter tighter, from the perspective of that discrete-time domain,
and get rid of more noise.  From a frequency-domain perspective, _that_
is why oversampling helps.

--
http://www.wescottdesign.com
Tim Wescott <tim@seemywebsite.com> wrote:
(snip on SNR and oversampling)

> Something that will change your confusion level (either up or down :-))
> is that with most ADCs, the conversion noise is white at the ADC output,
> and has roughly the same magnitude, no matter how fast you sample the
> ADC.  So _in the discrete time domain_ the noise always looks the same.
> The difference is that in that same discrete time domain, when you
> sample faster, all of your desired signals become narrower in bandwidth.
> So you can filter tighter, from the perspective of that discrete-time
> domain, and get rid of more noise.  From a frequency-domain perspective,
> _that_ is why oversampling helps.
Otherwise, there is a long tradition of "signal averaging" to reduce the
noise and improve the SNR.  It is the basis for the "margin of error" on
many statistical reports.  For Gaussian distributed (and sometimes even
non-Gaussian) noise, the noise averages out as 1/sqrt(N).  Many
measurements are inherently statistical (radioactive half-life, as one
example), and the only way to get a more accurate measurement is to
measure many samples.

Oversampling is slightly different, but mostly you get the same result
from multiple sampling and dithering.

-- glen
On Thu, 24 Feb 2011 19:27:42 +0000, glen herrmannsfeldt wrote:

> Tim Wescott <tim@seemywebsite.com> wrote: (snip on SNR and oversampling)
>
>> (snip on white conversion noise and filtering tighter in the
>> oversampled, discrete-time domain)
>
> Otherwise, there is a long tradition of "signal averaging" to reduce the
> noise and improve the SNR.  It is the basis for the "margin of error" on
> many statistical reports.  For Gaussian distributed (and sometimes even
> non-Gaussian) noise, the noise averages out as 1/sqrt(N).  Many
> measurements are inherently statistical (radioactive half-life, as one
> example), and the only way to get a more accurate measurement is to
> measure many samples.
Which is really looking at the same thing from yet another angle.  It's
good to have lots of different angles to look at hard problems from.
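A two-line numpy experiment shows the 1/sqrt(N) behavior glen describes
(the block size of 16 is picked arbitrarily):

    import numpy as np

    rng = np.random.default_rng(1)
    samples = rng.standard_normal((100_000, 16))   # 100k trials, 16 readings each
    print(samples[:, 0].std())                     # ~1.0  : a single reading
    print(samples.mean(axis=1).std())              # ~0.25 : average of 16 = 1/sqrt(16)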
> Oversampling is slightly different, but mostly you get the same result
> from multiple sampling and dithering.
Most ADCs with bit counts above 12 or so provide the dithering free of
charge, in the form of random noise.

--
http://www.wescottdesign.com
Thanks.  All your comments make sense.  I wanted to make sure that I am on
the right track, since I have no experience playing with A/D converters
(except using a spectrum analyzer and some ready-built radios).

So if I were to design an RF receiver, what design parameters should I pay
particular attention to when choosing the correct ADC?  For instance I
would think for my application I need high dynamic range and low jitter,
but what else should I pay attention to so that I don't get screwed by the
manufacturer?


>On Thu, 24 Feb 2011 19:27:42 +0000, glen herrmannsfeldt wrote:
>(snip of the exchange on signal averaging, dithering, and oversampling)
>
>Most ADCs with bit counts above 12 or so provide the dithering free of
>charge, in the form of random noise.
>
>--
>http://www.wescottdesign.com
> Theoretically my SNR is 6 dB * 14 = 84 dB.  Which means
> the SNR from the highest tone to the noise floor is 84 dB, ...
By noise floor, if you mean on a spectrum analyzer or FFT, then that
last statement is not true.

Dirk
I agree Dirk.

I believe what you are saying is that the dynamic range equals
20*log(Signal) - 20*log(FFT) - Noise Figure.

Is this formula more or less correct?

>> Theoretically my SNR is 6 dB * 14 = 84 dB.  Which means
>> the SNR from the highest tone to the noise floor is 84 dB, ...
>
> By noise floor, if you mean on a spectrum analyzer or FFT, then that
> last statement is not true.
>
> Dirk
On 02/24/2011 12:39 PM, pacman101 wrote:
> Thanks.  All your comments make sense.  I wanted to make sure that I am
> on the right track since I have no experience dealing with playing with
> A/D converters (except using a spectrum analyzer and some readily built
> radios).
>
> So if I were to design an RF receiver, what design parameters should I
> pay particular attention to when choosing the correct ADC?  For instance
> I would think for my application I need high dynamic range and low
> jitter, but what else should I pay attention to so that I don't get
> screwed by the manufacturer?
>> context snipped <<

Look for converters that specify their 2nd- and 3rd-order intermodulation
distortion -- this is a good indication of the 'useful' linearity in a
receiver where you may be bringing a much wider band to the ADC than you
will be sorting out after conversion.

Actually, in general, you want to look for converters that are specified
in 'radioish' terms, and the 3rd-order intermod is just part of that.
This has been a frustration in the past when I was helping to select a
good ADC for video use, and had to translate (or just plain guess at)
what the radio-centric specifications meant in terms of a broadband
signal that went all the way down to DC.

Rate converter noise by its spectral height in real frequency -- i.e.,
take the sampling rate into account.

Jitter is important, but shows up as phase noise in your receiver --
i.e., if the sampling instant jitters, then a strong signal adjacent to a
desired weak signal will smear, and will degrade the effective SNR of
your desired signal.  (Rough numbers below.)

Note, too, that the ADC jitter is going to be specified assuming a
perfect clock to the ADC -- you can screw this over really quickly by not
paying strict attention to your clock signal.  You need to treat the
clock to the ADC as a precious resource: I know that digital radio guys
don't let their ADC clocks go through their FPGAs, because they know the
FPGA internal noise will mess up their clocks.  On the other hand, don't
over-buy -- there's no point in getting the World's Lowest Jitter ADC if
you _are_ corrupting your clock's phase noise by generating it through an
FPGA.

I don't know what else specific to say -- the ADC is one of the choke
points on performance, so there's a lot of value in modeling your
receiver's performance parametrically, then plugging a lot of ADC
parameters into it to see which one really is best.  Then look at what
each ADC needs for care and feeding, and decide how much it's going to
cost to keep each one happy.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
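To put a rough number on the jitter point above (the IF and jitter
figures are assumptions chosen only for illustration): the usual rule is
that sampling-clock jitter alone caps the SNR at about
-20*log10(2*pi*f_in*t_jitter).

    import math
    f_in = 70e6      # assumed input/IF frequency being sampled
    t_j  = 1e-12     # assumed total rms jitter (clock + aperture), 1 ps
    snr_ceiling = -20 * math.log10(2 * math.pi * f_in * t_j)
    print(round(snr_ceiling, 1))   # ~67 dB -- here jitter, not bit count, sets the ceiling

Which is why a 14- or 16-bit converter can be wasted if the clock isn't
treated as carefully as described above.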
Two things:

1) If you have two tones into the ADC, they can't BOTH be at full
scale.

2) When an ADC is oversampling, it just means the sampling rate is
higher than it needs to be to meet Nyquist.  If you raise the
sampling rate, the TOTAL Q noise remains unchanged, but that noise is
now spread out over a wider BW -- the full Nyquist BW -- so the noise
in YOUR BW of INTEREST gets lowered, i.e. the noise DENSITY is lower,
but the TOTAL noise remains the same.
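A quick back-of-the-envelope version of that (14 bits and a 2 V
full-scale span assumed; the sample rates are arbitrary):

    import math
    lsb = 2.0 / 2**14
    p_total = lsb**2 / 12                     # total quantization noise power: fixed
    for fs in (10e6, 40e6, 160e6):
        density = 10 * math.log10(p_total / (fs / 2))
        print(f"fs = {fs/1e6:5.0f} MHz   noise density = {density:6.1f} dB/Hz")
    # The density drops 6 dB for every 4x increase in fs, while the total
    # (the sum over the whole Nyquist band) stays the same.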

Mark


On Thu, 24 Feb 2011 14:07:06 -0800, Tim Wescott <tim@seemywebsite.com>
wrote:

>On 02/24/2011 12:39 PM, pacman101 wrote:
>(snip of Tim's points on intermod specs, noise density, jitter, and
>clock distribution)
In addition to Tim's points on clock jitter, etc., another parameter that
may be useful in an RF system is the aperture jitter or aperture delay.
This is the jitter in the Sample-and-Hold Amplifier prior to the ADC and
comes into play mostly if you're doing IF-sampling, sub-sampling or
super-Nyquist conversion (i.e., capturing an aliased image rather than
energy within the traditional Nyquist sampling range).

And if what you're really getting at is whether it is possible to collect
and process a signal that is WAY down in the dynamic range of the ADC,
then, yes, you can, if you're careful about what you do and make use of
processing gain to isolate and recover it (a rough number is sketched
below).  There are practical examples of receivers that are pretty old
designs that pull relatively narrow-band PSK signals that occupy about
1 LSB (or a little less) of a single ADC (i.e., IF sampled) and process
them with barely measurable implementation loss.

Processing gain is a nice magic trick that a lot of people don't fully
exploit.

Eric Jacobsen
http://www.ericjacobsen.org
http://www.dsprelated.com/blogs-1//Eric_Jacobsen.php
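To put a number on that processing-gain point (the sample rate and signal
bandwidth below are assumptions, chosen only for scale): narrowing from
the full Nyquist band down to the signal's own bandwidth buys roughly
10*log10(fs / (2*B)) dB.

    import math
    fs = 100e6     # assumed IF sampling rate
    bw = 10e3      # assumed bandwidth of the narrow-band PSK signal
    pg_db = 10 * math.log10(fs / (2 * bw))
    print(round(pg_db, 1))   # ~37 dB of processing gain from filtering/despreading alone

That kind of gain is what lets a signal sitting at roughly 1 LSB be
pulled well clear of the converter's in-band noise.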