Reply by SRB April 17, 2013
>Sharon
>
>In the case of overall INL errors that cause low-order distortion terms,
>you can model this as a continuous-time non-linearity followed by the
>sampling operation. Therefore distortion terms that exceed the Nyquist
>rate will alias.
>
>Bob
Hi Bob

Thank you for the clarification. I think I see what you are getting at.

Sharon
Reply by SRB April 17, 2013
Hi Rick

> Just to add my two cents here, we should keep in
> mind that traditional oversampling to improve
> signal-to-quantization-noise ratio (SQNR) is a three-step process:
>
> 1. Sample the analog signal at a higher sample rate
>    than is required by the Nyquist criterion.
>
> 2. Lowpass filter the sampled data. Make sure that
>    the number of bits in the filter's coefficients and
>    the number of bits in the arithmetic results are
>    large enough to maintain your desired high SQNR.
>
> 3. Decimate the filtered sequence.
Thank you very much for clarifying that. That is actually what I'm doing, but I should have been more explicit about it in my original question.

What prompted me to ask the question originally was trying to decide what stopband attenuation my lowpass filter should have. My initial thought was that it should be at least SNR_ADC + 10log10(R), where SNR_ADC is the quoted SNR for the ADC and R is the decimation factor. However, I realised that, for large decimation factors, this would give me a stopband attenuation much greater than the ADC SFDR (and require a lot of filter taps). My thinking was that, if SFDR is not improved by oversampling, there is no point improving SNR to the extent that signals smaller than the spurs can be detected, and therefore that I could get away with a less stringent stopband attenuation (and save on filter taps).

This discussion has made me think that that reasoning was probably oversimplified!

Sharon
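[Editor's note: the rule of thumb Sharon describes is easy to put numbers on. The sketch below uses made-up example values (a 70 dB ADC, decimation by 16), not figures from the thread.]

```python
import math

def stopband_atten_db(snr_adc_db, R):
    """Stopband attenuation needed so that aliased out-of-band noise
    stays below the decimated noise floor: the quoted ADC SNR plus
    the 10*log10(R) processing gain of decimating by R."""
    return snr_adc_db + 10 * math.log10(R)

# Example: a 70 dB ADC decimated by 16 suggests ~82 dB of stopband
# attenuation -- possibly well beyond the ADC's SFDR, as noted above.
print(stopband_atten_db(70.0, 16))
```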
Reply by SRB April 17, 2013
Tim

>To rant a bit:...
Thank you very much for taking the time to reply in so much detail! That was very helpful in clarifying some of the assumptions involved in considering the effects of oversampling, and the types of noise for which it won't help.

Sharon
Reply by Rick Lyons April 17, 2013
On Sat, 13 Apr 2013 13:32:58 -0500, "SRB" <62352@dsprelated> wrote:

>Hi All
>
>My question relates to the effect of oversampling on the SFDR obtained with
>an ADC. I understand that the SNR of an ADC, considered over a certain
>bandwidth of interest (BW), can be increased relative to the overall SNR by
>sampling at a rate (Fs) such that Fs/2 >> BW, and that the resulting
>increase in SNR is 10log10(Fs/(2*BW)). This has been clearly explained in the
>responses to a previous post
>(http://www.dsprelated.com/showmessage/72731/1.php).
>
>Am I correct in assuming that oversampling will not alter the SFDR? If so,
>is it reasonable to conclude that there is no point in oversampling to the
>extent that SNR becomes much better than SFDR because, while this would
>allow very small signals to be detected, these signals could not be
>distinguished from spurs?
>
>Thanks very much,
>Sharon
Hello Sharon,

Just to add my two cents here, we should keep in mind that traditional oversampling to improve signal-to-quantization-noise ratio (SQNR) is a three-step process:

1. Sample the analog signal at a higher sample rate than is required by the Nyquist criterion.

2. Lowpass filter the sampled data. Make sure that the number of bits in the filter's coefficients and the number of bits in the arithmetic results are large enough to maintain your desired high SQNR.

3. Decimate the filtered sequence.

Good Luck,
[-Rick-]
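[Editor's note: Rick's three steps can be sketched numerically. All parameters below are illustrative assumptions: 16x oversampling, a 10-bit quantizer with roughly one LSB of front-end noise, and a boxcar average standing in for a proper lowpass filter.]

```python
import math
import random

random.seed(1)
R, N = 16, 2000              # decimation factor and output-rate samples
lsb = 1.0 / 2**10            # 10-bit quantizer step (example value)
f = 0.003                    # signal frequency, cycles per output sample

def adc(x):
    # quantizer with ~1 LSB of front-end noise, so errors decorrelate
    return round((x + random.gauss(0, lsb)) / lsb) * lsb

# Step 1: sample the "analog" sine at R times the output rate.
fast = [adc(0.5 * math.sin(2 * math.pi * f * i / R)) for i in range(N * R)]

# Steps 2 and 3: boxcar-average each block of R samples
# (a crude lowpass filter combined with decimation).
slow = [sum(fast[i * R:(i + 1) * R]) / R for i in range(N)]

def rms(e):
    return math.sqrt(sum(v * v for v in e) / len(e))

mid = (R - 1) / (2 * R)      # each averaged block is centered here in time
err_raw = rms([fast[n * R] - 0.5 * math.sin(2 * math.pi * f * n)
               for n in range(N)])
err_avg = rms([slow[n] - 0.5 * math.sin(2 * math.pi * f * (n + mid))
               for n in range(N)])
print(err_raw / err_avg)     # roughly a sqrt(R) = 4x improvement
```

Note that the improvement depends on the ~1 LSB of front-end noise in `adc`; with a noiseless quantizer and a very slow signal the averaged samples would all be identical, which is exactly the caveat discussed elsewhere in this thread.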
Reply by Bob April 17, 2013
Sharon

In the case of overall INL errors that cause low-order distortion terms, you can model this as a continuous-time non-linearity followed by the sampling operation. Therefore distortion terms that exceed the Nyquist rate will alias.
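[Editor's note: the folding arithmetic can be illustrated with hypothetical numbers. Suppose a cubic nonlinearity ahead of a 100 Hz sampler produces a 3rd harmonic of a 40 Hz input tone at 120 Hz, beyond Nyquist.]

```python
fs, f0 = 100.0, 40.0            # example sample rate and input tone (Hz)
h3 = 3 * f0                     # 3rd-harmonic distortion term: 120 Hz
alias = abs(h3 - round(h3 / fs) * fs)
print(alias)                    # 120 Hz folds to 20 Hz, inside 0..fs/2
```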


Bob

Reply by glen herrmannsfeldt April 16, 2013
Tim Wescott <tim@seemywebsite.com> wrote:

(snip on discussion about noise and oversampling)

> Actually your statement misses reality a bit.

> And this whole subject is full of complications, subtleties, and
> opportunities to mislead yourself.

> To rant a bit: as signal processing engineers, we're taught to think in
> the frequency domain. Thinking in the frequency domain means that we
> are, implicitly at least, using Fourier transform math to describe and
> attempt to solve our problems. This is exactly correct when we are
> dealing with exactly linear, time-invariant systems. It can be made to
> be exactly correct with systems that are time-varying in known ways. It
> can be used to some benefit with systems that are nonlinear -- but only
> if we are careful.

(snip)

> Because quantization is really a nonlinear effect it is dependent on the
> signal -- this is one of the "it depends" things that I was talking
> about. What it depends on, primarily, is how fast the signal is varying,
> how large the signal is, how much noise there is ahead of the
> quantization, and (to some extent) whether the signal is synchronized to
> the sampling.

> If you are measuring a slowly-varying signal with an ADC that is
> dominated by quantization, then oversampling won't do you any good
> at all -- you'll just be measuring the same damn thing over and
> over again, then averaging the heck out of it.
Seems to me that this is another "it depends" case. Specifically, on how fast "slowly-varying" is.

For a specific example, consider the bass response of CD audio. More generally, is it "slowly-varying" at the high oversampled rate or the original sample rate? If it isn't too slow, then the oversampling will be done with different values, maybe enough to matter.

There was once discussion about quantization noise and CD audio related to the question about bass response. At 44.1kHz, each cycle of a 20Hz sine is sampled 2205 times. What number do you use to describe the quantization noise in that case? (Assuming 16 bit samples, and that the sine might not be full amplitude.)

-- glen
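[Editor's note: glen's numbers can be checked directly. The amplitude below is an arbitrary assumption (half of full scale), since he notes the sine might not be full amplitude.]

```python
import math

fs, f0, bits = 44100, 20.0, 16
q = 2.0 / 2**bits                          # LSB for a [-1, 1) full scale
print(fs / f0)                             # 2205 samples per 20 Hz cycle

x = [0.5 * math.sin(2 * math.pi * f0 * n / fs) for n in range(fs)]
err = [round(v / q) * q - v for v in x]    # quantization error over 1 s
rms = math.sqrt(sum(e * e for e in err) / len(err))
print(rms, q / math.sqrt(12))              # close to the q/sqrt(12) model
```

Because the half-scale sine still sweeps across thousands of codes per cycle, the measured error RMS lands close to the classical q/sqrt(12) figure, even at only 20 Hz.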
Reply by glen herrmannsfeldt April 16, 2013
SRB <62352@dsprelated> wrote:

(snip, and previously snipped)

> Yes, I was assuming the same A/D would be used in both cases. In our
> specific application it would be the same A/D but, in general, in asking
> about the effect of oversampling on SFDR I was assuming other factors
> remained constant. Also, thank you for pointing out that I was failing to
> consider other sources of A/D impairment.
In the general case, each ADC has a maximum conversion rate, and slower ones might have different noise characteristics. I suppose one could even build an ADC that internally, after the hold, sampled the signal many times and averaged the results.
>>Note that one advantage of oversampling is that low-order harmonic
>>distortion terms do not alias back into the passband for
>>high-frequency inputs.
> Please could you enlarge on this a bit? It sounds like it could be very > relevant to my question but I don't fully follow your meaning.
I should let whoever wrote that reply answer, but in general you will want a filter after your oversampling to get down to the expected rate. You then consider the effect of that filter.

-- glen
Reply by Tim Wescott April 16, 2013
On Tue, 16 Apr 2013 05:22:43 -0500, SRB wrote:

<< snip >>

>>Next, noise in real systems isn't just quantization noise. When there is
>>wideband background noise at levels noticeable compared to the
>>quantization noise, increasing bandwidth by oversampling can increase
>>the noise in the signal bandwidth.
>
> Thank you for pointing this out. This makes me realize that my above
> assumption about the effect of oversampling, filtering and decimation on
> SNR only applies if the dominant source of noise is quantisation noise.
Actually your statement misses reality a bit. And this whole subject is full of complications, subtleties, and opportunities to mislead yourself.

To rant a bit: as signal processing engineers, we're taught to think in the frequency domain. Thinking in the frequency domain means that we are, implicitly at least, using Fourier transform math to describe and attempt to solve our problems. This is exactly correct when we are dealing with exactly linear, time-invariant systems. It can be made to be exactly correct with systems that are time-varying in known ways. It can be used to some benefit with systems that are nonlinear -- but only if we are careful.

The whole issue of analyzing noise in an ADC involves nonlinearities, both from quantization and from the transfer curve of the ADC not being perfect, and it involves sampling from continuous to discrete time, which is a time-varying process. So it basically throws all possible complications at using the frequency domain as a basis for analysis.

Which is a really long justification for a short recommendation: don't hesitate to do some time-domain analysis of what's going on. Some of the issues that you're dealing with here that seemingly involve a lot of High Math, mysterious smoke, and mirrors that can only be manufactured in a factory that sacrifices virgins can be cut right through to complete sensibility if you just think about them in the time domain. Specifically, the question of whether SNR performance is enhanced by oversampling, and what noise sources have effects that can be mitigated by oversampling and what noise sources don't.

Consider an ADC. We say "it has quantization noise" -- but it doesn't. An ADC quantizes, yes. But quantization by itself is a deterministic process; it's not random noise at all. It is nonlinear, however, and frequency domain analysis can't deal with that directly.

Quantization noise is a fiction that we create to take a nonlinear effect into account when we are using frequency domain analysis on our system. We do this because frequency domain analysis cannot handle nonlinear components -- but it can handle injected signals.

Because quantization is really a nonlinear effect, it is dependent on the signal -- this is one of the "it depends" things that I was talking about. What it depends on, primarily, is how fast the signal is varying, how large the signal is, how much noise there is ahead of the quantization, and (to some extent) whether the signal is synchronized to the sampling.

If you are measuring a slowly-varying signal with an ADC that is dominated by quantization, then oversampling won't do you any good at all -- you'll just be measuring the same damn thing over and over again, then averaging the heck out of it.

If you are measuring a tiny signal with an ADC that is dominated by quantization, then oversampling won't do any good either -- all you'll ever see on the output of your ADC will be the average input value, not the tiny signal you want to see.

If you are measuring a signal that spans several LSB's of the ADC, then the quantization effect will be spread out -- here, you may find that it does, indeed, show up as "white". To the extent that you are sampling slower than the stair-step effect of the quantization, oversampling will help.

If you are measuring a signal that is accompanied by noise or other signals, and if the unwanted signals and/or noise is large enough, then the quantization error will be whitened, and you can treat your quantization noise as white. Again, to the extent that you are sampling slower than the stair-step effect of the quantization, oversampling will help.

Many so-called "high-speed" ADCs have considerable noise in their analog front ends -- often far in excess of other noise sources in your circuit. ADC designers often set the number of bits in the ADC such that the RMS noise is several LSB's -- this is a fault if you want to take one measurement and trust it, but it is a benefit if you want to oversample, because the ADC itself is providing noise that is great enough in magnitude to swamp out the quantization noise. At this point, you _can_ improve SNR with oversampling.

Again, this can be easier to see in the time domain, or at least by mixing time and frequency domains: with broadband noise in the ADC front end (or the preceding circuits), each ADC sample, no matter when it is taken, will have a fixed amount of noise added in. Staying in the time domain, you can then assert that the more samples you take from the ADC and average, the more you'll beat the noise down. Switching to the frequency domain, you can assert (correctly) that the total noise _power_ from quantization and random noise is constant, but increasing the sampling rate will automatically reduce the noise _density_, so filtering to a fixed bandwidth will improve your noise performance.

The two noise sources that you _can't_ improve by oversampling are bandlimited noise in your front-end, such as the noise that rides into your antenna with your signal (assuming radio), and any noise (generally quantization noise) that you insert in the process of your computations. The former you have to deal with at the circuits, antenna design, or systems level; the latter you have to deal with by designing for sufficient data path widths.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
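[Editor's note: Tim's point about slowly-varying signals and front-end noise can be demonstrated with a toy model. All numbers below are arbitrary: a 1 LSB quantizer step, a DC input sitting between code centers, and 10,000 samples averaged.]

```python
import random

random.seed(2)
q, x, M = 1.0, 0.3, 10000        # LSB step, DC input, samples averaged

quant = lambda v: round(v / q) * q

# Quantization-dominated ADC: every sample of a DC input is identical,
# so averaging M of them recovers nothing -- the same reading M times.
clean = sum(quant(x) for _ in range(M)) / M

# ~1 LSB of front-end noise ahead of the quantizer: now the samples
# differ, and the average converges toward the true input value.
noisy = sum(quant(x + random.gauss(0, q)) for _ in range(M)) / M

print(clean, noisy)              # stuck at 0.0 vs. close to 0.3
```

This is the time-domain view Tim recommends: the "extra" front-end noise is exactly what lets oversampling buy back resolution.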
Reply by SRB April 16, 2013
Hi Tim

>Unfortunately, most of the answers to your question are going to boil
>down to "it depends", and what it depends on is the specifics of the ADC
>you're using and what you're trying to do.
Thank you very much for summing things up, and for your earlier explanation. Both were very helpful. Although the conclusion seems to be 'it depends', that is actually very useful to me as it tells me that I can't make any assumptions about the effect of oversampling on SFDR. I'm grateful to everyone who has responded.
>If you have the time, a much better approach to take is to learn what
>causes spurs in an ADC, and what noise processes there are, and how to
>relate that to specific cases with specific ADCs and specific sampling
>schemes.
That sounds like a very sensible suggestion. Thank you.

Sharon
Reply by SRB April 16, 2013
Hi Glen

>I didn't look at the page, but if you average N numbers with random
>noise, the noise (uncertainty) is reduced by a factor of sqrt(N).
>
>If you are careful with the math, you can consider the oversampling
>as averaging, and so should reduce the noise. But which noise is reduced?
>If the samples can be considered independent measurements of the
>underlying signal, then the random error in their measurement can
>be averaged out. Otherwise it won't.
Thank you very much for your explanation. This suggests to me that, while oversampling followed by filtering and decimation might reduce the overall noise power, it would not necessarily reduce spurs within the bandwidth of interest.

Sharon
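[Editor's note: the sqrt(N) rule quoted above is easy to verify empirically. The values of N and the number of trials below are arbitrary.]

```python
import math
import random

random.seed(3)
N, trials, sigma = 100, 2000, 1.0

# Empirical std of the mean of N independent noisy measurements,
# estimated over many trials; theory predicts sigma / sqrt(N) = 0.1.
means = [sum(random.gauss(0, sigma) for _ in range(N)) / N
         for _ in range(trials)]
est = math.sqrt(sum(m * m for m in means) / trials)
print(est, sigma / math.sqrt(N))
```

The key assumption, as glen says, is independence of the measurements; correlated errors (such as a fixed spur or a stuck quantizer code) do not average away like this.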