Reply by Michael Schoeberl April 25, 2005
> So I thought of oversampling, but since the data stream is processed by a
> Virtex II Pro FPGA I have only a small range for increasing the sample rate,
> 200 MHz being the upper limit.
There were ideas about some nice tricks on comp.arch.fpga:

- You could produce e.g. 4 shifted clocks (at the max. frequency) and sample the same signal with each of them. In your case you could even run 4 accumulations in parallel and just sum up the end results. You could use different input pins with IOB-FFs, or one pin and the FFs inside the FPGA (manual placement can ensure an almost constant delay). A quick sketch of this idea follows below.
- The Rocket-IOs could sample a digital signal at a much higher rate (but it's only digital then), which could give a quite easy solution for sub-ns peak detection.

bye, Michael
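If you go the multi-phase route, the bookkeeping is simple: each phase integrates its own sub-stream, and the per-phase totals add up to the same result as integrating one stream at the full effective rate. A minimal Python sketch of that arithmetic (purely illustrative, not FPGA code; the 200 MHz base rate, 4 phases, and Gaussian pulse are assumptions for the example):

```python
# Illustration only (not FPGA code): summing four accumulators fed by four
# phase-shifted sample clocks equals accumulating one stream sampled at
# four times the base rate.
import numpy as np

f_base = 200e6             # assumed base sample rate available in the FPGA
n_phases = 4               # number of phase-shifted clocks
f_eff = f_base * n_phases  # effective sample rate (800 MSa/s)

# Dense model of a ~100 ns pulse, one grid point per effective sample
t = np.arange(0, 1e-6, 1 / f_eff)
pulse = np.exp(-0.5 * ((t - 500e-9) / 40e-9) ** 2)

# Each phase p accumulates every 4th sample, starting at offset p
phase_sums = [pulse[p::n_phases].sum() for p in range(n_phases)]

# The sum of the per-phase accumulators equals the full-rate accumulation
assert np.isclose(sum(phase_sums), pulse.sum())
print("per-phase sums:", phase_sums, "-> total:", sum(phase_sums))
```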
Reply by Tim Wescott April 20, 2005
galatisa wrote:

> Hi,
> I'm sampling a signal with a 35 MHz BW at a rate of 100 MSa/s and 12 bits.
> The nature of the signal is something like 100 ns peaks at about 2/3 of
> peak value. I need the integral over each signal (100 ns), but due to
> the sampling rate I only get about 4-10 samples of every signal, a
> number which is not enough.
> So I thought of oversampling, but since the data stream is processed by a
> Virtex II Pro FPGA I have only a small range for increasing the sample rate,
> 200 MHz being the upper limit.
>
> Can anyone tell me a "general" rule for how much an increase in speed
> reflects in better resolution, or an equivalence to increasing the number
> of bits?
>
> e.g. 12 bits @ 200 MSa/s vs. 14 bits @ 150 MSa/s
>
> Thanks
Any sampling rate vs. precision tradeoff is going to depend on the nature of your signal. The amount of error due to sampling rate is described by sampling theory; the amount of error due to quantization can be more difficult to determine, but at those speeds you can probably model it as white noise (and you may see more than just quantization noise from your ADC).

If it were me, I'd look at the expected spectrum of the signal, see how much of its energy is aliased as a consequence of sampling at a particular rate, and call that "noise". Then I'd look at the ADCs available at a particular sampling rate and look at their total noise (both front-end and quantization). Then I'd look for a minimum.

By the way: Analog Devices has some 14-bit ADCs that go up to 60 MHz sampling rate. They run hot, and they have more than 1 LSB of front-end noise, but they do report 14 bits when they're done.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
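A rough sketch of the comparison Tim describes: treat the pulse energy that lands above fs/2 as "aliasing noise" and weigh it against the ADC's own noise floor. The rectangular pulse model and the ADC SNR figures below are assumptions chosen for illustration, not data for any real converter:

```python
# Compare aliased pulse energy against an assumed total ADC noise floor
# for two candidate converters. All numbers here are illustrative.
import numpy as np

dt = 0.1e-9                                        # 0.1 ns simulation grid
t = np.arange(0.0, 2e-6, dt)
x = ((t > 0.9e-6) & (t < 1.0e-6)).astype(float)    # crude 100 ns pulse model

X = np.fft.rfft(x)
f = np.fft.rfftfreq(len(x), dt)
psd = np.abs(X) ** 2
total = psd.sum()

def aliased_energy_db(fs):
    """Pulse energy above fs/2, relative to total pulse energy, in dB."""
    return 10 * np.log10(psd[f > fs / 2].sum() / total)

# Assumed total ADC SNR (quantization + front-end), in dB -- made-up numbers
options = {"12-bit @ 200 MSa/s": (200e6, 65.0),
           "14-bit @ 150 MSa/s": (150e6, 72.0)}

for name, (fs, snr_db) in options.items():
    print(f"{name}: aliased energy {aliased_energy_db(fs):5.1f} dB, "
          f"ADC noise floor {-snr_db:5.1f} dB")
```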
Reply by Mark April 20, 2005
I'll try to answer from an analog guys perspective.

12 bits gives a quantizing noise floor of about -72 dBc.
This Q noise is spread across the entire Nyquist bandwidth.

If you double the sampling rate, the Q noise floor is still -72 dBc, but
the Nyquist bandwidth has doubled, so while the total Q noise is the
same, the noise DENSITY has been reduced by 3 dB, and the noise in your
band of interest has been reduced by 3 dB. Thus, with the correct post
processing, doubling the sampling rate can improve your signal-to-Q-noise
ratio by 3 dB, which is equivalent to another 1/2 bit of resolution.



Mark
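Plugging the numbers from the original question into that rule of thumb gives the following quick comparison. This assumes an ideal, quantization-limited ADC (ideal SNR = 6.02·N + 1.76 dB) plus the oversampling processing gain of 10·log10(fs / (2·BW)); a real 12-14 bit converter at these speeds will fall well short of these figures, as Tim notes:

```python
# Worked numbers for the oversampling rule of thumb, assuming an ideal
# quantization-limited ADC and the 35 MHz signal bandwidth from the post.
import math

BW = 35e6   # signal bandwidth from the original question

def in_band_snr_db(bits, fs):
    """Ideal in-band SNR after filtering/integration to the signal bandwidth."""
    return 6.02 * bits + 1.76 + 10 * math.log10(fs / (2 * BW))

for bits, fs in [(12, 100e6), (12, 200e6), (14, 150e6)]:
    print(f"{bits} bits @ {fs/1e6:.0f} MSa/s -> {in_band_snr_db(bits, fs):.1f} dB")
```

Under the ideal assumption the extra two bits at 150 MSa/s buy far more than the extra 3 dB from doubling the rate at 12 bits; the practical answer hinges on how much of that ideal figure the real converters deliver.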

Reply by galatisa April 20, 2005
Hi,
I'm sampling a signal with a 35 MHz BW at a rate of 100Msa/s and 12 bit.
The nature of the signal is something like 100ns Peaks at about 2/3 of
peak value. I need the integration over each signal(100ns) but due to 
the sampling rate I only get at about 4-10 samples of every signal, a
number which is not enough. 
So I thougth of oversampling but since the data stream is processed by a
virtexx II pro FPGA I have a small range of increasing the sample rate.
200 Mhz beiing the upper limit. 

Can anyone tell me a "general" rule for how much a increase in speed 
reflects in better resolution or an euquality to increasing number of
bits. 

eg 12bit 200Msa/s vs 14 bit@150MSa/s

Thanks

This message was sent using the Comp.DSP web interface on
www.DSPRelated.com