Dual ADCs
Started by ●May 16, 2006

Hi, I'm trying to understand how to combine the signal that's digitized by a pair of ADCs and I'm a little confused. The signal is recorded in the time domain using dual ADCs to increase the dynamic range: a low-gain signal and a high-gain signal. The two signals are combined by replacing the saturated samples in the high-gain signal with those from the low-gain signal after the gain and offset are determined. I currently determine the gain/offset for each pair of high- and low-gain signals by doing a simple least squares fit. After the signals are combined, I do an FFT, as I'm really interested in the frequency of the measured signals.

Over the course of one week, I make many measurements (thousands) using this exact experimental setup, and for each measurement I fit an independent gain/offset using least squares. What I've noticed is that there are periodic patterns in these fitted gain/offset values. I expected the gain and offset to be constant (i.e. properties of the electronics), and while they do seem to be relatively constant over the course of one day, they are certainly not constant over a week. Any ideas on what could be causing this? The actual voltage being digitized does vary over the course of a week, as do the lab temperature, humidity, etc.

At the end of the day, fitting an independent gain/offset seems a little dubious to me, and I'm wondering if I shouldn't be fixing the ADC gain/offset values to a constant. Any thoughts?

Thanks,
Graham
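P.S. In case it helps, here is roughly what my recombination step looks like in matlab (a sketch only; the variable names and the saturation test are made up for illustration):

  % fit I_low ~= m*I_high + b using only samples where the
  % high-gain channel is not clipped
  ok = abs(I_high) < sat_level;             % sat_level: assumed clip level
  p  = polyfit(I_high(ok), I_low(ok), 1);   % p = [m, b]

  % rescale the high-gain signal onto the low-gain scale and
  % swap in the low-gain readings at the saturated samples
  I_comb      = polyval(p, I_high);
  I_comb(~ok) = I_low(~ok);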
Reply by ●May 16, 2006
GrahamWilso...@yahoo.ca wrote:

> [...] After the signals are combined, I do an FFT, as I'm really
> interested in the frequency of the measured signals.
>
> Over the course of one week, I make many measurements (thousands)
> using this exact experimental setup, and for each measurement I fit
> an independent gain/offset using least squares.

With a set of known, independently calibrated inputs?
Reply by ●May 16, 2006
GrahamWilsonCA@yahoo.ca wrote:

> [...]
>
> Over the course of one week, I make many measurements (thousands)
> using this exact experimental setup, and for each measurement I fit
> an independent gain/offset using least squares. What I've noticed is
> that there are periodic patterns in these fitted gain/offset values.
> I expected the gain and offset to be constant (i.e. properties of the
> electronics), and while they do seem to be relatively constant over
> the course of one day, they are certainly not constant over a week.
> [...]

Generally an ADC is barely accurate to the number of output bits available, so seeing variations with temperature would be no great surprise, particularly if you're running them off of different references, or if you're not using the world's best amplifier for the low-range ADC.

If the gain and offset variations you're seeing only account for a few counts of the coarse ADC's range, then I'm not surprised at all.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Posting from Google?  See http://cfaj.freeshell.org/google/

"Applied Control Theory for Embedded Systems" came out in April.
See details at http://www.wescottdesign.com/actfes/actfes.html
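P.S. To put a rough scale on "a few counts" (illustrative numbers only, not necessarily your hardware): a 16-bit converter spanning 10 V has an LSB of 10/65536, about 150 uV, so an offset drift of only 1 mV between a warm afternoon and a cool night already shows up as a wobble of 6 or 7 counts in your fitted offset.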
Reply by ●May 16, 2006
The high and low-gain measurements are actually voltages coming from an infrared detector. I have things set up so that I observe a calibrated hot and a cold blackbody source at regular intervals, so I do have 'calibrated' measurements over the course of a week/month. The complication, however, is that the detectors degrade with time, temperature and humidity (this is what I'm actually trying to quantify here), so the output voltage does vary with time. Nevertheless, the calibrated blackbodies I have are very stable (effectively constant) and their output radiance is independently measured.

My concern is that I'm folding information regarding the degradation of the detectors into these offset/gain parameters.

Thanks again,
Graham
Reply by ●May 16, 2006
Thanks Tim,

> If the gain and offset variations you're seeing only account for a
> few counts of the coarse ADC's range, then I'm not surprised at all.

It is the offset values that seem to vary the most, not the gain, so indeed this probably doesn't amount to that many counts. I suppose the real question I have is whether it is reasonable/customary to process dual-ADC signals using independent fits instead of using a set of fixed gain/offset values. I've looked at various diagnostics of the least squares fits in matlab and they look pretty good. It is quite obvious, however, that the offset value varies and that the fit becomes worse when the input signal decreases significantly (i.e. as the noise goes up, the fit gets worse).

I've never actually needed to work with raw high and low-gain signals before, and I'm feeling a certain amount of uneasiness here. I should caveat that I'm a physicist and my past experience with signal processing is limited to a single ADC.

Many thanks,
Graham
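PS - the diagnostic I've been looking at is essentially the residual scatter of each fit, something like this (matlab; p is the fitted [gain, offset] from the earlier sketch):

  res = I_low - polyval(p, I_high);     % residuals of the linear fit
  fprintf('offset = %g, rms residual = %g\n', p(2), std(res));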
Reply by ●May 16, 2006
Graham wrote:

> The high and low-gain measurements are actually voltages coming from
> an infrared detector. I have things set up so that I observe a
> calibrated hot and a cold blackbody source at regular intervals, so I
> do have 'calibrated' measurements over the course of a week/month.
> [...]
>
> My concern is that I'm folding information regarding the degradation
> of the detectors into these offset/gain parameters.

And that should be OK. You can calibrate the A/D at a part level (feed in a ground and a reference at the A/D), or you can calibrate at the system level, which is what you are doing (and which is better, as you are calibrating out all the errors in the system). The offset and gain you read when observing the hot and cold blackbodies should calibrate out any drift in either the A/D or the detector, as long as the calibrations are done often enough, even though both are degrading differently. They shouldn't work only for a day and then not work for the rest of the week.

Not sure why you are using least squares: you have only two reference points (hot and cold), so a straight-line calibration is all the cal info you have. You need multiple references before least squares is needed, so maybe I don't understand something.
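To be explicit about what I mean by a straight-line calibration, it's just this (matlab-ish sketch; the variable names are invented):

  % V_hot, V_cold: measured voltages on the two blackbody views
  % R_hot, R_cold: their known radiances
  gain    = (R_hot - R_cold) / (V_hot - V_cold);
  offset  = R_hot - gain * V_hot;
  R_scene = gain * V_scene + offset;   % calibrate any scene reading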
Reply by ●May 16, 2006
Graham wrote:

> It is the offset values that seem to vary the most, not the gain, so
> indeed this probably doesn't amount to that many counts. I suppose
> the real question I have is whether it is reasonable/customary to
> process dual-ADC signals using independent fits instead of using a
> set of fixed gain/offset values.
> [...]
> I should caveat that I'm a physicist and my past experience with
> signal processing is limited to a single ADC.

I think you would find it useful to examine the spec sheets of the converters and amplifiers. The temperature specifications will let you know what to expect.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by ●May 16, 2006
Many thanks for posting Steve!

It's always difficult to know how much to include in a first posting and I suspect I may not have included enough. Basically, here's how things work:

Each 'measurement' comprises a low and a high-gain signal in the time domain. Each has ~16K data points.

I make 100 measurements of an unknown source, followed by 10 calibration measurements of the hot blackbody and 10 calibration measurements of the cold blackbody. Each measurement/calibration takes 15 seconds, giving 16K data points for the high-gain and 16K data points for the low-gain. Data collection is automated now, so I can repeat this many times throughout the day/week/month.

At present, to reconstruct a single measurement from the high and low-gain signals, a linear least squares fit is used to determine the gain and offset:

  [m, b] = LinearFit(I_high, I_low, weights)

where I_high and I_low are the high and low-gain signals, with ~5K points (the saturated values and spikes are removed prior to fitting).

So, for each of the 100 measurements and each of the 20 calibrations, the gain and offset values are determined independently. The combined signal for each measurement and calibration is found by scaling the points in the high-gain signal with the fitted gain and offset. Saturated points are replaced by those from the low-gain signal.

The result is that I have plots showing the gain, offset, and fit error as a function of time (i.e. every 15 seconds). What I've noticed is that there are trends in the measurement gain and offset values, and that these may correspond to 'events' in my measurement. For example, as the detector warms, its efficiency decreases and the recombined ADC signal decreases. I also see correlated changes in the gain/offset parameters, which has me a bit worried.

I didn't actually devise this scheme myself and I'm having a hard time understanding why it was done this way (it was a PhD student who has since left the lab).

I think it would be more sensible to use the same gain and offset for each measurement and, ideally, I'd like to determine these from the calibration measurements, as these represent the extremes of the ADC counts.

Thanks again,
Graham
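P.S. Concretely, what I have in mind is something like this (matlab sketch; the cal/meas structures and the saturated flags are invented for illustration):

  % pool the unsaturated samples from all 20 calibration records
  % and fit a single gain/offset for the whole calibration cycle
  H = []; L = [];
  for k = 1:20
      ok = ~cal(k).saturated;           % hypothetical saturation flags
      H  = [H; cal(k).I_high(ok)];
      L  = [L; cal(k).I_low(ok)];
  end
  p = polyfit(H, L, 1);                 % one shared [m, b]

  % every measurement in the cycle is then recombined with the same p
  I_comb                 = polyval(p, meas.I_high);
  I_comb(meas.saturated) = meas.I_low(meas.saturated);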
Reply by ●May 16, 2006
Graham wrote:

> [...]
>
> I didn't actually devise this scheme myself and I'm having a hard
> time understanding why it was done this way (it was a PhD student who
> has since left the lab).
>
> I think it would be more sensible to use the same gain and offset for
> each measurement and, ideally, I'd like to determine these from the
> calibration measurements, as these represent the extremes of the ADC
> counts.

If it's any consolation, I don't understand the rationale either. I also find the use of two ADCs strange. In applications of this sort, I use a single converter and a multiplexer. One of the mux inputs is ground; another, an accurate reference voltage. Still others can be the signal at various gains. (Often, the low-gain signal is derived from the high-gain one by a voltage divider using resistors with matching temperature coefficients.)

To determine offset, simply read ground. To determine gain, read the reference and subtract the (signed) offset. The gain of an instrumentation amplifier is enough more stable than a converter to be considered perfect for applications where the linearity of the converter can be taken for granted. (A mid-voltage reference, another temperature-stable divider for example, can check linearity.)

A mux often has eight available inputs, allowing six gains as well as ground (offset) and reference (scale). Using them all (with a single tapped divider) allows an average of more bits per reading.

It's always a disappointment to see fancy math used to partly redeem a poor design.

Jerry
--
Engineering is the art of making what you want from things you can get.
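P.S. In code terms, the per-scan self-calibration I describe amounts to roughly this (matlab-ish sketch; the counts_* names and V_ref are placeholders):

  % counts_ground: reading of the grounded mux input
  % counts_ref:    reading of the reference input (known voltage V_ref)
  offset = counts_ground;                    % converter offset, in counts
  gain   = V_ref / (counts_ref - offset);    % volts per count

  % correct any signal channel taken through the same converter
  V_signal = gain * (counts_signal - offset);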
Reply by ●May 16, 2006
If I follow you correctly, the "offset" you are referring to here is the offset that you have to add to the low-gain reading to make it "match up" with the high-gain reading. So this will be a function of the actual offset in both the low-gain and high-gain converters, and also a function of the exact value that the high-gain converter saturates at, and thus a function of the gain in the high-gain converter. Basically, this "offset" is a function of almost everything.

Question: low-speed converters with many, many bits are available... how many bits do you need, that you need a low-gain/high-gain system at all?

Mark
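P.S. Spelling the first point out (plain algebra; g and o denote each chain's own gain and offset): if the high-gain chain digitizes y_h = g_h*x + o_h and the low-gain chain digitizes y_l = g_l*x + o_l, then eliminating x gives

  y_l = (g_l/g_h)*y_h + (o_l - (g_l/g_h)*o_h)

so the fitted "gain" is the ratio g_l/g_h, and the fitted "offset" mixes both chains' offsets and both gains. Drift in any one of the four underlying parameters moves the fitted offset.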






