I'm working on a DSP-based system for audio noise cancellation, and as a
result I need to do the microphone->ADC->DSP->DAC->speaker loop quickly to
output the anti-noise value before the noise input I sampled is gone.
To date, I have used a SAR architecture ADC sampling at 20kHz. At the beginning
of my 50us sample period, I tell the ADC to start its conversion. I get the
digital value after 4us, then do the cancellation computations in ~30us, and
write the cancellation value to the DAC. From mic to speaker takes less than
one sample period (<50us).
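For concreteness, here is my current timing budget as a quick sanity check (the DAC write time is an assumed placeholder; I haven't measured it precisely):

```python
# Latency budget for the current SAR-based loop (numbers from the
# description above; dac_write_us is an assumed placeholder).
sample_period_us = 1e6 / 20_000   # 20 kHz sampling -> 50 us period
adc_conversion_us = 4.0           # SAR conversion time
dsp_compute_us = 30.0             # cancellation computation
dac_write_us = 1.0                # assumed, not measured

loop_latency_us = adc_conversion_us + dsp_compute_us + dac_write_us
print(loop_latency_us, "us of", sample_period_us, "us budget")
```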
Ideally, I would like to use an oversampling ADC to ease the constraints on my
analog anti-alias filter. However, a typical oversampling delta-sigma ADC
specifies a group delay on the order of 20 sample periods, and I can't figure
out how that group delay translates into end-to-end latency.
Obviously, delaying the microphone signal by 20 sample periods before it enters
the DSP for computation would kill my application.
Can anyone help me understand how the group delay specification relates to my
requirements? Or am I asking the wrong questions?
Question on Group Delay/Latency of oversampling ADCs
Started by ●March 24, 2007
Reply by ●March 26, 2007
T Deitrich-
> I'm working on a DSP-based system for audio noise cancellation, and as a
> result I need to do the microphone->ADC->DSP->DAC->speaker loop quickly to
> output the anti-noise value before the noise input I sampled is gone.
>
> To date, I have used a SAR architecture ADC sampling at 20kHz. At the
> beginning of my 50us sample period, I tell the ADC to start its conversion.
> I get the digital value after 4us, then do the cancellation computations
> in ~30us, and write the cancellation value to the DAC. From mic to speaker
> takes less than one sample period (<50us).
>
> Ideally, I would like to use an oversampling ADC to ease the constraints on
> my analog anti-alias filter. However, a typical oversampling delta-sigma
> ADC specifies a group delay on the order of 20 sample periods, and I can't
> figure out how that group delay translates into end-to-end latency.
> Obviously, delaying the microphone signal by 20 sample periods before it
> enters the DSP for computation would kill my application.
>
> Can anyone help me understand how the group delay specification relates to my
> requirements? Or am I asking the wrong questions?
For sigma-delta (oversampling) converters, you can assume that phase is linear (due
to the built-in FIR decimation/interpolation filters) and therefore group delay is
constant -- all frequencies are delayed by the same amount. So when the data sheets
say the ADC adds N sample periods of delay and the DAC adds M, the total latency
added by the two converters is N + M sample periods.
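As a sketch (N and M here are illustrative values, not from any particular datasheet), that rule works out as:

```python
def converter_latency_us(n_adc, m_dac, fs_hz):
    """Latency added by sigma-delta ADC + DAC filters, in microseconds."""
    sample_period_us = 1e6 / fs_hz
    return (n_adc + m_dac) * sample_period_us

# e.g. N = M = 20 sample periods at a 20 kHz output rate:
added_latency = converter_latency_us(20, 20, 20_000)
print(added_latency)  # 2000 us = 2 ms of converter delay alone
```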
If you really want to stay at about 50 usec latency, then the only way to use
oversampling converters would be to find parts with high enough resolution and
SNR at an effective sampling rate N + M times higher than 20 kHz. In your
example (N = M = 20), that would be a sampling rate of 800 kHz. It may not be easy.
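To put numbers on that (using the figures from this example):

```python
# Required effective sampling rate to keep converter latency within one
# 50 us period at a 20 kHz base rate, given N + M sample periods of
# group delay through the converters.
n_plus_m = 40                  # e.g. N = M = 20 sample periods
base_rate_hz = 20_000
required_rate_hz = n_plus_m * base_rate_hz
print(required_rate_hz)        # 800000 -> the 800 kHz figure
```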
-Jeff