I'm familiar with CIC filters (https://en.wikipedia.org/wiki/Cascaded_integrator%...). What happens if the decimation ratio isn't constant? For example, I have a \(f_1\) = 1.024 MHz sampling rate, and the downstream consumer runs at approximately 1 kHz, so the decimation ratio is nominally N=1024. But the entity in charge of the 1 kHz rate has lousy jitter specs and is not synchronized to my \(f_1\) clock, so the number of samples acquired at \(f_1\) ranges from N=1020 to N=1028. (BUT -- I know the value of N associated with each output sample.)
Is there a way for the CIC structure to still be useful? Or is it worthless without a rock-solid guarantee on decimation ratio synchronization?
Trying to get a mental picture... you're saying that you have a sample buffer (or more than one, at 1.024 MHz) and the "downstream consumer" is taking an irregular number of samples from your buffer at his rate? You can't control how much he consumes?
Nope. Different example: Picture you have a Swiss watch with a CIC, ticking at once per second on the dot, dutifully adding up ADC samples, and Uncle Ned comes over once a day... noonish. You tell him how many samples since the last time, and the output of the last integrator value. He goes away, scratching his head, and does something with it. You don't know what. It pains you. Sometimes you give him 86123 samples, sometimes 86902 samples, sometimes 85779 samples... but if you happened to get a day where exactly 86400 seconds had gone by since the last time, you'd be flabbergasted.
The question is, can Uncle Ned make any sense of your ADC values? (Maybe he's trying to get average temperature over a day. I don't know.)
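For the single-integrator case, at least, a quick sketch (hypothetical sample data; the visit counts come from the story above) suggests Ned can do exactly that, since each reading gives him both the running integrator value and the number of samples it covers:

```python
# Sketch: with a single integrator (a first-order CIC), Ned recovers the
# exact per-visit average despite the irregular counts, because each reading
# is (running integrator value, samples since last visit).
import random
random.seed(0)
counts = [86123, 86902, 85779]          # Ned's irregular daily sample counts
samples = [random.gauss(20.0, 1.0) for _ in range(sum(counts))]

integ, idx, readings = 0.0, 0, []
for n in counts:
    for _ in range(n):                  # the watch integrates every second
        integ += samples[idx]
        idx += 1
    readings.append((integ, n))         # what Ned is handed at each visit

prev, means = 0.0, []
for val, n in readings:                 # Ned differentiates, then divides by N
    means.append((val - prev) / n)
    prev = val
# Each entry of `means` is the true average over that day's samples.
```

The differentiation of the integrator is the comb acting at the output rate, so the varying N only shows up as the per-output divisor.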
"It pains you", hehe. Yeah, I can see that. Maybe Ned is wiser than he looks: if he knows the number of samples he consumes (which seems like a reasonable assumption), then possibly he's maintaining enough previous data (delay) to perform processing that depends on a constant / regular number of samples.
Yeah, well, except in this case Ned is me. Or, rather, Ned is just a courier acting on my behalf, and all he does is go fetch the ADC samples and give me the information while I start swearing and wondering why he can't manage to be punctual.
Certainly if I had sample counts like 86303, 86303, 86303, 86303, 86303, 86397, 86397, 86397, 86397, then I would expect everything to be perfect except around the transient where the number of samples changes. But if the sample count changes every time, what can I do in a system built of two or more CIC stages?
One possible solution is for Ned to relax and be tolerant. But seriously, it is normally the responsibility of the input side to emit the right outputs, not for the current module to detect and correct them.
I said normally, but remember that line coding is done by the input side and the receiver must act upon it. This is justified in long-distance comms, but do we extend the same to low-level local design?
This problem sometimes arises in team work: for example, somebody once asked me to detect the wrong packet headers he sent to my module and correct them, instead of him always sending correct headers.
Yes, that's useful. I do that in resampling loops and tracking loops where the output sampling instant is steered by the loop, i.e., the decimation ratio is steered by the loop.
This is not unlike using a loop to steer a polyphase filter's decimation rate, where there is essentially jitter in the coefficient selection for the same reason.
This generally does not cause appreciable output distortion in either CIC or polyphase filters.
How do you deal with the gain changing over time? I get how it would work with a 1-stage CIC: you just divide by N as needed. But what if you have a 2-stage?
The problem you are describing is not the fault of the CIC. The CIC is a recasting of the N-tap boxcar integrator, which becomes the cascade of an integrator (the I in CIC) and an N-tap comb filter (the 2nd C in CIC), in turn followed by an N-to-1 downsampler. The resampler is interchanged with the comb, converting it into a derivative operating at the output rate rather than a comb operating at the input rate... we can do this because an N-sample delay at the input rate is the same as a one-sample delay at the output rate (the Noble identity). See the derivation with pictures in the attached file CIC Filter.pdf.
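To make the recasting concrete, here is a small sketch (pure Python, arbitrary sample values) checking that the boxcar sum and the integrator / downsampler / differentiator cascade produce identical outputs:

```python
# Equivalence check: the N-tap boxcar sum taken every N samples equals an
# integrator at the input rate, followed by an N-to-1 downsampler, followed
# by a one-sample differentiator at the output rate (Noble identity).
N = 4
x = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]   # arbitrary input samples

# Direct form: N-tap boxcar sums, decimated by N.
boxcar = [sum(x[k - N + 1:k + 1]) for k in range(N - 1, len(x), N)]

# CIC form: integrate at input rate, downsample, differentiate at output rate.
acc, integ = 0, []
for s in x:
    acc += s                       # the integrator (I)
    integ.append(acc)
down = integ[N - 1::N]             # the N-to-1 downsampler
prev, cic = 0, []
for v in down:
    cic.append(v - prev)           # the comb, moved to the output rate
    prev = v
assert cic == boxcar
```

The N-sample comb delay has become a one-sample delay at the output rate, which is exactly the interchange step in the derivation.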
The delay in the comb cannot change. If it is changing, you are describing a different problem, one called clock alignment: the two clocks (input and output) have nominally the same frequency but never exactly the same frequency, due to clock drift or Doppler.
Suppose we have an input clock CLK-1 putting samples into a buffer and an output clock CLK-2 pulling samples from the buffer. Assuredly the input and output rates differ, so the buffer is heading towards underflow when the output clock frequency exceeds the input clock frequency, or towards overflow when the input clock frequency exceeds the output clock frequency. The clock alignment process monitors the boundary position between occupied and empty addresses: if that boundary is shifting towards the output side we have to increase the output rate, and if it is drifting towards the input side we have to decrease the output rate. A PLL operates an arbitrary interpolator to set the average output rate to match the average input rate.
We do this all the time when two systems with independent clocks talk to each other. For instance, the clock aboard a satellite experiences time-dependent clock drift due to Doppler shift with satellite position, and the ground-based clock absorbs the frequency offset with a clock interpolator in the timing loop. We have built clock alignment systems for an MP3 audio system exchanging signals with a CD player with independent clocks. See the attached Interpolators.pdf.
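As a toy illustration of that loop (all gains and rates here are hypothetical, and the arbitrary interpolator is reduced to a simple rate nudge), a buffer-fill servo steering the consumer's rate might look like:

```python
# Toy sketch (assumed numbers, not from the post): a PI loop watches the
# FIFO fill level and steers the consumer's sample rate so its average
# matches the producer's, absorbing the drift between independent clocks.
f_in = 1.0000      # producer rate, normalized samples per tick
f_out = 1.0003     # consumer's free-running rate: slightly fast -> underflow
fill = 64.0        # current FIFO occupancy; target is half of a 128-deep FIFO
target = 64.0
kp, ki = 0.02, 1e-4   # proportional and integral loop gains (assumed values)
integ = 0.0
for _ in range(20000):
    err = fill - target
    integ += ki * err
    steer = kp * err + integ          # rate correction applied to the consumer
    fill += f_in - (f_out + steer)    # net samples entering the FIFO this tick
# At convergence the steered consumer rate equals the producer rate
# (steer -> f_in - f_out) and the fill level sits at the target.
```

In a real system the "steer" term would drive the interpolator's fractional delay rather than an abstract rate, but the boundary-monitoring feedback is the same.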