I have a system with a tuner chip and a DSP chip, each clocked by its own independent xtal. The tuner feeds baseband samples to the DSP; the tuner is the master and the DSP is the slave. The DSP demodulates the baseband samples, does the audio decoding, and generates audio, which is streamed out from the DSP.
Now if the tuner xtal drifts, the DSP audio streaming needs to adjust to that drift, else buffer overflow or underrun happens because the sample rates don't match.
How do I design a control system such that a digital baseband frame of duration T ms is mapped to audio while adjusting for the drift?
Apparently there is no timing information being broadcast and no metadata fields available for digital radio standards like DAB etc. The only way is to find the symbol boundary and timestamp it: since the tuner is feeding the baseband samples to the DSP, any tuner variations will be reflected in the timing of the symbols in the DSP. Monitor this difference and adjust the audio clock in the DSP, which is possible in the DSP hardware.
This is what I'm planning to implement; please find the attached figure.
The timing on the input side is taken after detecting the start of symbol. Every symbol is timestamped, and we measure the time deviation between two symbols.
T0 – timestamp of RF sample buffer 1
T1 - timestamp of RF sample buffer 2
T = T1 – T0 ( duration of RF buffer)
t0 – timestamp of Audio sample buffer 1
t1 - timestamp of Audio sample buffer 2
t = t1 – t0 ( duration of audio buffer)
Assume the audio playback buffer and the RF capture buffer are of the same duration; then
Error = (T – t)
This error is minimized by the audio control loop.
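In rough pseudo-Python, the control loop I have in mind would be something like the sketch below: a small PI controller that drives the (T – t) error toward zero by trimming the audio clock ratio. The class name, the gains, and the sign convention are all placeholders I would have to tune on real hardware.

```python
# Sketch of the proposed audio-rate control loop. Gains and the sign
# convention are assumptions; they need tuning against the real system.

class AudioRateController:
    def __init__(self, kp=0.1, ki=0.01):
        self.kp = kp          # proportional gain (placeholder)
        self.ki = ki          # integral gain (placeholder)
        self.integral = 0.0
        self.ratio = 1.0      # audio clock correction ratio (1.0 = nominal)

    def update(self, rf_duration, audio_duration):
        """rf_duration = T1 - T0, audio_duration = t1 - t0 (same units)."""
        error = rf_duration - audio_duration   # Error = (T - t)
        self.integral += error
        # If RF buffers take longer than audio buffers (positive error),
        # nudge the audio clock accordingly; sign may need flipping on
        # the actual hardware.
        self.ratio = 1.0 + self.kp * error + self.ki * self.integral
        return self.ratio
```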
well, it appears to me that you will need to perform Asynchronous Sample Rate Conversion (ASRC). there are chips to do that, but if you have sufficient unused memory and MIPS left in your DSP, you can do the ASRC in the DSP.
it's not an easy task, but sorta rewarding. you get to dabble both in the notion of SRC (with a slightly variable rate ratio) and a little PID controller (to adjust that SRC ratio).
i am presuming your DSP gets an interrupt from the tuner chip whenever there is a sample from the IF, and your DSP gets a different interrupt from the streaming destination when it is "starved" for audio samples.
I would put a FIFO in there someplace (or look for one) and make that the "phase detector" of my PLL, with the setting for the sample rate conversion being the driven element (I don't want to call it the "NCO", but that's the role it would take).
I've used a FIFO count as the feedback in a rate conversion scheme but I can't recommend it. It tends to suffer from limit cycles. After a fair bit of tweaking I got it to work but it is not ideal.
If you are really stuck with having two separate clock domains then the cleanest way is to use a phase accumulator. The accumulator runs in the source domain and is sampled in the destination domain. The accumulator phase is used for "mu" in the Farrow filter and the rollover is used to pull the next sample from the source.
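A minimal software model of that accumulator idea (the function name and the per-output-sample framing are mine, not a drop-in implementation): the accumulator advances by the rate ratio, its integer part says how many new samples to pull from the source, and its fractional part is the "mu" handed to the Farrow filter.

```python
# Toy model of the phase-accumulator scheme: integer part = samples to
# pull from the source domain, fractional part = mu for the Farrow filter.

def phase_accumulator(ratio, n_out):
    """Yield (samples_to_pull, mu) for each of n_out output samples."""
    phase = 0.0
    for _ in range(n_out):
        phase += ratio
        pulls = int(phase)     # rollover: pull this many source samples
        phase -= pulls         # keep only the fractional part
        yield pulls, phase     # phase in [0, 1) is mu for the interpolator
```

In the real two-clock-domain version the accumulator would run in the source domain and be *sampled* in the destination domain; this single-threaded model only shows the bookkeeping.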
Are the crystal frequencies equal or related by an integer factor? If so, it may be easier to drive the DSP chip with the clock from the tuner.
Or if not, put a VCXO on the DSP and use it as the VCO of your PLL. Which could look pretty demented, but should be solid if you get the controller right.
Many, many years ago, we had this same problem when synchronizing the internals of the first Sirius Satellite Radio receivers to the audio play-out.
We had to receive at the satellite transmit rate, then decode to audio samples, but the stereo D/As were running on a clock derived from a local oscillator.
We used a Farrow interpolator to "retime" the samples. Similar techniques are used for the timing recovery in modems, but we used a longer FIR response, and a higher order Lagrange interpolation to calculate the coefficients, in order to preserve over 85 dB of SNR in the audio samples.
I don't think I'm spilling any beans, but we used a transfer buffer and monitored the "high-water" and "low-water" marks for the currently occupied length of the buffer, and then used the fact that we were bumping up against one of these marks to drive a PLL that dynamically adjusted the correct phase shift for the Farrow interpolation.
This way we could handle data coming in too slow or too fast.
I have over-simplified the issue, because the data came in in chunks of 1024 bytes, so the occupancy of the buffer was seriously quantized, making it a little harder than just a simple tracking loop.
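For illustration only, here is the plain textbook 3rd-order Lagrange fractional-delay interpolation over a 4-sample window (our actual filter was longer and higher order, as mentioned, to hold the audio SNR):

```python
# Cubic Lagrange fractional-delay interpolation: fit a 3rd-order
# polynomial through four samples at positions -1, 0, 1, 2 and evaluate
# it at fractional position mu between x0 and x1.

def cubic_lagrange(x_m1, x0, x1, x2, mu):
    """Interpolate at fractional position mu in [0, 1) between x0 and x1."""
    d = mu
    return (x_m1 * (-d * (d - 1.0) * (d - 2.0) / 6.0)
            + x0 * ((d + 1.0) * (d - 1.0) * (d - 2.0) / 2.0)
            + x1 * (-(d + 1.0) * d * (d - 2.0) / 2.0)
            + x2 * ((d + 1.0) * d * (d - 1.0) / 6.0))
```

At mu = 0 this returns x0 exactly, and a straight-line ramp is reproduced exactly, which is a quick sanity check for the coefficients.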
We have used a technique similar to Farrow for a TV modulator. The input TV video stream was clocked by a source different from our DAC. We inserted a buffering module on the frontend transport stream (before symbol generation), which was feedback-controlled by an upsampler just before the DAC. The upsampler was designed to give a fixed output rate for the DAC by requesting data from the buffer as and when required by its input pipe state (read logic to the buffer). The buffer size and default fill level were chosen per QAM type (16/64/256).
Some good suggestions have already been made, and another simple thing that can be done is to just occasionally repeat or drop an output sample. Whether that's acceptable depends on the requirements for the output signal, and it's often well within tolerance of specs in certain apps.
The other end of the thoroughness spectrum is doing the previously-mentioned full-on sample-rate conversion interpolation with a control loop, which is a *little* cleaner and takes a lot more engineering and processing resources. If you need it to meet spec, though, then you need it.
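As a sketch of the crude drop/repeat end of that spectrum (the watermark values here are made up; pick them for your application):

```python
# Crude drop/repeat rate matching: when the input FIFO drifts past a
# high or low watermark, drop or repeat one sample per output block.
# Thresholds are illustrative fractions of buffer capacity.

def adjust_block(block, fifo_level, low=0.25, high=0.75):
    """Return the block with one sample repeated (FIFO starving) or
    dropped (FIFO overflowing), else unchanged."""
    if fifo_level < low:
        return block + [block[-1]]   # repeat last sample: consume slower
    if fifo_level > high:
        return block[:-1]            # drop last sample: consume faster
    return block
```

Whether the resulting click/step is audible depends on the signal and how often the correction fires, which is exactly the spec question raised above.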
In general, for cases like this, there are two things that need to be addressed: 1. jitter in the clock, and 2. long-term drift in the absolute clock values. Jitter can be absorbed by longer buffers; however, drift can't be solved by finite buffers.
Coming to the stated problem and a potential solution:
>> Monitor this difference and adjust the audio clock in the DSP, which is possible in the DSP hardware.
Since you have a provision to control the clock, your problem is limited to finding an approach for monitoring the difference between the clocks of the producer (RF) and the consumer (DSP).
One simple clock-difference monitoring approach, which can be fully confined to the DSP, is to put an input buffer in the DSP and monitor its fill level.
Let's say you have an input buffer which can hold input data from the RF chip for 240 ms*, and let the DSP algorithm consume it in 6 ms chunks. Keep a pre-roll of 50%, i.e. the DSP only starts decoding once 120 ms of data has accumulated.
Now keep checking the buffer level periodically. If the clocks are the same, the buffer level will stay around 50%.
If the DSP clock is slower, the buffer level will go high, and vice versa.
So, by putting an upper and a lower threshold on the level, you may be able to decide when and how to adjust your DSP PLL control voltage.
As with any other problem, there are several avenues for making it complicated, but the above approach works for audio/radio-like needs. Hope it helps.
*Note: the length of the input buffer should be decided based on the maximum jitter.
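A rough sketch of this level-monitoring logic (the thresholds here are just examples; choose them from your maximum jitter as per the note above):

```python
# Buffer-level monitor driving the DSP PLL: 240 ms input buffer, 6 ms
# consumption chunks, 50% pre-roll target. Thresholds are example values.

BUFFER_MS = 240
CHUNK_MS = 6
TARGET = 0.50           # pre-roll / steady-state fill level
LOW, HIGH = 0.40, 0.60  # example thresholds; derive from max jitter

def pll_adjustment(fill_level):
    """Return +1 (speed DSP clock up), -1 (slow it down), or 0 (hold)."""
    if fill_level > HIGH:
        return +1   # buffer filling up: DSP clock is slow, speed it up
    if fill_level < LOW:
        return -1   # buffer draining: DSP clock is fast, slow it down
    return 0
```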