Hello, I am looking to sync a 3072000Hz-ish PDM signal (clock) to an external 16kHz-ish PCM signal (clock). The PDM clock is running slightly faster than the PCM clock scaled up, i.e. 16k * 192 < 3072000. I also know when the PDM clock is ahead of the PCM clock by a single PDM clock. Ultimately I will be decimating the PDM signal to 16kHz and I want them in sync.
Using all this, I tried dropping a PDM sample every time the PDM signal advanced by a single sample over the PCM, but this increased the noise floor to an unacceptable level. Does anyone have a suggestion for a computationally simple way of matching the signal rates without an excessive noise floor penalty?
If your clocks are not frequency locked then you need a sample rate converter. I've done a couple of these. You have two design problems.
The first is that you need a digital domain PLL to sync an NCO in one clock domain to a phase accumulator running in the other. What you get is a "mu" and "enable" signal needed for the variable interpolator. Take a look at Neil's PLL tutorial. The phase comparison works perfectly in the digital domain. It is just a subtraction and modulus operation.
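As a sketch of just the phase-comparison step described above (the accumulator width and function name here are illustrative assumptions, not from this thread):

```python
# Phase accumulators in both clock domains wrap at 2**ACC_BITS.
# ACC_BITS is an assumed value for illustration.
ACC_BITS = 16
MOD = 1 << ACC_BITS

def phase_error(ref_phase, nco_phase):
    """The comparison is just a subtraction and a modulus, then a
    re-centering so the loop filter sees a signed error in
    [-MOD/2, MOD/2)."""
    err = (ref_phase - nco_phase) % MOD
    if err >= MOD // 2:
        err -= MOD
    return err
```

The re-centering step is what makes the wraparound harmless: an error of one count across the wrap point comes out as +/-1, not ~MOD.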
The second is the variable interpolator. There are many implementations. In all of them you need to band limit your signal such that it doesn't exceed the usable BW of the interpolator. For instance, a simple quadratic Farrow filter may only be useful to Fs/12, depending on your signal requirements. A cubic filter (more complicated) might get to Fs/8. I've used a big multi-tap polyphase filter that was good to Fs/4 and better than -80dB distortion over the band. Regardless, your signal must be band limited to less than that at the input rate to the filter, else it will distort the signal.
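For concreteness, here is what a cubic interpolator in Farrow form can look like, a sketch only (this uses the standard Catmull-Rom coefficient arrangement; it is not the specific filter referred to above):

```python
def farrow_cubic(x, n, mu):
    """Catmull-Rom cubic interpolation in Farrow form: evaluates the
    signal between samples x[n] and x[n+1] at fractional offset
    mu in [0, 1). Needs one sample of history and two of lookahead."""
    xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
    # Polynomial coefficients in mu; fixed combinations of the taps,
    # which is what makes the Farrow structure cheap to retune per sample.
    c0 = x0
    c1 = 0.5 * (x1 - xm1)
    c2 = xm1 - 2.5 * x0 + 2.0 * x1 - 0.5 * x2
    c3 = 0.5 * (x2 - xm1) + 1.5 * (x0 - x1)
    # Horner evaluation: only mu changes from output sample to output sample.
    return ((c3 * mu + c2) * mu + c1) * mu + c0
```

The "mu" here is exactly the fractional-delay value the digital PLL mentioned above has to produce each time the "enable" fires.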
BTW, if your clocks *are* frequency locked, even with a complicated relationship, then there is a way to do it with a rational divider and a multi-rate filter, which is much simpler.
Wow. Mark just said exactly what I would have said, only (probably) better.
I have not used this code, but I've seen it highly recommended, and it looks like it'll do the sample-rate conversion part of what you need, if not the PLL part. "Secret Rabbit Code": http://www.mega-nerd.com/SRC/.
You might want to downconvert to a (nominal) 16kHz from your 3072kHz PDM, and then sample-rate convert from there. This may not be the most overall-efficient way to do it, but if you're using the SRC library I mention it may be easier overall. Knowing whether the PDM has been noise-shaped and how will be a big help in choosing an appropriate filter.
Thanks, both of you. I thought I would have to go to the lengths of a proper asynchronous sample rate converter, with which I am familiar. What I was also curious about was whether there were any tricks I could play given the high-rate 1-bit nature of one of the signals. Thanks again
Another trick might be to run a simple low-pass filter at high speed in the 3MHz domain - say a 1st-order feedback filter or a single biquad. This will of course create audio samples (maybe 8 bits are sufficient) instead of binary values at 3MHz.
After that your initial method of dropping single PDM samples might not increase the noise floor by much.
On the other hand you increase your data load (8 bits instead of 1 bit @ 3MHz), and depending on the architecture you use, an IIR filter at 3MHz might be too much...
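A minimal sketch of the kind of 1st-order feedback filter meant here, run on the +/-1 PDM bits (the coefficient value and function name are illustrative; a power-of-two alpha keeps it multiplier-free in hardware):

```python
def leaky_integrator(pdm_bits, alpha=0.125):
    """First-order feedback low-pass y += alpha * (x - y) applied to a
    1-bit PDM stream, mapping bits to +/-1. With alpha = 1/8 the
    multiply is just a right shift by 3 in fixed-point hardware."""
    y = 0.0
    out = []
    for b in pdm_bits:
        x = 1.0 if b else -1.0
        y += alpha * (x - y)
        out.append(y)
    return out
```

After this filter, each sample already carries an average of the recent bits, so occasionally dropping one sample discards far less information than dropping a raw PDM bit.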
There are indeed a few tricks you can play since your data is already oversampled. It isn't ideally oversampled because of the sigma-delta noise, but you can probably still leverage something.
However, you would have to explain in a bit more detail the relationship between the two clocks, especially what you mean by 'knowing' when one clock is 'ahead' of the other one.
Certainly, the system in which I am working is receiving a 16kHz clock from an external source. The chip I am programming can generate clocks very close to 16kHz*192 = 3.072MHz which is my PDM clock. I can set the PDM clock to only values that my internal PLL can achieve which means that I can set it to just above or below the exact 3.072MHz ideal.
By clocking the 16kHz clock into my device using the PDM clock I can detect when my PDM clock has become one cycle ahead (as I chose for the PDM clock to go a little fast). I can also detect when it's going too slow.
Hope that helps, thanks
Without having run the math, I feel like a zero-order hold filter should be good enough at this oversampling ratio. That would be much simpler, as you don't need a PLL to generate the 'mu' discussed above. The trick is to use this filter only to 'decimate' from the async PDM input down to your 'slightly' slower ~3MHz signal, nothing more. That way the constraints on the filter are relaxed substantially, since its output is still oversampled. This is similar to splitting a standard decimation chain into multiple stages: the requirements on the higher-rate filters are relaxed. In this case, since the higher-rate filter is performing the async sample rate conversion, reducing its order is quite beneficial.
For decimation, the zero-order hold filter is just an integrate-and-dump circuit: integrate the 1-bit input samples as they arrive at the async PDM rate, and reset the integrator when an output sample is required (at your slower ~3MHz clock rate). For example, if the two clocks are almost the same rate, you'll end up resetting the integrator every cycle for many consecutive cycles, essentially feeding the PDM input directly into your subsequent decimator. Once in a while you'll end up integrating two input samples (the ratio between the two clocks dictates how often this happens).
Again - using a zero-order filter avoids having to calculate the ratio with a PLL, which would be required if you used a 1st order filter like a linear interpolator (or linear decimator here I guess).
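A toy version of this integrate-and-dump idea, simplified to the case described above where the fast clock gains exactly one sample every `drop_every` cycles (the function name and parameterization are illustrative):

```python
def integrate_and_dump(pdm_bits, drop_every):
    """Zero-order-hold rate matching: most input samples pass through
    one-for-one, but once every `drop_every` outputs the integrator
    spans two input samples, so their sum is emitted as one output.
    The output is shorter than the input by the clock mismatch."""
    out = []
    i, count = 0, 0
    n = len(pdm_bits)
    while i < n:
        if count == drop_every - 1 and i + 1 < n:
            # The slow-clock tick arrived one fast-clock cycle late:
            # integrate two input samples before dumping.
            out.append(pdm_bits[i] + pdm_bits[i + 1])
            i += 2
            count = 0
        else:
            out.append(pdm_bits[i])
            i += 1
            count += 1
    return out
```

Note that summing (rather than dropping) preserves the total signal energy across the merge point, which is why this behaves better than simply deleting a sample.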
You can easily prototype this by taking an FFT of your PDM stream and comparing it with an FFT of the PDM stream where every x-th sample is replaced with the sum of the two adjacent samples, where x comes from the ratio of the two clocks. Hopefully the in-band noise is acceptable. If not, then you need a higher-order filter.
If you want to read more formal literature on this, search for the transposed Farrow structure. What you'll find is typically for higher-order filters, but the same theory applies to a simple integrate and dump.
Thank you very much for all the ideas, I'm going to run some simulations and come back with my results.
I have run three simulations. In all simulations I ran a 5th-order PDM modulator with a 1kHz input signal and a 1-bit 3.072MHz output. I then took the FFT (Hann windowed) of the result to get my baseline signal-plus-noise-floor graph:
Next I did three runs: drop every Nth sample
Then drop two samples every 2Nth
Finally, every 2Nth, drop two samples and remove a third; if the sum of the three is greater than half, reinsert a high, else reinsert a low
In the third run the idea was to try to compensate for some of the loss, but it resulted in much higher harmonic content.
N was set to 300, i.e. roughly a 0.3% clock mismatch, which is pessimistic but exaggerates the effects so they are easier to see.
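For anyone who wants to reproduce this style of experiment without building the full 5th-order modulator, a 1st-order sigma-delta is enough to exercise the sample-dropping step (this is a simplified sketch, not the setup used above; all names are illustrative):

```python
def pdm_first_order(signal):
    """1st-order sigma-delta modulator: integrate the error between the
    input (in [-1, 1]) and the fed-back 1-bit output, quantize to +/-1.
    Far less noise shaping than the 5th-order modulator used above,
    but sufficient to see the effect of dropped samples."""
    integ, fb = 0.0, 0.0
    out = []
    for x in signal:
        integ += x - fb
        y = 1.0 if integ >= 0 else -1.0
        out.append(y)
        fb = y
    return out

def drop_every_nth(bits, n):
    """The first experiment above: simply delete every Nth sample."""
    return [b for i, b in enumerate(bits) if (i + 1) % n != 0]
```

Feeding a sine into `pdm_first_order`, applying `drop_every_nth` with N = 300, and comparing windowed FFTs of the two bit streams reproduces the noise-floor comparison described above.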
If anyone else has any better ideas I would be happy to give them a run. Thanks
As you might be able to tell, due to the speed, I am pretty much limited to dropping and summing samples (and other simple operations).
In a 3MHz PDM stream there is high energy content above 20kHz (shaped there by the noise transfer function of the PDM modulator). Each time you drop PDM samples you create spectral folding, which aliases that high-frequency noise back into the band and raises the noise floor.