Hi, and apologies if this is a duplicate question.
I wanted to know if anyone can point me to a reference that deals with actual implementations of multirate systems, such as a polynomial filter. I have a fair understanding of how polyphase filters and polynomial filters work, and I have read Rick Lyons's and fred harris's books, but I am still puzzled by how the clock domains are handled. Thank you in advance for your help/insight/references.
If the frequency of the faster clock is an integer multiple of the slower clock's, you can resample the data from the slower domain and use only the faster clock. Don't take this suggestion too seriously, though; there are many different solutions depending on the platform (DSP, FPGA) you use.
The standard reference is "Multirate Digital Signal Processing" by Crochiere and Rabiner.
There are several implementation philosophies depending on the details. In easy cases (as mentioned in a previous reply) either one rate is an integer multiple of the other, or at least both are integer multiples of some reasonable rate. In that case you run at the fastest rate and optimally interpolate everything needed. (Sometimes you can get away with running at the slowest rate and decimating.)
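To make the easy case concrete, here is a minimal Python sketch (my own toy example, not taken from any reference mentioned here) of running at the fastest rate: upsample by an integer factor L by zero-stuffing and then filtering at the output rate. The 3-tap kernel is just a linear-interpolation placeholder, not a properly designed lowpass.

```python
# Toy sketch: integer-factor interpolation done entirely at the fast
# (output) rate, by zero-stuffing and FIR filtering. Function names and
# the filter taps are illustrative assumptions, not a real design.

def upsample_by_l(x, L, taps):
    # Zero-stuff: insert L-1 zeros between input samples.
    stuffed = []
    for s in x:
        stuffed.append(s)
        stuffed.extend([0.0] * (L - 1))
    # FIR filter at the fast rate to fill in the zeros.
    out = []
    for n in range(len(stuffed)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * stuffed[n - k]
        out.append(acc)
    return out

x = [1.0, 2.0, 3.0]
# [0.5, 1.0, 0.5] is the linear-interpolation kernel for L = 2.
y = upsample_by_l(x, L=2, taps=[0.5, 1.0, 0.5])
print(y)  # [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
```

A polyphase implementation would compute the same output while skipping the multiplications by the stuffed zeros.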
In a more complex case you have several unrelated rates, and you optimally resample all to some reference rate. Here is where (multistage) polyphase filters come in handy. (BTW, in a related case the sampling might not be uniform or there may be missing samples.)
In the harder cases some of the sampling rates vary depending on the processing (as when performing clock recovery). In the hardest case you are reduced to working in analog time, with the next time instant resulting from computation, and then working out what is needed to work there (if you are lucky, Gaussian integration may help).
Thank you very much @Y(J)S and Dimitar!
I also have the book by Crochiere and Rabiner. Concurring on it being a stellar reference.
You both have confirmed my suspicion, at least partially. The way I am thinking about it, unless the clocking is done correctly, the filtering will not work even though the samples are "calculated" correctly. From a hardware perspective, we see those commutators in the textbooks, and (not being a hardware specialist) I do not understand how they would get implemented. For instance, say I am crossing clock domains from 100 MHz to 101.24 MHz or vice versa (a fractional delay implementation), and the implementation uses a polynomial/Farrow filter; then I would still need clocks on both ends handling the I/O. What is obvious from your comments is that we need a higher clock that can keep up, which intuitively makes sense.
I cannot find anything on the web that discusses this in sufficient detail.
I do FPGA work, so I am familiar with clock issues. DSP engineers are less concerned with clocking details, I assume.
The clock is the system clock, not the sample rate (though the two may sometimes be equal).
In your case of 100 to 101.24 you are actually referring to the sampling rate of your signal. That requires a fractional rate converter. You use a system clock that is higher, or much higher if you want resource sharing across multipliers, etc. Once you arrive at the DAC you need to upsample to the DAC clock; at that point the signal's sampling rate must equal the DAC's sampling clock rate.
I worked on a couple of ASICs where the sample rate is tied to the clock rate. Crossing from one domain to another is tricky.
In my ASIC designs, and in an FPGA for sure, I don't do this. So an A/D or DAC may well be in a small clock domain where the clock is synchronous to the data; this may well be the highest clock rate in the design. Filtering and decimating (half-band, for example) will drop the data rate while the system clock stays the same. Use a data-enable pulse every time you need to march the data forward. If you have to cross clock domains you can use a toggle (slow rate) or a FIFO (high rate). Personally I use a central module with a state machine to generate all the data enables so that the system is deterministic. For frames I use a start pulse to keep the machines in sync. If you need to do variable-rate filtering to match some symbol rate to an unrelated A/D or DAC rate, that is a whole other subject.
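The enable-pulse idea can be modeled in a few lines of Python (a behavioral sketch of my own, not the poster's actual RTL): the system clock never changes, and a counter fires a one-cycle enable every N clocks, which downstream stages use to advance data at the lower sample rate.

```python
# Behavioral model of a central data-enable generator. The system clock
# stays fixed; data advances only on cycles where the enable is 1.
# Names are illustrative assumptions.

def enable_stream(n_clocks, divide_by):
    # Returns a 1 on each clock cycle where data should march forward.
    count = 0
    enables = []
    for _ in range(n_clocks):
        enables.append(1 if count == 0 else 0)
        count = (count + 1) % divide_by
    return enables

print(enable_stream(8, 4))  # [1, 0, 0, 0, 1, 0, 0, 0]
```

In hardware this would be a counter plus comparator per rate, with all counters reset by a common start pulse so the whole system stays deterministic.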
Labview FPGA solves the problem by using buffers to tie functional blocks together. Ready/Ack signals handle flow control and so data rate matching is automatic. It is also horribly wasteful of FPGA resources.
Thank you Mark!
I was glad to learn that I am not completely off in my thinking; I was leaning towards enable-based logic. I was also curious how variable-rate filtering at a rate that is not an integer multiple of the clock would be plausible. I am used to never having to deal with clock generation; it is always a given. Would you have any insight?
Well, that is the whole other subject.
The easiest would be if you could give some details of what you are trying to do.
Sorry for the much delayed response, @napierm!
I am trying to resample an input to an output whose rate is close to a rational fraction of 1.
Given that the input is driven by an input clock (sample rate and system clock) that is "compatible", the output samples cannot be driven by the same clock used for the input. So, from a purely hardware perspective, how do you accommodate that? Would you need some mechanism to drive your output samples (say, when your system clock is not high enough to derive an output clock with an integer relationship)? And what would that be?
There are multiple answers to that question depending on the application. I've used a Farrow filter and sampled across clock domains. You can actually use the FIFO level to steer the output sample rate converter, but it does suffer from limit cycles. I managed to get it to work by playing with a feedback loop and tinkering with the bit widths, but I wouldn't do that again.
A better way is to have a rotating phase accumulator driven by the input sample rate. The phase of the accumulator is then sampled at a rate fixed by the output. A PLL in the output-rate domain locks onto the phase and generates the enables and the "mu" needed by the Farrow filter. See Neil Robertson's blog entries on PLLs. He is the guy who designed them for this very application at ST Micro.
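The bookkeeping the PLL ultimately produces can be sketched open-loop in Python (my own simplified model, ignoring the clock-domain crossing and the PLL itself): an accumulator advances by Fs_in/Fs_out per output sample; its integer part says how many new input samples to shift into the Farrow delay line, and its fractional part is mu. The function and tuple layout are my own naming, not Neil Robertson's design.

```python
# Open-loop model of the phase accumulator that schedules a Farrow
# resampler. In a real design a PLL tracks this phase across clock
# domains; here the ratio is simply assumed known and fixed.

def mu_schedule(ratio, n_out):
    # ratio = Fs_in / Fs_out (input samples per output sample).
    # Returns (input samples to consume, mu) for each output sample.
    phase = 0.0
    prev_int = 0
    schedule = []
    for _ in range(n_out):
        mu = phase - int(phase)          # fractional delay for the Farrow
        consume = int(phase) - prev_int  # inputs to shift in this step
        prev_int = int(phase)
        schedule.append((consume, mu))
        phase += ratio
    return schedule

print(mu_schedule(1.25, 4))
# [(0, 0.0), (1, 0.25), (1, 0.5), (1, 0.75)]
```

For the 100 MHz to 101.24 MHz case the ratio would be 100/101.24, so most steps consume one input sample and occasionally a step consumes none.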
A note on Farrow filters. They are not LTI systems. The response varies with the input and output sample rates so they have to be analyzed over frequency and with different input signal delays to see their performance. Another thing is that the simpler ones only work with minimal distortion at the low end of the Nyquist zone. A bigger polyphase implementation might work out to Fs/4. I've seen one used for OFDM that did work over most of the Nyquist zone but it had kind of a lousy response over all of it. I guess the system was relying on the inherent equalizers to make it work anyway. The point being again it depends on your application. Usually the signal will have to be band limited at the input to the Farrow filter matched to its frequency response. If not then you may need another filter on the output of the Farrow to clean it back up.
Sometimes the output *is* related to the input (same clock generators) but not by an integer relationship. Again, a phase accumulator is used; however, it has a binary portion and a non-binary fraction to get an exact rate match that will not slip with time. Similar Farrow arrangement.
For one audio application I've seen the input up-sampled (sigma-delta) to a very high rate and then directly sampled on the output side with no rate conversion. Just drop some samples. It does work. The rate is high enough that the distortion is well above the audio range. Then downsample and filter. Not perfect but good enough for some applications.
Then there is what is known as rational division. There is a numerator and denominator in the enable generator so any rational enable rate can be derived.
So many ways to skin the cat.
If you want anything more concrete I'll need some numbers.