DSPRelated.com Forums

Fractional rate asynchronous conversion for highly oversampled data

Started by deba 5 years ago • 11 replies • latest reply 5 years ago • 372 views

Hi All,

The general method for fractional rate conversion is polynomial-based filtering of the input data, such as a Farrow structure. These structures are needed for input data with frequency content close to Nyquist. Are there simplified architectures available for highly oversampled data, say oversampled by a factor of 1000?


The input and output clocks are asynchronous and their ratio will vary slowly with time, as there is no fixed relation between them. But the data is very slow, almost DC, <10 Hz. What artifacts will be present in the output spectrum if the data is simply transferred at the output rate?


I would be grateful for any suggestions.

Reply by dudelsound, September 18, 2019

For an oversampling as big as yours, I would simply calculate the linear interpolation of the two neighbouring input samples. If your output sample is taken at time t_o and the neighbouring input samples occur at t_i1 and t_i2, I'd use:

mix = (t_o - t_i1) / (t_i2 - t_i1)

out(t_o) = mix * input(t_i2) + (1-mix) * input(t_i1)

This corresponds to convolving the input signal with a triangle with a duration of two input samples. That is some form of low pass, but a very weak one, and as such it will introduce some aliasing. But if your input data does not contain energy near Nyquist, it will not be a problem.
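A minimal Python sketch of that two-point interpolation, using the same names as the formulas above (the sample values x_i1 and x_i2 are placeholders for input(t_i1) and input(t_i2)):

def interp_linear(t_o, t_i1, t_i2, x_i1, x_i2):
    """Linear interpolation between the two input samples that
    bracket the output instant t_o (t_i1 <= t_o <= t_i2)."""
    mix = (t_o - t_i1) / (t_i2 - t_i1)   # fractional position, 0..1
    return mix * x_i2 + (1.0 - mix) * x_i1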

Reply by artmez, September 18, 2019

Not completely sure I fully understand your problem description. You have data that is 1000 times oversampled and needs to be downsampled (presumably by 1000), yet you want to preserve a modicum of frequencies near DC. The term "asynchronous" is overused and has ambiguities that need to be clarified here. So, is it that the downsampled data's rate is the same as your nominal output rate, but the sample and output clocks are not identical or phase-locked? There are distinct elements of this problem that need to be addressed separately. The clock-domain-crossing part is addressed by "simple" data buffering strategies, which can be particularly onerous if a process must run continuously and forever. Since you have a handle on the filtering part of the problem, I will address the data flow side.

Phone companies have struggled for years with digital data, pumping it from disparate sources through their networks of switches that are also not completely synchronized. The end result was a buffering problem: even though the clocks were close to each other (ideally, identical), they were NOT perfect, so sometimes the source would have an extra sample the destination could not handle, and vice versa. So what happens then? One could drop that sample or add a buffer. But how deep should that buffer be? Well, in the long run it needs to be infinite, but that doesn't help in the case where the output clock runs faster and can now miss a sample. What to do? One strategy is to just duplicate the last sample if there is no new sample available. A good unbiased estimate of any smooth process is that the next sample is close to the previous sample. However, duplicating the previous sample does cause a discontinuity in the data and thus a (slight) spreading of the spectrum.

So your options for a continuous process are: if the source process is too fast, drop samples; if the output process is too fast, duplicate the previous sample. Unless you have something that requires absolute accuracy, this should be sufficient. Note that the rate of dropping/duplicating samples can be predicted from the frequency tolerance between the two sources, and it is a worthwhile exercise to see how big that is. Obviously, the higher the tolerance, the more frequent the dropping/duplicating. For example, a 1% tolerance averages to 1 in 100 samples being dropped or duplicated.
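As a rough illustration of that drop/duplicate policy, here is a hedged Python sketch built around a small FIFO; the depth and the callback names are assumptions for illustration, not a recommendation:

from collections import deque

FIFO_DEPTH = 8            # assumed depth; size it from the worst-case clock tolerance
fifo = deque()
last_out = 0.0            # last value delivered to the output clock domain

def on_input_sample(x):
    """Runs at the input rate: if the buffer is full, the new sample
    is dropped (source clock momentarily faster than the sink)."""
    if len(fifo) < FIFO_DEPTH:
        fifo.append(x)

def on_output_tick():
    """Runs at the output rate: if the buffer is empty, the previous
    sample is repeated (sink clock momentarily faster than the source)."""
    global last_out
    if fifo:
        last_out = fifo.popleft()
    return last_out

With the 1% tolerance example above, this would drop or repeat roughly one sample in a hundred on average.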

Reply by napierm, September 18, 2019

Hello,

Like the others, I don't think I see enough information here to make much of a suggestion.  You say there is input data "with frequency content near about Nyquist".  What does that mean?  My question is: what frequency content do you need to preserve as a fraction of the sampling rate Fs?  The max is 1/2.

Yes, one of the simplest Farrow filters is a quadratic.  It needs 3 samples, plus a mu and an enable.  One scheme is to upsample/filter in the source domain and then interpolate back down in the destination domain with a simple Farrow.  The next thing you need is a phase accumulator that will keep pulling samples from the source domain and generate the mu needed for the interpolation.  Question: are the clock rates nearly the same, or is one greater than the other?  Can you give them as a fraction of each other?
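For what it's worth, a minimal Python sketch of just the quadratic (3-point Lagrange) interpolation driven by a phase accumulator, skipping the upsample-first step; the function name and the assumption of a fixed, known ratio are for illustration only:

def resample_quadratic(x, ratio):
    """Quadratic (3-point Lagrange) Farrow-style resampler.
    ratio = f_in / f_out, i.e. input samples consumed per output sample.
    A phase accumulator generates the centre-tap index n and the
    fractional offset mu for each output sample."""
    out = []
    acc = 1.0                      # start where x[0], x[1], x[2] are all valid
    while acc < len(x) - 1:
        n = int(acc)               # centre tap index
        mu = acc - n               # inter-sample position, 0 <= mu < 1
        y = (x[n - 1] * mu * (mu - 1) / 2
             + x[n] * (1 - mu * mu)
             + x[n + 1] * mu * (mu + 1) / 2)
        out.append(y)
        acc += ratio               # phase accumulator pulls the next output point
    return out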

Best regards,

Mark Napier



Reply by Slartibartfast, September 18, 2019

Since you're highly oversampled wrt your desired signal, the jitter due to just sample skipping or repeating might be relatively low.   Whether such a scheme will work for you is highly application dependent, i.e., it depends totally on the downstream system's sensitivity to sample jitter after your resampler.

I've used simple pickup/drop systems like you suggest many times, via a FIFO with a delay-locked loop or something similar, because there are a lot of applications where the downstream system is insensitive to a little bit of sample jitter.   There are, however, lots of applications where even a little jitter can cause problems, so it really depends on what you're doing.

e.g., video frame rates and audio sample rates don't match.   Often during conversion, or just during playback when they start to get out of register, a video frame is simply dropped or repeated.   The visual jitter is barely noticeable, but if the same were done to the audio (which has a much higher sample rate), the artifacts would be more annoying to most people.
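Going back to the FIFO-plus-DLL idea a couple of paragraphs up, here is a very rough Python sketch of the control part, assuming a simple proportional correction (a real design would more likely use a proportional-integral loop, and the gain value here is arbitrary):

class FifoRateTracker:
    """Nudges the estimated input/output rate ratio so that the FIFO
    fill level stays near half full, absorbing slow ppm drift between
    the two clock domains."""
    def __init__(self, nominal_ratio, depth, gain=1e-4):
        self.ratio = nominal_ratio   # nominal f_in / f_out
        self.target = depth / 2.0    # aim for a half-full FIFO
        self.gain = gain             # small gain -> slow, stable correction

    def update(self, fill_level):
        """Call once per output sample with the current FIFO occupancy;
        returns the corrected ratio to feed the resampler."""
        error = fill_level - self.target
        self.ratio += self.gain * error   # FIFO filling up -> consume faster
        return self.ratio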

Reply by deba, September 18, 2019

Hi All,

To clarify the specific question about the input and output: the input sampling rate is ~10 kHz, and the input content is within ~10 Hz. The data needs to be handed over to another clock domain, at ~96 MHz. The input and output sampling rates come from different oscillators, so they can vary with time and will introduce a ppm-level drift.

Thanks for good suggestions!

Reply by gretzteam, September 18, 2019

All the suggestions so far would work (skip/repeat sample, linear interpolation, quadratic, cubic...). Those are all polynomial filters of different orders. They will all put their zeros at multiples of the INPUT sampling frequency (i.e. you can easily work out their frequency response; you don't even need to consider the sampling rate change, which is independent of this).

However, the sampling rate change is a resampling process, and will cause aliasing of the bands centered at multiples of the OUTPUT sampling rate onto your 0-10Hz desired band. Depending on the frequency content of the input signal (what is present between 10Hz and 5kHz), you can decide whether the frequency response of the polynomials above will give you enough attenuation even if their zeros are not ideally centered at multiples of the output sampling rate. For example, if the data were nicely oversampled (say all images from 10Hz to 5kHz were already attenuated by 100dB), then there is nothing to filter out anyway and you can get by with a rather simple non-ideal filter.

Now if you need decent filtering, it is a better idea to move the zeros of the polynomial response to multiples of the OUTPUT sampling rate instead of using a very high polynomial order. In this case, you have to use the transposed Farrow structure (the same polynomials can be implemented, such as zero-order hold, linear, cubic...), but the response is now based on the output sampling rate, placing the zeros where you really want them.

So far this addresses the filtering aspect. You still need to work out the inter-sample position (mu) required for the polynomial responses (either standard or transposed Farrow). If you know the input/output ratio exactly, you can simply feed it into an integrator to generate mu. If not, you need some kind of digital PLL to work it out in real time.
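To illustrate that last step, here is a hedged Python sketch of the "feed the ratio into an integrator" idea; the class name is made up, and in the truly asynchronous case ratio_estimate would be updated by a rate-tracking loop (digital PLL) rather than held constant:

class MuGenerator:
    """Accumulates the estimated input/output rate ratio and splits it
    into an integer sample advance plus a fractional part mu."""
    def __init__(self, ratio_estimate):
        self.ratio = ratio_estimate   # f_in / f_out
        self.phase = 0.0

    def next(self):
        """Call once per output sample. Returns (advance, mu), where
        advance is how many new input samples to pull from the source
        domain and mu is the inter-sample position for the interpolator."""
        self.phase += self.ratio
        advance = int(self.phase)     # whole input samples consumed
        self.phase -= advance         # keep only the fractional part
        return advance, self.phase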

Reply by Slartibartfast, September 18, 2019

With a BW of 10Hz and a sample rate of 10kHz, you have a very high oversample rate.   So without any interpolation, just picking the nearest sample, the max jitter on a 10Hz signal will be 2pi/1000, or 0.00628 radians, i.e., not much.  If your system can handle that much phase jitter, then you don't need to bother with interpolating filters at all, just use the nearest input sample at the output sample interval.

At the 96 MHz rate, if you have some processing bandwidth, you can run an upsampling filter with a narrow bandwidth and smooth it out.   Again, there's a tradeoff between the output requirements and how much effort you might need to put into it.

Reply by deba, September 18, 2019

Hi  Slartibartfast,

Can you explain a bit how the max jitter of 2*pi/1000 is estimated?


Thanks

Reply by Slartibartfast, September 18, 2019

Since you said the signal was 1000x oversampled, e.g., 10kHz sample rate for a 10Hz signal, there are 1000 samples per cycle of the 10 Hz signal, or 1000 samples per 2*pi radians at 10Hz.  Say there are 10 cycles recorded in a one-second interval, a full 10000 samples.  If, with a simple drop/repeat strategy, you need to drop one of the input samples to get 9999 samples for rate matching, you have deleted 2*pi/1000 of one full cycle of the 10Hz signal.   That's the max error, since 10Hz is the highest frequency present.
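As a quick numeric check, assuming the 10 kHz / 10 Hz figures above: one input sample period is 1/10 kHz = 100 us, the 10 Hz period is 100 ms, so a one-sample slip is 100 us / 100 ms = 1/1000 of a cycle, i.e. a phase step of 2*pi/1000, roughly 0.0063 radians.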

If you upsample first with a fixed-rate interpolating filter before doing drop/repeats, you can make the phase jitter even smaller, but it costs you processing to do it.

Either way, you can avoid a polyphase or Farrow filter if you need to, provided you can tolerate whatever phase jitter you wind up with, which depends on the strategy.  A Farrow filter or an interpolating polyphase filter will do a (potentially only slightly) better job for more processing overhead.

I hope that helps. It's the sort of thing that might be more obvious by chalk-talking with drawings, etc.



Reply by deba, September 18, 2019

Hi Slartibartfast and All,


Thanks for helping me out.


Below is the scheme I am thinking of, which will just pick samples. I am worried about the metastability issues that might arise, since the clocks are not synchronized.

Is there a way to estimate the metastability errors in such an implementation?


[attached image: frac_src_1_56036.png]


Reply by Slartibartfast, September 18, 2019

The old tried-and-true way to deal with the hardware clock metastability is just to put two flip-flops (registers) in series on the 96MHz clock.   Some people invert the first clock to slightly reduce latency, but it is usually not necessary.