DSPRelated.com

Sampling Rate Conversion, better than

Started by ekkehard January 20, 2014
Hello,
for an OFDM project I have to resample ADC samples, since the receiver's ADC
sample rate does not exactly match the transmitter's sample rate.
The physical ADC sample rate is fixed and cannot be changed; the deviation
from the wanted sample rate is small, but it can vary, e.g. if the receiver is
moved toward the transmitter.
The calculation has to be done on a normal CPU, with the samples already in
memory.
After testing a lot of approaches, only "spline" survived, because it meets
the speed and precision criteria.
Following "Numerical Recipes", the setup for the spline operation is to
take 1K original samples, calculate the coefficients ("spline") and then,
avoiding the first and last few samples, calculate the new interpolated
samples by calling "splint".
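(As a concrete sketch of the setup just described: a minimal pure-Python natural cubic spline on a uniform grid. The function names here are illustrative, not Numerical Recipes' own; the first routine plays the role of NR's "spline" (coefficient setup) and the second that of "splint" (evaluation).)

```python
def spline_second_derivs(y):
    # Natural cubic spline on a uniform grid (spacing 1): solve the
    # tridiagonal system for the second derivatives at each sample.
    n = len(y)
    y2 = [0.0] * n          # natural boundary: y2[0] = y2[n-1] = 0
    u = [0.0] * n
    for i in range(1, n - 1):
        p = 0.5 * y2[i - 1] + 2.0
        y2[i] = -0.5 / p
        u[i] = (3.0 * (y[i + 1] - 2.0 * y[i] + y[i - 1]) - 0.5 * u[i - 1]) / p
    for i in range(n - 2, 0, -1):   # back-substitution
        y2[i] = y2[i] * y2[i + 1] + u[i]
    return y2

def spline_eval(y, y2, t):
    # Evaluate the spline at fractional index t, 0 <= t <= len(y) - 1.
    k = min(int(t), len(y) - 2)
    a = (k + 1) - t
    b = t - k
    return (a * y[k] + b * y[k + 1]
            + ((a**3 - a) * y2[k] + (b**3 - b) * y2[k + 1]) / 6.0)
```

With 1K-sample blocks, one would run the setup once per block and then evaluate at the slightly offset output times, skipping the first and last few samples as described.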
I read that "Lagrange" or "Neville" interpolation may also be worth
testing; could somebody comment?

Thanks for reading!
Best regards
Ekkehard


_____________________________		
Posted through www.DSPRelated.com
On Monday, January 20, 2014 6:37:02 AM UTC-5, ekkehard wrote:
> [quoted text snipped]
Good question. Despite their age, Floyd Gardner's classic papers are your
best starting point: http://dx.doi.org/10.1109/26.221081

What you want is a scheme that is suitable for real-time implementation, in
particular one that has a small memory footprint on both the input and
output signals. For that it's hard to beat the polynomial interpolators
given in Gardner's papers.

One aspect you should consider is at which point you do the sampling clock
correction. The more oversampling, the better.
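(For illustration, one common member of that family -- the 4-point, third-order Lagrange interpolator discussed in the Gardner literature -- sketched in Python. This is the generic textbook form, not code from the paper:)

```python
def cubic_lagrange(x_m1, x_0, x_1, x_2, mu):
    # 4-point, 3rd-order Lagrange interpolation between x_0 and x_1,
    # where 0 <= mu < 1 is the fractional interval ("mu" in Gardner).
    c_m1 = -mu * (mu - 1.0) * (mu - 2.0) / 6.0
    c_0 = (mu + 1.0) * (mu - 1.0) * (mu - 2.0) / 2.0
    c_1 = -(mu + 1.0) * mu * (mu - 2.0) / 2.0
    c_2 = (mu + 1.0) * mu * (mu - 1.0) / 6.0
    return c_m1 * x_m1 + c_0 * x_0 + c_1 * x_1 + c_2 * x_2
```

Because the coefficients are low-order polynomials in mu, they can either be computed on the fly per output sample or tabulated; the interpolator is exact for any signal that is locally a cubic polynomial.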
On Mon, 20 Jan 2014 05:37:02 -0600, ekkehard wrote:

> [quoted text snipped]
I'm 99.44% sure that you've discovered an obfuscated, and possibly more
computationally intensive, way of doing polyphase filtering over short
lengths.

You may want to look into polyphase filtering with a real-time filter
length restricted to 4, and see if your computational load goes down. I'm
pretty sure it will -- you'll be doing a bunch of 4-long vector multiplies,
with coefficients that you can pull out of a fairly short look-up table.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
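(A sketch of what Tim describes, assuming the 4-tap filters are tabulated cubic-Lagrange coefficients; the table size and the coefficient choice are illustrative assumptions, not from his post:)

```python
NUM_PHASES = 64

def build_phase_table():
    # One 4-tap filter per tabulated fractional phase (cubic Lagrange).
    table = []
    for p in range(NUM_PHASES):
        mu = p / NUM_PHASES
        table.append([
            -mu * (mu - 1.0) * (mu - 2.0) / 6.0,
            (mu + 1.0) * (mu - 1.0) * (mu - 2.0) / 2.0,
            -(mu + 1.0) * mu * (mu - 2.0) / 2.0,
            (mu + 1.0) * mu * (mu - 1.0) / 6.0,
        ])
    return table

def resample(x, ratio, table):
    # Output sample k lands at input time t = 1 + k*ratio; each output is
    # a 4-long vector multiply with coefficients pulled from the table.
    out = []
    t = 1.0                       # start where a full 4-sample window exists
    while t < len(x) - 2:
        m = int(t)
        phase = int((t - m) * NUM_PHASES) % NUM_PHASES
        h = table[phase]
        out.append(h[0]*x[m-1] + h[1]*x[m] + h[2]*x[m+1] + h[3]*x[m+2])
        t += ratio
    return out
```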
On Mon, 20 Jan 2014 09:51:18 -0600, Tim Wescott
<tim@seemywebsite.please> wrote:

> [quoted text snipped]
I'd second Tim's suggestion, and also suggest looking at Farrow filters,
which allow a tradeoff against the amount of LUT memory. One downside of
spline interpolation is that the frequency response can be unfavorable,
whereas with a polyphase or Farrow filter you have very good control of the
frequency response.

One of the downsides of OFDM is dealing with sample synchronization in the
receiver, as you've discovered.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
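(A minimal sketch of the Farrow idea: instead of storing one filter per phase, store a few fixed sub-filters and evaluate a polynomial in the fractional delay mu at run time. The taps below are the cubic-Lagrange interpolator rewritten in Farrow form; they are illustrative, not taken from a specific reference:)

```python
# Fixed sub-filter taps: the cubic-Lagrange coefficients expanded in
# powers of mu, one row per power (mu^0 .. mu^3), over the 4 samples
# [x[m-1], x[m], x[m+1], x[m+2]].
FARROW_TAPS = [
    [0.0,       1.0,  0.0,  0.0],       # mu^0
    [-1.0/3.0, -0.5,  1.0, -1.0/6.0],   # mu^1
    [0.5,      -1.0,  0.5,  0.0],       # mu^2
    [-1.0/6.0,  0.5, -0.5,  1.0/6.0],   # mu^3
]

def farrow(samples, mu):
    # samples = [x[m-1], x[m], x[m+1], x[m+2]]; 0 <= mu < 1.
    # Run the four fixed FIRs, then combine with Horner's rule in mu --
    # no coefficient LUT is needed, only 16 fixed tap values.
    v = [sum(t * s for t, s in zip(taps, samples)) for taps in FARROW_TAPS]
    return ((v[3] * mu + v[2]) * mu + v[1]) * mu + v[0]
```

The memory/computation tradeoff is visible directly: the per-phase table disappears, at the cost of evaluating the polynomial for every output sample.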
I did not use polyphase filters since I thought they can convert the
sampling rate only by an integer n:m ratio. Since the deviation of the
sampling rates is small (a few hertz) but the needed accuracy is high
(e.g. a few percent of a hertz), the calculation of the actual filter
parameters may be more expensive than the filtering itself.

I will check the "Fundamental" document.
Thank you
Ekkehard

> [quoted text snipped]
On Tue, 21 Jan 2014 07:59:43 -0600, "ekkehard" <87302@dsprelated>
wrote:

> [quoted text snipped]
A small point: mixing top-posting and bottom-posting makes threads very
difficult to read.

Strictly speaking a basic polyphase filter does n:m conversion, but often
that's not an issue. A polyphase can be "steered" to pick the closest
available filter phase to the desired output sample time, and the number of
available coefficients determines the resolution on the output sample time
jitter. Since some output timing jitter is usually acceptable (even if
it's a very small time spec), this means that a practical polyphase can be
used for arbitrary resampling within specified tolerances.

The question is then how many phases/coefficients are needed to get the
output time jitter within your needs. Usually this winds up being very
manageable. If the coefficient memory gets too large then a Farrow filter
can help reduce the memory requirements at the expense of some computation
for interpolating coefficients along the way.

I suspect, though, that a polyphase can be made to work but it really
depends on the details of your constraints. It may still even boil down to
doing what you're doing, but I think these approaches are worth looking at
to see if they'll help your situation.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
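(A sketch of the "steering" described above: a phase accumulator advances by fs_in/fs_out input samples per output, and the nearest tabulated phase is chosen, so the ratio need not be rational. The 4-tap cubic-Lagrange table and the phase count are illustrative assumptions:)

```python
NUM_PHASES = 64

# One 4-tap cubic-Lagrange filter per tabulated fractional phase.
PHASES = []
for p in range(NUM_PHASES):
    mu = p / NUM_PHASES
    PHASES.append([
        -mu * (mu - 1.0) * (mu - 2.0) / 6.0,
        (mu + 1.0) * (mu - 1.0) * (mu - 2.0) / 2.0,
        -(mu + 1.0) * mu * (mu - 2.0) / 2.0,
        (mu + 1.0) * mu * (mu - 1.0) / 6.0,
    ])

def steered_resample(x, fs_in, fs_out):
    # Phase accumulator in units of input samples.  Each output sample
    # picks the nearest available phase, so the residual timing jitter is
    # at most half a phase step (1 / (2*NUM_PHASES) of a sample).
    step = fs_in / fs_out              # input samples per output sample
    out, t = [], 1.0                   # start where 4 neighbors exist
    while t < len(x) - 3:
        m = int(t)
        phase = int((t - m) * NUM_PHASES + 0.5)
        if phase == NUM_PHASES:        # rounded up to the next input sample
            phase, m = 0, m + 1
        h = PHASES[phase]
        out.append(h[0]*x[m-1] + h[1]*x[m] + h[2]*x[m+1] + h[3]*x[m+2])
        t += step
    return out
```

Since the step is just a float added per output sample, a slowly drifting ratio (as in the OP's moving-receiver case) only means updating `step` from the timing loop.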
On 1/21/14 8:59 AM, ekkehard wrote:
> [quoted text snipped]
dunno what that document is (other than a good textbook like O&S).

in the past, i have posted what is reposted at the bottom here as a sorta
fundamentals primer for an application of bandlimited interpolation
(usually sample rate conversion or fractional-sample delay). i would just
direct you to the archive at Google Groups, but what used to be an
invaluable resource is rapidly becoming useless. so i am reposting below.
read it with a monospace font. probably, at some point, i will format it
using LaTeX and put it on stackexchange. maybe not.

so ekke, even if your SRC ratio is not a simple rational number, even if it
varies slowly (as it does for asynchronous SRC), you can make tradeoffs
with how many coefficients are in memory (that determines what your integer
*upsampling* factor is), what kind of interpolation is used between these
upsampled points (i recommend no higher than simple linear), and what the
image-suppression performance is. Duane Wise and i did a paper for AES in
the 90s about that.

if you're doing linear interpolation between the upsampled points,
essentially every time you double the integer upsampling ratio (and
linearly interpolate between those points), you gain 12 dB of image
suppression and 12 dB of S/N ratio in the reconstruction. for audio, i
have found upsampling by a factor of 512 to be plenty sufficient, and i
have seen that ratio as low as 32 with linear interpolation still used.

--
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."

(you need to set your font to a monospace font to read this correctly.)

____________________________________________________________________________

   Bandlimited Interpolation, Sample Rate Conversion, or Fractional-Sample
   Delay from the P.O.V. of the Nyquist/Shannon Sampling and
   Reconstruction Theorem
____________________________________________________________________________

Here is the mathematical expression of the sampling and reconstruction
theorem:

                                              .------.
   x(t) --->(*)----------------------------->| H(f) |---> x(t)
             ^                                '------'
             |
             '--- q(t),  x(t)*q(t) = T * SUM_{k=-inf}^{+inf} x[k]*d(t - k*T)

where

   d(t) = "dirac" impulse function
   T    = 1/Fs = sampling period
   Fs   = sampling frequency

q(t) = T * SUM_{k=-inf}^{+inf} d(t - k*T) is the "sampling function", is
periodic with period T, and can be expressed as a Fourier series. It turns
out that ALL of the Fourier coefficients are equal to 1:

   q(t) = SUM_{n=-inf}^{+inf} c[n]*exp(j*2*pi*n/T*t)

where c[n] is the Fourier series coefficient

   c[n] = (1/T) * integral_{t0}^{t0+T} q(t) * exp(-j*2*pi*n/T*t) dt

and t0 can be any time. We choose t0 to be -T/2:

   c[n] = (1/T) * integral_{-T/2}^{+T/2} T * SUM_{k=-inf}^{+inf} d(t - k*T)
                                           * exp(-j*2*pi*n/T*t) dt
        = integral_{-T/2}^{+T/2} d(t - 0*T) * exp(-j*2*pi*n/T*t) dt
        = integral_{-T/2}^{+T/2} d(t) * exp(-j*2*pi*n/T*t) dt
        = exp(-j*2*pi*n/T*0)
        = 1

So all Fourier series coefficients for q(t) are 1. This results in

   q(t) = SUM_{n=-inf}^{+inf} exp(j*2*pi*n*Fs*t) .

The sampled signal looks like:

   x(t)*q(t) = x(t) * T * SUM_{k=-inf}^{+inf} d(t - k*T)
             = T * SUM_{k=-inf}^{+inf} x(t) * d(t - k*T)
             = T * SUM_{k=-inf}^{+inf} x(k*T) * d(t - k*T)
             = T * SUM_{k=-inf}^{+inf} x[k] * d(t - k*T)

where x[k] = x(k*T) by definition.
An alternative (and equally important) representation of the sampled signal
is:

   x(t)*q(t) = x(t) * SUM_{n=-inf}^{+inf} exp(j*2*pi*n*Fs*t)
             = SUM_{n=-inf}^{+inf} x(t) * exp(j*2*pi*n*Fs*t)

Defining the spectrum of x(t) as its Fourier Transform:

   X(f) = FT{ x(t) } = integral_{-inf}^{+inf} x(t)*exp(-j*2*pi*f*t) dt

Then using the frequency-shifting property of the Fourier Transform,

   FT{ x(t)*exp(j*2*pi*f0*t) } = X(f - f0)

results in

   FT{ x(t)*q(t) } = SUM_{n=-inf}^{+inf} X(f - n*Fs)

This says, what we all know, that the spectrum of our signal being sampled
is shifted and repeated forever at multiples of the sampling frequency, Fs.

If x(t) or X(f) is bandlimited to B (i.e. X(f) = 0 for all |f| > B) AND if
there is no overlap of the tails of adjacent images of X(f), that is:

   right tail of nth image of X(f) < left tail of (n+1)th image
   n*Fs + B < (n+1)*Fs - B = n*Fs + Fs - B
   B < Fs - B
   2*B < Fs = 1/T

then we ought to be able to reconstruct X(f) (and also x(t)) by low-pass
filtering out all of the images of X(f). To do that, Fs > 2*B (to prevent
overlap) and H(f) must be:

          { 1   for |f| <  Fs/2 = 1/(2*T)
   H(f) = {
          { 0   for |f| >= Fs/2 = 1/(2*T)

This is the "sampling" half of the Nyquist/Shannon sampling and
reconstruction theorem. It says that the sampling frequency, Fs, must be
strictly greater than twice the bandwidth, B, of the continuous-time
signal, x(t), for no information to be lost (or "aliased"). The
"reconstruction" half follows.

The impulse response of the reconstruction LPF, H(f), is the inverse
Fourier Transform of H(f), called h(t):

   h(t) = iFT{ H(f) } = integral_{-inf}^{+inf} H(f)*exp(+j*2*pi*f*t) df
        = integral_{-1/(2*T)}^{+1/(2*T)} exp(+j*2*pi*f*t) df
        = ( exp(+j*2*pi/(2*T)*t) - exp(-j*2*pi/(2*T)*t) ) / (j*2*pi*t)
        = ( exp(+j*pi/T*t) - exp(-j*pi/T*t) ) / (j*2*pi*t)
        = sin((pi/T)*t) / (pi*t)
        = (1/T) * sin(pi*t/T) / (pi*t/T)
        = (1/T) * sinc(t/T)

where sinc(u) = sin(pi*u)/(pi*u).

The input to the LPF is x(t)*q(t) = T * SUM_{k=-inf}^{+inf} x[k]*d(t - k*T).
Each d(t - k*T) impulse generates its own impulse response, and since the
LPF is linear and time-invariant, all we have to do is add up the
time-shifted impulse responses weighted by their coefficients, x(k*T). The
T and 1/T kill each other off. The output of the LPF is

   x(t) = SUM_{k=-inf}^{+inf} x[k]*sinc((t - k*T)/T)
        = SUM_{k=-inf}^{+inf} x[k]*sinc(t/T - k)

This equation tells us explicitly how to reconstruct our sampled,
bandlimited input signal from the samples. This is the "reconstruction"
half of the Nyquist/Shannon sampling and reconstruction theorem.

Interpolation, Sample Rate Conversion, etc.:

When interpolating (whether for fractional-sample delay or for doing sample
rate conversion), you must evaluate this equation for times that might not
be integer multiples of your sampling period, T. The sinc(t/T) function is
one for t = 0 and zero for t = m*T for any nonzero integer m. This means
that if your new sample time, t, happens to land exactly on an old sample
time, k*T, only that sample (and none of the neighbors) contributes to the
output sample, and the output sample is equal to the input sample. Only in
the case where the output sample is in between input samples do the
neighbors contribute to the calculation.

Since the sinc() function goes on forever to + and - infinity, it must be
truncated somewhere to be of practical use. Truncating is actually
applying the rectangular window (the worst kind), so it is advantageous to
window down the sinc() function gradually, using something like a Hamming
or Kaiser window. In my experience, with a good window, you'll need to
keep the domain of the windowed sinc() function from -16 to +16 and sample
it 16384 times in that region (that would be 16384/32 = 512 "polyphases" or
"fractional delays" if linear interpolation is subsequently used to get in
between neighboring fractional delays).
Now we simply split the time, t, into an integer part, m, and a fractional
part, a, and substitute:

   x((m+a)*T) = SUM_{k=-inf}^{+inf} x[k]*sinc((m+a)*T/T - k)
              = SUM_{k=-inf}^{+inf} x[k]*sinc(a + m - k)

where

   t = (m + a)*T
   m = floor(t/T)  and  a = fract(t/T) = t/T - floor(t/T)
   t - T < m*T <= t

which means a is always nonnegative and less than 1:  0 <= a < 1 .

The limits of the sum can be bumped around a bit without changing it, and
by substituting k <- k+m :

   x((m+a)*T) = SUM_{k=-inf}^{+inf} x[m+k] * sinc(a - k)

Now, an approximation is made by truncating the summation to a finite
length (which is effectively windowing):

   x((m+a)*T) = SUM_{k=-15}^{+16} x[m+k] * sinc(a - k) * w((a - k)/16)

w(u) is the window function. A Hamming window would be:

          { 0.54 + 0.46*cos(pi*u)   for |u| <  1
   w(u) = {
          { 0                       for |u| >= 1

A Kaiser window would be better, but is harder to use for illustration.

This requires two 32-point FIR computations to calculate two adjacent
interpolated samples (you linearly interpolate between the two for the
final output sample). Since the sinc()*w() function is symmetrical, that
means 8193 numbers stored somewhere in memory.

When interpolating, the integer part of t/T (or "m") determines which 32
adjacent samples of x[k] to use, and the fractional part of t/T (or "a")
determines which set of 32 windowed sinc() coefficients to use to combine
the 32 samples.

____________________________________________________________________________
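(A small-scale sketch of the scheme above: a windowed-sinc table with linear interpolation between neighboring tabulated phases. To keep it short it uses a half-width of 8 samples and 32 phases instead of the 16 and 512 suggested; all names are illustrative:)

```python
import math

L = 8     # kernel half-width in samples (the primer uses 16)
R = 32    # tabulated phases per sample (the primer suggests up to 512)

def windowed_sinc(u):
    # Hamming-windowed sinc with support |u| < L.
    if abs(u) >= L:
        return 0.0
    w = 0.54 + 0.46 * math.cos(math.pi * u / L)
    return w if u == 0.0 else w * math.sin(math.pi * u) / (math.pi * u)

# R+1 rows of 2*L coefficients, so row p+1 always exists for interpolation.
TABLE = [[windowed_sinc(p / R - k) for k in range(-L + 1, L + 1)]
         for p in range(R + 1)]

def interp(x, t):
    # Bandlimited interpolation of x at fractional index t: two 2*L-point
    # FIRs at the neighboring phases, then linear interpolation between.
    m = int(t)
    a = t - m
    p = int(a * R)                      # lower phase index
    f = a * R - p                       # linear-interpolation weight
    window = x[m - L + 1: m + L + 1]    # the 2*L neighboring samples
    y0 = sum(c * s for c, s in zip(TABLE[p], window))
    y1 = sum(c * s for c, s in zip(TABLE[p + 1], window))
    return (1.0 - f) * y0 + f * y1
```

When t lands exactly on an input sample, the phase-0 row is 1 at k = 0 and 0 elsewhere, so the input sample is returned unchanged, as the primer requires.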
On Tue, 21 Jan 2014 15:23:29 +0000, Eric Jacobsen wrote:

> [quoted text snipped]
Eric just did a very good job of expanding on what I was thinking.

You can think of a polyphase filter as being a fixed-ratio thing, or you
can think of it as a thing that cycles through a number of selections of
partial-sample phase shifts in a fixed manner. Then you can make it select
those partial-sample phase shifts the way _you_ want, to get the frequency
ratio that you want -- even on an active basis.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
>> Strictly speaking a basic polyphase filter does n:m conversion, but
>> often that's not an issue. A polyphase can be "steered" to pick the
>> closest available filter phase to the desired output sample time, and
>> the number of available coefficients determines the resolution on the
>> output sample time jitter. Since some output timing jitter is usually
>> acceptable (even if it's a very small time spec), this means that a
>> practical polyphase can be used for arbitrary resampling within
>> specified tolerances.
One question is how that polyphase filter would be steered. In other
words, referring to Gardner's classic papers, there is no peak in the pulse
shape to adjust the fractional sampling interval in an OFDM waveform. How
do we compute the basepoint indices and fractional intervals in this case?
On Thu, 23 Jan 2014 14:58:26 -0600, "commsignal" <58672@dsprelated>
wrote:

> [quoted text snipped]
The polyphase filter response has a peak, and the coefficient set (or
phase) with the peak closest to the desired sampling instant is chosen. If
this can be done within a desirable sampling jitter tolerance, then you're
done. If not, either increase the number of coefficients or interpolate
the coefficients for the desired sampling instant from the existing
coefficients.

Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com