Resampling/interpolation (uniform and non-uniform case)

Started by Alfred Bovin, May 7, 2010
Hi all.

I'm doing some work on a commercial black box system with three sensors
that are being sampled at 10 Hz. One of the sensors is running freely,
while the other two are polled by a Linux computer. For one of the polled
sensors there is an unpredictable delay from when I request the data until
it is received.

This means that I'm basically getting the information from the three
sources at different points on the time line. For offline processing
purposes I need to interpolate and resample so that I get the measurements
at the same points in time.

If we assume constant sampling time Ts =  1/10 s and proper bandlimiting in 
the hardware, I get the following samplings of the signals x1(t), x2(t) and 
x3(t):
x1(n * Ts), x2(n * Ts + delta2) and x3(n * Ts + delta3)
with n an integer and delta2, delta3 constant but random offsets
determined by unpredictable initial conditions. I can, however, get rather
precise time stamps for every measurement.

For example I can get readings
from sensor 1 at 0.0, 0.1, 0.2, 0.3, ... sec
from sensor 2 at 0.0, 0.07, 0.17, 0.27, ... sec
from sensor 3 at 0.08, 0.18, 0.28, 0.38, ... sec

My idea is then to reconstruct the x(t)'s with standard windowed sinc
interpolation and then resample at 10 Hz at the same time points for all
three signals. That will work, won't it?

Due to the non-real-time behaviour of the controller, the propagation
delay for sensor 3 is not constant, so delta3 is time dependent - basically
a random variable whose distribution I do not know. My intuition says that
since I'm analyzing offline and know the time of arrival of each sample
from sensor 3, I can do a better reconstruction by taking that into
account.

In uniform sinc reconstruction we place a scaled sinc function at every
sample, with zero crossings at the other samples, and compute the
superposition. In the non-uniform case there will be an error, since the
zero crossings will not occur at the other samples. How can this be
enforced?
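
In code, the reconstruct-and-resample step for the uniform case might look
roughly like this (a NumPy sketch only; the Hann taper, the 8-sample
half-width and all the names are arbitrary illustrative choices, not
anything standard):

import numpy as np

def windowed_sinc_eval(x, t0, Ts, t_eval, half_width=8):
    """Evaluate a uniformly sampled, bandlimited signal at arbitrary times.

    x          -- samples x[n] taken at times t0 + n*Ts
    t_eval     -- times (seconds) at which to reconstruct the signal
    half_width -- sinc taps used on each side of the target time
    """
    x = np.asarray(x, dtype=float)
    out = np.zeros(len(t_eval))
    for k, t in enumerate(t_eval):
        nc = int(round((t - t0) / Ts))             # nearest sample index
        n = np.arange(nc - half_width, nc + half_width + 1)
        n = n[(n >= 0) & (n < len(x))]             # stay inside the record
        u = (t - (t0 + n * Ts)) / Ts               # offsets in sample periods
        w = 0.5 + 0.5 * np.cos(np.pi * u / (half_width + 1))   # Hann taper
        out[k] = np.sum(x[n] * np.sinc(u) * w)     # superposition of sincs
    return out

# e.g. put sensor 2 (samples x2 taken at t0_2 + n*0.1 s) on sensor 1's grid:
#   t_common = np.arange(0.0, record_length, 0.1)
#   x2_common = windowed_sinc_eval(x2, t0_2, 0.1, t_common)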

Thanks in advance! 


On May 7, 6:46 pm, "Alfred Bovin" <alf...@bovin.invalid> wrote:
> [...]
>
> My idea is then to reconstruct the x(t)'s with standard windowed sinc
> interpolation and then resample at 10 Hz at the same time points for all
> three signals. That will work, won't it?
it should work if you know the actual sampling times for each sensor.
> [...]
>
> In uniform sinc reconstruction we place a scaled sinc function at every
> sample, with zero crossings at the other samples, and compute the
> superposition. In the non-uniform case there will be an error, since the
> zero crossings will not occur at the other samples. How can this be
> enforced?
for non-uniform sampling, you have a sorta problem.  you can try to
fit polynomials between the points, or non-uniformly "stretch" the
sinc function so that the zero crossings *do* go through the other
samples.  not necessarily a good way to do it.

r b-j
"robert bristow-johnson" <rbj@audioimagination.com> wrote in message 
news:d855f449-35ca-4e50-96d2-31c96f52d692@h11g2000vbo.googlegroups.com...
On May 7, 6:46 pm, "Alfred Bovin" <alf...@bovin.invalid> wrote:

>> In uniform sinc reconstruction we place a scaled sinc function at every
>> sample, with zero crossings at the other samples, and compute the
>> superposition. In the non-uniform case there will be an error, since the
>> zero crossings will not occur at the other samples. How can this be
>> enforced?
> for non-uniform sampling, you have a sorta problem.  you can try to
> fit polynomials between the points, or non-uniformly "stretch" the
> sinc function so that the zero crossings *do* go through the other
> samples.  not necessarily a good way to do it.
I guess a method that would be quite easy to implement would be to do a
Lagrange interpolation and read off the values at the same points on the
interpolated functions. Since sinc interpolation is the limit of Lagrange
interpolation for an infinite number of uniformly spaced samples, I could
then get an estimate/visualization of the error (aliasing) coming from
both the finite number of samples and the non-uniformity by Fourier
transforming the specific Lagrange transfer function and comparing it to
the ideal brickwall response of the sinc. Does that make sense?
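
For the uniform-spacing case as a reference point, that comparison might
look roughly like this (a NumPy sketch; the 4-point/cubic interpolator,
the oversampling factor and the names are illustrative choices only):

import numpy as np

def lagrange_point(xs, ys, x):
    # Evaluate the Lagrange polynomial through the points (xs, ys) at x.
    total = 0.0
    for i in range(len(xs)):
        num = np.prod([x - xs[j] for j in range(len(xs)) if j != i])
        den = np.prod([xs[i] - xs[j] for j in range(len(xs)) if j != i])
        total += ys[i] * num / den
    return total

# Effective impulse response of 4-point (cubic) Lagrange interpolation on a
# uniform grid: interpolate a unit impulse at n = 0 on a dense time axis
# (times are in units of the sample period Ts).
oversample = 64
u = np.arange(-2, 2, 1.0 / oversample)       # the kernel spans 4 samples
grid = np.arange(-8, 9)                      # integer sample positions
impulse = (grid == 0).astype(float)
kernel = np.empty_like(u)
for k, t in enumerate(u):
    n0 = int(np.floor(t))
    use = np.where((grid >= n0 - 1) & (grid <= n0 + 2))[0]   # 4 nearest samples
    kernel[k] = lagrange_point(grid[use], impulse[use], t)

# Compare |H(f)| of the interpolator with the brickwall of ideal sinc
# interpolation (f is in cycles per sample period, so 0.5 is 5 Hz here).
H = np.abs(np.fft.rfft(kernel, 8192)) / oversample
f = np.fft.rfftfreq(8192, d=1.0 / oversample)
brickwall = (f < 0.5).astype(float)
# e.g. plot 20*np.log10(H + 1e-12) and the brickwall against f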

Alfred Bovin wrote:

> This means that I'm basically getting the information from the three
> sources at different points on the time line. For offline processing
> purposes I need to interpolate and resample so that I get the
> measurements at the same points in time.
[...]
> In uniform sinc reconstruction we place a scaled sinc function at every
> sample, with zero crossings at the other samples, and compute the
> superposition. In the non-uniform case there will be an error, since the
> zero crossings will not occur at the other samples. How can this be
> enforced?
Good problem. You can proceed in two ways:

1. Build a fixed time grid with a fine step in time, so each sample time
can be assumed to fall at the nearest point of the grid. Do sinc
interpolation on this grid.

2. Use a polynomial interpolation of x(t). Lagrange interpolation through
unequally spaced points would be the simplest; however, that requires the
solution of a linear system of equations at every step.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
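
Under one reading of option 1, a rough NumPy sketch: snap every time stamp
to a fine grid, so the windowed-sinc values are only ever needed at
multiples of the fine step and can be precomputed once as a table (the
1 ms step, the Hann taper and all names here are placeholder choices):

import numpy as np

Ts = 0.1            # nominal sample period (10 Hz)
fine = 0.001        # fine time grid step: 1 ms
half_width = 8      # windowed-sinc taps on each side
steps_per_Ts = int(round(Ts / fine))

# Windowed sinc precomputed once, on the fine grid, over +/- half_width
# sample periods.
n_tab = 2 * half_width * steps_per_Ts + 1
u_tab = (np.arange(n_tab) - half_width * steps_per_Ts) / steps_per_Ts
h_tab = np.sinc(u_tab) * (0.5 + 0.5 * np.cos(np.pi * u_tab / (half_width + 1)))

def resample_via_fine_grid(x, t_stamps, t_eval):
    """Snap all times to the fine grid, then reconstruct x at the times
    t_eval using only lookups into the precomputed windowed-sinc table."""
    ti = np.round(np.asarray(t_stamps) / fine).astype(int)   # sample times, in fine steps
    out = np.zeros(len(t_eval))
    for k, t in enumerate(t_eval):
        tk = int(round(t / fine))                            # target time, in fine steps
        idx = (tk - ti) + half_width * steps_per_Ts          # table indices
        keep = (idx >= 0) & (idx < n_tab)                    # samples within the kernel span
        out[k] = np.sum(np.asarray(x)[keep] * h_tab[idx[keep]])
    return out

# e.g. sensor 3 (time stamps t3, readings x3) onto the common grid:
#   x3_common = resample_via_fine_grid(x3, t3, np.arange(0.0, record_length, 0.1))
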
robert bristow-johnson  <rbj@audioimagination.com> wrote:

> for non-uniform sampling, you have a sorta problem.  you can try to
> fit polynomials between the points, or non-uniformly "stretch" the
> sinc function so that the zero crossings *do* go through the other
> samples.  not necessarily a good way to do it.
There's a very good way to do this problem.

The Lagrangian interpolator is often used with equally spaced
abscissas, however it can just as easily be stated for
non-equally-spaced abscissas:

   f(x) =~ (sum over i=0 to n) (f(x[i]) * N[x] / ((x - x[i]) * D[i]))

where

   N[x] = (product from j=0 to n) (x - x[j])

   D[i] = (product from j=0 to n, j!=i) (x[i] - x[j])

You have to pick how many datapoints to use, but four (two before,
and two after the time point of interest) is often a good choice.

Steve
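
A direct NumPy transcription of that interpolator, as a sketch (the
four-point choice is from the post above; the edge handling and the names
are assumptions, and each term's numerator product skips j = i, which is
algebraically the same thing but avoids 0/0 when t falls exactly on a
sample time):

import numpy as np

def lagrange4(t_samp, x_samp, t):
    """Cubic Lagrange interpolation of the nonuniform samples
    (t_samp, x_samp) at time t, using the two samples before and the two
    after t (clamped near the ends of the record)."""
    t_samp = np.asarray(t_samp, dtype=float)
    x_samp = np.asarray(x_samp, dtype=float)
    i0 = np.searchsorted(t_samp, t)               # first time stamp >= t
    lo = min(max(i0 - 2, 0), len(t_samp) - 4)     # clamp near the ends
    ts, xs = t_samp[lo:lo + 4], x_samp[lo:lo + 4]
    out = 0.0
    for i in range(4):
        num = np.prod([t - ts[j] for j in range(4) if j != i])       # N(t)/(t - ts[i]), defined at t == ts[i] too
        den = np.prod([ts[i] - ts[j] for j in range(4) if j != i])   # D[i]
        out += xs[i] * num / den
    return out

# e.g. evaluate sensor 3 on the common 10 Hz grid:
#   x3_common = [lagrange4(t3, x3, t) for t in t_common]
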
Alfred Bovin <alfred@bovin.invalid> wrote:

> I guess a method that would be quite easy to implement would be to do a
> Lagrange interpolation and read off the values at the same points on the
> interpolated functions.
Yep.

Steve
Vladimir Vassilevsky  <nospam@nowhere.com> wrote:

> 2. Use a polynomial interpolation of x(t). Lagrange interpolation through
> unequally spaced points would be the simplest; however, that requires the
> solution of a linear system of equations at every step.
I don't think you need to solve a system of equations.

Steve
Thanks for your reply!

>> In uniform sinc reconstruction we place a scaled sinc function at every
>> sample, with zero crossings at the other samples, and compute the
>> superposition. In the non-uniform case there will be an error, since the
>> zero crossings will not occur at the other samples. How can this be
>> enforced?
>
> Good problem. You can proceed in two ways:
>
> 1. Build a fixed time grid with a fine step in time, so each sample time
> can be assumed to fall at the nearest point of the grid. Do sinc
> interpolation on this grid.
I'm not sure what you mean by that. How can I do sinc interpolation on a
grid finer than my sampling time? The superposition/convolution occurs at
the samples. What do I do in between samples?
> 2. Use a polynomial interpolation of x(t). Lagrange interpolation through
> unequally spaced points would be the simplest; however, that requires the
> solution of a linear system of equations at every step.
I think this would be a nice approach. I don't mind the computational cost
associated with it, since it will happen offline. Do you agree with my
comment about the associated error that I wrote in the reply to Robert's
first comment? I need to document my method, so it would be nice to have
something about the error. Thanks again.
"Steve Pope" <spope33@speedymail.org> wrote in message 
news:hs28p6$c95$1@blue.rahul.net...
> [...]
>
> The Lagrangian interpolator is often used with equally spaced
> abscissas, however it can just as easily be stated for
> non-equally-spaced abscissas:
>
>    f(x) =~ (sum over i=0 to n) (f(x[i]) * N[x] / ((x - x[i]) * D[i]))
>
> where
>
>    N[x] = (product from j=0 to n) (x - x[j])
>
>    D[i] = (product from j=0 to n, j!=i) (x[i] - x[j])
>
> You have to pick how many datapoints to use, but four (two before,
> and two after the time point of interest) is often a good choice.
Are you talking about a local polynomial interpolator here? I understand
the concept of "Lagrange interpolation" as the polynomial of least
possible degree that interpolates all the points in the data set.
Alfred Bovin <alfred@bovin.invalid> wrote:

>"Steve Pope" <spope33@speedymail.org> wrote in message
>> The Lagrangian interpolator is often used with equally spaced
>> abscissas, however it can just as easily be stated for
>> non-equally-spaced abscissas:
>>
>>    f(x) =~ (sum over i=0 to n) (f(x[i]) * N[x] / ((x - x[i]) * D[i]))
>>
>> where
>>
>>    N[x] = (product from j=0 to n) (x - x[j])
>>
>>    D[i] = (product from j=0 to n, j!=i) (x[i] - x[j])
>>
>> You have to pick how many datapoints to use, but four (two before,
>> and two after the time point of interest) is often a good choice.
> Are you talking about a local polynomial interpolator here? I understand
> the concept of "Lagrange interpolation" as the polynomial of least
> possible degree that interpolates all the points in the data set.
Yes, that is what you get. If you use four sample points, you get an
interpolated point on the cubic polynomial that goes through them.

A "Lagrangian interpolator" specifically is a calculation that uses a
ratio of two polynomials, one of which is the derivative of the other, as
in N/D in the above example. It's a way of computing a point on the target
polynomial (e.g. on the cubic curve if you're interpolating from four
points), without having to compute the coefficients of the polynomial.
This shows up all over the place, such as in algebraic decoding when
computing error values.

Steve
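
To make that concrete, a tiny NumPy check along those lines (the sample
values are made up; only the sensor-3 time stamps come from the example
earlier in the thread): the N/D evaluation and an explicit cubic fit
through the same four points give the same number, and the Lagrangian form
never computes the cubic's coefficients.

import numpy as np

t4 = np.array([0.08, 0.18, 0.28, 0.38])   # sensor-3 time stamps from the example
x4 = np.array([1.2, 0.7, -0.3, -0.9])     # made-up readings, illustration only
t = 0.2                                   # target time on the common 10 Hz grid

# Lagrangian form: f(t) ~= sum_i x4[i] * N(t) / ((t - t4[i]) * D[i]),
# with N(t) the nodal polynomial and D[i] = N'(t4[i]).
N = np.prod(t - t4)
D = np.array([np.prod(t4[i] - np.delete(t4, i)) for i in range(4)])
lagrangian = np.sum(x4 * N / ((t - t4) * D))

# Same point, obtained by explicitly fitting the cubic and evaluating it.
coeffs = np.polyfit(t4, x4, 3)
direct = np.polyval(coeffs, t)

print(lagrangian, direct)                 # agree to within rounding error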