Well, yes, I've read that one too. But I'm actually just referring to the
general case of regular cubic splines. Cubic splines are second-order
osculating: position and the first and second derivatives are all matched
at the knots that connect the piecewise polynomials. In this case it is
quite clear that the interpolation is C2-continuous. If we do not impose
constraints on the second derivative when constructing the spline (as with
cubic Hermites, for example), then the interpolation is C1-continuous.
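A quick numeric illustration of that distinction (my sketch; the sample values and the Catmull-Rom tangent rule are assumptions, not from the thread): two adjacent Catmull-Rom cubic Hermite segments agree in value and first derivative at their shared knot, but generally not in second derivative.

```python
# Sketch: a Catmull-Rom cubic Hermite matches value and first derivative
# at each knot (C1), but the second derivative can jump there -- unlike a
# "regular" cubic spline, which is constructed to be C2.

def hermite(p1, p2, m1, m2, t):
    """Cubic Hermite segment on [0,1] with end values p1,p2 and tangents m1,m2."""
    return ((2*t**3 - 3*t**2 + 1) * p1 + (t**3 - 2*t**2 + t) * m1
            + (-2*t**3 + 3*t**2) * p2 + (t**3 - t**2) * m2)

def hermite_d1(p1, p2, m1, m2, t):
    """First derivative of the segment."""
    return ((6*t**2 - 6*t) * p1 + (3*t**2 - 4*t + 1) * m1
            + (-6*t**2 + 6*t) * p2 + (3*t**2 - 2*t) * m2)

def hermite_d2(p1, p2, m1, m2, t):
    """Second derivative of the segment."""
    return ((12*t - 6) * p1 + (6*t - 4) * m1
            + (-12*t + 6) * p2 + (6*t - 2) * m2)

x = [0.0, 1.0, 4.0, 2.0, 3.0]                 # arbitrary sample values x[0..4]
tangent = lambda i: (x[i + 1] - x[i - 1]) / 2  # Catmull-Rom tangent at knot i

# Segment A spans knots 1..2, segment B spans knots 2..3; they meet at knot 2.
A = (x[1], x[2], tangent(1), tangent(2))
B = (x[2], x[3], tangent(2), tangent(3))

print(hermite_d1(*A, 1.0) == hermite_d1(*B, 0.0))  # True: C1 at the splice
print(hermite_d2(*A, 1.0) == hermite_d2(*B, 0.0))  # False: not C2
```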
But which degree of continuity is reached when using the sinc
interpolator? Ideally it is fully continuous, I guess, since the sinc has
the properties you described. But in practice? Doesn't it depend on
whether it is truncated or windowed, and at which zero crossing this is
done?
Just because the interpolation kernel (or impulse response, in this case
the sinc) is fully continuous, does that imply that the resulting
interpolation is fully continuous too?
This is confusing. Thanks.
This message was sent using the Comp.DSP web interface on
www.DSPRelated.com
Reply by robert bristow-johnson●September 29, 2005
in article GpCdnbQrNNy_OqbeRVn-hw@giganews.com, snappy at glad@kth.se wrote
on 09/29/2005 04:34:
> Speaking about continuity when using interpolating polynomials makes sense
> since we can set up equations of constraints for this when constructing
> the polynomials (such as for splines).
you might refer to Olli Niemitalo's paper on polynomial interpolation at
http://www.biochem.oulu.fi/~oniemita/dsp/deip.pdf
it is a definite improvement on an earlier paper on the same topic that
Duane Wise and i did sometime in the 90s. ours doesn't live on the web, but
i can send you a copy if you want.
> When using windowed sinc interpolation, does it make sense to talk about
> interpolation continuity?
sure.
> If so, how is this measured?
you mean how is this determined? the sinc() function is fully continuous
(the 0/0 singularity is a "removable" singularity) in all of its
derivatives. if the window is continuous (at least to the point where it
ends), then the windowed sinc() is continuous (at least to the point where
it ends).
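As a concrete check of this (my sketch; the half-width of 3.5 samples, deliberately not a zero crossing, and the Hann window are arbitrary choices): a rectangular truncation of the sinc leaves a jump at the edge, while a window that goes to zero at its ends keeps the truncated kernel continuous there.

```python
import math

# Sketch: a sinc truncated with a Hann window stays continuous at the
# kernel's ends, because the window itself goes smoothly to zero there;
# a plain rectangular truncation generally does not, unless it happens
# to cut exactly at a zero crossing of the sinc.

def sinc(t):
    """Normalized sinc with the removable singularity handled at t = 0."""
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

HALF = 3.5  # truncate at t = +/-3.5 -- deliberately NOT a zero crossing

def hann(t):
    """Hann window over [-HALF, HALF]; exactly zero at the ends."""
    return 0.5 * (1 + math.cos(math.pi * t / HALF)) if abs(t) <= HALF else 0.0

def rect_sinc(t):   # rectangular (plain) truncation
    return sinc(t) if abs(t) <= HALF else 0.0

def hann_sinc(t):   # windowed truncation
    return sinc(t) * hann(t)

eps = 1e-9
print(abs(rect_sinc(HALF - eps) - rect_sinc(HALF + eps)))  # jump, about 0.09
print(abs(hann_sinc(HALF - eps) - hann_sinc(HALF + eps)))  # ~0: continuous
```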
--
r b-j rbj@audioimagination.com
"Imagination is more important than knowledge."
Reply by Fred Marshall●September 29, 2005
"snappy" <glad@kth.se> wrote in message
news:GpCdnbQrNNy_OqbeRVn-hw@giganews.com...
> Speaking about continuity when using interpolating polynomials makes sense
> since we can set up equations of constraints for this when constructing
> the polynomials (such as for splines).
>
> When using windowed sinc interpolation, does it make sense to talk about
> interpolation continuity? If so, how is this measured?
It seems like it might.
I wonder what would happen if the question were posed as:
"When using windowed Dirichlet Kernel interpolation, does it make sense to
talk about interpolation continuity?"
It seems that the periodic nature of this "sinc" would remove the
discontinuities otherwise evident in windowing infinitely long sincs.
After all, this is often what we're doing (or should be doing?).
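A sketch of that idea (my code; the odd sample count and test values are assumptions): trigonometric interpolation with the periodic sinc (Dirichlet kernel) has no truncation edge at all, so no discontinuity is introduced by cutting off an infinitely long sinc.

```python
import math

# Sketch: Dirichlet-kernel ("periodic sinc") interpolation of N samples,
# for odd N, which keeps the kernel formula simple. The kernel is
# periodic, so nothing is truncated and no edge discontinuity appears.

def dirichlet(t, N):
    """Periodic sinc for odd N: 1 at t = 0 mod N, 0 at other integers."""
    s = math.sin(math.pi * t / N)
    if abs(s) < 1e-12:
        return 1.0                    # removable singularity
    return math.sin(math.pi * t) / (N * s)

def interp(x, t):
    """Trigonometric interpolation of one period of samples x at time t."""
    N = len(x)
    return sum(x[n] * dirichlet(t - n, N) for n in range(N))

x = [0.0, 1.0, 4.0, 2.0, 3.0]          # N = 5 samples (one period)
print(round(interp(x, 2.0), 12))       # hits the sample: 4.0
print(round(interp(x, 2.0 + 5), 12))   # periodic: same value one period later
```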
Fred
Reply by snappy●September 29, 2005
Speaking about continuity when using interpolating polynomials makes sense
since we can set up equations of constraints for this when constructing
the polynomials (such as for splines).
When using windowed sinc interpolation, does it make sense to talk about
interpolation continuity? If so, how is this measured?
Reply by I. R. Khan●January 23, 2004
> yeah, but where do we care where the polynomials match? when i'm
> interpolating between 1 and 2, i care about matching to x[1] and x[2] but
> why would i give a rat's ass to whether *this* polynomial hits at x[0] or
> at x[3]? when i'm interpolating between 2 and 3, *then* i'll care if i hit
> at x[3]. when i'm interpolating between 0 and 1, *then* i'll care if i hit
> at x[0].
>
> > If so,
> > would the 3rd order Hermite polynomial pass through 3 of the 4 points
> > and match the first derivative at one of those 4 points?
>
> it passes through the inner 2 points and misses on the outer 2. the
> Lagrange passes through all 4 but, when spliced to the adjacent Lagrange,
> the derivative is not guaranteed to be continuous.
>
> r b-j
>
Very good explanation, Robert.
Can Hermite polynomial based interpolation be used for interpolating 2D
data, like images? I would appreciate any references or links.
Ishtiaq.
Reply by robert bristow-johnson●January 22, 2004
In article buntf1$jv8i2$1@ID-210375.news.uni-berlin.de, Jon Harris at
goldentully@hotmail.com wrote on 01/22/2004 02:20:
> robert bristow-johnson <rbj@surfglobal.net> wrote in message
> news:BC34BC76.7F1E%rbj@surfglobal.net...
...
>> if you have sampled data, then it is reasonable to presume it was sampled
>> from a bandlimited continuous-time (or continuous-something-else) signal.
>> isn't a continuous-time signal also one with continuity of the function and
>> all its derivatives? if so, it seems to me a reasonable goal to try to
>> preserve the continuity of as many derivatives as possible. am i missing
>> something?
>
> I don't think you're missing anything, I'm just confused about the mechanics
> of the process.
> For instance: Say you are trying to do 3rd order Hermite interpolation. All
> you have is 4 data points and you need to calculate an intermediate point. So
> you supposedly construct a cubic polynomial that matches (some of) the
> original data points and their derivatives. What is the derivative of the
> original data points?
good question, but it's not matching the "original data" derivative since it
does not exist, of course, being discrete-time.
> It sounds like you are saying it is "the derivative of the continuous function
> that would be created by zero padding those 4 points infinitely, and then
> doing an ideal band-limited interpolation". Is that right?
okay, look at it this way: you have *5* points with indices 0, 1, 2, 3, and
4. those discrete samples are x[n] and the interpolated function is x(t)
(note the use of brackets for discrete-time and parentheses for
continuous-time).
so first, we take the first 4 points (x[0], x[1], x[2], x[3]) and create
some cubic polynomial, x(t), that we are interested in its behavior between
points 1 and 2. the actual normalized "time" value for the interpolation is
1+a where 0 <= a < 1.
then secondly, we take the next 4 points (x[1], x[2], x[3], x[4]) and create
some cubic polynomial, x(t), that we are interested in its behavior between
points 2 and 3. the actual normalized "time" value for the interpolation is
2+a where 0 <= a < 1. this cubic polynomial is spliced to the previous one
at x[2].
now we want x(t) to be continuous everywhere, along with as many derivatives
as we can get. so, at x(2) we want both polynomials, and as many derivatives
as possible, to be the same, since that is the point they share in common.
the only way for that to happen is for the polynomial values and as many
derivatives as possible *at* the point x(2) to depend on the values of the
*same* shared set of points, that is x[1], x[2], x[3].
so for the first cubic Hermite polynomial, when a=1, the polynomial value
must depend only on x[1], x[2], x[3] (and in fact, it depends only on x[2])
and the polynomial derivative must depend only on x[1], x[2], x[3] (and in
fact, it depends only on x[1] and x[3]).
and for the second cubic Hermite polynomial, when a=0, the polynomial value
must depend only on x[1], x[2], x[3] (and in fact, it depends only on x[2])
and the polynomial derivative must depend only on x[1], x[2], x[3] (and in
fact, it depends only on x[1] and x[3]).
that's the philosophy behind the cubic Hermite.
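That dependence claim is easy to verify numerically (my sketch; the Catmull-Rom form of the 4-point cubic Hermite and the sample values are assumptions): at a=1 the value depends only on x[2] and the derivative only on x[1] and x[3], so perturbing x[0] changes neither.

```python
# Sketch: for a Catmull-Rom style 4-point cubic Hermite, at the splice
# point a=1 the value depends only on x2 and the derivative only on x1
# and x3 -- changing x0 affects neither, which is exactly why adjacent
# segments agree in value and first derivative at the shared sample.

def hermite4(x0, x1, x2, x3, a):
    """Interpolate between x1 and x2 at fraction a; tangents (x2-x0)/2, (x3-x1)/2."""
    m1, m2 = (x2 - x0) / 2, (x3 - x1) / 2
    return ((2*a**3 - 3*a**2 + 1) * x1 + (a**3 - 2*a**2 + a) * m1
            + (-2*a**3 + 3*a**2) * x2 + (a**3 - a**2) * m2)

def hermite4_d(x0, x1, x2, x3, a):
    """Derivative of the segment with respect to a."""
    m1, m2 = (x2 - x0) / 2, (x3 - x1) / 2
    return ((6*a**2 - 6*a) * x1 + (3*a**2 - 4*a + 1) * m1
            + (-6*a**2 + 6*a) * x2 + (3*a**2 - 2*a) * m2)

# At a=1, neither the value nor the derivative cares what x0 is:
print(hermite4(0.0, 1.0, 4.0, 2.0, 1.0),
      hermite4(99.0, 1.0, 4.0, 2.0, 1.0))    # both 4.0 (= x2)
print(hermite4_d(0.0, 1.0, 4.0, 2.0, 1.0),
      hermite4_d(99.0, 1.0, 4.0, 2.0, 1.0))  # both 0.5 (= (x3-x1)/2)
```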
> Another point of confusion, I know that there is some fundamental theorem that
> states that there is only 1 cubic polynomial that passes through 4 points.
> And that would be the so-called Lagrange polynomial. So the Hermite and
> other osculating functions must not pass through all the points.
yes. it only passes through the two neighboring points (think of that as
matching the zeroth derivative) of the interval where you are interpolating.
> So do they trade off exact matches at points for exact matches of derivatives?
yeah, but where do we care where the polynomials match? when i'm
interpolating between 1 and 2, i care about matching to x[1] and x[2] but
why would i give a rat's ass to whether *this* polynomial hits at x[0] or at
x[3]? when i'm interpolating between 2 and 3, *then* i'll care if i hit at
x[3]. when i'm interpolating between 0 and 1, *then* i'll care if i hit at
x[0].
> If so,
> would the 3rd order Hermite polynomial pass through 3 of the 4 points and
> match the first derivative at one of those 4 points?
it passes through the inner 2 points and misses on the outer 2. the Lagrange
passes through all 4 but, when spliced to the adjacent Lagrange, the
derivative is not guaranteed to be continuous.
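My illustration of that last claim (the sample values are arbitrary): two adjacent cubic Lagrange fits each pass through all four of their points, but their slopes at the shared sample generally disagree, so the spliced curve has a corner there.

```python
# Sketch: the cubic Lagrange through 4 points hits all 4, but two adjacent
# Lagrange cubics spliced at a shared sample generally disagree in slope
# there -- the spliced interpolant is C0 but not C1.

def lagrange(ys, t):
    """Cubic through (0, ys[0]) .. (3, ys[3]), evaluated at t."""
    total = 0.0
    for i, y in enumerate(ys):
        w = 1.0
        for j in range(4):
            if j != i:
                w *= (t - j) / (i - j)
        total += y * w
    return total

x = [0.0, 1.0, 4.0, 2.0, 3.0]
h = 1e-6
# The segment built from x[0..3] ends at sample index 2 (local t = 2);
# the segment built from x[1..4] starts there (local t = 1).
slope_left  = (lagrange(x[0:4], 2.0) - lagrange(x[0:4], 2.0 - h)) / h
slope_right = (lagrange(x[1:5], 1.0 + h) - lagrange(x[1:5], 1.0)) / h
print(slope_left, slope_right)  # unequal: the spliced curve has a corner
```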
r b-j
Reply by robert egri●January 22, 2004
"Jon Harris" <goldentully@hotmail.com> wrote in message news:<bumukv$jo88n$1@ID-210375.news.uni-berlin.de>...
> I was thinking about osculating polynomial interpolation (of which Hermite
> is the first order member) in relation to sample rate conversion. As I
> understand it, osculating interpolation is a method where the interpolating
> function is a polynomial that matches a function's value(s) plus a certain
> number of its derivatives (0, 1, 2*, etc.) at those values. This makes good
> sense when you are approximating a continuous function, for example
> approximating log(x) with a polynomial. However, the concept of the
> derivative doesn't make sense when all you have is a few data points. What
> am I missing?
>
> -Jon
>
> * Typically 0 is known as Lagrange and 1 is known as Hermite interpolation
I think one way of thinking of this is to imagine an inductor L with
both a voltmeter in parallel and an ammeter in series, and you
simultaneously take samples of both voltage and current. Since V =
L dI/dt, you have now sampled both the function I and its 1st
derivative. The Hermite interpolator is now applicable to the set of
voltage and current samples you have taken to reconstruct the current
as a function of time. Notice that you do not need to perform any
numerical differentiation; instead the derivative, i.e., the voltage,
must be measured independently of the current.
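A toy version of that idea (my sketch, not robert egri's circuit; I use I(t) = sin(t) with its measured derivative cos(t) standing in for V/L): when the derivative is measured rather than estimated, a cubic Hermite segment uses both sample streams directly.

```python
import math

# Sketch: cubic Hermite reconstruction from measured values AND measured
# derivatives -- no numerical differencing of the samples is needed.
# The tangents must be scaled by the sample spacing h because the
# segment is parameterized on [0, 1].

def hermite_seg(y0, y1, d0, d1, a):
    """Cubic on [0,1] with end values y0, y1 and end derivatives d0, d1."""
    return ((2*a**3 - 3*a**2 + 1) * y0 + (a**3 - 2*a**2 + a) * d0
            + (-2*a**3 + 3*a**2) * y1 + (a**3 - a**2) * d1)

t0, t1 = 0.0, 0.5                 # sample instants, spacing h = 0.5
h = t1 - t0
y0, y1 = math.sin(t0), math.sin(t1)          # sampled "current" I(t)
d0, d1 = math.cos(t0) * h, math.cos(t1) * h  # measured dI/dt, scaled to [0,1]

t = 0.3                                      # reconstruct mid-interval
approx = hermite_seg(y0, y1, d0, d1, (t - t0) / h)
print(abs(approx - math.sin(t)))             # small reconstruction error
```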
Interestingly, one can extend the Nyquist sampling theorem to cases
you are asking about. Roughly speaking, it is possible to reconstruct
a bandlimited function if it is sampled at 2W rate, where W is its
bandwidth and the samples are "independent". Sampling here may mean
the function values or its derivatives (1st, 2nd, etc.), and these do
not all have to be taken at the same places.
Hope this helps.
Reply by Jon Harris●January 22, 2004
robert bristow-johnson <rbj@surfglobal.net> wrote in message
news:BC34BC76.7F1E%rbj@surfglobal.net...
> In article bumukv$jo88n$1@ID-210375.news.uni-berlin.de, Jon Harris at
> goldentully@hotmail.com wrote on 01/21/2004 17:37:
>
> > I was thinking about osculating polynomial interpolation (of which Hermite
> > is the first order member) in relation to sample rate conversion. As I
> > understand it, osculating interpolation is a method where the interpolating
> > function is a polynomial that matches a function's value(s) plus a certain
> > number of its derivatives (0, 1, 2*, etc.) at those values.
>
> yup.
>
> > This makes good
> > sense when you are approximating a continuous function, for example
> > approximating log(x) with a polynomial.
>
> i dunno. i use a Remez polynomial solution to do log2() and exp2() and
> sqrt() and the like.
>
> > However, the concept of the
> > derivative doesn't make sense when all you have is a few data points. What
> > am I missing?
>
> if you have sampled data, then it is reasonable to presume it was sampled
> from a bandlimited continuous-time (or continuous-something-else) signal.
> isn't a continuous-time signal also one with continuity of the function and
> all its derivatives? if so, it seems to me a reasonable goal to try to
> preserve the continuity of as many derivatives as possible. am i missing
> something?
I don't think you're missing anything, I'm just confused about the mechanics of
the process.
For instance: Say you are trying to do 3rd order Hermite interpolation. All
you have is 4 data points and you need to calculate an intermediate point. So
you supposedly construct a cubic polynomial that matches (some of) the original
data points and their derivatives. What is the derivative of the original data
points? It sounds like you are saying it is "the derivative of the continuous
function that would be created by zero padding those 4 points infinitely, and
then doing an ideal band-limited interpolation". Is that right?
Another point of confusion: I know that there is some fundamental theorem that
states that there is only 1 cubic polynomial that passes through 4 points. And
that would be the so-called Lagrange polynomial. So the Hermite and other
osculating functions must not pass through all the points. So do they trade off
exact matches at points for exact matches of derivatives? If so, would the 3rd
order Hermite polynomial pass through 3 of the 4 points and match the first
derivative at one of those 4 points?
> BTW, there are many other concerns for deciding polynomial interpolation
> over some other kind and which kind of polynomial to use. so i am not
> saying Hermite is the best, but i do think there is at least one attractive
> property of them.
Right. I'm just trying to get my brain around the concept in general. (Guess
where I heard of the concept in the first place--your paper. :-)
> r b-j
Jon
Reply by robert bristow-johnson●January 22, 2004
In article bumukv$jo88n$1@ID-210375.news.uni-berlin.de, Jon Harris at
goldentully@hotmail.com wrote on 01/21/2004 17:37:
> I was thinking about osculating polynomial interpolation (of which Hermite
> is the first order member) in relation to sample rate conversion. As I
> understand it, osculating interpolation is a method where the interpolating
> function is a polynomial that matches a function's value(s) plus a certain
> number of its derivatives (0, 1, 2*, etc.) at those values.
yup.
> This makes good
> sense when you are approximating a continuous function, for example
> approximating log(x) with a polynomial.
i dunno. i use a Remez polynomial solution to do log2() and exp2() and
sqrt() and the like.
> However, the concept of the
> derivative doesn't make sense when all you have is a few data points. What
> am I missing?
if you have sampled data, then it is reasonable to presume it was sampled
from a bandlimited continuous-time (or continuous-something-else) signal.
isn't a continuous-time signal also one with continuity of the function and
all its derivatives? if so, it seems to me a reasonable goal to try to
preserve the continuity of as many derivatives as possible. am i missing
something?
BTW, there are many other concerns for deciding polynomial interpolation
over some other kind and which kind of polynomial to use. so i am not
saying Hermite is the best, but i do think there is at least one attractive
property of them.
r b-j
Reply by Fred Marshall●January 21, 2004
"Jon Harris" <goldentully@hotmail.com> wrote in message
news:bumukv$jo88n$1@ID-210375.news.uni-berlin.de...
> I was thinking about osculating polynomial interpolation (of which Hermite
> is the first order member) in relation to sample rate conversion. As I
> understand it, osculating interpolation is a method where the interpolating
> function is a polynomial that matches a function's value(s) plus a certain
> number of its derivatives (0, 1, 2*, etc.) at those values. This makes good
> sense when you are approximating a continuous function, for example
> approximating log(x) with a polynomial. However, the concept of the
> derivative doesn't make sense when all you have is a few data points. What
> am I missing?
>
Jon,
Why would few (or many) data points make a difference? I'm thinking about
properly sampled data where the bandwidth was limited to begin with or where
the data points were discrete LPFd prior to interpolation.
[On occasion we've had folks suggest that a sine burst is bandlimited -
which of course it isn't. We had another case where a sinusoid at exactly
fs/2 was sampled. Until I saw the latter, I had always assumed that *any*
sequence of regular samples represents a bandlimited function that can be
reconstructed. That latter anomalous case convinced me otherwise. Sinc
reconstruction blows up in that case. ]
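A sketch of that anomaly (the numbers are mine, not Fred's): a sinusoid at exactly fs/2 sampled at t = n/fs yields A·sin(πn + φ) = (-1)^n · A·sin(φ), so amplitude and phase are confounded and two different continuous sinusoids produce identical samples.

```python
import math

# Sketch: sampling A*sin(2*pi*(fs/2)*t + phi) at t = n/fs gives
# (-1)**n * A*sin(phi). Any (A, phi) pair with the same A*sin(phi)
# yields the same sample sequence, so no reconstruction -- sinc or
# otherwise -- can recover the original sinusoid.

def samples(A, phi, N=8):
    """Samples of A*sin(2*pi*(fs/2)*t + phi) at t = n/fs (rounded to kill
    floating-point dust from sin at multiples of pi)."""
    return [round(A * math.sin(math.pi * n + phi), 12) for n in range(N)]

a = samples(1.0, math.pi / 6)      # A = 1, phi = 30 deg:   A*sin(phi) = 0.5
b = samples(2.0, math.asin(0.25))  # A = 2, phi = asin(1/4): also 0.5
print(a == b)                      # True: the samples cannot tell them apart
```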
To make sure anomalous things aren't buried in the data, perhaps an up-front
LPF with the widest possible bandwidth is indicated.
If you compute the DFT (of an even number of samples as in an FFT) and a
nonzero, or more than insignificant, value results at fs/2, then something
is wrong.
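A minimal version of that check (my toy data; no FFT library needed for the single bin): the fs/2 bin of an even-length DFT reduces to an alternating-sign sum of the samples.

```python
# Sketch: the Nyquist (fs/2) bin of an N-point DFT, N even, is just
# sum(x[n] * (-1)**n). A significant value there flags content at fs/2
# that the samples cannot represent unambiguously.

def nyquist_bin(x):
    """DFT bin at k = N/2 for a real, even-length sequence x."""
    return sum(v * (-1) ** n for n, v in enumerate(x))

clean = [0.0, 1.0, 0.0, -1.0] * 4   # sinusoid at fs/4, N = 16: nothing at fs/2
bad   = [1.0, -1.0] * 8             # alternating signs: pure fs/2 content
print(nyquist_bin(clean))           # 0.0
print(nyquist_bin(bad))             # 16.0
```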
The other thing that can be said is that some interpolators don't preserve
even the first derivative, which guarantees break points - and that wouldn't
result in a bandlimited function after interpolation.
So, if you've sampled the data properly then it must be bandlimited - or
pick your own definition from the "real" world. Even if you've sampled at
the lowest possible rate to generate "few" data points, the bandlimitation
has an effect on the derivatives, I do believe.
This is one perspective at least - that may answer your question.
Fred