DSPRelated.com
Forums

Converting non-Causal IIR to Causal IIR

Started by bdm711 April 23, 2005

robert bristow-johnson wrote:
> in article 426c0a27$1_1@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
> 04/24/2005 17:05:
>
>> B-splines are known to be essentially FIR filters
>
> boy, i think i know what B-splines are but they are really non-sequitur to
> FIR filters.
Not at all. In general b-splines are defined by a set of control points and a set of knots (sometimes called nodes). In your limited world the knots may well be only integers and therefore not separately and explicitly defined. That's your problem. At the knot locations (discrete points on a continuous curve) the spline is merely a convolution of the control points and the particular polynomial coefficients associated with that particular spline.
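jim's point, that at the knot locations the spline value is just a convolution of the control points with fixed polynomial coefficients, can be illustrated for the uniform cubic case, where those weights are the standard [1, 4, 1]/6. A minimal numpy sketch (the control-point values are arbitrary, chosen only for illustration):

```python
import numpy as np

# Uniform cubic B-spline evaluated at its integer knots: the curve value
# at knot k is a fixed weighted sum of neighboring control points,
#   s[k] = (c[k-1] + 4*c[k] + c[k+1]) / 6,
# i.e. a plain FIR convolution of the control points with [1, 4, 1]/6.
weights = np.array([1.0, 4.0, 1.0]) / 6.0

control = np.array([0.0, 1.0, 0.0, -2.0, 3.0, 1.0])  # arbitrary example
knot_values = np.convolve(control, weights, mode="valid")
print(knot_values)
```

Note the knot values are not the control points themselves, which is exactly the gap the IIR prefilter discussed below is meant to close.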
> in fact, like other polynomial interpolation schemes, B-splines are for
> interpolating between samples with infinite precision of delay (or the time
> in between samples), unlike polyphase in which we have a finite choice of
> fractional sample delays.  i guess, given a fractional delay, the B-spline,
> like any other polynomial interpolation and like any other polyphase
> interpolation, gives you an N tap FIR for an N-1 order spline.
The concept of delay has no particular meaning with splines. Splines are given a direction simply for the purpose of ordering and enumeration. That direction can be either way.
>> and a common method
>> for finding the coefficients that interpolate a set of points is to run
>> an IIR filter over the points once in one direction and once in the
>> reverse direction -
>
> this is FILTFILT.  fine.
>
>> the result are the control points for the spline
>> that passes thru the original points.
>
> but what or how is this that running an IIR filter over some data and again
> in reverse (assuming the IIR filter elongates the data by a finite amount),
> how does that give us control points for the spline that passes thru the
> original points?  this has to be some particular IIR, right Jim?
Yes, they would be the inverse of the FIR filter.
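A sketch of what that inverse amounts to for the uniform cubic B-spline, assuming a periodic boundary for simplicity: solve the banded system ([1, 4, 1]/6) * c = x for the control points c. The function name `bspline3_control_points` is mine, not from the thread; the forward/backward one-pole recursion that FILTFILT would run (pole at sqrt(3) - 2) is the streaming equivalent of this direct solve.

```python
import numpy as np

# To make a uniform cubic B-spline pass *through* given samples x, solve
# for control points c satisfying ([1,4,1]/6) * c = x.  Periodic boundary
# assumed, so the system matrix is circulant and always invertible.
def bspline3_control_points(x):
    n = len(x)
    A = np.zeros((n, n))
    for k in range(n):
        A[k, (k - 1) % n] = 1.0 / 6.0
        A[k, k] = 4.0 / 6.0
        A[k, (k + 1) % n] = 1.0 / 6.0
    return np.linalg.solve(A, x)

x = np.array([0.0, 1.0, 0.0, -1.0, 0.5, 2.0])   # arbitrary samples
c = bspline3_control_points(x)

# Check: re-convolving the control points with [1,4,1]/6 recovers the
# original samples, i.e. the spline interpolates *through* the points.
n = len(c)
recon = np.array([(c[(k - 1) % n] + 4 * c[k] + c[(k + 1) % n]) / 6.0
                  for k in range(n)])
print(np.max(np.abs(recon - x)))
```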
> the frequency response of an (N-1)th order B-spline is sinc^N(f)
> (normalized f).
Well the frequency response (for that discrete set of points known as knots) could be lots of things depending on the type of spline.
> how does an IIR invert that frequency response perfectly so that after the
> IIR, applying the B-spline hits the original points perfectly?  or do you
> intend to mean approximately?
I meant precisely - to the level of precision that one has at one's disposal. I guess that would be approximately perfect.
> the sinc^N(f) function has N compound zeros
> at every multiple of j (except 0) on the s-plane.  how could a digital IIR
> put in poles close to those zeros without first oversampling?
You will have to explain that. I'm not sure what sinc^N(f) stands for exactly. I'm guessing I've never seen a spline with that frequency response. -jim
> sorry to be curious, but i cannot understand what you're saying, Jim.
> (wide open for invective response.)
>
> --
>
> r b-j                  rbj@audioimagination.com
>
> "Imagination is more important than knowledge."
----== Posted via Newsfeeds.Com - Unlimited-Uncensored-Secure Usenet News==----
http://www.newsfeeds.com  The #1 Newsgroup Service in the World!  120,000+ Newsgroups
----= East and West-Coast Server Farms - Total Privacy via Encryption =----
jim wrote:

   ...

> Seems kind of odd to me to attempt to describe non-causal using terms
> like 'future', 'past' and 'delay' that are intended for describing
> causal systems.
Odd or not, we do it all the time.  A sampled truncated sinc provides
coefficients for a low-pass filter.  Being symmetric about time=zero, it's
not possible to apply that filter to real-time streaming data.  Adding delay
to the filter makes it causal, allowing it to work.  If I add insufficient
delay to it, it has both delay and non-causality.

Jerry
--
Engineering is the art of making what you want from things you can get.
in article 426cd9ec$1_2@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
04/25/2005 07:51:

> robert bristow-johnson wrote:
>>
>> in article 426c0a27$1_1@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
>> 04/24/2005 17:05:
>>
>>> B-splines are known to be essentially FIR filters
>>
>> boy, i think i know what B-splines are but they are really non-sequitur to
>> FIR filters.
>
> Not at all.  In general b-splines are defined by a set of control points
> and a set of knots (sometimes called nodes).
... it's because we have used B-splines for ostensibly different purposes
(albeit i think nearly the same given similar conditions).  the language you
are using appears a lot like that in http://en.wikipedia.org/wiki/B-spline .
the language i've been using is in the context of resampling or
sample-rate-conversion or intersample interpolation, etc. of equally spaced
discrete-time samples.  it would be about "B-splines" as opposed to Lagrange
interpolation or Hermite polynomial interpolation.  i can tell in the above
link that the 0th order and 1st order is the same as what i think B-splines
are.  i couldn't understand if the extended order was the same as my
understanding.

my understanding of what B-splines are for interpolation of equally spaced
samples, is that a 0th order B-spline is piecewise constant or "zero-order
hold" or "drop-sample" interpolation.  for 1st order, it's linear
interpolation (which happens to be a constant height pulse (of 1 sample
width) convolved with an identical constant height pulse).  the (N+1)th
order B-spline polynomial is the Nth order B-spline convolved with another
constant height pulse.

unlike Hermite or Lagrange, B-spline interpolation will NOT go through the
original samples (because the convolving function does not go through zero
for the non-zero integer sample times).  as the order increases, Hermite and
Lagrange will look more and more like a sinc() function and the
interpolation will become more and more like the ideal.  B-spline will look
more and more like a Gaussian function; it never takes on a negative value.
B-spline interpolation *filters* the input samples a lot more than the
others.  but there is an advantage, esp. if you prefilter the input samples
to compensate for the LPF effect of B-spline interpolation.
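The convolve-a-box-with-itself construction described above can be checked numerically. A sketch on a fine grid (the grid density M is an arbitrary choice); it also shows the nonzero values at the integer sample times, which is why B-spline interpolation misses the original samples:

```python
import numpy as np

# Build B-spline kernels by repeatedly convolving a height-1 box of width 1,
# sampled on a grid of spacing dt.  Order 0 is the box itself (zero-order
# hold); each extra convolution raises the order by one and multiplies the
# spectrum by another sinc(f).
M = 100                      # grid points per unit interval (arbitrary)
dt = 1.0 / M
box = np.ones(M)             # height-1 pulse, support [0, 1)

k = box.copy()
for _ in range(3):           # three more convolutions -> cubic (order 3)
    k = np.convolve(k, box) * dt

# k now approximates the cubic B-spline kernel on support [0, 4):
# peak 2/3 at t = 2, and value 1/6 at the neighboring integers t = 1, 3 --
# nonzero, so the interpolation does not pass through the original samples.
# The kernel is also nonnegative everywhere, unlike a sinc.
print(k[2 * M], k[M], k.min())
```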
>> the sinc^N(f) function has N compound zeros
>> at every multiple of j (except 0) on the s-plane.  how could a digital IIR
>> put in poles close to those zeros without first oversampling?
>
> You will have to explain that.  I'm not sure what sinc^N(f) stands for
> exactly.  I'm guessing I've never seen a spline with that frequency
> response.
i mean the sinc(f) function (with normalized f, with Fs = 1) raised to the
Nth power.  each time you convolve with a constant height pulse in the
time-domain, you multiply the spectrum by sinc(f).  it's because the (N-1)th
order B-spline interpolation had a spectrum that was purely sinc^N(f), with
no other functions added to it gunking up those deep notches where f =
integer, not zero, that had the effect of killing the images better than any
other (N-1)th order polynomial interpolation.  that's why B-splines had some
attention paid to them by audio DSP types like me.

Duane Wise and i did a paper about this in the 90s, but Olli Niemitalo
picked that up and did a better job:

http://www.biochem.oulu.fi/~oniemita/dsp/deip.pdf
or maybe
http://ldesoras.free.fr/doc/articles/resampler.pdf

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."

robert bristow-johnson wrote:
> it's because we have used B-splines for ostensibly different purposes
> (albeit i think nearly the same given similar conditions).
...
> my understanding of what B-splines are for interpolation of equally spaced
> samples, is that a 0th order B-spline is piecewise constant or "zero-order
> hold" or "drop-sample" interpolation.  for 1st order, it's linear
> interpolation (which happens to be a constant height pulse (of 1 sample
> width) convolved with an identical constant height pulse).  the (N+1)th
> order B-spline polynomial is the Nth order B-spline convolved with another
> constant height pulse.
Actually it sounds to me like what you are describing is a Catmull-Rom
spline.  But this doesn't have much to do with the point I was making about
causality.  The problem scenario I was discussing was: given a set of
points, how does one go about constructing a b-spline that passes thru those
points precisely?  The method I described is the standard way of doing that.
This is called interpolating thru points.  Or, put another way: given a set
of points that lie on the curve, how do you find a set of control points
that will produce such a curve?  What you are apparently talking about is
interpolating between points, where the points you are interpolating between
are the control points of the curve, which do not lie on the curve.
> i mean the sinc(f) function (with normalized f, with Fs = 1) raised to the
> Nth power.  each time you convolve with a constant height pulse in the
> time-domain, you multiply the spectrum by sinc(f).  it's because the (N-1)th
> order B-spline interpolation had a spectrum that was purely sinc^N(f), with
> no other functions added to it gunking up those deep notches where f =
> integer, not zero, that had the effect of killing the images better than
> any other (N-1)th order polynomial interpolation.  that's why B-splines had
> some attention paid to them by audio DSP types like me.
Sounds like what you are calling a sinc is really a Dirichlet function (sometimes called aliased sinc) and only a 2 point one at that. That function when squared repeatedly is not going to even vaguely resemble a true sinc function. -jim
in article 426edde2$1_1@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
04/26/2005 20:32:

> Actually it sounds to me like what you are describing is a Catmull-Rom
> spline.
there were other papers, preceding the one i was working on (i seem to remember Zoelzer was one author), they called this "B-spline". i'll look up "Catmull-Rom", if i can find a web reference.
> The method I described is the standard way
> of doing that.  This is called interpolating thru points.
there are *many* ways of interpolating through points.  Lagrange and Hermite
polynomials are two examples.  i guess the ideal for uniform sampled data is
the sinc() function, which is not practical (since it is infinitely long).
now "B-spline", as i and other authors have used the term, does *not*
precisely interpolate through the given samples.  the B-spline polynomial is
always positive, and an interpolation function (or "impulse response") that
interpolates *through* the given samples *must* take on zero values at every
integer that is not zero (time is normalized to the sample time).

the B-spline interpolation (as i and some other authors have used the term)
heavily low-pass filters the data as it interpolates.  it's the price you
pay to have the absolute best and deepest notches for all of the images of
the original spectrum (you wanna kill those images because when you
resample, those images can fold back into the baseband and decrease S/N).
this LPFing can be compensated a little with some prefiltering, but i never
heard of a perfect inverse filter.
> Or put another
> way given a set of points that lie on the curve how do you find a set of
> control points that will produce such a curve.
those control points will not be the original points, no?
> What you are apparently talking about is interpolating between points.
yes.
> Where the points you are interpolating between are the control points of
> the curve which do not lie on the curve.
yes.  i think so.  (i didn't use the terminology of control points.)  the
whole issue was the resampling of audio (which has most of its energy in the
bottom 5 or 6 octaves or even lower) with an *arbitrary* sample rate ratio.
that's why a windowed sinc() or some other table lookup for the FIR
coefficients was not in the cards.
> Sounds like what you are calling a sinc is really a Dirichlet function
> (sometimes called aliased sinc) and only a 2 point one at that.  That
> function when squared repeatedly is not going to even vaguely resemble a
> true sinc function.
no, it's a sinc().  no repeating or periodicity.  and yes, a sinc^N
function doesn't look much like a sinc() when the power N gets large.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."

robert bristow-johnson wrote:
> in article 426edde2$1_1@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
> 04/26/2005 20:32:
>
>> Actually it sounds to me like what you are describing is a Catmull-Rom
>> spline.
>
> there were other papers, preceding the one i was working on (i seem to
> remember Zoelzer was one author), they called this "B-spline".  i'll look up
> "Catmull-Rom", if i can find a web reference.
The b in b-spline stands for basis, and they all have bases, so one could
have whatever basis one wants.  But b-splines were originally conceived of
and developed for computer graphics and computer aided design (CAD) and were
derived from Beziers, which are equivalent to Bernstein polynomials.  So the
basis in b-spline, as generally accepted, is derived from the Bezier basis.
> there are *many* ways of interpolating through points.  Lagrange and
> Hermite polynomials are two examples.  i guess the ideal for uniform
> sampled data is the sinc() function, which is not practical (since it is
> infinitely long).  now "B-spline", as i and other authors have used the
> term, does *not* precisely interpolate through the given samples.
...
> this LPFing can be compensated a little with some prefiltering, but i never
> heard of a perfect inverse filter.
As I said, a b-spline is defined by a list of control points (sometimes
called de Boor points) and a list of knots.  If you allow the knot list to
be something other than integers (which most users of b-splines do), there
are an infinity of such filters.  Your references to spectrum and images and
sincs are so vague as to be meaningless to me.

If you construct a b-spline in the manner you describe (using your samples
as the control points) and then sample the spline at the same sample rate
(1:1 resample), each new sample point is merely a weighted sum of a set of
control points in the neighborhood of that sample.  I.e., it's just a simple
convolution operation, and the frequency response (and its inverse) is
calculated as any other FIR filter's response is calculated.

If the spline is sampled at some other rate (a concept which you introduced
into the discussion), then the frequency response of that operation is
calculated based on the new and old sample rates in a way not too different
from a polyphase resampling scheme.

-jim
in article 4270cfc1$1_2@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
04/28/2005 07:56:

> Your references to spectrum and images and sincs is so vague as to be
> meaningless to me.
not surprised.  but you know that the F.T. of a single, rectangular pulse in
continuous-time, with width of T (the sample time), is sinc(f*T), right?
that's the Zero-Order-Hold or drop-sample interpolation.  and each time you
convolve with another identical rectangular pulse, you multiply by another
sinc(f*T) function, right?  so an (N-1)th order piecewise polynomial
interpolation constructed this way will have a spectrum of sinc^N(f*T),
right?

so the notches in the sinc^N function (which are at non-zero multiples of
1/T) get deeper and wider with increasing N, right?  and those notches will
kill the ideal sampling images of the original spectrum before resampling,
right?  any other piecewise (N-1)th order interpolation polynomial will have
sinc^N, but it will also have lower orders of sinc^n, therefore the notches
won't be quite as deep as the pure sinc^N(f*T), right?  so the "B-spline" as
we have used the term is sorta optimal in this way.  now do you get it?
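The deepening of those notches with N can be eyeballed numerically. A small sketch (the test frequency 0.95 is an arbitrary point just below the first image at f = 1):

```python
import numpy as np

# Magnitude of sinc(f)^N for normalized f (T = 1): the notch at each
# integer f gets deeper and wider as N grows, which is what attenuates
# the sampling images when resampling.
def sincN(f, N):
    return np.abs(np.sinc(f)) ** N   # np.sinc(f) = sin(pi*f)/(pi*f)

f = 0.95                             # just off the first image
for N in (1, 2, 4):
    print(N, 20 * np.log10(sincN(f, N)), "dB")
```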
> If you construct a b-spline in the manner you
> describe (using your samples as the control points) and then sample the
> spline at the same sample rate (1:1 resample), each new sample point is
> merely a weighted sum of a set of control points in the neighborhood of
> that sample.
yup. the language i use is that the input gets heavily filtered. all of the weighting coefficients are positive.
> I.E. its just a simple convolution operation, and the
> frequency response (and its inverse) is calculated as any other FIR
> filters response is calculated.
not that easy, but somewhat doable with remez() or firls().  besides,
originally we were talking about IIR and *you* changed this to FIR.
> If the spline is sampled at some other rate (a concept which you
> introduced into the discussion)
only to tell you what i thought "B-splines" were about, which i didn't bring
into the discussion.
> then the frequency response of that
> operation is calculated based on the new and old sample rate in a way
> not to different from a polyphase resampling scheme.
it *is* about polyphase resampling, but with one difference from the
polyphase we normally talk about.  using linear or some other polynomial
interpolation, we can resample at *any* rate.  we are not limited to a
finite set of inter-sample sample-times or "delays".  if we're doing SRC at
an arbitrary conversion rate, we can use traditional polyphase to optimally
"upsample" by some large integer, M.  then the images will be narrow little
copies at integer multiples of M/T, then we use some function that we can
define with infinite precision (like piecewise linear) to get to whatever
precision of time in between samples that you want.  and, because of the
notches of the sinc(f*(T/M)) function, those little narrow images will be
killed by the notches.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
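The two-stage scheme above can be sketched in a few lines of numpy. FFT zero-padding stands in here for the optimal polyphase upsampler, and N, M, and the test sine are arbitrary choices for illustration:

```python
import numpy as np

# Stage 1: upsample by integer M (band-limited, via FFT zero-padding).
# Stage 2: piecewise-linear interpolation -- definable at *any* fractional
# time -- between the dense samples.  The linear stage's sinc^2 notches
# line up with the narrow images left at multiples of the dense rate.
N, M = 256, 8
n = np.arange(N)
x = np.sin(2 * np.pi * 4 * n / N)           # exactly periodic, low-freq sine

X = np.fft.rfft(x)
Xz = np.zeros(N * M // 2 + 1, dtype=complex)
Xz[:len(X)] = X
dense = np.fft.irfft(Xz, n=N * M) * M       # band-limited upsample by M

t_dense = np.arange(N * M) / M              # dense grid, input-sample units
t_new = np.linspace(0.0, N - 1, 1000)       # arbitrary output times
y = np.interp(t_new, t_dense, dense)        # linear interp between dense taps

err = np.max(np.abs(y - np.sin(2 * np.pi * 4 * t_new / N)))
print(err)
```

Because the signal is both low-frequency and densely oversampled before the linear stage, the interpolation error stays tiny despite linear interpolation being a crude kernel on its own.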

robert bristow-johnson wrote:
> in article 4270cfc1$1_2@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
> 04/28/2005 07:56:
>
>> Your references to spectrum and images and sincs is so vague as to be
>> meaningless to me.
>
> not surprised.  but you know that the F.T. of a single, rectangular pulse in
> continuous-time, with width of T (the sample time), is sinc(f*T), right?
> that's the Zero-Order-Hold or drop-sample interpolation.  and each time you
> convolve with another identical rectangular pulse, you multiply by another
> sinc(f*T) function, right?  so an (N-1)th order piecewise polynomial
> interpolation constructed this way will have a spectrum of sinc^N(f*T),
> right?
Yes, I understand what you are claiming happens in the frequency domain.
What I haven't seen is any actual description of the physical operation
performed in the time domain that would produce this sort of frequency
response.

Take for example the simplest spline, where you just construct a straight
line from each point to the next.  If I sample this curve halfway between 2
points, the result is just the average of the 2 points.  If I move ahead one
sample width and take another sample, I again get the average.  If I sample
the whole curve in this manner, it's just as if I filtered the whole set of
points with the FIR filter [1 1]/2.  The frequency response of that filter
is going to be periodic.  This is a complete, albeit in its simplest form,
description of the actual process of sampling a spline.  Where exactly do
these continuous identical rectangular pulses you're talking about appear in
this process?

-jim
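The [1 1]/2 claim above is easy to verify directly. A minimal sketch with arbitrary sample values:

```python
import numpy as np

# Sampling a piecewise-linear spline halfway between each pair of points
# gives the pairwise averages -- exactly the output of the FIR [1, 1]/2.
x = np.array([0.0, 2.0, 1.0, 5.0, 3.0])

midpoints_direct = (x[:-1] + x[1:]) / 2.0               # geometric midpoints
midpoints_fir = np.convolve(x, [0.5, 0.5], mode="valid")  # FIR view
print(midpoints_direct, midpoints_fir)
```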
in article 4271117f$1_2@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
04/28/2005 12:37:

> robert bristow-johnson wrote:
...
>> but you know that the F.T. of a single, rectangular pulse in
>> continuous-time, with width of T (the sample time) is sinc(f*T), right?
>> that's the Zero-Order-Hold or drop-sample interpolation.  and each time you
>> convolve with another identical rectangular pulse, you multiply by another
>> sinc(f*T) function, right?  so an (N-1)th order piecewise polynomial
>> interpolation constructed this way will have a spectrum of sinc^N(f*T),
>> right?
>
> Yes, I understand what you are claiming happens in the frequency domain.
> What I haven't seen is any actual description of the physical operation
> performed in the time domain that would produce this sort of frequency
> response.
actually, i think we both understand what is happening, but from different POVs. it's still the same outcome, if the same conditions are applied.
> Take for example the simplest spline, where you just construct a
> straight line from each point to the next.
otherwise called "linear interpolation". i would not call this the simplest spline, i would call "zero-order hold" or "drop-sample" interpolation the simplest spline. using your language, that would be like discrete convolving with [1].
> If I sample this curve half
> way between 2 points the result is just the average of the 2 points.  If
> I move ahead one sample width and take another sample I again get the
> average.  If I sample the whole curve in this manner its just as if I
> filtered the whole set of points with the FIR filter [1 1]/2.  The
> frequency response of that filter is going to be periodic.
sure. you are implementing a sorta fractional-sample fixed delay (which is an
application of interpolation). that is a purely LTI operator, so what you
have is a filter with phase or group delay of 1/2 sample and a cos(pi*f*T)
magnitude response (unnormalized f, so T is the sampling period and 1/(2T)
is the Nyquist freq). that will be the Dirichlet kernel with N=2.
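[editorial sketch, not part of the original post: the claim above is easy to
check numerically. the 2-tap FIR [1 1]/2 should show magnitude cos(pi*f*T)
and a flat half-sample delay; the frequency grid below is arbitrary.]

```python
import numpy as np

# 2-tap FIR [1, 1]/2: sampling the linear-interpolation spline halfway
# between points. expect |H(f)| = cos(pi*f*T) and phase -pi*f*T
# (a constant half-sample delay). T = 1 here.
T = 1.0
f = np.linspace(0.0, 0.5 / T, 101)          # 0 .. Nyquist
w = 2.0 * np.pi * f * T
H = 0.5 + 0.5 * np.exp(-1j * w)             # DTFT of [1, 1]/2
assert np.allclose(np.abs(H), np.cos(np.pi * f * T))
# phase check away from the Nyquist null, where angle() is ill-defined
inner = slice(1, -1)
assert np.allclose(np.unwrap(np.angle(H[inner])), -w[inner] / 2.0)
```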
> This is a complete, albeit in its simplest form, description of the actual
> process of sampling a spline.
not from my POV. you are fixing your sample times to be at this fixed delay
of 1/2 sample and what you essentially have is a filter. there are no issues
of SRC noise or error from resampling that happen in the real process of
sample-rate conversion.
> Where exactly do these continuous identical rectangular pulses you're
> talking about appear in this process?
i am generalizing a little more (so as to include the application of
resampling or sample-rate conversion). i had been talking about defining the
interior space between samples for all t, not just t = (n+0.5)*T. here i am
using the language of that sampling/reconstruction treatise i periodically
post to comp.dsp:

http://groups-beta.google.com/group/comp.dsp/msg/f9e0ad7b430bc653?hl=en

(view both with a fixed-width font.)

let

    d(t)  =def  "dirac delta function"
    T     =def  "sampling period"
    x[k]  =def  x(k*T)
    Fs    =def  1/T

the sampling function:

               +inf
    q(t) = T * SUM{ d(t - k*T) }
               k=-inf

               +inf
         =     SUM{ exp(j*2*pi*n*Fs*t) }
               n=-inf

we know that when you filter the ideal sampled signal

                    +inf
    x(t)*q(t) = T * SUM{ x[k] * d(t - k*T) }
                    k=-inf

                    +inf
                =   SUM{ x(t) * exp(j*2*pi*n*Fs*t) }
                    n=-inf

with an ideal brickwall filter (with cutoff at Nyquist)

           { 1   for  |f| < Fs/2 = 1/(2*T)
    H(f) = {
           { 0   for  |f| > Fs/2 = 1/(2*T)

    h(t) = (1/T)*sinc(t/T)

that fully recovers x(t) (assuming it was sufficiently bandlimited before it
was sampled), so the ideal interpolation function for getting x(t) from x[n]
is

               +inf
    x(t) = T * SUM{ x[k] * h(t - k*T) }
               k=-inf

               +inf
         =     SUM{ x[k] * sinc((t - k*T)/T) }
               k=-inf

               +inf
         =     SUM{ x[m+k] * sinc((t/T - m) - k) }
               k=-inf

where m is any integer; we might conveniently choose it to be
m = floor(t/T).

using that ideal sinc function utterly wipes out all images, and leaves the
original spectrum untouched. so when we resample x(t) at another synchronous
or asynchronous sample rate (let's say a higher rate so that we don't have
to argue about aliasing), the resampling will be perfect because no images
are folded back into the baseband in the resampling process.
we also know that the ideally sampled signal above has the spectrum

                        +inf
    FT{ x(t)*q(t) }  =  SUM{ X(f - n*Fs) }
                        n=-inf

where

                               +inf
    X(f) =def FT{ x(t) } = integral{ x(t)*exp(-j*2*pi*f*t) dt}
                               -inf

now, we can't use the ideal sinc() for resampling, for two reasons:

1.  it goes on forever. we can't sum an infinite sum. gonna have to
    truncate (or window) it somewhere.

2.  suppose we window it:

                     +L
    x((m+a)*T)  =    SUM{ x[m+k] * sinc(a-k) * w((a-k)/L) }
                     k=-L+1

where

    t = (m + a)*T
    m = floor(t/T)
    a = fract(t/T) = t/T - floor(t/T),    0 <= a < 1

and w(t) is a window function of non-zero width of 2L. L should be an
integer and i found L=16 to be good enough, but the AD1890 uses L=32 (a 64
tap polyphase FIR).
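[editorial sketch: the windowed-sinc interpolator above in code. the Hann
window and L=16 below are my own choices for illustration; the post leaves
w(t) unspecified.]

```python
import numpy as np

# windowed-sinc fractional-delay interpolation, per the sum above:
#   x((m+a)*T) = SUM_{k=-L+1..L} x[m+k] * sinc(a-k) * w((a-k)/L)
def w(t):
    # Hann window, nonzero for |t| < 1 (i.e. |a-k| < L after the 1/L scale)
    return np.where(np.abs(t) < 1.0, 0.5 + 0.5 * np.cos(np.pi * t), 0.0)

def interp(x, t_over_T, L=16):
    m = int(np.floor(t_over_T))
    a = t_over_T - m                         # 0 <= a < 1
    k = np.arange(-L + 1, L + 1)             # 2L taps
    g = np.sinc(a - k) * w((a - k) / L)
    return np.dot(x[m + k], g)               # assumes m is away from the ends

# check on a bandlimited sinusoid sampled at T = 1
n = np.arange(256)
x = np.sin(2 * np.pi * 0.05 * n)
t = 100.37                                   # arbitrary fractional time
y = interp(x, t)
assert abs(y - np.sin(2 * np.pi * 0.05 * t)) < 1e-3
```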
even if we window it so the summation is finite, if we are going to
interpolate at any arbitrary fractional time, "a", we would still need an
infinite number of sets of coefficients (each a vector of length 2L) to
define

    g(a-k) = sinc(a-k)*w((a-k)/L)

for all of the infinite number of values of "a" that are 0 <= a < 1. so,
somehow, we have to approximate

    g(a) = sinc(a)*w(a/L)

with a function that can be evaluated with a computer for any value of "a"
within -L < a <= L. in general, these will turn out to be piecewise
finite-order polynomials.

the simplest crude approximation is zero-order hold or "drop-sample"
interpolation. for symmetry's sake, let's say we "round" to the nearest
sample. that would be approximating g(a) with g(a) ~= p0(a) where

            { 1    -1/2 <= a < 1/2
    p0(a) = {
            { 0    otherwise

that means we are convolving

                    +inf
    x(t)*q(t) = T * SUM{ x[k] * d(t - k*T) }
                    k=-inf

with

    h[0](t) = 1/T * p0(t/T)

getting us

                +inf
    x(t) ~= T * SUM{ x[k] * h[0](t - k*T) }
                k=-inf

and the spectrum

                        +inf
    FT{ x(t)*q(t) }  =  SUM{ X(f - n*Fs) }
                        n=-inf

is getting multiplied by

    H[0](f) = sinc(f*T)

and then we are resampling that with a sampling frequency that is unrelated
to the original Fs. then the images of H[0](f)*FT{x(t)*q(t)}, all of them
for n not 0, get reduced by the notches in the sinc() function of H[0](f).
all of those reduced images can be potentially folded into the baseband with
an arbitrary new Fs. i can show that for a 120 dB S/N ratio, the original
x(t) would have to be oversampled by a factor of 512K so that all of those
images are narrow enough to be killed enough so that their energy is 120 dB
less than that of x(t).

the next simplest is "linear interpolation". now we are approximating
g(a) ~= p1(a) where

            { 1-|a|    |a| < 1
    p1(a) = {
            { 0        otherwise

now the ideally sampled continuous-time function

                    +inf
    x(t)*q(t) = T * SUM{ x[k] * d(t - k*T) }
                    k=-inf

is getting convolved with

    h[1](t) = 1/T * p1(t/T)

getting us

                +inf
    x(t) ~= T * SUM{ x[k] * h[1](t - k*T) }
                k=-inf

and the spectrum

                        +inf
    FT{ x(t)*q(t) }  =  SUM{ X(f - n*Fs) }
                        n=-inf

is getting multiplied by

    H[1](f) = ( sinc(f*T) )^2 .

so now we have deeper and wider notches killing those images of x(t) where n
is not 0. so those images are reduced further.
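[editorial sketch: the p0 -> p1 step can be verified numerically. convolving
the rect pulse with itself gives the triangle, and their spectra come out as
sinc and sinc^2; the grid resolution below is an arbitrary choice.]

```python
import numpy as np

# discretize p0 (rect, width T = 1) on a fine grid and check that
# |FT{p0}| ~= |sinc(f)| and that the triangle p1 = p0 conv p0 has
# |FT{p1}| ~= sinc(f)^2.
dt = 1.0 / 256
t = np.arange(-4.0, 4.0, dt)
p0 = np.where((t >= -0.5) & (t < 0.5), 1.0, 0.0)
p1 = np.convolve(p0, p0, mode="same") * dt          # triangle, width 2
f = np.fft.fftfreq(len(t), dt)
P0 = np.abs(np.fft.fft(p0)) * dt                    # approximates |FT|
P1 = np.abs(np.fft.fft(p1)) * dt
band = np.abs(f) < 8.0
assert np.allclose(P0[band], np.abs(np.sinc(f[band])), atol=1e-2)
assert np.allclose(P1[band], np.sinc(f[band]) ** 2, atol=1e-2)
```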
this time, for 120 dB S/N in the resampling process, all you need to do is
first upsample by 512 (not 512K).

at this point, for 0th order and 1st order polynomial interpolation,
Lagrange, Hermite, and B-spline all agree. but for order N=2 and greater,
they begin to differ. as the order, N, gets larger, Lagrange and Hermite
will look more like the ideal sinc() function for ideal interpolation
(Hermite will be smoother since it tries to have as many continuous
derivatives as possible), but the Nth order B-spline will have a frequency
response of

    H[N](f) = ( sinc(f*T) )^(N+1)

and the Nth order B-spline interpolation function, h[N](t), will be the
convolution of h[N-1](t) with h[0](t). as N gets larger, h[N](t) looks more
like a gaussian pulse, but for Lagrange and Hermite polynomials, h[N](t)
will look more like 1/T * sinc(t/T), the ideal interpolating impulse
response.

now, the power spectrum of *any* Nth order piecewise polynomial
interpolation function,

    ( H[N](f) )^2 ,

will have terms that are sinc^2, sinc^4, sinc^6, ... sinc^(2N+2). but for
the Nth order B-spline, all of those terms will be zero except for
sinc^(2N+2). the power spectrum of the other polynomial interpolators will
have lower powers of sinc that will make those notches less deep or wide and
less effective in killing the images before resampling. Lagrange and Hermite
look much more like the ideal sinc() interpolator and might not need any
"pre-filtering" in the baseband to make up for the severe LPF that B-splines
do, but they don't wipe out the images (which become folded and noise in the
baseband) as well as the B-spline does.

does that answer your question, Jim?
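[editorial sketch: the sinc^(N+1) claim extends the check above to higher
orders. N-fold self-convolution of the rect pulse gives the order-N B-spline
kernel; here is the cubic (N=3) case, with grid and FFT sizes chosen
arbitrarily.]

```python
import numpy as np

# order-3 (cubic) B-spline kernel: the rect pulse convolved with itself
# three more times; its spectrum should be sinc^4(f*T), with T = 1.
dt = 1.0 / 256
p0 = np.ones(256)                       # rect of width 1 on the fine grid
b3 = p0.copy()
for _ in range(3):
    b3 = np.convolve(b3, p0) * dt       # support grows to 4*T, gaussian-ish
nfft = 1 << 14
f = np.fft.fftfreq(nfft, dt)
B3 = np.abs(np.fft.fft(b3, nfft)) * dt
band = np.abs(f) < 4.0
assert np.allclose(B3[band], np.sinc(f[band]) ** 4, atol=1e-2)
```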
if you (or anyone else) sends me a despammified edress, i'll return a copy
of the paper that Duane Wise and i did about this, but i think that Olli
Niemitalo did a better job:

http://www.biochem.oulu.fi/~oniemita/dsp/deip.pdf

or http://ldesoras.free.fr/doc/articles/resampler.pdf

--

r b-j rbj@audioimagination.com

"Imagination is more important than knowledge."

robert bristow-johnson wrote:

> > Take for example the simplest spline, where you just construct a
> > straight line from each point to the next.
>
> otherwise called "linear interpolation". i would not call this the simplest
> spline, i would call "zero-order hold" or "drop-sample" interpolation the
> simplest spline. using your language, that would be like discrete
> convolving with [1].
Well, I wouldn't call a set of unconnected points a spline.
> > If I sample this curve half way between 2 points the result is just the
> > average of the 2 points. If I move ahead one sample width and take
> > another sample I again get the average. If I sample the whole curve in
> > this manner it's just as if I filtered the whole set of points with the
> > FIR filter [1 1]/2. The frequency response of that filter is going to
> > be periodic.
>
> sure. you are implementing a sorta fractional-sample fixed delay (which is
> an application of interpolation). that is a purely LTI operator, so what
> you have is a filter with phase or group delay of 1/2 sample and a
> cos(pi*f*T) magnitude response (unnormalized f, so T is the sampling period
> and 1/(2T) is the Nyquist freq). that will be the Dirichlet kernel with
> N=2.
The delay was so that the problem wasn't completely trivial - I could have sampled the spline at the original sample points.
> > This is a complete, albeit in its simplest form, description of the
> > actual process of sampling a spline.
>
> not from my POV. you are fixing your sample times to be at this fixed delay
> of 1/2 sample and what you essentially have is a filter. there are no
> issues of SRC noise or error from resampling that happens in the real
> process of
The original point I made, which you took issue with, was that if you sample
a spline at 1 sample per control point you always have what is equivalent to
a simple convolution operation. And it's a convolution that is invertible,
so you can do the inverse and run a spline through a set of samples by
creating one new control point for each original sample. Now you keep
talking about sampling the spline at some other sample rate; what exactly
does that have to do with the price of tea in China?
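[editorial sketch: for the cubic B-spline, that convolution is with
[1, 4, 1]/6, and its inverse is exactly the forward/backward one-pole IIR
pass mentioned at the top of the thread. the boundary start-up below is
simplified, so the round trip is only checked away from the ends.]

```python
import numpy as np

# invert the cubic B-spline sampling filter [1, 4, 1]/6 by running a
# one-pole IIR causally and then anti-causally. the c[0] seed is crude
# (an exact version seeds it with a mirrored sum), so transients decay
# like |z|^k from the left edge.
def cubic_bspline_prefilter(y):
    z = np.sqrt(3.0) - 2.0                  # pole, |z| ~ 0.268
    c = 6.0 * np.asarray(y, dtype=float)
    n = len(c)
    for k in range(1, n):                   # causal pass
        c[k] += z * c[k - 1]
    c[n - 1] = (z / (z * z - 1.0)) * (c[n - 1] + z * c[n - 2])
    for k in range(n - 2, -1, -1):          # anti-causal pass
        c[k] = z * (c[k + 1] - c[k])
    return c

y = np.sin(np.linspace(0.0, 3.0, 64))
c = cubic_bspline_prefilter(y)
# sampling the spline at the knots = convolving c with [1, 4, 1]/6
y_back = (c[:-2] + 4.0 * c[1:-1] + c[2:]) / 6.0
# boundary transients decay fast; check the interior round trip
assert np.allclose(y_back[16:-16], y[17:-17], atol=1e-6)
```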
> > Where exactly do these continuous identical rectangular pulses you're
> > talking about appear in this process?
>
> i am generalizing a little more (so as to include the application of
> resampling or sample-rate conversion). i had been talking about defining
> the interior space between samples for all t, not just t = (n+0.5)*T. here
> i am using the language of that sampling/reconstruction treatise i
> periodically post to comp.dsp:
Yes, but you claimed a frequency response which is a function of the sample
spacing. I gave a counterexample where your claimed frequency response
doesn't work. Why didn't it work?
>snipped<
> H[N](f) = ( sinc(f*T) )^(N+1)
>
> and the Nth order B-spline interpolation function, h[N](t) will be the
> convolution of h[N-1](t) with h[0](t).
It seems to me that the problem with this is that you didn't start with a kernel frequency response that was a sinc. You start with a very very very loose approximation.
> as N gets larger, h[N](t) looks more
> like a gaussian pulse,
Ok that sounds reasonable if you mean a periodic Gaussian pulse, but if you multiply a sinc by itself over and over you don't get something that starts to look like a Gaussian.
> but for Lagrange and Hermite polynomials, h[N](t)
> will look more like 1/T * sinc(t/T), the ideal interpolating impulse
> response.
Well I don't see how you can express the frequency response purely as a
function of the output sample spacing. If that would work then it should
have worked for the counterexample I gave. To derive the frequency response
of a resampling scheme you clearly need to take into account both the input
and output sample spacing.

Also consider the delay in the example I gave - suppose the delay was only a
quarter sample. Then sampling the spline would still be equivalent to an FIR
filter, but not the same filter and not the same frequency response. But yet
it's still the same sample spacing.

-jim
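[editorial sketch: jim's quarter-sample variant checks out numerically.
sampling the linear spline at fractional offset a gives the 2-tap FIR
[(1-a), a], so the half-sample and quarter-sample cases have different
magnitude responses even at the same output spacing.]

```python
import numpy as np

# linear interpolation at fractional offset a is the FIR [(1-a), a];
# its frequency response depends on a, not just on the sample spacing.
def H(a, f):
    return (1.0 - a) + a * np.exp(-2j * np.pi * f)   # T = 1

f = np.linspace(0.0, 0.5, 6)                         # 0 .. Nyquist
H_half = np.abs(H(0.5, f))
H_quarter = np.abs(H(0.25, f))
assert np.isclose(H_half[-1], 0.0)       # half-sample: null at Nyquist
assert np.isclose(H_quarter[-1], 0.5)    # quarter-sample: no null
```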
> now, the power spectrum of *any* Nth order piecewise polynomial
> interpolation function
>
> ( H[N](f) )^2
>
> will have terms that are sinc^2, sinc^4, sinc^6, ... sinc^(2N+2). but for
> the Nth order B-spline, all of those terms will be zero except for
> sinc^(2N+2). the power spectrum of the other polynomial interpolators will
> have lower powers of sinc that will make those notches less deep or wide
> and less effective in killing the images before resampling. Lagrange and
> Hermite look much more like the ideal sinc() interpolator and might not
> need any "pre-filtering" in the baseband to make up for the severe LPF
> that B-splines do, but they don't wipe out the images (which become folded
> and noise in the baseband) as well as the B-spline does.
>
> does that answer your question, Jim? if you (or anyone else) sends me a
> despammified edress, i'll return a copy of the paper that Duane Wise and i
> did about this, but i think that Olli Niemitalo did a better job:
>
> http://www.biochem.oulu.fi/~oniemita/dsp/deip.pdf
>
> or http://ldesoras.free.fr/doc/articles/resampler.pdf
>
> --
>
> r b-j rbj@audioimagination.com
>
> "Imagination is more important than knowledge."