
Converting non-Causal IIR to Causal IIR

Started by bdm711 April 23, 2005
in article 42718191_1@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
04/28/2005 20:35:

> robert bristow-johnson wrote:
>
>>> Take for example the simplest spline, where you just construct a
>>> straight line from each point to the next.
>>
>> otherwise called "linear interpolation".  i would not call this the
>> simplest spline, i would call "zero-order hold" or "drop-sample"
>> interpolation the simplest spline.  using your language, that would be
>> like discrete convolving with [1].
>
> Well, I wouldn't call a set of unconnected points a spline.
so they have discontinuities in the 0th derivative. piecewise linear interpolation has discontinuities in the 1st derivative. piecewise quadratic has discontinuities in the 2nd derivative. etc.
>>> If I sample this curve half way between 2 points the result is just
>>> the average of the 2 points.  If I move ahead one sample width and
>>> take another sample I again get the average.  If I sample the whole
>>> curve in this manner it's just as if I filtered the whole set of
>>> points with the FIR filter [1 1]/2.  The frequency response of that
>>> filter is going to be periodic.
>>
>> sure.  you are implementing a sorta fractional-sample fixed delay
>> (which is an application of interpolation).  that is a purely LTI
>> operator, so what you have is a filter with phase or group delay of
>> 1/2 sample and a cos(pi*f*T) magnitude response (unnormalized f, so T
>> is the sampling period and 1/(2T) is the Nyquist freq).  that will be
>> the Dirichlet kernel with N=2.
>
> The delay was so that the problem wasn't completely trivial - I could
> have sampled the spline at the original sample points.
fine. but it still is not as general as sampling the spline at *any* uniformly spaced points.
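A minimal numpy sketch of the [1 1]/2 claim above (the check is mine, not part of the thread): the half-sample sampling of the linear spline is the FIR filter h = [1, 1]/2, whose DTFT has magnitude cos(pi*f*T) and a constant half-sample group delay.

    import numpy as np

    w = np.linspace(0, np.pi, 512, endpoint=False)   # radian freq, 0 to Nyquist
    H = 0.5 + 0.5 * np.exp(-1j * w)                  # DTFT of h = [1, 1]/2
    # |H| = cos(w/2) = cos(pi*f*T), and the phase -w/2 is a constant
    # half-sample delay, exactly as claimed above
    assert np.allclose(np.abs(H), np.cos(w / 2))
    assert np.allclose(np.angle(H), -w / 2)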
>>> This is a complete, albeit in its simplest form, description of the
>>> actual process of sampling a spline.
>>
>> not from my POV.  you are fixing your sample times to be at this fixed
>> delay of 1/2 sample and what you essentially have is a filter.  there
>> are no issues of SRC noise or error from resampling that happens in
>> the real process of
my brain obviously was turned off. i meant to end the statement with the words "sample rate conversion".
> The original point I made, which you took issue with, was that if you
> sample a spline at 1 sample per control point you always have what is
> equivalent to a simple convolution operation.  And it's a convolution
> that is invertible, so you can do the inverse and run a spline through
> a set of samples by creating one new control point for each original
> sample.
i know that's what you're saying, and i guess that given a sequence of sampled data: { ... x[-1], x[0], x[1], x[2], ... } there is another set of data called "control points" { ... v[-1], v[0], v[1], v[2], ... } such that if you use the control points as weights on the corresponding B-spline, the summed result will go through the sampled data points x[0], x[1], x[2].
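jim's "invertible convolution" can be sketched concretely for the cubic case (a rough sketch; the boundary handling here is deliberately crude). Sampling a uniform cubic B-spline at its knots applies the FIR (z + 4 + z^-1)/6 to the control points; the inverse, 6/(z + 4 + z^-1), has a pole pair at z1 = sqrt(3)-2 and 1/z1, and the stable way to run that non-causal IIR is one causal pass plus one anti-causal pass, which is the very trick this thread's title asks about (cf. Unser's recursive B-spline prefiltering).

    import numpy as np

    def cubic_bspline_control_points(x):
        """Recover control points v[n] from samples x[n] of a uniform
        cubic B-spline by inverting the FIR [1 4 1]/6: one causal and
        one anti-causal first-order recursion (pole z1 = sqrt(3)-2)."""
        z1 = np.sqrt(3.0) - 2.0          # the pole inside the unit circle
        v = 6.0 * np.asarray(x, dtype=float)
        n = len(v)
        for k in range(1, n):            # causal pass (crude zero-history
            v[k] += z1 * v[k - 1]        # init; real code sums the tail)
        v[n - 1] = z1 / (z1 * z1 - 1.0) * (v[n - 1] + z1 * v[n - 2])
        for k in range(n - 2, -1, -1):   # anti-causal pass
            v[k] = z1 * (v[k + 1] - v[k])
        return v

    # round trip: filtering v with [1 4 1]/6 gives x back, except near
    # the ends where the boundary handling above is only approximate
    x = np.random.randn(64)
    v = cubic_bspline_control_points(x)
    x_hat = np.convolve(v, [1/6, 4/6, 1/6], mode="same")
    print(np.max(np.abs(x - x_hat)[6:-6]))   # ~1e-3 or smaller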
> Now you keep talking about sampling the spline at some other sample
> rate.  What exactly does that have to do with the price of tea in
> China?
here in China, that's what we do when we interpolate uniformly spaced samples. precision delay is fine. that is a nice LTI filter that has the effect of delaying by something other than an integer number of samples. what i think about when i use any kind of interpolation between samples is resampling the interpolated function at *any* time.
>>> Where exactly do these continuous identical rectangular pulses
>>> you're talking about appear in this process?
>>
>> i am generalizing a little more (so as to include the application of
>> resampling or sample-rate conversion).  i had been talking about
>> defining the interior space between samples for all t, not just
>> t = (n+0.5)*T.  here i am using the language of that
>> sampling/reconstruction treatise i periodically post to comp.dsp:
>
> Yes, but you claimed frequency response which is a function of the
> sample spacing.  I gave a counterexample where your claimed frequency
> response doesn't work.  Why didn't it work?
i tried looking back at my earlier posts. it's gonna take a bit to try to adapt our language to be talking about the same thing. see below ...
>> H[N](f) = ( sinc(f*T) )^(N+1)
>>
>> and the Nth order B-spline interpolation function, h[N](t), will be
>> the convolution of h[N-1](t) with h[0](t).
>
> It seems to me that the problem with this is that you didn't start with
> a kernel frequency response that was a sinc.  You start with a very
> very very loose approximation.
ya, drop-sample or even linear interpolation is not a very good approximation to the true sinc() interpolation.  but if you *do* use a real sinc-like interpolation (could be windowed sinc() or something that the Parks-McClellan algorithm dreams up) to "upsample" by a factor of, say, 512 (that means 512 different polyphases or fractional delays), *then* you can use linear interpolation on the remaining precision (assuming your fractional delay was not a multiple of 1/512th of a sample) and you're still golden.  again, if it's drop-sample, you need to have 512K polyphases (more memory, less computation).
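A sketch of the tabulated-polyphase scheme r b-j describes (the table size, tap count, and window are illustrative choices, not from the thread): a windowed-sinc table of 512 phases, with linear interpolation between adjacent phases covering arbitrary fractional delays.

    import numpy as np

    L, TAPS = 512, 16                    # number of phases, taps per phase
    k = np.arange(TAPS)
    # table[p] is a windowed-sinc FIR approximating a delay of
    # TAPS/2 + p/L samples
    table = np.stack([np.sinc(k - TAPS / 2 - p / L) * np.hamming(TAPS)
                      for p in range(L)])
    table /= table.sum(axis=1, keepdims=True)    # unity DC gain per phase

    def frac_delay_filter(d):
        """FIR for a fractional delay d in [0, 1): take the two nearest
        tabulated phases and linearly interpolate between them."""
        p = min(int(d * L), L - 2)       # (sidestepping the wrap at the
        a = d * L - p                    #  top of the table, for brevity)
        return (1 - a) * table[p] + a * table[p + 1]

    x = np.random.randn(256)
    y = np.convolve(x, frac_delay_filter(0.3))   # x shifted ~0.3 samples,
                                                 # plus the TAPS/2 bulk delay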
>> as N gets larger, h[N](t) looks more like a gaussian pulse,
>
> Ok that sounds reasonable if you mean a periodic Gaussian pulse,
NO! just a regular Gaussian pulse or whatever you get when you convolve N non-repeating rectangular pulses together.
> but if you multiply a sinc by itself over and over you don't get
> something that starts to look like a Gaussian.
that's true.  and the FT of a Gaussian is a Gaussian, so what's the deal?  i think it's that the gaussian pulse that h[N](t) becomes when N gets large gets wider and wider, so the FT of it might look like a Gaussian pulse that is getting narrower and narrower, and the sinc^(N+1) might look like that.
>> but for Lagrange and Hermite polynomials, h[N](t) will look more like
>> 1/T * sinc(t/T), the ideal interpolating impulse response.
>
> Well I don't see how you can express the frequency response purely as a
> function of the output sample spacing.  If that would work then it
> should have worked for the counterexample I gave.  To derive the
> frequency response of a resampling scheme you clearly need to take into
> account both the input and output sample spacing.
i am using a different model.  i am first converting this uniformly sampled discrete-time sequence into a continuous-time function, using a known interpolation function that can be modeled as an impulse response.  that continuous-time function has a non-repeating spectrum: the repeating spectrum of the ideally sampled original (images at integer multiples of the sampling rate) multiplied by the non-repeating frequency response of the impulse response that models the interpolation function.  not periodic.  i dunno how we're gonna get our language to converge so that we can even know if we are agreeing or disagreeing.
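The formula quoted earlier, H[N](f) = (sinc(f*T))^(N+1), is easy to check numerically under r b-j's model (a small sketch; the grid resolution is an arbitrary choice): build the order-N B-spline kernel by convolving N+1 unit rects on a fine grid and compare the magnitude of its transform against sinc^(N+1).

    import numpy as np

    R, N = 64, 3                          # grid points per sample; spline order
    rect = np.ones(R) / R                 # unit-area rect of width T = 1
    h = rect
    for _ in range(N):                    # order-N B-spline = (N+1) rects convolved
        h = np.convolve(h, rect)
    f = np.fft.rfftfreq(4096, d=1.0 / R)  # frequency in cycles per sample
    H = np.abs(np.fft.rfft(h, 4096))
    err = np.max(np.abs(H - np.abs(np.sinc(f)) ** (N + 1)))
    print(err)                            # small; limited only by the grid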
> Also consider the delay in the example I gave - suppose the delay was
> only a quarter sample.  Then sampling the spline would still be
> equivalent to an FIR filter but not the same filter and not the same
> frequency response.  But yet it's still the same sample spacing.
then, that's simply a constant delay if it's still the same sample spacing (but a different fractional delay offset).  that is an LTI filter: it has a phase response and a magnitude response, but no aliasing.  what i'm talking about is something where the sample spacing has changed from the original sample spacing; that's what resampling, or sample-rate conversion, is.  then one output sample sits at one fractional offset (in relation to the input sample times), and the next output sample sits at another fractional offset.  then you get other problems (like images folding into the baseband).

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
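A small illustration of the time-varying offsets r b-j describes (the signal and ratio are arbitrary picks of mine): resample a sinusoid well up toward Nyquist by a non-trivial ratio using linear interpolation, so every output sample lands at a different fractional position.

    import numpy as np

    n = np.arange(4096)
    x = np.sin(2 * np.pi * 0.31 * n)        # 0.31 cycles/sample
    ratio = 0.83                            # output rate / input rate
    t = np.arange(int(len(n) * ratio)) / ratio   # output times, input units
    i = t.astype(int)[:-2]                  # integer part
    a = (t - t.astype(int))[:-2]            # fractional part: varies per sample
    y = (1 - a) * x[i] + a * x[i + 1]       # linear-interpolation resampler
    print(np.max(np.abs(y - np.sin(2 * np.pi * 0.31 * t[:-2]))))
    # ~0.4 worst case: linear interpolation is poor this close to Nyquist;
    # unlike the fixed-delay case, the error is not a single LTI response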

robert bristow-johnson wrote:

> i know that's what you're saying, and i guess that given a sequence of
> sampled data: { ... x[-1], x[0], x[1], x[2], ... } there is another set
> of data called "control points" { ... v[-1], v[0], v[1], v[2], ... }
> such that if you use the control points as weights on the corresponding
> B-spline, the summed result will go through the sampled data points
> x[0], x[1], x[2].
Ok, now we're getting somewhere.  But don't overlook *why* this works - the reason is that if you sample any spline at the same spacing as the control point spacing (whether you shift to between samples or stay on sample) you always have nothing more than an FIR filter.
>> Now you keep talking about sampling the spline at some other sample
>> rate.  What exactly does that have to do with the price of tea in
>> China?
>
> here in China, that's what we do when we interpolate uniformly spaced
> samples.  precision delay is fine.  that is a nice LTI filter that has
> the effect of delaying by something other than an integer number of
> samples.  what i think about when i use any kind of interpolation
> between samples is resampling the interpolated function at *any* time.
OK, now that we've dispensed with the original point under discussion, which had nothing in particular to do with sample rate conversion, we can take a look at sample rate conversion, which you seem so eager to discuss.

Now that you have some understanding of the frequency response of a spline when you sample at the same rate as the original points, let's look at what you claim is the frequency response when you sample at a different rate.  Consider upsampling by a factor of 2.  You take one sample at an original sample point and one halfway between samples, and repeat that all through the points.  The result is a set of 2N points where every other point was produced by one of two different FIR filters.

Now it's pretty obvious by inspection that the overall frequency response of the sample rate conversion is going to have to depend on the relationship of the new rate to the old.  In other words, if you apply your formula for the frequency response, you don't know whether the "T" in your formula is the same sample spacing as the original, or half that spacing, or something else, but it's obvious that they can't ALL have the same frequency response.  Since your analysis of the frequency response doesn't take this into account at all, you can talk yourself blue in the face with your proofs, but there's just no way your analysis can be right.

But it's worse than that.  If you are interested in maintaining a linear phase relationship between the new and old rate (probably not important in audio but very important in image processing) you need to introduce a shift in the new sample rate to maintain phase with the old.  That also obviously changes the frequency response of that particular resampling process.  And your formula doesn't take that into account either.

-jim
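jim's "two different FIR filters" are easy to exhibit for the linear spline (a toy sketch of mine): 2x upsampling interleaves the polyphase branches h0 = [1] (on-sample) and h1 = [1, 1]/2 (half-sample), each with its own frequency response, and is equivalent to zero-stuffing followed by one filter at the *output* rate.

    import numpy as np

    x = np.random.randn(32)
    even = x                              # branch h0 = [1] (on-sample)
    odd = 0.5 * (x[:-1] + x[1:])          # branch h1 = [1, 1]/2 (half-sample)
    y = np.empty(2 * len(x) - 1)
    y[0::2] = even                        # interleave the two branch outputs
    y[1::2] = odd

    # equivalent single-rate view: zero-stuff, then filter by [1 2 1]/2
    z = np.zeros(2 * len(x) - 1)
    z[0::2] = x
    y2 = np.convolve(z, [0.5, 1.0, 0.5], mode="same")
    assert np.allclose(y, y2)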

robert bristow-johnson wrote:
> in article 42718191_1@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
>> but if
>> you multiply a sinc by itself over and over you don't get something
>> that starts to look like a Gaussian.
>
> that's true.  and the FT of a Gaussian is a Gaussian, so what's the
> deal?  i think it's that the gaussian pulse that h[N](t) becomes when
> N gets large, that pulse gets wider and wider, so the FT of it might
> look like a Gaussian pulse that is getting narrower and narrower and
> the sinc^(N+1) might look like that.
I'm confused here.  If you convolve a full sinc with itself over and over, you should get it again and again.  The result should be the same as the operand.  Or are you saying something entirely different?

Bob
--
"Things should be described as simply as possible, but no simpler."

                                             A. Einstein
in article d56oqj028m9@enews1.newsguy.com, Bob Cain at
arcane@arcanemethods.com wrote on 05/02/2005 22:48:

> robert bristow-johnson wrote:
>
>> in article 42718191_1@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net
>> wrote on
>
>>> but if
>>> you multiply a sinc by itself over and over you don't get something
>>> that starts to look like a Gaussian.
>>
>> that's true.  and the FT of a Gaussian is a Gaussian, so what's the
>> deal?  i think it's that the gaussian pulse that h[N](t) becomes when
>> N gets large, that pulse gets wider and wider, so the FT of it might
>> look like a Gaussian pulse that is getting narrower and narrower and
>> the sinc^(N+1) might look like that.
>
> I'm confused here.  If you convolve a full sinc with itself over and
> over, you should get it again and again.
but i'm convolving the rect() with itself over and over again. i am *multiplying* the sinc() with itself over and over again. and somehow, they both start to look like a gaussian pulse. the convolved rects like a wide gaussian and the sinc^N like a narrow gaussian.
> The result should be the same as the operand.  Or are you saying
> something entirely different?
different, but not entirely different.

i am saying that these piecewise polynomial interpolation functions that have been called "B-spline" by a few authors (including me, but i was just ignorantly following the same terminology as in previous papers) are what you get when you convolve rectangular pulse functions (as wide as a sample time) with each other.

an (N-1)th order B-spline would be convolving N rect() functions together; that means, in the frequency domain, their spectrum is multiplying N sinc() functions together.

as N gets large, the high order B-spline will look more and more like a gaussian function (because of the central limit theorem).  but the spectrum is a sinc^N function.  and we know that the F.T. of a gaussian is a gaussian.  so how do these sinc^N functions become something that looks like a gaussian pulse?

well, i'm guessing, but let's say that the sample time is 1.  so those rectangular pulses that convolve against each other all have area of 1 (like a uniform pdf with variance of 1/12).  so, for the (N-1)th order B-spline, it's like we are convolving the pdfs of N uniform random variables.  we get some pdf with a variance of N/12 and we expect that pdf to become more and more gaussian.  so i think that B-spline will begin to look like

   1/sqrt(2*pi*(N/12)) * exp( -t^2 / (2*(N/12)) )
or
   sqrt(6/(pi*N)) * exp( -(sqrt(6/N)*t)^2 )
or
   sqrt(6/(pi*N)) * exp( -pi*(sqrt(6/(pi*N))*t)^2 )

and we know that FT{ exp(-pi*t^2) } = exp(-pi*f^2)  (where f is Hz),

so FTing that gaussian pulse we get

   FT{ sqrt(6/(pi*N)) * exp( -pi*(sqrt(6/(pi*N))*t)^2 ) }

      =  exp( -pi*(sqrt((pi*N)/6)*f)^2 )

      =  exp( -(pi^2/6)*N * f^2 )

      =  ( exp( -(pi^2/6) * f^2 ) )^N

and somehow, this needs to look like sinc^N(f) (the FT of convolving N rect() functions against each other), which means that

   sinc(f) = sin(pi*f)/(pi*f)  ~=  exp( -(pi^2/6) * f^2 )

is that true??  i dunno.  they both have even symmetry and are equal to 1 when f=0.  expanding both to 3 terms results in:

   sinc(f) = sin(pi*f)/(pi*f)
          ~= ( pi*f - (pi*f)^3/3! + (pi*f)^5/5! ) / (pi*f)
           = 1 - (pi*f)^2/6 + (pi*f)^4/120

and

   exp( -(pi^2/6)*f^2 ) ~= 1 - (pi^2/6)*f^2 + ( (pi^2/6)*f^2 )^2 / 2!
                         = 1 - (pi*f)^2/6 + (pi*f)^4/72

so they *do* approximately agree!  i never really checked this before!

well, they're not exactly the same, but they agree in the constant term and the f^2 term and ain't too far off with the f^4 term.  and since the sinc() function actually crosses zero and the gaussian never does that, i would expect the f^4 term to make the gaussian a little farther from zero than the sinc, and +1/72 is bigger than +1/120, so i guess it makes sense.  when N is large, they both are a narrow pulse that dives to zero very fast.  it's just that the sinc^N(f) does go bipolar where the exp(-(pi^2/6)*N*f^2) stays positive (but very close to zero).

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
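Both halves of r b-j's argument above check out numerically (a small sketch; the grid size and the choice N = 12 are arbitrary): N convolved rects approach the CLT Gaussian of variance N/12 in the time domain, and sinc(f) tracks exp(-pi^2/6 * f^2) near f = 0.

    import numpy as np

    # time domain: N unit rects -> near-Gaussian with variance N/12
    R, N = 64, 12
    rect = np.ones(R) / R
    h = rect
    for _ in range(N - 1):
        h = np.convolve(h, rect)
    t = (np.arange(len(h)) - (len(h) - 1) / 2) / R      # centered, T = 1
    g = np.exp(-t**2 / (2 * N / 12)) / np.sqrt(2 * np.pi * N / 12)
    print(np.max(np.abs(R * h - g)))      # ~5e-3: the CLT at work

    # frequency domain: sinc(f) vs exp(-pi^2/6 * f^2) near f = 0
    f = np.array([0.0, 0.1, 0.25, 0.5])
    print(np.sinc(f) - np.exp(-np.pi**2 / 6 * f**2))
    # ~[0, -6e-5, -2e-3, -2.6e-2]: close out to f ~ 1/4, drifting by f = 1/2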

robert bristow-johnson wrote:
> in article d56oqj028m9@enews1.newsguy.com, Bob Cain at
> arcane@arcanemethods.com wrote on 05/02/2005 22:48:
>
>> I'm confused here.  If you convolve a full sinc with itself over and
>> over, you should get it again and again.
>
> but i'm convolving the rect() with itself over and over again.  i am
> *multiplying* the sinc() with itself over and over again.  and somehow,
> they both start to look like a gaussian pulse.  the convolved rects
> like a wide gaussian and the sinc^N like a narrow gaussian.
Ah!  Big difference from what I was thinking.  Thanks,

Bob
--
"Things should be described as simply as possible, but no simpler."

                                             A. Einstein

robert bristow-johnson wrote:
> in article d56oqj028m9@enews1.newsguy.com, Bob Cain at
> arcane@arcanemethods.com wrote on 05/02/2005 22:48:
>
>> I'm confused here.  If you convolve a full sinc with itself over and
>> over, you should get it again and again.
>
> but i'm convolving the rect() with itself over and over again.  i am
> *multiplying* the sinc() with itself over and over again.
Yes, a point P(t) on a B-spline can be thought of as a blending of the spline's control points in the neighborhood of P(t).  That looks just like convolution - each output is the weighted sum of a certain set of inputs.  The problem is that, depending on where the t in P(t) is located, you have a different number of inputs, with different weightings, contributing to each output.  The only case where the same number of inputs, with the same weighting, contributes to each output sample is when one sample per control point is taken.
> and somehow, they both start to look like a gaussian pulse.  the
> convolved rects like a wide gaussian and the sinc^N like a narrow
> gaussian.
In practice N is usually a fairly small number.  I don't know how useful it is to contemplate what happens when N gets large.

-jim
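A sketch of the "blending" viewpoint jim describes above (the helper names are mine): a point on a uniform cubic B-spline is a 4-tap weighted sum of control points, with weights that depend on the fractional position.  Fix the fractional position and the operation is a single FIR filter; let it vary from output to output and it is not.

    import numpy as np

    def bspline3_weights(a):
        """Uniform cubic B-spline basis weights at fractional position
        a in [0, 1); they always sum to 1."""
        return np.array([(1 - a) ** 3,
                         3 * a**3 - 6 * a**2 + 4,
                         -3 * a**3 + 3 * a**2 + 3 * a + 1,
                         a**3]) / 6.0

    v = np.random.randn(16)               # control points
    def spline_at(t):
        i, a = int(t), t - int(t)
        return bspline3_weights(a) @ v[i - 1:i + 3]

    # at a = 0 the weights are [1 4 1 0]/6: the single FIR filter of the
    # one-sample-per-control-point case
    print(spline_at(5.0), (v[4] + 4 * v[5] + v[6]) / 6)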
Bob Cain <arcane@arcanemethods.com> wrote in
news:d59r3c1bel@enews1.newsguy.com: 

> Ah!  Big difference from what I was thinking.
Certainly not the first time. I am sure that you remember the thread "Bob Cain Goes Down and Out in Defeat" regarding your idiotic and totally incorrect perspective on the Doppler distortion issue.