Reply by The Ghost May 8, 2005
Bob Cain <arcane@arcanemethods.com> wrote in
news:d59r3c1bel@enews1.newsguy.com: 

> Ah! Big difference from what I was thinking.
Certainly not the first time. I am sure that you remember the thread "Bob Cain Goes Down and Out in Defeat" regarding your idiotic and totally incorrect perspective on the Doppler distortion issue.
Reply by jim May 4, 2005

robert bristow-johnson wrote:
> in article d56oqj028m9@enews1.newsguy.com, Bob Cain at
> arcane@arcanemethods.com wrote on 05/02/2005 22:48:
>
>> robert bristow-johnson wrote:
>>> in article 42718191_1@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
>>>
>>>> but if
>>>> you multiply a sinc by itself over and over you don't get something that
>>>> starts to look like a Gaussian.
>>>
>>> that's true. and the FT of a Gaussian is a Gaussian, so what's the deal?
>>> i think it's that the gaussian pulse that h[N](t) becomes when N gets
>>> large, that pulse gets wider and wider, so the FT of it might look like a
>>> Gaussian pulse that is getting narrower and narrower and the sinc^(N+1)
>>> might look like that.
>>
>> I'm confused here. If you convolve a full sinc with itself
>> over and over, you should get it again and again.
>
> but i'm convolving the rect() with itself over and over again. i am
> *multiplying* the sinc() with itself over and over again.
Yes, a point P(t) on a b-spline can be thought of as a blending of the spline's control points in the neighborhood of P(t). That looks just like convolution - where each output is the weighted sum of a certain set of inputs. The problem is that, depending on where the t in P(t) is located, you have a different number of inputs with different weightings contributing to each output. The only case where the same number of inputs, with the same weighting, contribute to each output sample is when one sample per control point is taken.
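A quick numeric sketch of the point above (my own Python/numpy illustration, not code from the thread): a uniform cubic B-spline evaluated exactly at its knots weights each control point's neighborhood by the fixed taps [1 4 1]/6, so taking one sample per control point really is just an FIR filter; running a spline *through* given samples amounts to inverting that filter.

    import numpy as np

    v = np.array([0.0, 1.0, 4.0, 2.0, -1.0, 3.0])    # control points
    # cubic B-spline basis values at a knot: B(-1) = B(+1) = 1/6, B(0) = 4/6,
    # so the spline's value at knot j is (v[j-1] + 4*v[j] + v[j+1]) / 6
    knots = np.convolve(v, [1/6, 4/6, 1/6], mode='valid')
    print(knots)    # the spline evaluated at each interior knot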
> and somehow, they
> both start to look like a gaussian pulse. the convolved rects like a wide
> gaussian and the sinc^N like a narrow gaussian.
In practice N is usually a fairly small number. I don't know how useful it
is to contemplate what happens when N gets large.

-jim
>snipped<
Reply by Bob Cain May 4, 2005

robert bristow-johnson wrote:
> in article d56oqj028m9@enews1.newsguy.com, Bob Cain at
> arcane@arcanemethods.com wrote on 05/02/2005 22:48:
>
> >snipped<
>
>> I'm confused here. If you convolve a full sinc with itself
>> over and over, you should get it again and again.
>
> but i'm convolving the rect() with itself over and over again. i am
> *multiplying* the sinc() with itself over and over again. and somehow, they
> both start to look like a gaussian pulse. the convolved rects like a wide
> gaussian and the sinc^N like a narrow gaussian.
Ah! Big difference from what I was thinking.

Thanks,

Bob
--

"Things should be described as simply as possible, but no simpler."  A. Einstein
Reply by robert bristow-johnson May 3, 2005
in article d56oqj028m9@enews1.newsguy.com, Bob Cain at
arcane@arcanemethods.com wrote on 05/02/2005 22:48:

> robert bristow-johnson wrote:
>> in article 42718191_1@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
>>
>>> but if
>>> you multiply a sinc by itself over and over you don't get something that
>>> starts to look like a Gaussian.
>>
>> that's true. and the FT of a Gaussian is a Gaussian, so what's the deal? i
>> think it's that the gaussian pulse that h[N](t) becomes when N gets large,
>> that pulse gets wider and wider, so the FT of it might look like a Gaussian
>> pulse that is getting narrower and narrower and the sinc^(N+1) might look
>> like that.
>
> I'm confused here. If you convolve a full sinc with itself
> over and over, you should get it again and again.
but i'm convolving the rect() with itself over and over again. i am *multiplying* the sinc() with itself over and over again. and somehow, they both start to look like a gaussian pulse. the convolved rects like a wide gaussian and the sinc^N like a narrow gaussian.
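A small numeric illustration of the two limits (a sketch of mine, assuming numpy): the convolved rects spread out toward a wide bell in time, while sinc^N collapses toward a narrow pulse in frequency.

    import numpy as np

    M = 64                               # samples per unit-width rect
    rect = np.ones(M) / M                # unit-area rect
    h = rect
    for _ in range(9):
        h = np.convolve(h, rect)         # ten rects convolved: wide, bell-shaped
    print(len(h) / M)                    # time support has grown to ~10 units

    f = np.linspace(-4.0, 4.0, 801)
    s = np.sinc(f) ** 10                 # sinc^10: a narrow gaussian-looking pulse
    print(s[np.abs(f) > 1].max())        # sidelobes are already down to ~2e-7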
> The result should be the same as the operand. Or are you saying
> something entirely different?
different, but not entirely different.

i am saying that these piecewise polynomial interpolation functions that
have been called "B-spline" by a few authors (including me, but i was just
ignorantly following the same terminology as in previous papers) are what
you get when you convolve rectangular pulse functions (as wide as a sample
time) with each other.

an (N-1)th order B-spline would be convolving N rect() functions together.
that means, in the frequency domain, their spectrum is multiplying N sinc()
functions together.

as N gets large, the high order B-spline will look more and more like a
gaussian function (because of the central limit theorem). but the spectrum
is a sinc^N function. but we know that the F.T. of a gaussian is a
gaussian. so how do these sinc^N functions become something that looks like
a gaussian pulse?

well, i'm guessing, but let's say that the sample time is 1. so those
rectangular pulses that convolve against each other all have area of 1
(like a uniform pdf with variance of 1/12). so, for the (N-1)th order
B-spline, it's like we are convolving the pdfs of N uniform random
variables. we get some pdf with a variance of N/12 and we expect that pdf
to become more and more gaussian. so i think that B-spline will begin to
look like

    1/sqrt(2*pi*(N/12)) * exp( -t^2 / (2*(N/12)) )
or
    sqrt(6/(pi*N)) * exp( -(sqrt(6/N)*t)^2 )
or
    sqrt(6/(pi*N)) * exp( -pi*(sqrt(6/(pi*N))*t)^2 )

and we know that FT{ exp(-pi*t^2) } is exp(-pi*f^2) (where f is in Hz).

so FTing that gaussian pulse we get

    FT{ sqrt(6/(pi*N)) * exp( -pi*(sqrt(6/(pi*N))*t)^2 ) }

      = exp( -pi*(sqrt((pi*N)/6)*f)^2 )

      = exp( -pi^2/6*N * f^2 )

      = ( exp( -pi^2/6 * f^2 ) )^N

and somehow, this needs to look like sinc^N(f) (the FT of convolving N
rect() functions against each other), which means that

    sinc(f) = sin(pi*f)/(pi*f) ~= exp( -pi^2/6 * f^2 )

is that true?? i dunno. they both have even symmetry and are equal to 1
when f=0. expanding both to 3 terms results in:

    sinc(f) = sin(pi*f)/(pi*f)
           ~= (pi*f - (pi*f)^3/(3!) + (pi*f)^5/(5!))/(pi*f)
            = 1 - (pi*f)^2/6 + (pi*f)^4/120

and

    exp( -pi^2/6 * f^2 ) ~= 1 + (-pi^2/6*f^2) + (-pi^2/6*f^2)^2/(2!)
                          = 1 - (pi*f)^2/6 + (pi*f)^4/72

so they *do* approximately agree! i never really checked this before!

well, they're not exactly the same, but they agree in the constant term and
the f^2 term and ain't too far off with the f^4 term. and since the sinc()
function actually crosses zero and the gaussian never does, i would expect
the f^4 term to make the gaussian a little farther from zero than the sinc,
and +1/72 is bigger than +1/120, so i guess it makes sense. when N is
large, they are both a narrow pulse that dives to zero very fast. it's just
that the sinc^N(f) does go bipolar where the exp(-pi^2/6*N*f^2) stays
positive (but very close to zero).

--

r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."
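A quick numerical check of that series comparison (my own sketch, assuming numpy; the grid is an arbitrary choice):

    import numpy as np

    f = np.linspace(-0.5, 0.5, 1001)           # inside the main lobe
    sinc = np.sinc(f)                          # numpy's sinc(x) = sin(pi*x)/(pi*x)
    gauss = np.exp(-np.pi**2 / 6 * f**2)       # the candidate gaussian
    print(np.abs(sinc - gauss).max())          # ~0.026, worst at f = +/-0.5

    # the competing f^4 coefficients from the two expansions:
    print(np.pi**4 / 120, np.pi**4 / 72)       # ~0.81 vs ~1.35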
Reply by Bob Cain May 2, 2005

robert bristow-johnson wrote:
> in article 42718191_1@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
>> but if
>> you multiply a sinc by itself over and over you don't get something that
>> starts to look like a Gaussian.
>
> that's true. and the FT of a Gaussian is a Gaussian, so what's the deal? i
> think it's that the gaussian pulse that h[N](t) becomes when N gets large,
> that pulse gets wider and wider, so the FT of it might look like a Gaussian
> pulse that is getting narrower and narrower and the sinc^(N+1) might look
> like that.
I'm confused here. If you convolve a full sinc with itself over and over,
you should get it again and again. The result should be the same as the
operand. Or are you saying something entirely different?

Bob
--

"Things should be described as simply as possible, but no simpler."  A. Einstein
Reply by jim April 29, 2005

robert bristow-johnson wrote:

> i know that's what you're saying, and i guess that given a sequence of
> sampled data: { ... x[-1], x[0], x[1], x[2], ... } there is another set of
> data called "control points" { ... v[-1], v[0], v[1], v[2], ... } such
> that if you use the control points as weights on the corresponding
> B-spline, the summed result will go through the sampled data points x[0],
> x[1], x[2].
Ok, now we're getting somewhere. But don't overlook *why* this works - the reason is that if you sample any spline at the same spacing as the control point spacing (whether you shift to between samples or stay on sample) you always have nothing more than an FIR filter.
>> Now you keep talking about sampling the spline at some other sample
>> rate what exactly does that have to do with the price of tea in China.
>
> here in China, that's what we do when we interpolate uniformly spaced
> samples. precision delay is fine. that is a nice LTI filter that has the
> effect of delaying by something other than an integer number of samples.
> what i think about when i use any kind of interpolation between samples is
> resampling the interpolated function at *any* time.
OK, now that we've dispensed with the original point under discussion, which had nothing in particular to do with sample rate conversion, we can take a look at sample rate conversion, which you seem so eager to discuss.

Now that you have some understanding of the frequency response of a spline when you sample at the same rate as the original points, let's look at what you claim is the frequency response when you sample at a different rate. Consider upsampling by a factor of 2. You take one sample at an original sample point and one halfway between samples, and repeat that all through the points. The result is a set of 2N points where every other point was produced by one of two different FIR filters.

Now it's pretty obvious by inspection that the overall frequency response of the sample rate conversion is going to have to depend on the relationship between the new rate and the old. In other words, if you apply your formula for the frequency response, you don't know if the "T" in your formula is the same sample spacing as the original, or half that spacing, or something else, but it's obvious that they can't ALL have the same frequency response. Since your analysis of the frequency response doesn't take this into account at all, you can talk yourself blue in the face with your proofs, but there's just no way your analysis can be right.

But it's worse than that. If you are interested in maintaining a linear phase relationship between the new and old rate (probably not important in audio but very important in image processing), you need to introduce a shift in the new sample rate to maintain phase with the old. That also obviously changes the frequency response of that particular resampling process. And your formula doesn't take that into account either.

-jim
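jim's 2x example, sketched in numpy (my illustration, using the piecewise linear spline): the even outputs come from the identity filter [1] and the odd outputs from [1 1]/2 - two different polyphase FIR filters, so no single "T" describes the whole operation.

    import numpy as np

    x = np.sin(2 * np.pi * 0.05 * np.arange(16))   # original samples
    even = x                                        # phase 0: on-sample, filter [1]
    odd = (x[:-1] + x[1:]) / 2                      # phase 1: halfway, filter [1 1]/2

    y = np.empty(2 * len(x) - 1)
    y[0::2] = even                                  # interleave the two phases
    y[1::2] = odd                                   # y is x upsampled by 2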
>snipped<
Reply by robert bristow-johnson April 29, 2005
in article 42718191_1@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
04/28/2005 20:35:

> robert bristow-johnson wrote:
>
>>> Take for example the simplest spline, where you just construct a
>>> straight line from each point to the next.
>>
>> otherwise called "linear interpolation". i would not call this the
>> simplest spline, i would call "zero-order hold" or "drop-sample"
>> interpolation the simplest spline. using your language, that would be
>> like discrete convolving with [1].
>
> Well, I wouldn't call a set of unconnected points a spline.
so they have discontinuities in the 0th derivative. piecewise linear interpolation has discontinuities in the 1st derivative. piecewise quadratic has discontinuities in the 2nd derivative. etc.
>>> If I sample this curve half
>>> way between 2 points the result is just the average of the 2 points. If
>>> I move ahead one sample width and take another sample I again get the
>>> average. If I sample the whole curve in this manner it's just as if I
>>> filtered the whole set of points with the FIR filter [1 1]/2. The
>>> frequency response of that filter is going to be periodic.
>>
>> sure. you are implementing a sorta fractional sample fixed-delay (which
>> is an application of interpolation). that is a purely LTI operator, so
>> what you have is a filter with phase or group delay of 1/2 sample and a
>> cos(pi*f*T) magnitude response (unnormalized f, so T is the sampling
>> period and 1/(2T) is the Nyquist freq). that will be the Dirichlet kernel
>> with N=2.
>
> The delay was so that the problem wasn't completely trivial - I could
> have sampled the spline at the original sample points.
fine. but it still is not as general as sampling the spline at *any* uniformly spaced points.
>>> This is a complete, albeit in its simplest form, description of the
>>> actual process of sampling a spline.
>>
>> not from my POV. you are fixing your sample times to be at this fixed
>> delay of 1/2 sample and what you essentially have is a filter. there are
>> no issues of SRC noise or error from resampling that happens in the real
>> process of
my brain obviously was turned off. i meant to end the statement with the words "sample rate conversion".
> The original point I made, which you took issue with, was that if you
> sample a spline at 1 sample per control point you always have what is
> equivalent to a simple convolution operation. And it's a convolution that
> is invertible so you can do the inverse and run a spline through a set
> of samples by creating one new control point for each original sample.
i know that's what you're saying, and i guess that given a sequence of sampled data: { ... x[-1], x[0], x[1], x[2], ... } there is another set of data called "control points" { ... v[-1], v[0], v[1], v[2], ... } such that if you use the control points as weights on the corresponding B-spline, the summed result will go through the sampled data points x[0], x[1], x[2].
> Now you keep talking about sampling the spline at some other sample
> rate what exactly does that have to do with the price of tea in China.
here in China, that's what we do when we interpolate uniformly spaced samples. precision delay is fine. that is a nice LTI filter that has the effect of delaying by something other than an integer number of samples. what i think about when i use any kind of interpolation between samples is resampling the interpolated function at *any* time.
>>> Where exactly do these continuous identical
>>> rectangular pulses you're talking about appear in this process?
>>
>> i am generalizing a little more (so as to include the application of
>> resampling or sample-rate conversion). i had been talking about defining
>> the interior space between samples for all t, not just t = (n+0.5)*T.
>> here i am using the language of that sampling/reconstruction treatise i
>> periodically post to comp.dsp:
>
> Yes, but you claimed frequency response which is a function of the
> sample spacing. I gave a counter example where your claimed frequency
> response doesn't work. Why didn't it work?
i tried looking back at my earlier posts. it's gonna take a bit to try to adapt our language to be talking about the same thing. see below ...
>> H[N](f) = ( sinc(f*T) )^(N+1)
>>
>> and the Nth order B-spline interpolation function, h[N](t) will be the
>> convolution of h[N-1](t) with h[0](t).
>
> It seems to me that the problem with this is that you didn't start with
> a kernel frequency response that was a sinc. You start with a very very
> very loose approximation.
ya, drop-sample or even linear interpolation is not a very good approximation to the true sinc() interpolation. but if you *do* use a real sinc-like interpolation (could be windowed sinc() or something that the Parks-McClellan algorithm dreams up) to "upsample" by a factor of, say, 512 (that means 512 different polyphases or fractional delays), *then* you can use linear interpolation on the remaining precision (assuming your fractional delay was not a multiple of 1/512th of a sample) and you're still golden. again, if it's drop-sample, you need to have 512K polyphases (more memory, less computation).
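A minimal sketch of that two-stage scheme (mine, not r b-j's code; the Hann-style window and the choices of M and L are just the numbers mentioned above): a bank of 512 windowed-sinc polyphases, with linear interpolation between the two nearest phases covering the leftover fraction of a sample.

    import numpy as np

    M, L = 512, 16                      # number of polyphases, kernel half-width

    def taps(a):
        # windowed-sinc taps for a fractional delay a, 0 <= a < 1
        d = a - np.arange(-L, L + 1)                          # distance to each tap
        return np.sinc(d) * (0.5 + 0.5 * np.cos(np.pi * d / (L + 1)))

    def frac_delay(x, n, a):
        # interpolate x at time n + a; assumes L <= n < len(x) - L
        p = a * M
        i = int(np.floor(p))            # nearest polyphase below
        frac = p - i                    # leftover fraction of 1/M sample
        k = np.arange(-L, L + 1)
        y0 = np.dot(x[n + k], taps(i / M))
        y1 = np.dot(x[n + k], taps((i + 1) / M))
        return y0 + frac * (y1 - y0)    # linear interp between adjacent phases

In a real implementation the taps of each of the M phases would be precomputed, which is the whole point of the polyphase bank.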
>> as N gets larger, h[N](t) looks more
>> like a gaussian pulse,
>
> Ok that sounds reasonable if you mean a periodic Gaussian pulse,
NO! just a regular Gaussian pulse or whatever you get when you convolve N non-repeating rectangular pulses together.
> but if
> you multiply a sinc by itself over and over you don't get something that
> starts to look like a Gaussian.
that's true. and the FT of a Gaussian is a Gaussian, so what's the deal? i think it's that the gaussian pulse that h[N](t) becomes when N gets large, that pulse gets wider and wider, so the FT of it might look like a Gaussian pulse that is getting narrower and narrower and the sinc^(N+1) might look like that.
>> but for Lagrange and Hermite polynomials, h[N](t)
>> will look more like 1/T * sinc(t/T), the ideal interpolating impulse
>> response.
>
> Well I don't see how you can express the frequency response purely as a
> function of the output sample spacing. If that would work then it should
> have worked for the counterexample I gave. To derive the frequency
> response of a resampling scheme you clearly need to take into account
> both the input and output sample spacing.
i am using a different model. i am first converting this uniformly sampled discrete-time sequence into a continuous-time function using a known interpolation function that can be modeled as an impulse response. the result has a non-repeating spectrum: the repeating spectrum of the ideally sampled original function (copies at integer multiples of the sampling rate) multiplied by the non-repeating frequency response (the F.T. of the impulse response that models the interpolation function). not periodic.

i dunno how we're gonna get our language to converge so that we can even know if we are agreeing or disagreeing.
> Also consider the delay in the example I gave - suppose the delay was
> only a quarter sample. Then sampling the spline would still be
> equivalent to an FIR filter but not the same filter and not the same
> frequency response. But yet it's still the same sample spacing.
then, that's simply a constant delay if it's still the same sample spacing (but a different fractional delay offset). that is an LTI filter. it has phase response and magnitude response, but no aliasing. what i'm talking about is something where the sample spacing has changed from the original sample spacing. that's what resampling, or sample-rate conversion, is. then one output sample is at one fractional sample offset (in relation to the input sample times), and the next output sample is at another fractional sample offset. then you get other problems (like images folding into the baseband).

--

r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."
Reply by jim April 28, 2005

robert bristow-johnson wrote:

>> Take for example the simplest spline, where you just construct a
>> straight line from each point to the next.
>
> otherwise called "linear interpolation". i would not call this the
> simplest spline, i would call "zero-order hold" or "drop-sample"
> interpolation the simplest spline. using your language, that would be
> like discrete convolving with [1].
Well, I wouldn't call a set of unconnected points a spline.
>> If I sample this curve half
>> way between 2 points the result is just the average of the 2 points. If
>> I move ahead one sample width and take another sample I again get the
>> average. If I sample the whole curve in this manner it's just as if I
>> filtered the whole set of points with the FIR filter [1 1]/2. The
>> frequency response of that filter is going to be periodic.
>
> sure. you are implementing a sorta fractional sample fixed-delay (which
> is an application of interpolation). that is a purely LTI operator, so
> what you have is a filter with phase or group delay of 1/2 sample and a
> cos(pi*f*T) magnitude response (unnormalized f, so T is the sampling
> period and 1/(2T) is the Nyquist freq). that will be the Dirichlet kernel
> with N=2.
The delay was so that the problem wasn't completely trivial - I could have sampled the spline at the original sample points.
>> This is a complete, albeit in its simplest form, description of the
>> actual process of sampling a spline.
>
> not from my POV. you are fixing your sample times to be at this fixed
> delay of 1/2 sample and what you essentially have is a filter. there are
> no issues of SRC noise or error from resampling that happens in the real
> process of
The original point I made, which you took issue with, was that if you sample a spline at 1 sample per control point you always have what is equivalent to a simple convolution operation. And it's a convolution that is invertible, so you can do the inverse and run a spline through a set of samples by creating one new control point for each original sample. Now you keep talking about sampling the spline at some other sample rate - what exactly does that have to do with the price of tea in China?
>> Where exactly do these continuous identical
>> rectangular pulses you're talking about appear in this process?
>
> i am generalizing a little more (so as to include the application of
> resampling or sample-rate conversion). i had been talking about defining
> the interior space between samples for all t, not just t = (n+0.5)*T.
> here i am using the language of that sampling/reconstruction treatise i
> periodically post to comp.dsp:
Yes, but you claimed frequency response which is a function of the sample spacing. I gave a counter example where your claimed frequency response doesn't work. Why didn't it work?
>snipped<
> H[N](f) = ( sinc(f*T) )^(N+1)
>
> and the Nth order B-spline interpolation function, h[N](t) will be the
> convolution of h[N-1](t) with h[0](t).
It seems to me that the problem with this is that you didn't start with a kernel frequency response that was a sinc. You start with a very very very loose approximation.
> as N gets larger, h[N](t) looks more
> like a gaussian pulse,
Ok that sounds reasonable if you mean a periodic Gaussian pulse, but if you multiply a sinc by itself over and over you don't get something that starts to look like a Gaussian.
> but for Lagrange and Hermite polynomials, h[N](t)
> will look more like 1/T * sinc(t/T), the ideal interpolating impulse
> response.
Well I don't see how you can express the frequency response purely as a function of the output sample spacing. If that would work then it should have worked for the counterexample I gave. To derive the frequency response of a resampling scheme you clearly need to take into account both the input and output sample spacing.

Also consider the delay in the example I gave - suppose the delay was only a quarter sample. Then sampling the spline would still be equivalent to an FIR filter, but not the same filter and not the same frequency response. But yet it's still the same sample spacing.

-jim
>snipped<
Reply by robert bristow-johnson April 28, 2005
in article 4271117f$1_2@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
04/28/2005 12:37:

> robert bristow-johnson wrote:
...
>> but you know that the F.T. of a single, rectangular pulse in
>> continuous-time, with width of T (the sample time) is sinc(f*T), right?
>> that's the Zero-Order-Hold or drop-sample interpolation. and each time
>> you convolve with another identical rectangular pulse, you multiply by
>> another sinc(f*T) function, right? so an (N-1)th order piecewise
>> polynomial interpolation constructed this way will have a spectrum of
>> sinc^N(f*T), right?
>
> Yes, I understand what you are claiming happens in the frequency domain.
> What I haven't seen is any actual description of the physical operation
> performed in the time domain that would produce this sort of frequency
> response.
actually, i think we both understand what is happening, but from different POVs. it's still the same outcome, if the same conditions are applied.
> Take for example the simplest spline, where you just construct a
> straight line from each point to the next.
otherwise called "linear interpolation". i would not call this the simplest spline, i would call "zero-order hold" or "drop-sample" interpolation the simplest spline. using your language, that would be like discrete convolving with [1].
> If I sample this curve half
> way between 2 points the result is just the average of the 2 points. If
> I move ahead one sample width and take another sample I again get the
> average. If I sample the whole curve in this manner it's just as if I
> filtered the whole set of points with the FIR filter [1 1]/2. The
> frequency response of that filter is going to be periodic.
sure. you are implementing a sorta fractional sample fixed-delay (which is an application of interpolation). that is a purely LTI operator, so what you have is a filter with phase or group delay of 1/2 sample and a cos(pi*f*T) magnitude response (unnormalized f, so T is the sampling period and 1/(2T) is the Nyquist freq). that will be the Dirichlet kernel with N=2.
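That claim is easy to verify numerically (a sketch of mine, assuming scipy): the magnitude response of [1 1]/2 is cos(pi*f*T) from DC up to Nyquist.

    import numpy as np
    from scipy.signal import freqz

    w, h = freqz([0.5, 0.5], worN=512)     # w in radians/sample, from 0 toward pi
    f = w / (2 * np.pi)                    # cycles/sample; Nyquist is f = 0.5
    print(np.allclose(np.abs(h), np.cos(np.pi * f)))   # True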
> This is a complete, albeit in its simplest form, description of the
> actual process of sampling a spline.
not from my POV. you are fixing your sample times to be at this fixed delay of 1/2 sample and what you essentially have is a filter. there are no issues of SRC noise or error from resampling that happens in the real process of
> Where exactly do these continuous identical
> rectangular pulses you're talking about appear in this process?
i am generalizing a little more (so as to include the application of resampling or sample-rate conversion). i had been talking about defining the interior space between samples for all t, not just t = (n+0.5)*T. here i am using the language of that sampling/reconstruction treatise i periodically post to comp.dsp:

http://groups-beta.google.com/group/comp.dsp/msg/f9e0ad7b430bc653?hl=en

(view both with a fixed-width font.)

let

    d(t)  =def  "dirac delta function"
    T     =def  "sampling period"
    x[k]  =def  x(k*T)
    Fs    =def  1/T

the sampling function:

               +inf
    q(t) = T * SUM{ d(t - k*T) }
              k=-inf

            +inf
         =  SUM{ exp(j*2*pi*n*Fs*t) }
           n=-inf

we know that when you filter the ideal sampled signal

                    +inf
    x(t)*q(t) = T * SUM{ x[k] * d(t - k*T) }
                   k=-inf

                 +inf
              =  SUM{ x(t) * exp(j*2*pi*n*Fs*t) }
                n=-inf

with an ideal brickwall filter (with cutoff at Nyquist)

           {  1   for  |f| < Fs/2 = 1/(2*T)
    H(f) = {
           {  0   for  |f| > Fs/2 = 1/(2*T)

    h(t) = (1/T)*sinc(t/T)

that fully recovers x(t) (assuming it was sufficiently bandlimited before it was sampled), so the ideal interpolation function for getting x(t) from x[n] is

               +inf
    x(t) = T * SUM{ x[k] * h(t - k*T) }
              k=-inf

            +inf
         =  SUM{ x[k] * sinc((t - k*T)/T) }
           k=-inf

            +inf
         =  SUM{ x[m+k] * sinc((t/T - m) - k) }
           k=-inf

where m is any integer; we might conveniently choose it to be m = floor(t/T). using that ideal sinc function utterly wipes out all images, and leaves the original spectrum untouched.

so when we resample x(t) at another synchronous or asynchronous sample rate (let's say a higher rate so that we don't have to argue about aliasing), the resampling will be perfect because no images are folded back into the baseband in the resampling process. we also know that the ideally sampled signal above has the spectrum

                         +inf
    FT{ x(t)*q(t) }  =  SUM{ X(f - n*Fs) }
                        n=-inf

where

                                +inf
    X(f) =def FT{ x(t) } = integral{ x(t)*exp(-j*2*pi*f*t) dt}
                                -inf

now, we can't use the ideal sinc() for resampling, for two reasons:

1. it goes on forever. we can't sum an infinite sum. gonna have to truncate (or window) it somewhere.

2. suppose we window it:

                      +L
       x((m+a)*T) =  SUM{ x[m+k] * sinc(a-k) * w((a-k)/L) }
                    k=-L+1

   where

       t = (m + a)*T
       m = floor(t/T)
       a = fract(t/T) = t/T - floor(t/T),  0 <= a < 1

   and w(t) is a window function of non-zero width of 2L. L should be an integer, and i found L=16 to be good enough, but the AD1890 uses L=32 (a 64 tap polyphase FIR).

even if we window it so the summation is finite, if we are going to interpolate at any arbitrary fractional time, "a", we would still need an infinite number of sets of coefficients (each a vector of length 2L) to define

    g(a-k) = sinc(a-k)*w((a-k)/L)

for all of the infinite number of values of "a" that are 0 <= a < 1. so, somehow, we have to approximate

    g(a) = sinc(a)*w(a/L)

with a function that can be evaluated with a computer for any value of "a" within -L < a <= L. in general, these will turn out to be piecewise finite order polynomials. the simplest crude approximation is zero-order hold or "drop-sample" interpolation. for symmetry's sake, let's say we "round" to the nearest sample. that would be approximating g(a) with

    g(a) ~= p0(a)

where

            { 1   -1/2 <= a < 1/2
    p0(a) = {
            { 0   otherwise

that means we are convolving

                    +inf
    x(t)*q(t) = T * SUM{ x[k] * d(t - k*T) }
                   k=-inf

with

    h[0](t) = 1/T * p0(t/T)

getting us

                +inf
    x(t) ~= T * SUM{ x[k] * h[0](t - k*T) }
               k=-inf

and the spectrum

                         +inf
    FT{ x(t)*q(t) }  =  SUM{ X(f - n*Fs) }
                        n=-inf

is getting multiplied by

    H[0](f) = sinc(f*T)

and then we are resampling that with a sampling frequency that is unrelated to the original Fs. then the images of H[0](f)*FT{x(t)*q(t)}, all of them for n not 0, get reduced by the notches in the sinc() function of H[0](f). all of those reduced images can potentially be folded into the baseband with an arbitrary new Fs. i can show that for a 120 dB S/N ratio, the original x(t) would have to be oversampled by a factor of 512K so that all of those images are narrow enough to be killed enough so that their energy is 120 dB less than that of x(t).

the next simplest is "linear interpolation". now we are approximating

    g(a) ~= p1(a)

where

            { 1-|a|   |a| < 1
    p1(a) = {
            { 0       otherwise

now the ideally sampled continuous-time function

                    +inf
    x(t)*q(t) = T * SUM{ x[k] * d(t - k*T) }
                   k=-inf

is getting convolved with

    h[1](t) = 1/T * p1(t/T)

getting us

                +inf
    x(t) ~= T * SUM{ x[k] * h[1](t - k*T) }
               k=-inf

and the spectrum

                         +inf
    FT{ x(t)*q(t) }  =  SUM{ X(f - n*Fs) }
                        n=-inf

is getting multiplied by

    H[1](f) = ( sinc(f*T) )^2 .

so now we have deeper and wider notches killing those images of x(t) where n is not 0. so those images are reduced further. this time, for 120 dB S/N in the resampling process, all you need to do is first upsample by 512 (not 512K). at this point, for 0th order and 1st order polynomial interpolation, Lagrange, Hermite, and B-spline all agree. but for order N=2 and greater, they begin to differ.
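A small numeric check (mine, assuming numpy) of the 512K-versus-512 claim above: evaluate the interpolator's sinc^N response at the near edge of the first image when the input has been oversampled by a factor R, so the band edge sits at 1/(2R) of the original Fs.

    import numpy as np

    def first_image_gain_db(R, N):
        f = 1.0 - 1.0 / (2.0 * R)       # worst-case image frequency, in units of Fs
        return 20 * N * np.log10(abs(np.sinc(f)))   # sinc^N response there

    print(first_image_gain_db(512 * 1024, 1))   # drop-sample (sinc^1): about -120 dB
    print(first_image_gain_db(512, 2))          # linear interp (sinc^2): about -120 dB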
as the order, N, gets larger, Lagrange and Hermite will look more like the ideal sinc() function for ideal interpolation (Hermite will be smoother since it tries to have as many continuous derivatives as possible), but the Nth order B-spline will have a frequency response of

    H[N](f) = ( sinc(f*T) )^(N+1)

and the Nth order B-spline interpolation function, h[N](t), will be the convolution of h[N-1](t) with h[0](t). as N gets larger, h[N](t) looks more like a gaussian pulse, but for Lagrange and Hermite polynomials, h[N](t) will look more like 1/T * sinc(t/T), the ideal interpolating impulse response.

now, the power spectrum of *any* Nth order piecewise polynomial interpolation function

    ( H[N](f) )^2

will have terms that are sinc^2, sinc^4, sinc^6, ... sinc^(2N). but for the Nth order B-spline, all of those terms will be zero except for sinc^(2N). the power spectrum of the other polynomial interpolators will have lower powers of sinc^2 that will make those notches less deep or wide and less effective in killing the images before resampling. Lagrange and Hermite look much more like the ideal sinc() interpolator and might not need any "pre-filtering" in the baseband to make up for the severe LPF that B-splines do, but they don't wipe out the images (which become folded and noise in the baseband) as well as the B-spline does.

does that answer your question, Jim? if you (or anyone else) sends me a despammified edress, i'll return a copy of the paper that Duane Wise and i did about this, but i think that Olli Niemitalo did a better job:

http://www.biochem.oulu.fi/~oniemita/dsp/deip.pdf

or http://ldesoras.free.fr/doc/articles/resampler.pdf

--

r b-j  rbj@audioimagination.com

"Imagination is more important than knowledge."
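And the H[N](f) = sinc^(N+1) claim can be checked directly (a sketch of mine, assuming numpy): build the cubic B-spline kernel h[3] by convolving four rects and compare its spectrum against sinc^4.

    import numpy as np

    M = 64                                  # samples per sample period T = 1
    rect = np.ones(M) / M                   # unit-area rect of width T
    h3 = rect
    for _ in range(3):
        h3 = np.convolve(h3, rect)          # h[3]: four rects convolved together

    n = 1 << 16
    H = np.abs(np.fft.rfft(h3, n))          # densely sampled spectrum of h[3]
    f = np.fft.rfftfreq(n, d=1.0 / M)       # frequency in cycles per T
    print(np.allclose(H, np.sinc(f) ** 4, atol=1e-3))   # True, up to discretization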
Reply by jim April 28, 2005

robert bristow-johnson wrote:
> in article 4270cfc1$1_2@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote
> on 04/28/2005 07:56:
>
>> Your references to spectrum and images and sincs is so vague as to be
>> meaningless to me.
>
> not surprised. but you know that the F.T. of a single, rectangular pulse
> in continuous-time, with width of T (the sample time) is sinc(f*T), right?
> that's the Zero-Order-Hold or drop-sample interpolation. and each time you
> convolve with another identical rectangular pulse, you multiply by another
> sinc(f*T) function, right? so an (N-1)th order piecewise polynomial
> interpolation constructed this way will have a spectrum of sinc^N(f*T),
> right?
Yes, I understand what you are claiming happens in the frequency domain. What I haven't seen is any actual description of the physical operation performed in the time domain that would produce this sort of frequency response.

Take for example the simplest spline, where you just construct a straight line from each point to the next. If I sample this curve half way between 2 points the result is just the average of the 2 points. If I move ahead one sample width and take another sample I again get the average. If I sample the whole curve in this manner it's just as if I filtered the whole set of points with the FIR filter [1 1]/2. The frequency response of that filter is going to be periodic. This is a complete, albeit in its simplest form, description of the actual process of sampling a spline. Where exactly do these continuous identical rectangular pulses you're talking about appear in this process?

-jim
> so the notches in the sinc^N function (which are at non-zero multiples of
> 1/T) get deeper and wider with increasing N, right? and those notches will
> kill the ideal sampling images of the original spectrum before resampling,
> right? any other piecewise (N-1)th order interpolation polynomial will
> have sinc^N, but it will also have lower orders of sinc^n, therefore the
> notches won't be quite as deep as the pure sinc^N(f*T), right? so the
> "B-spline" as we have used the term is sorta optimal in this way. now do
> you get it?
>
>> If you construct a b-spline in the manner you
>> describe (using your samples as the control points) and then sample the
>> spline at the same sample rate (1:1 resample), each new sample point is
>> merely a weighted sum of a set of control points in the neighborhood of
>> that sample.
>
> yup. the language i use is that the input gets heavily filtered. all of
> the weighting coefficients are positive.
>
>> I.E. its just a simple convolution operation, and the
>> frequency response (and its inverse) is calculated as any other FIR
>> filters response is calculated.
>
> not that easy, but somewhat doable with remez() or firls(). besides,
> originally we were talking about IIR and *you* changed this to FIR.
>
>> If the spline is sampled at some other rate (a concept which you
>> introduced into the discussion)
>
> only to tell you what i thought "B-splines" were about, which i didn't
> bring into the discussion.
>
>> then the frequency response of that
>> operation is calculated based on the new and old sample rate in a way
>> not too different from a polyphase resampling scheme.
>
> it *is* about polyphase resampling, but with one difference from the
> polyphase we normally talk about. using linear or some other polynomial
> interpolation, we can resample at *any* rate. we are not limited to a
> finite set of inter-sample sample-times or "delays". if we're doing SRC
> at an arbitrary conversion rate, we can use traditional polyphase to
> optimally "upsample" by some large integer, M. then the images will be
> narrow little copies at integer multiples of M/T, then we use some
> function that we can define with infinite precision (like piecewise
> linear) to get to whatever precision of time in between samples that you
> want. and, because of the notches of the sinc(f*(T/M)) function, those
> little narrow images will be killed by the notches.
>
> --
>
> r b-j  rbj@audioimagination.com
>
> "Imagination is more important than knowledge."