> Ron N. wrote:
> > robert bristow-johnson wrote:
> > > dspunrelated wrote:
> > > > Has anyone tried to implement the polynomial-based interpolator proposed
> > > > by Vesma Jussi?
> > >
> > > after Googling and looking at the papers i could get for free online, i
> > > don't see what particular new idea Jussi et al. are offering. the
> > > concept of interpolating using a variety of fitted polynomials is not
> > > new (i personally like Hermite, but B-spline might attenuate the images
> > > even better). and interpolating over-sampled input using polynomials
> > > (this is what you do if you want to use Parks-McClellan or similar to
> > > optimally design your interpolation LPF, but are left with a bunch of
> > > finite phases or fractional sample delays).
> >
> > My impression was that this interpolator was based on polynomial
> > interpolation of segments of your Parks-McClellan kernel (instead
> > of simple table-look-up, or a single-difference table-look-up
> > approximation).
>
> but that is precisely what Duane and I and Olli were doing. what are
> these "segments" of the interpolation kernel, if not a table lookup?
> how is that different?
I wasn't sure from your prior post whether you were interpolating
the input data or the filter kernel, and/or whether you were
building an interpolation polynomial to replace almost all of the
table entries, or just for between table entries. One Farrow
kernel interpolation version looked like it used one polynomial
per windowed sinc lobe, which means only one polynomial per tap
for some types of resampling FIR filters. I suppose you could
call one polynomial (or one constant) a one-entry table, but you
wouldn't need any table lookups in a per-tap parallel hardware
implementation if you use one polynomial per sinc lobe (or the
entire range of phase between two taps for other types of
kernels).
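For what it's worth, the one-polynomial-per-lobe idea is easy to sketch (Python; the Hann-windowed sinc below, the tap count, and the polynomial order are illustrative assumptions, not anyone's actual kernel):

```python
import numpy as np

TAPS = 8  # assumption: an 8-tap kernel, one lobe/segment per tap

def windowed_sinc(x):
    # Hann-windowed sinc, supported on [-TAPS/2, TAPS/2]
    w = np.where(np.abs(x) < TAPS / 2,
                 0.5 + 0.5 * np.cos(2 * np.pi * x / TAPS), 0.0)
    return np.sinc(x) * w

def fit_lobe_polys(order=7, grid=64):
    # fit one polynomial per unit segment (per tap) of the kernel
    polys = {}
    for k in range(-TAPS // 2, TAPS // 2):
        x = np.linspace(k, k + 1, grid)
        polys[k] = np.polyfit(x - k, windowed_sinc(x), order)
    return polys

def kernel_poly(polys, x):
    # evaluate the piecewise-polynomial kernel at any real x:
    # no table lookup beyond picking which segment x falls in
    k = int(np.floor(x))
    if k not in polys:
        return 0.0
    return np.polyval(polys[k], x - k)

polys = fit_lobe_polys()
# worst-case deviation of the piecewise fit from the true kernel
err = max(abs(kernel_poly(polys, x) - windowed_sinc(x))
          for x in np.linspace(-3.99, 3.99, 401))
```

In a per-tap parallel structure, each tap would just evaluate its own polynomial at the current fractional phase.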
> still don't see what the big deal is.
No big deal. They all seem to be slightly different optimizations
of just a few basic ideas.
IMHO. YMMV.
--
rhn A.T nicholson d.0.t C-o-M
Reply by robert bristow-johnson ● November 2, 2006
Ron N. wrote:
> robert bristow-johnson wrote:
> > dspunrelated wrote:
> > > Has anyone tried to implement the polynomial-based interpolator proposed
> > > by Vesma Jussi?
> >
> > after Googling and looking at the papers i could get for free online, i
> > don't see what particular new idea Jussi et al. are offering. the
> > concept of interpolating using a variety of fitted polynomials is not
> > new (i personally like Hermite, but B-spline might attenuate the images
> > even better). and interpolating over-sampled input using polynomials
> > (this is what you do if you want to use Parks-McClellan or similar to
> > optimally design your interpolation LPF, but are left with a bunch of
> > finite phases or fractional sample delays).
>
> My impression was that this interpolator was based on polynomial
> interpolation of segments of your Parks-McClellan kernel (instead
> of simple table-look-up, or a single-difference table-look-up
> approximation).
but that is precisely what Duane and I and Olli were doing. what are
these "segments" of the interpolation kernel, if not a table lookup?
how is that different?
> So there will be no finite phases or fractional sample delays.
you mean "finite [precision] phases or fractional sample delays",
right? if so, again, how is that different? (i.e. *sure* there are
finite precision phases or fractional sample delays if there are
segments, and it becomes infinite precision when we use polynomials or
some other continuously evaluated function to interpolate within a
segment.)
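A sketch of what interpolating within a segment of a tabulated kernel buys (Python; the Hann-windowed sinc stands in for an optimally designed kernel, and the table resolution is arbitrary). Even a first-order polynomial between adjacent table entries gives a machine-precision phase argument and much lower error than snapping to the nearest entry:

```python
import numpy as np

TAPS = 8
PER_UNIT = 32  # table resolution: entries per unit of kernel argument

def kernel(x):
    # Hann-windowed sinc (an illustrative stand-in for a
    # Parks-McClellan-designed interpolation filter)
    w = np.where(np.abs(x) < TAPS / 2,
                 0.5 + 0.5 * np.cos(2 * np.pi * x / TAPS), 0.0)
    return np.sinc(x) * w

x_tab = np.arange(-TAPS // 2 * PER_UNIT, TAPS // 2 * PER_UNIT + 1) / PER_UNIT
table = kernel(x_tab)

def lookup_nearest(x):
    # quantize the phase to the nearest stored table entry
    i = int(round((x + TAPS / 2) * PER_UNIT))
    return table[min(max(i, 0), len(table) - 1)]

def lookup_lerp(x):
    # first-order polynomial between adjacent entries: any
    # machine-representable fractional delay, one extra multiply
    t = (x + TAPS / 2) * PER_UNIT
    i = min(max(int(np.floor(t)), 0), len(table) - 2)
    f = t - i
    return (1 - f) * table[i] + f * table[i + 1]

xs = np.linspace(-3.99, 3.99, 1001)
err_near = max(abs(lookup_nearest(x) - kernel(x)) for x in xs)
err_lerp = max(abs(lookup_lerp(x) - kernel(x)) for x in xs)
```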
still don't see what the big deal is.
r b-j
Reply by Ron N. ● November 1, 2006
robert bristow-johnson wrote:
> dspunrelated wrote:
> > Has anyone tried to implement the polynomial-based interpolator proposed
> > by Vesma Jussi?
>
> after Googling and looking at the papers i could get for free online, i
> don't see what particular new idea Jussi et al. are offering. the
> concept of interpolating using a variety of fitted polynomials is not
> new (i personally like Hermite, but B-spline might attenuate the images
> even better). and interpolating over-sampled input using polynomials
> (this is what you do if you want to use Parks-McClellan or similar to
> optimally design your interpolation LPF, but are left with a bunch of
> finite phases or fractional sample delays).
My impression was that this interpolator was based on polynomial
interpolation of segments of your Parks-McClellan kernel (instead
of simple table-look-up, or a single-difference table-look-up
approximation). So there will be no finite phases or fractional
sample delays.
But I could have misread the citation.
IMHO. YMMV.
--
rhn A.T nicholson d.0.t C-o-M
Reply by robert bristow-johnson ● November 1, 2006
dspunrelated wrote:
> Has anyone tried to implement the polynomial-based interpolator proposed
> by Vesma Jussi?
after Googling and looking at the papers i could get for free online, i
don't see what particular new idea Jussi et al. are offering. the
concept of interpolating using a variety of fitted polynomials is not
new (i personally like Hermite, but B-spline might attenuate the images
even better). and interpolating over-sampled input using polynomials
(this is what you do if you want to use Parks-McClellan or similar to
optimally design your interpolation LPF, but are left with a bunch of
finite phases or fractional sample delays). Duane Wise and i did a
paper (following Zölzer, and followed by Olli Niemitalo,
http://yehar.com/dsp/deip.pdf, who did a better and more complete job).
there's not a lot more math left that you can throw at the
problem.
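For reference, the 4-point, 3rd-order Hermite interpolator, one of the forms covered in that class of papers, can be written in a few lines (standard Catmull-Rom form; `frac` is the fractional position between the two middle samples):

```python
def hermite4(frac, xm1, x0, x1, x2):
    # 4-point, 3rd-order Hermite (Catmull-Rom) interpolation between
    # samples x0 and x1; frac in [0, 1) is the fractional position
    c0 = x0
    c1 = 0.5 * (x1 - xm1)
    c2 = xm1 - 2.5 * x0 + 2.0 * x1 - 0.5 * x2
    c3 = 0.5 * (x2 - xm1) + 1.5 * (x0 - x1)
    # Horner evaluation of the cubic
    return ((c3 * frac + c2) * frac + c1) * frac + c0
```

It passes exactly through the samples at frac = 0 and frac = 1, and reproduces quadratic data exactly.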
r b-j
Reply by dspunrelated ● November 1, 2006
Has anyone tried to implement the polynomial-based interpolator proposed
by Vesma Jussi? I tried to solve the optimization problem using fminsearch
in MATLAB, but it wasn't successful. Please advise. Thank you.
Reply by John Herman ● October 14, 2006
If you are interpolating short sequences, you might want to investigate
perfect interpolation. This is implemented via a forward FFT into the
frequency domain, inserting zeros into the middle of the FFT output
(around the Nyquist bin), and then performing an inverse FFT. This
results in no change in the frequency-domain characteristics, though
getting the scaling right can cause problems. Parseval is your friend
in this case.
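The zero-padding and the scaling can be sketched like this (Python/numpy; the Nyquist-bin split for even-length inputs and the final rescale are the two details that usually go wrong):

```python
import numpy as np

def fft_interpolate(x, factor):
    # exact bandlimited interpolation of a periodic sequence:
    # FFT, insert zeros between the positive- and negative-frequency
    # halves of the spectrum, inverse FFT, rescale
    n = len(x)
    X = np.fft.fft(x)
    m = n * factor
    Y = np.zeros(m, dtype=complex)
    half = (n + 1) // 2              # non-negative-frequency bins
    Y[:half] = X[:half]
    Y[m - (n - half):] = X[half:]    # negative-frequency bins
    if n % 2 == 0:                   # even n: split the Nyquist bin
        Y[n // 2] = X[n // 2] / 2
        Y[m - n // 2] = X[n // 2] / 2
    # scale by factor so time-domain amplitudes are preserved
    return np.fft.ifft(Y).real * factor

x = np.cos(2 * np.pi * np.arange(8) / 8)   # one cycle in 8 samples
y = fft_interpolate(x, 4)                  # 32-point interpolation
```

For a tone below Nyquist, the interpolated sequence lands exactly on the dense sampling of the same tone.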
The other thing I was thinking is that if you are interpolating by a multiple
that can be factored into the product of small integers, interpolating by
those small factors, probably smallest first, will give a better result
with fewer FLOPs. It's sort of the reverse of decimation.
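A minimal two-stage sketch of that idea (Python/numpy; the Hann-windowed-sinc stage filters and tap counts are illustrative, not optimized): upsample by 6 as a cascade of 2 and then 3, each stage zero-stuffing and low-pass filtering at the new rate:

```python
import numpy as np

def upsample_stage(x, L, taps=49):
    # zero-stuff by L, then low-pass with a Hann-windowed sinc whose
    # cutoff is the old Nyquist (h = sinc(n/L) has passband gain L,
    # which restores the amplitude lost to zero-stuffing)
    y = np.zeros(len(x) * L)
    y[::L] = x
    n = np.arange(taps) - (taps - 1) / 2     # odd taps: integer center
    h = np.sinc(n / L) * np.hanning(taps)
    return np.convolve(y, h, mode="same")

t = np.arange(256)
x = np.cos(2 * np.pi * 0.02 * t)             # tone well below Nyquist
y = upsample_stage(upsample_stage(x, 2), 3)  # 6x as 2 then 3
ref = np.cos(2 * np.pi * 0.02 * np.arange(len(y)) / 6)
# interiors agree closely; the edges suffer filter transients
err = np.max(np.abs(y[200:-200] - ref[200:-200]))
```

Each stage's filter only has to reject images around multiples of that stage's input rate, which is what lets the per-stage filters stay short.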
In article <1160606443.029373.8350@m73g2000cwd.googlegroups.com>, "Ron N."
<rhnlogic@yahoo.com> wrote:
>On Oct 9, 7:57 am, "Mark" <makol...@yahoo.com> wrote:
>> I have always been confused about the relationship between this kind of
>> interpolation (linear, cubic, spline etc) and the kind used in DSP
>> audio work for up sampling. In audio up-sampling we just insert zero
>> value samples and then pass the result through a low pass filter to
>> remove the image frequencies.
>
>The DSP kind of interpolation usually assumes that the data is
>from a bandlimited signal (or "close enough"). Other polynomial
>types of interpolation usually have an error bounded by the
>existence, continuity, or peak magnitude of some number of
>derivatives of the function.
>
>How are these two different types of constraints related?
Reply by Eric Jacobsen ● October 12, 2006
On 12 Oct 2006 14:22:12 -0700, "robert bristow-johnson"
<rbj@audioimagination.com> wrote:
>
>Eric Jacobsen wrote:
>> On 11 Oct 2006 14:46:25 -0700, "robert bristow-johnson"
>> <rbj@audioimagination.com> wrote:
>>
>> >i kinda agree with Eric that polyphase interpolation (essentially
>> >designing an optimal LPF and keeping the impulse response in a table)
>> >is preferable to spline or polynomial interpolation for any bandlimited
>> >reconstruction but where i might agree with Viognier is because of the
>> >cost. you can't have a polyphase table that has all infinite possible
>> >fractional delays. but you *can* evaluate a polynomial for any
>> >fractional delay expressible in the machine arithmetic. but if you can
>> >afford some memory for the polyphase table, you can use a polynomial
>> >interpolation between entries of that table.
>>
>> As I mentioned elsewhere, in a practical system with fixed precision
>> there is a number of phases beyond which the coefficients won't change
>> when trying to increase the resolution. I looked at this years ago
>> when designing a polyphase for a comm receiver and trying to answer
>> the question of how many phases did we really need to not lose any
>> performance.
>>
>> In other words, for an example system that uses 8-bit precision for
>> the coefficients there will be some minimum phase change beyond which
>> the change in phase will result in coefficient changes that are below
>> the LSB in the current coefficients.
>
>i think i understand. i think we get to the same place. for me, we
>have these images of the original input (hypothetically samples
>attached to uniformly spaced dirac impulses). we need to beat down
>those impulses to some degree before resampling. the polynomial
>interpolation can be expressed as convolving with something (a
>hypothetical impulse response) that will look more like a windowed sinc
>as the polynomial order (and the number of adjacent samples used in the
>interpolation) gets larger. we beat them images down to some level (by
>knowing what the hypothetical impulse response is) and assume what
>remains of all the images will get folded into the baseband. that is
>the noise that, in your case, should be less than 1 or 1/2 LSB.
>
>> Surprisingly, I found that for our case that meant that with a very
>> practical number of phases we had essentially infinite phase
>> precision, or at least within the precision we were using one would
>> never tell the difference.
>
>resampling is one thing. non-perfect accuracy of the reconstruction
>coefficients for a fast moving phase is gonna look something like
>noise.
Yes, but it can be made to be noise that's at or below the
quantization level, or lower, depending.
>a precision delay (with a constant or slowly varying fractional-sample
>delay amount) is another thing. even with your 8 bit word width, a
>precise fractional delay that is roughly constant but slightly in error
>(because of the limited number of fixed fractional delays) will show
>itself not as noise, but as a small error in the delay or frequency
>response or transfer function.
A comm guy would think of that as jitter in the symbol recovery clock.
>i dunno. i used to think i had a consistent and comprehensive way to
>look at it so as to compare different methods. can't necessarily claim
>that now.
As far as conceptualizing the effects on the coefficients for the
timing jitter case, what I realized was that at some combination of
phase resolution and coefficient quantization, there's a point where
changing the desired sampling phase stops changing the coefficients.
In other words, for a given set of fixed-precision coefficients, there
is some number of phases above which the coefficient sets will just
start repeating themselves as you add more phases, since the
differences in the coefficient sets are less than the precision of the
coefficients.
You could design a polyphase system with N phases, with N a very large
number that gives you extremely fine precisions, and think that you
have reduced the effective jitter in the resampling process. But it
could be that, if you looked at the coefficient sets, adjacent
FIRs with different phases are identical because the coefficient
precision is less than that required to discern the difference in
phase.
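That coefficient-set repetition is easy to check numerically (Python; a Hann-windowed-sinc fractional-delay kernel stands in for an actual design, with 8-bit signed coefficients): count how many adjacent phase pairs round to identical coefficient sets at a coarse versus a very fine phase grid:

```python
import numpy as np

def quantized_phase_sets(phases, taps=8, bits=8):
    # fractional-delay coefficient sets from a Hann-windowed sinc,
    # rounded to signed `bits`-bit fixed point
    scale = 2 ** (bits - 1)
    n = np.arange(taps) - taps // 2 + 1      # tap offsets
    sets = []
    for p in range(phases):
        x = n - p / phases
        w = np.where(np.abs(x) < taps / 2,
                     0.5 + 0.5 * np.cos(2 * np.pi * x / taps), 0.0)
        h = np.sinc(x) * w
        sets.append(tuple(int(np.round(c * scale)) for c in h))
    return sets

def adjacent_duplicates(phases):
    s = quantized_phase_sets(phases)
    return sum(a == b for a, b in zip(s, s[1:]))

dup_coarse = adjacent_duplicates(64)    # big phase steps: sets differ
dup_fine = adjacent_duplicates(4096)    # tiny steps: mostly repeats
```

At 4096 phases, the vast majority of adjacent coefficient sets are bit-identical at 8-bit precision, i.e. the extra phases buy nothing.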
Whether a case like that really results in timing jitter or not is
arguable, since the coefficients are still correct in each case for
the precision used. In practice I used a reasonable N where the
coefficient sets did not duplicate and we had no measurable loss in
performance from theoretical.
So, one could also argue, perhaps, that there is no resampling jitter
in that case and you have, therefore, effectively infinite precision
in the time resolution of the resampling in a practical system with a
finite number of phases.
One could do some experiments to verify this, of course, and I've not
done them. ;)
Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
Reply by robert bristow-johnson ● October 12, 2006
Ron N. wrote:
> On Oct 12, 2:22 pm, "robert bristow-johnson"
> <r...@audioimagination.com> wrote:
...
> >
> > a precision delay (with a constant or slowly varying fractional-sample
> > delay amount) is another thing. even with your 8 bit word width, a
> > precise fractional delay that is roughly constant but slightly in error
> > (because of the limited number of fixed fractional delays) will show
> > itself not as noise, but as a small error in the delay or frequency
> > response or transfer function.
>
> Wouldn't using weighted random selections of the two nearest
> table entries exchange noise for any absolute long term phase
> error? Or perhaps something similar to fraction saving instead
> of weighted random dithering.
not a bad idea(s). one seems to be dithering the fractional time
signal (before quantizing to the nearest discrete fractional delay) and
the other is noise-shaping the same signal (with no error at DC). i
think it would turn a discrete, stable, but slightly erroneous
effective delay (similar to coefficient quantization) into the correct
delay, but with some jitter noise added to the signal. being jitter, if the
signal so delayed was a low frequency signal, the noise would be less
than if it was the same amplitude but higher frequency content.
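Both variants are only a few lines (Python; the table size and the constant 0.3-sample delay are arbitrary for illustration). The dithered version picks one of the two nearest phases at random with probability weights given by the fraction; the fraction-saving version carries each rounding error into the next sample, so the long-term (DC) phase error is zero:

```python
import random

def dithered_index(frac, phases, rng):
    # weighted random choice of the two nearest table phases:
    # unbiased on average, at the cost of added jitter noise
    t = frac * phases
    i = int(t)
    return (i + 1) % phases if rng.random() < t - i else i % phases

def fraction_saving_indices(fracs, phases):
    # first-order noise shaping ("fraction saving"): carry each
    # rounding error into the next sample, leaving no error at DC
    out, err = [], 0.0
    for f in fracs:
        t = f * phases + err
        i = int(round(t))
        err = t - i
        out.append(i % phases)
    return out

phases = 16
idx = fraction_saving_indices([0.3] * 1000, phases)
mean_idx = sum(idx) / len(idx)  # ~0.3 * 16 = 4.8, from integer indices
```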
r b-j
Reply by Ron N. ● October 12, 2006
On Oct 12, 2:22 pm, "robert bristow-johnson"
<r...@audioimagination.com> wrote:
> Eric Jacobsen wrote:
> > On 11 Oct 2006 14:46:25 -0700, "robert bristow-johnson"
> > <r...@audioimagination.com> wrote:
>
> > >i kinda agree with Eric that polyphase interpolation (essentially
> > >designing an optimal LPF and keeping the impulse response in a table)
> > >is preferable to spline or polynomial interpolation for any bandlimited
> > >reconstruction but where i might agree with Viognier is because of the
> > >cost. you can't have a polyphase table that has all infinite possible
> > >fractional delays. but you *can* evaluate a polynomial for any
> > >fractional delay expressible in the machine arithmetic. but if you can
> > >afford some memory for the polyphase table, you can use a polynomial
> > >interpolation between entries of that table.
>
> > As I mentioned elsewhere, in a practical system with fixed precision
> > there is a number of phases beyond which the coefficients won't change
> > when trying to increase the resolution. I looked at this years ago
> > when designing a polyphase for a comm receiver and trying to answer
> > the question of how many phases did we really need to not lose any
> > performance.
>
> > In other words, for an example system that uses 8-bit precision for
> > the coefficients there will be some minimum phase change beyond which
> > the change in phase will result in coefficient changes that are below
> > the LSB in the current coefficients.
> i think i understand. i think we get to the same place. for me, we
> have these images of the original input (hypothetically samples
> attached to uniformly spaced dirac impulses). we need to beat down
> those impulses to some degree before resampling. the polynomial
> interpolation can be expressed as convolving with something (a
> hypothetical impulse response) that will look more like a windowed sinc
> as the polynomial order (and the number of adjacent samples used in the
> interpolation) gets larger. we beat them images down to some level (by
> knowing what the hypothetical impulse response is) and assume what
> remains of all the images will get folded into the baseband. that is
> the noise that, in your case, should be less than 1 or 1/2 LSB.
>
> > Surprisingly, I found that for our case that meant that with a very
> > practical number of phases we had essentially infinite phase
> > precision, or at least within the precision we were using one would
> > never tell the difference.
> resampling is one thing. non-perfect accuracy of the reconstruction
> coefficients for a fast moving phase is gonna look something like
> noise.
>
> a precision delay (with a constant or slowly varying fractional-sample
> delay amount) is another thing. even with your 8 bit word width, a
> precise fractional delay that is roughly constant but slightly in error
> (because of the limited number of fixed fractional delays) will show
> itself not as noise, but as a small error in the delay or frequency
> response or transfer function.
Wouldn't using weighted random selections of the two nearest
table entries exchange noise for any absolute long term phase
error? Or perhaps something similar to fraction saving instead
of weighted random dithering.
IMHO. YMMV.
--
rhn A.T nicholson d.0.t C-o-M
Reply by robert bristow-johnson ● October 12, 2006
Eric Jacobsen wrote:
> On 11 Oct 2006 14:46:25 -0700, "robert bristow-johnson"
> <rbj@audioimagination.com> wrote:
>
> >i kinda agree with Eric that polyphase interpolation (essentially
> >designing an optimal LPF and keeping the impulse response in a table)
> >is preferable to spline or polynomial interpolation for any bandlimited
> >reconstruction but where i might agree with Viognier is because of the
> >cost. you can't have a polyphase table that has all infinite possible
> >fractional delays. but you *can* evaluate a polynomial for any
> >fractional delay expressible in the machine arithmetic. but if you can
> >afford some memory for the polyphase table, you can use a polynomial
> >interpolation between entries of that table.
>
> As I mentioned elsewhere, in a practical system with fixed precision
> there is a number of phases beyond which the coefficients won't change
> when trying to increase the resolution. I looked at this years ago
> when designing a polyphase for a comm receiver and trying to answer
> the question of how many phases did we really need to not lose any
> performance.
>
> In other words, for an example system that uses 8-bit precision for
> the coefficients there will be some minimum phase change beyond which
> the change in phase will result in coefficient changes that are below
> the LSB in the current coefficients.
i think i understand. i think we get to the same place. for me, we
have these images of the original input (hypothetically samples
attached to uniformly spaced dirac impulses). we need to beat down
those impulses to some degree before resampling. the polynomial
interpolation can be expressed as convolving with something (a
hypothetical impulse response) that will look more like a windowed sinc
as the polynomial order (and the number of adjacent samples used in the
interpolation) gets larger. we beat them images down to some level (by
knowing what the hypothetical impulse response is) and assume what
remains of all the images will get folded into the baseband. that is
the noise that, in your case, should be less than 1 or 1/2 LSB.
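As a concrete instance of that equivalent-kernel view: linear interpolation convolves with a triangle, whose transfer function is sinc squared, so you can read the image attenuation straight off the kernel's frequency response (Python; the 0.1 cycles/sample tone is an arbitrary example):

```python
import numpy as np

def linear_interp_response(f):
    # linear interpolation = convolution with a triangle kernel;
    # its transfer function is sinc^2(f), f in cycles per input sample
    return np.sinc(f) ** 2

# a component at 0.1 cycles/sample puts its first image at
# 0.9 cycles/sample; linear interpolation leaves this much of it:
residue = linear_interp_response(0.9)    # ~0.012, roughly -38 dB
passband = linear_interp_response(0.1)   # mild passband droop
```

Higher-order polynomial kernels steepen that rolloff, beating the images down further before the resampling folds the residue into the baseband.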
> Surprisingly, I found that for our case that meant that with a very
> practical number of phases we had essentially infinite phase
> precision, or at least within the precision we were using one would
> never tell the difference.
resampling is one thing. non-perfect accuracy of the reconstruction
coefficients for a fast moving phase is gonna look something like
noise.
a precision delay (with a constant or slowly varying fractional-sample
delay amount) is another thing. even with your 8 bit word width, a
precise fractional delay that is roughly constant but slightly in error
(because of the limited number of fixed fractional delays) will show
itself not as noise, but as a small error in the delay or frequency
response or transfer function.
i dunno. i used to think i had a consistent and comprehensive way to
look at it so as to compare different methods. can't necessarily claim
that now.
r b-j