in article 4271117f$1_2@127.0.0.1, jim at "N0sp"@m.sjedging@mwt.net wrote on
04/28/2005 12:37:
> robert bristow-johnson wrote:
...
>> but you know that the F.T. of a single, rectangular pulse in
>> continuous-time, with width of T (the sample time) is sinc(f*T), right?
>> that's the Zero-Order-Hold or drop-sample interpolation. and each time you
>> convolve with another identical rectangular pulse, you multiply by another
>> sinc(f*T) function, right? so an (N-1)th order piecewise polynomial
>> interpolation constructed this way will have a spectrum of sinc^N(f*T),
>> right?
>>
>
> Yes, I understand what you are claiming happens in the frequency domain.
> What I haven't seen is any actual description of the physical operation
> performed in the time domain that would produce this sort of frequency
> response.
actually, i think we both understand what is happening, but from different
POVs. it's still the same outcome, if the same conditions are applied.
> Take for example the simplest spline, where you just construct a
> straight line from each point to the next.
otherwise called "linear interpolation". i would not call this the simplest
spline; i would call "zero-order hold" or "drop-sample" interpolation the
simplest spline. using your language, that would be like discretely
convolving with [1].
> If I sample this curve half
> way between 2 points the result is just the average of the 2 points. If
> I move ahead one sample width and take another sample I again get the
> average. If I sample the whole curve in this manner its just as if I
> filtered the whole set of points with the FIR filter [1 1]/2. The
> frequency response of that filter is going to be periodic.
sure. you are implementing a sorta fixed fractional-sample delay (which is
an application of interpolation). that is a purely LTI operator, so what
you have is a filter with a phase or group delay of 1/2 sample and a
cos(pi*f*T) magnitude response (unnormalized f, so T is the sampling period
and 1/(2T) is the Nyquist freq). that will be the Dirichlet kernel with
N=2.
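just to put numbers on that: here's a quick numpy sketch (mine, not anything
from the thread; the frequency grid is an arbitrary choice) confirming the
cos(pi*f*T) magnitude and the half-sample delay of [1 1]/2:

import numpy as np

# H(f) = (1 + exp(-j*2*pi*f*T))/2 = exp(-j*pi*f*T) * cos(pi*f*T)
# so |H(f)| = cos(pi*f*T), and the phase is linear: a delay of T/2.
T = 1.0                                          # normalized sampling period
f = np.linspace(0.0, 0.5/T, 8, endpoint=False)   # DC up to (not incl.) Nyquist
H = 0.5*(1.0 + np.exp(-2j*np.pi*f*T))

print(np.allclose(np.abs(H), np.cos(np.pi*f*T)))   # True
print(np.allclose(np.angle(H), -np.pi*f*T))        # True: group delay = T/2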
> This is a complete, albeit in its simplest form, description of the actual
> process of sampling a spline.
not from my POV. you are fixing your sample times to be at this fixed delay
of 1/2 sample, and what you essentially have is a filter. there are no
issues of SRC noise or error from resampling, as there are in the real
process of sample-rate conversion.
> Where exactly do these continuous identical
> rectangular pulses you're talking about appear in this process?
i am generalizing a little more (so as to include the application of
resampling or sample-rate conversion). i had been talking about defining
the interior space between samples for all t, not just t = (n+0.5)*T. here
i am using the language of that sampling/reconstruction treatise i
periodically post to comp.dsp:
http://groups-beta.google.com/group/comp.dsp/msg/f9e0ad7b430bc653?hl=en
(view both with a fixed-width font.)
let  d(t)  =def  "Dirac delta function"
     T     =def  "sampling period"
     x[k]  =def  x(k*T)
     Fs    =def  1/T
the sampling function:
               +inf
    q(t) = T * SUM{ d(t - k*T) }
              k=-inf

              +inf
         =   SUM{ exp(j*2*pi*n*Fs*t) }
            n=-inf
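(a side check, mine: if you truncate that second sum at |n| <= N, you get a
(2N+1)-term Dirichlet kernel that visibly piles up at integer multiples of
T. the N and the 0.9 threshold below are arbitrary.)

import numpy as np

T, Fs = 1.0, 1.0                 # normalized, Fs = 1/T
N = 50                           # truncate the sum at n = -N..+N
t = np.linspace(-2.5*T, 2.5*T, 5001)

# partial sum of q(t) = SUM{ exp(j*2*pi*n*Fs*t) }, n = -N..+N
q = np.sum([np.exp(2j*np.pi*n*Fs*t) for n in range(-N, N + 1)], axis=0)

# the sum peaks at 2N+1 wherever t is an integer multiple of T and
# averages out to ~0 in between.
print(np.round(t[q.real > 0.9*(2*N + 1)], 3))   # clusters at -2T, -T, 0, T, 2T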
we know that when you filter the ideally sampled signal
                     +inf
    x(t)*q(t) = T * SUM{ x[k] * d(t - k*T) }
                    k=-inf

                    +inf
              =    SUM{ x(t) * exp(j*2*pi*n*Fs*t) }
                   n=-inf
with an ideal brickwall filter (with cutoff at Nyquist)
           {  1   for  |f| <  Fs/2 = 1/(2*T)
    H(f) = {
           {  0   for  |f| >  Fs/2 = 1/(2*T)
h(t) = (1/T)*sinc(t/T)
that fully recovers x(t) (assuming it was sufficiently bandlimited before it
was sampled), so the ideal interpolation function for getting x(t) from x[n]
is
                  +inf
    x(t) = T * SUM{ x[k] * h(t - k*T) }
               k=-inf

               +inf
         =    SUM{ x[k] * sinc((t - k*T)/T) }
              k=-inf

               +inf
         =    SUM{ x[m+k] * sinc((t/T - m) - k) }
              k=-inf
m = any integer; we might conveniently choose m = floor(t/T).
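(a quick numerical check of that last form, my own sketch: a test tone well
below Nyquist, and a long-but-truncated run of sinc terms around m, lands
essentially back on x(t). numpy's sinc() is the normalized one, the same
convention as here; the tone frequency and K are arbitrary choices of mine.)

import numpy as np

T = 1.0

def x(t):                        # bandlimited test signal, well below Nyquist
    return np.cos(2*np.pi*0.123*t/T)

n0 = 200
xk = x(np.arange(-n0, n0 + 1)*T)             # the samples x[k], k = -n0..+n0

def interp_sinc(t, K=64):
    # x(t) ~= SUM{ x[m+k] * sinc((t/T - m) - k) }, truncated to k = -K..+K
    m = int(np.floor(t/T))
    k = np.arange(-K, K + 1)
    return np.sum(xk[m + k + n0] * np.sinc((t/T - m) - k))

for t in (0.5, 3.14159, -7.25):
    print(t, interp_sinc(t), x(t))   # agree to within ~0.01 (truncation error)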
using that ideal sinc function utterly wipes out all images, and leaves the
original spectrum untouched. so when we resample x(t) at another
synchronous or asynchronous sample rate (let's say a higher rate so that we
don't have to argue about aliasing), the resampling will be perfect because
no images are folded back into the baseband in the resampling process.
we also know that the ideally sampled signal above has the spectrum
                          +inf
    FT{ x(t)*q(t) }  =    SUM{ X(f - n*Fs) }
                         n=-inf
where
                                 +inf
    X(f) =def FT{ x(t) }  =  integral{ x(t)*exp(-j*2*pi*f*t) dt}
                                -inf
now, we can't use the ideal sinc() for resampling, for two reasons:
1. it goes on forever. we can't sum an infinite sum, so we're gonna have to
truncate (or window) it somewhere.
2. suppose we window it:
                    +L
    x((m+a)*T) =   SUM{ x[m+k] * sinc(a-k) * w((a-k)/L) }
                 k=-L+1

where
    t = (m + a)*T
    m = floor(t/T)
    a = fract(t/T) = t/T - floor(t/T),   0 <= a < 1
and w(t) is a window function, non-zero only for |t| < 1, so the windowed
sinc spans 2L samples. L should be an integer; i found L=16 to be good
enough, but the AD1890 uses L=32 (a 64-tap polyphase FIR).
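in code, that whole resampler is just a few lines. a minimal numpy sketch
(mine; the Hann window is only a stand-in for w(t), and the L=16, test tone,
and sample instants are arbitrary illustration choices, with no bounds
checking):

import numpy as np

T, L = 1.0, 16                        # sampling period; half-width L = 16

def w(u):
    # window function, non-zero only for |u| < 1 (Hann, as a stand-in)
    return np.where(np.abs(u) < 1.0, 0.5 + 0.5*np.cos(np.pi*u), 0.0)

def resample(xk, t):
    # x((m+a)*T) = SUM{ x[m+k] * sinc(a-k) * w((a-k)/L) }, k = -L+1..+L
    m = int(np.floor(t/T))
    a = t/T - m                       # 0 <= a < 1
    k = np.arange(-L + 1, L + 1)
    return np.sum(xk[m + k] * np.sinc(a - k) * w((a - k)/L))

# resample a low-frequency tone at an unrelated (asynchronous) rate:
n = np.arange(1024)
xk = np.cos(2*np.pi*0.05*n)
t_new = 100.0 + np.arange(20)/np.sqrt(2.0)    # incommensurate instants
y = np.array([resample(xk, t) for t in t_new])
print(np.max(np.abs(y - np.cos(2*np.pi*0.05*t_new))))  # small (window-limited)

note that numpy just evaluates sinc(a-k) directly for any "a", which is
exactly the luxury the next paragraph points out we don't really have.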
even if we window it so the summation is finite, if we are going to
interpolate at any arbitrary fractional time, "a", we would still need an
infinite number of sets of coefficients (each a vector of length 2L) to
define
g(a-k) = sinc(a-k)*w((a-k)/L)
for all of the infinite number of values of "a" that are 0 <= a < 1. so,
somehow, we have to approximate g(a) = sinc(a)*w(a/L) with a function that
can be evaluated with a computer for any value of "a" within -L < a <= L .
in general, these approximations will turn out to be piecewise finite-order
polynomials.
the simplest, crudest approximation is zero-order hold or "drop-sample"
interpolation. for symmetry's sake, let's say we "round" to the nearest
sample. that would be approximating g(a) with
g(a) ~= p0(a)
where
            {  1   for  -1/2 <= a < 1/2
    p0(a) = {
            {  0   otherwise
that means we are convolving
                     +inf
    x(t)*q(t) = T * SUM{ x[k] * d(t - k*T) }
                    k=-inf
with
h[0](t) = 1/T * p0(t/T)
getting us
                   +inf
    x(t) ~= T * SUM{ x[k] * h[0](t - k*T) }
                k=-inf
and the spectrum
                          +inf
    FT{ x(t)*q(t) }  =    SUM{ X(f - n*Fs) }
                         n=-inf
is getting multiplied by
H[0](f) = sinc(f*T)
and then we are resampling that with a sampling frequency that is unrelated
to the original Fs. the images of H[0](f)*FT{x(t)*q(t)}, all of them for
n not 0, get reduced by the notches in the sinc() function of H[0](f).
all of those reduced images can potentially be folded into the baseband with
an arbitrary new Fs. i can show that, for a 120 dB S/N ratio, the original
x(t) would have to be oversampled by a factor of 512K so that all of those
images are narrow enough, and attenuated enough, that their energy is 120 dB
below that of x(t).
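that 512K figure falls right out of the sinc(): if x(t) is oversampled by R,
its band edge is at Fs/(2R), and the worst surviving image sits just inside
f = Fs, where |H[0]| = |sinc(1 - 1/(2R))| ~= 1/(2R). setting
20*log10(2R) = 120 dB gives 2R = 10^6, so R ~= 500000 ~= 512K. a short
numpy check (my arithmetic, not from the thread):

import numpy as np

# worst-case first-image attenuation of ZOH with the signal oversampled by R:
#   |H[0](Fs - B)| = |sinc(1 - B*T)| ~= B*T = 1/(2R),  B = band edge
for R in (512, 512*1024):
    B_T = 1.0/(2*R)
    print(R, round(-20*np.log10(abs(np.sinc(1.0 - B_T))), 1))
    # 512 -> ~60.2 dB (not enough), 512K -> ~120.4 dB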
the next simplest is "linear interpolation". now we are approximating
g(a) ~= p1(a)
where
            {  1-|a|   for  |a| < 1
    p1(a) = {
            {  0       otherwise
now the ideally sampled continuous-time function
                     +inf
    x(t)*q(t) = T * SUM{ x[k] * d(t - k*T) }
                    k=-inf
is getting convolved with
h[1](t) = 1/T * p1(t/T)
getting us
                   +inf
    x(t) ~= T * SUM{ x[k] * h[1](t - k*T) }
                k=-inf
and the spectrum
                          +inf
    FT{ x(t)*q(t) }  =    SUM{ X(f - n*Fs) }
                         n=-inf
is getting multiplied by
H[1](f) = ( sinc(f*T) )^2 .
so now we have deeper and wider notches killing the images of x(t) for n
not 0, and those images are reduced further. this time, for 120 dB S/N
in the resampling process, all you need to do is first upsample by a factor
of 512 (not 512K).
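(same bookkeeping as before, my sketch: squaring the sinc doubles the dB, so
40*log10(2R) = 120 dB gives 2R = 1000, R ~= 500 ~= 512. and an FFT confirms
that the triangle p1, being p0 convolved with p0, really has the sinc^2
spectrum. the grid sizes below are arbitrary.)

import numpy as np

# p1 = p0 conv p0, so H[1](f) = ( sinc(f*T) )^2 with T = 1: check via FFT
dt, M = 1e-3, 4096
u = (np.arange(M) - M//2)*dt
p0 = ((u >= -0.5) & (u < 0.5)).astype(float)
p1 = np.convolve(p0, p0, mode='same')*dt             # the triangle, width 2
H1 = np.abs(np.fft.rfft(np.fft.ifftshift(p1)))*dt
f = np.fft.rfftfreq(M, dt)
print(np.allclose(H1[:200], np.sinc(f[:200])**2, atol=1e-3))   # True

# and the 120 dB bookkeeping for linear interpolation:
print(round(-40*np.log10(1.0/(2*512)), 1))           # ~120.4 dB at R = 512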
at this point for 0th order and 1st order polynomial interpolation,
Lagrange, Hermite, and B-spline all agree. but for order N=2 and greater,
they begin to differ. as the order, N, gets larger, Lagrange and Hermite
will look more like the ideal sinc() function for ideal interpolation
(Hermite will be smoother since it tries to have as many continuous
derivatives as possible), but the Nth order B-spline will have a frequency
response of
H[N](f) = ( sinc(f*T) )^(N+1)
and the Nth order B-spline interpolation function, h[N](t), will be the
convolution of h[N-1](t) with h[0](t). as N gets larger, h[N](t) looks more
like a Gaussian pulse, but for Lagrange and Hermite polynomials, h[N](t)
will look more like 1/T * sinc(t/T), the ideal interpolating impulse
response.
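that repeated-convolution construction is easy to watch numerically. a
little numpy sketch (mine; the Gaussian comparison uses the matching
variance (N+1)/12, and the grid sizes are arbitrary):

import numpy as np

dt, M = 1e-3, 8192
u = (np.arange(M) - M//2)*dt
box = ((u >= -0.5) & (u < 0.5)).astype(float)    # h[0]: the boxcar p0, T = 1

h = box.copy()
for N in range(1, 6):
    h = np.convolve(h, box, mode='same')*dt      # h[N] = h[N-1] conv h[0]

# h is now h[5]; a Gaussian with the same variance, (N+1)/12, is close:
N = 5
sigma = np.sqrt((N + 1)/12.0)
gauss = np.exp(-u**2/(2*sigma**2))/(sigma*np.sqrt(2*np.pi))
print(np.max(np.abs(h - gauss)))                 # already small by N = 5

# and its spectrum really is sinc^(N+1):
H = np.abs(np.fft.rfft(np.fft.ifftshift(h)))*dt
f = np.fft.rfftfreq(M, dt)
print(np.allclose(H[:2000], np.abs(np.sinc(f[:2000]))**(N + 1), atol=1e-3))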
now, the power spectrum of *any* Nth order piecewise polynomial
interpolation function
( H[N](f) )^2
will have terms that are sinc^2, sinc^4, sinc^6, ... up to sinc^(2(N+1)).
but for the Nth order B-spline, all of those terms will be zero except for
the sinc^(2(N+1)) term (consistent with H[N](f) = ( sinc(f*T) )^(N+1)
above). the power spectra of the other polynomial interpolators will
contain lower even powers of sinc that make those notches less deep and
less wide, and less effective at killing the images before resampling.
Lagrange and Hermite look much more like the ideal sinc() interpolator and
might not need any "pre-filtering" in the baseband to make up for the severe
LPF that B-splines impose, but they don't wipe out the images (which get
folded back as noise in the baseband) as well as the B-spline does.
does that answer your question, Jim? if you (or anyone else) sends me a
despammified edress, i'll return a copy of the paper that Duane Wise and i
did about this, but i think that Olli Niemitalo did a better job:
http://www.biochem.oulu.fi/~oniemita/dsp/deip.pdf
or http://ldesoras.free.fr/doc/articles/resampler.pdf
--
r b-j rbj@audioimagination.com
"Imagination is more important than knowledge."