1. Why do you represent yourself as an escapee from the
school playground?
Airy has not been around for months and yet you are
repeatedly making rather silly and infantile remarks
in his direction.
If you don't understand the points that Airy makes, then
admit it, or better still, keep quiet. As it is, dealing with
technical points in an international public forum such as this
one by simply bad-mouthing your opponents makes you appear to
be a charlatan peddling some snake-oil elixir.
2. Talking of snake-oil, where have your Diracian impulses come from?
The Diracian has some interesting properties - zero width,
area of unity, infinite sum of all possible cosines, a height
which is not discussed but which appears to be greater in magnitude
than a Bristow-Johnson ego (if that were indeed possible). In the
systems that you deal with, what experimental evidence do you
have that there are pulses with the attributes of Diracians upon
which to base your theory?
3. Talking of snake-oil, where did the factor of "T" come from in your
opening lines?
Consider a 16-bit ADC capable of 100 Msamples/s. In the first
instance we'll use it to sample a geophysical signal, bandlimited
to 300 Hz, sampling at 1 kHz, with suitable analogue instrumentation
to match the input signal to the full range of the ADC. If we now
keep the circuit the same, but sample at 65.536 MHz, your claimed
factor of "T" will result in the 16-bit range being compressed down
to one bit. We know this doesn't happen - there will be more samples,
but they'll still be of the same magnitude and 16-bit range.
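A quick numerical check of that point (an idealized sampler of a unit sine in pure Python - no model of any real ADC, and the function name is mine):

```python
import math

def peak_sample(f_sig, fs):
    """Peak |sample| of a unit-amplitude sine over one full signal
    period, sampled at rate fs (idealized sampler, no quantization)."""
    n = int(fs / f_sig)               # samples covering one signal period
    return max(abs(math.sin(2 * math.pi * f_sig * k / fs))
               for k in range(n + 1))

peak_slow = peak_sample(300.0, 1e3)       # 300 Hz tone sampled at 1 kHz
peak_fast = peak_sample(300.0, 65.536e6)  # same tone at 65.536 MHz

# Both peaks sit near 1.0: raising the sample rate gives you more
# samples, not samples scaled down by a factor of T.
```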
robert bristow-johnson wrote:
> in article VVLIe.108880$Pf3.64017@fe2.news.blueyonder.co.uk, ma at
> ma@nowhere.com wrote on 08/05/2005 11:51:
>
> > Where can I find some mathematical modeling? I need to know the theory and
> > then use the code as a learning practice.
>
> you want theory? i'll give you theory.
>
> below is about as fundamental for the theory as you can get. it really is
> just a result or application of the Nyquist/Shannon Sampling and
> Reconstruction Theorem. imagine outputting your signal, at the original
> sampling rate, to an ideal D/A converter and then resampling that analog
> output signal with an ideal A/D converter (running at the new sampling
> rate). now convert that imagination to mathematics (see below).
>
> i could have just referred to the post (responding to the infamous
> "Airy-head") but Google is messing up the ASCII spacing and you don't wanna
> read what Airy had to say, anyway. so it's reposted below.
>
> --
>
> r b-j rbj@audioimagination.com
>
> "Imagination is more important than knowledge."
>
>
>
>
>
> (a monospaced font needed to view the following correctly)
>
>
> ____________________________________________________________________________
>
> Bandlimited Interpolation, Sample Rate Conversion, or
> Fractional-Sample Delay from the P.O.V. of the
> Nyquist/Shannon Sampling and Reconstruction Theorem
>
> ____________________________________________________________________________
>
> Here is the mathematical expression of the sampling and reconstruction
> theorem:
>
>
>
> x(t)*q(t) = T*SUM{x[k]*d(t-k*T)} .------.
> x(t)--->(*)------------------------------------->| H(f) |---> x(t)
> ^ '------'
> |
> | +inf
> '------- q(t) = T * SUM{ d(t - k*T) }
> k=-inf
>
>
> where: d(t) = 'dirac' impulse function
> and T = 1/Fs = sampling period
> Fs = sampling frequency
>
>
> +inf
> q(t) = T * SUM{ d(t - k*T) } is the "sampling function", is periodic
> k=-inf with period T, and can be expressed as a
> Fourier series. It turns out that ALL of
> the Fourier coefficients are equal to 1.
>
>
> +inf
> q(t) = SUM{ c[n]*exp(j*2*pi*n/T*t) }
> n=-inf
>
>
> where c[n] is the Fourier series coefficient:
>
>
> t0+T
> c[n] = 1/T * integral{q(t) * exp(-j*2*pi*n/T*t) dt}
> t0
>
>
> and t0 can be any time. We choose t0 to be -T/2.
>
>
> T/2 +inf
> c[n] = 1/T * integral{T * SUM{ d(t - k*T) } * exp(-j*2*pi*n/T*t) dt}
> -T/2 k=-inf
>
>
> T/2
> = integral{ d(t - 0*T) * exp(-j*2*pi*n/T*t) dt}
> -T/2
>
>
> T/2
> = integral{ d(t) * exp(-j*2*pi*n/T*t) dt}
> -T/2
>
>
> = exp(-j*2*pi*n/T*0)
>
>
> = 1
>
>
> So all Fourier series coefficients for the periodic q(t) are 1.
> This results in another valid expression for q(t):
>
>
> +inf
> q(t) = SUM{ exp(j*2*pi*n*Fs*t) } .
> n=-inf
>
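As a small pure-Python illustration of that series (the +n and -n terms pair into cosines, so a truncated version is 1 + 2*SUM{cos}; the function name is mine):

```python
import math

def q_approx(t, fs, n_terms):
    """Truncated q(t) = SUM{ exp(j*2*pi*n*Fs*t) }: pairing the +n
    and -n terms gives 1 + 2*SUM{ cos(2*pi*n*Fs*t) }."""
    return 1.0 + 2.0 * sum(math.cos(2 * math.pi * n * fs * t)
                           for n in range(1, n_terms + 1))

fs = 1.0
# At a sample instant (t = k*T) every cosine equals 1, so the sum
# grows without bound as n_terms -> inf: the impulse train is forming.
at_sample = q_approx(0.0, fs, 100)     # 1 + 2*100 = 201
# Midway between sample instants the cosines cancel in pairs and the
# truncated sum stays small (this is the Dirichlet kernel).
between = q_approx(0.5 / fs, fs, 100)
```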
>
> The sampled signal looks like:
>
>
> +inf
> x(t)*q(t) = x(t) * T * SUM{ d(t - k*T) }
> k=-inf
>
>
> +inf
> = T * SUM{ x(t) * d(t - k*T) }
> k=-inf
>
>
> +inf
> = T * SUM{ x(k*T) * d(t - k*T) }
> k=-inf
>
>
> +inf
> = T * SUM{ x[k] * d(t - k*T) }
> k=-inf
>
>
> where x[k] =def x(k*T). (We are using the notation that x[] is
> a discrete sequence and x() is a continuous function.) BTW, if
> you were to compute the Laplace Transform of the sampled signal
> x(t)*q(t), you would get the Z Transform of x[k] scaled by the
> constant T.
>
>
> An alternative (and equally important) representation of
> the sampled signal is:
>
>
> +inf
> x(t)*q(t) = x(t) * SUM{ exp(j*2*pi*n*Fs*t) }
> n=-inf
>
>
> +inf
> = SUM{ x(t) * exp(j*2*pi*n*Fs*t) }
> n=-inf
>
> Defining the spectrum of x(t) as its Fourier Transform:
>
>
> +inf
> X(f) =def FT{ x(t) } =def integral{ x(t)*exp(-j*2*pi*f*t) dt}
> -inf
>
>
> Then using the frequency shifting property of the Fourier Transform,
>
>
> FT{ x(t)*exp(j*2*pi*f0*t) } = X(f - f0)
>
>
> results in
>
>
> +inf
> FT{ x(t)*q(t) } = SUM{ X(f - n*Fs) }
> n=-inf
>
>
> This says, as we all know, that the spectrum of our signal
> being sampled is shifted and repeated forever at multiples
> of the sampling frequency, Fs. If x(t) or X(f) is bandlimited
> to B (i.e. X(f) = 0 for all |f| > B) AND if there is no
> overlap of the tails of adjacent images of X(f), that is:
>
>
> right tail of nth image of X(f) < left tail of (n+1)th image
>
>
> n*Fs + B < (n+1)*Fs - B = n*Fs + Fs - B
>
>
> B < Fs - B
>
>
> 2*B < Fs = 1/T
>
>
> Then we ought to be able to reconstruct X(f) (and also x(t)) by
> low-pass filtering out all of the images of X(f). To do that,
> we need Fs > 2*B (to prevent overlap) and H(f) must be:
>
>
> { 1 for |f| < Fs/2 = 1/(2*T)
> H(f) = {
> { 0 for |f| >= Fs/2 = 1/(2*T)
>
>
> This is the "sampling" half of the Nyquist/Shannon sampling and
> reconstruction theorem. It says that the sampling frequency, Fs,
> must be strictly greater than twice the bandwidth, B, of the
> continuous-time signal, x(t), for no information to be lost (or
> "aliased"). The "reconstruction" half follows:
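The repeated-images picture can be seen directly in the time domain: a tone at f0 + Fs produces exactly the same samples as the tone at f0 (a small pure-Python check; the frequencies are my own arbitrary choices):

```python
import math

fs = 1000.0                 # Fs = 1 kHz, so the images repeat every 1 kHz
f0 = 120.0                  # a baseband tone, well below Fs/2
f1 = f0 + fs                # the same tone shifted up by one image spacing

s0 = [math.sin(2 * math.pi * f0 * k / fs) for k in range(64)]
s1 = [math.sin(2 * math.pi * f1 * k / fs) for k in range(64)]

# X(f) repeating at every multiple of Fs means the component at
# f0 + Fs lands on top of the one at f0: the samples are identical
# up to floating-point rounding.  This is aliasing.
worst = max(abs(a - b) for a, b in zip(s0, s1))
```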
>
>
> The impulse response of the reconstruction LPF, H(f), is the
> inverse Fourier Transform of H(f), called h(t):
>
>
> +inf
> h(t) = iFT{ H(f) } = integral{ H(f)*exp(+j*2*pi*f*t) df}
> -inf
>
>
> 1/(2*T)
> = integral{ exp(+j*2*pi*f*t) df}
> -1/(2*T)
>
>
> = (exp(+j*2*pi/(2*T)*t) - exp(+j*2*pi/(-2*T)*t))/(j*2*pi *t)
>
>
> = ( exp(+j*pi/T*t) - exp(-j*pi/T*t) )/(j*2*pi*t)
>
>
> = sin((pi/T)*t)/(pi*t)
>
>
> = (1/T)*sin(pi*t/T)/(pi*t/T)
>
>
> = (1/T)*sinc(t/T)
>
>
> where sinc(u) =def sin(pi*u)/(pi*u)
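In code, the same normalized sinc (with the removable singularity at u = 0 filled in) looks like this - a minimal sketch, and note it is NOT the unnormalized sin(u)/u convention:

```python
import math

def sinc(u):
    """sinc(u) = sin(pi*u)/(pi*u), defined as 1 at u = 0."""
    if u == 0.0:
        return 1.0
    return math.sin(math.pi * u) / (math.pi * u)

# sinc is 1 at zero and (numerically) 0 at every nonzero integer,
# which is exactly what makes the reconstruction pass through the
# original samples.
vals = [sinc(k) for k in range(-3, 4)]
```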
>
>
> The input to the LPF is x(t)*q(t) = T*SUM{x[k]*d(t - k*T)} .
> Each d(t - k*T) impulse generates its own impulse response and
> since the LPF is linear and time invariant, all we have to do
> is add up the time-shifted impulse responses weighted by their
> coefficients, x[k]. The T and 1/T kill each other off.
>
> The output of the LPF is
>
>
> +inf +inf
> x(t) = SUM{ x[k]*sinc((t-k*T)/T) } = SUM{ x[k]*sinc(t/T - k) }
> k=-inf k=-inf
>
>
> This equation tells us explicitly how to reconstruct our
> sampled, bandlimited input signal from the samples. This is
> the "reconstruction" half of the Nyquist/Shannon sampling and
> reconstruction theorem.
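The reconstruction sum can be tried out numerically. This sketch (my own toy setup: a 100 Hz tone, Fs = 1 kHz, and the infinite sum truncated to 801 stored samples) evaluates x(t) at an off-grid time:

```python
import math

def sinc(u):
    return 1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u)

Fs = 1000.0
T = 1.0 / Fs
f0 = 100.0                                  # bandlimited test tone, B << Fs/2
x = lambda t: math.cos(2.0 * math.pi * f0 * t)

ks = range(-400, 401)                       # finite stand-in for k = -inf..+inf
x_k = {k: x(k * T) for k in ks}

def reconstruct(t):
    """x(t) = SUM{ x[k]*sinc(t/T - k) }, truncated to the stored samples."""
    return sum(x_k[k] * sinc(t / T - k) for k in ks)

t = 0.37 * T                                # a time between two sample instants
err = abs(reconstruct(t) - x(t))            # small truncation error only
```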
>
>
>
> Interpolation, Fractional Delay, Sample Rate Conversion, etc:
>
> When interpolating (whether for fractional sample delay or
> for doing sample rate conversion), you must evaluate this
> equation for times that might not be integer multiples of your
> sampling period, T.
>
> The sinc(t/T) function is one for t = 0 and zero for t = k*T
> for k = nonzero integer. This means that if your new
> sample time, t, happens to land exactly on an old sample
> time, k*T, only that sample (and none of the neighbors)
> contributes to the output sample and the output sample
> is equal to the input sample. Only in the case where the
> output sample is in between input samples, do the neighbors
> contribute to the calculation.
>
> Since the sinc() function goes on forever to +/- infinity,
> it must be truncated somewhere to be of practical use.
> Truncating is actually applying the rectangular window (the
> worst kind) so it is advantageous to window down the sinc()
> function gradually using something like a Hamming or Kaiser
> window. In my experience, with a good window, you'll need to
> keep the domain of the windowed sinc() function from -16 to +16
> (a 32 tap FIR) and sample it 16384 times in that region (that
> would be 16384/32 = 512 polyphases or "fractional delays" if
> linear interpolation is subsequently used to get in between
> neighboring polyphases).
>
> Now we simply split the time, t, into an integer part, m, and
> fractional part, a, and substitute:
>
>
> +inf
> x((m+a)*T) = SUM{ x[k]*sinc((m+a)*T/T - k) }
> k=-inf
>
>
> +inf
> = SUM{ x[k]*sinc(a + m - k) }
> k=-inf
>
>
> where: t = (m + a)*T
> and m = floor(t/T)
> a = fract(t/T) = t/T - floor(t/T)
>
> t-T < m*T <= t which means a is always nonnegative and less
> than 1: 0 <= a < 1
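The m/a split is two lines of code; the point worth a comment is that floor (not truncation toward zero) is what keeps a nonnegative for negative t (function name mine):

```python
import math

def split_time(t, T):
    """Split t = (m + a)*T into m = floor(t/T) and a = t/T - floor(t/T),
    so that 0 <= a < 1 always holds."""
    m = math.floor(t / T)
    return m, t / T - m

m1, a1 = split_time(1.1, 0.25)     # t/T = 4.4  -> m = 4,  a = 0.4
# floor, not int(), keeps a nonnegative when t < 0:
m2, a2 = split_time(-0.3, 0.25)    # t/T = -1.2 -> m = -2, a = 0.8
```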
>
>
> The limits of the sum can be bumped around a bit without changing
> its value, and substituting k <- k+m gives:
>
>
> +inf
> x((m+a)*T) = SUM{ x[m+k] * sinc(a - k) }
> k=-inf
>
>
> Now, an approximation is made by truncating the summation to a
> finite number of terms (which is effectively windowing):
>
>
> +16
> x((m+a)*T) = SUM{ x[m+k] * sinc(a-k) * w((a-k)/16) }
> k=-15
>
>
> w(u) is the window function. A Hamming window would be:
>
> { 0.54 + 0.46*cos(pi*u) for |u| < 1
> w(u) = {
> { 0 for |u| >= 1
>
> A Kaiser window would be better.
>
> This requires two 32-point FIR computations to calculate two
> adjacent interpolated samples (these are linearly interpolated
> to get the final output sample). Since the sinc(u)*w(u)
> function is symmetrical, only 8193 numbers need to be stored
> in memory. When interpolating, the integer part of t/T (or 'm')
> determines which 32 adjacent samples of x[k] to use, and the
> fractional part of t/T (or 'a') determines which set of 32 windowed
> sinc() coefficients is used to combine the 32 samples.
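The table-driven scheme in the last few paragraphs can be sketched in pure Python - a minimal illustration with names and structure of my own choosing, not production resampler code:

```python
import math

def sinc(u):
    return 1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u)

def w(u):
    """Hamming window from the text: 0.54 + 0.46*cos(pi*u) for |u| < 1."""
    return 0.54 + 0.46 * math.cos(math.pi * u) if abs(u) < 1.0 else 0.0

HALF = 16        # kernel spans -16..+16 samples, i.e. a 32-tap FIR
PHASES = 512     # number of fractional delays stored in the table

# table[p][i] holds sinc(a - k)*w((a - k)/16) for a = p/PHASES and tap
# k = i - 15, matching the truncated sum in the text.  One extra row
# (p = PHASES, i.e. a = 1) lets us linearly interpolate between rows.
table = [[sinc(p / PHASES - (i - (HALF - 1))) *
          w((p / PHASES - (i - (HALF - 1))) / HALF)
          for i in range(2 * HALF)]
         for p in range(PHASES + 1)]

def interp(x, t_over_T):
    """Windowed-sinc interpolation of sequence x at fractional index
    t/T.  Needs x[m-15] .. x[m+16] to exist, where m = floor(t/T)."""
    m = math.floor(t_over_T)
    a = t_over_T - m                # fractional delay, 0 <= a < 1
    p = a * PHASES
    p0 = int(p)                     # lower of the two adjacent polyphases
    frac = p - p0                   # position between the two rows

    def fir(row):                   # one 32-tap FIR over x[m-15..m+16]
        return sum(row[i] * x[m + i - (HALF - 1)] for i in range(2 * HALF))

    # two 32-point FIRs, linearly interpolated for the final output
    return (1.0 - frac) * fir(table[p0]) + frac * fir(table[p0 + 1])
```

When t/T lands exactly on an input sample, a = 0 and the p = 0 coefficient row reduces to a unit impulse, so the output equals that input sample, as the text promises.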
>
> ____________________________________________________________________________
>
> ____________________________________________________________________________