# Beating Nyquist?

Started by Andor on July 25, 2007
```I wrote:
> Friends,
>
> just now I stumbled upon this webpage:
>
> http://www.edi.lv/dasp-web/
>
> Specifically, in this chapter
>
> http://www.edi.lv/dasp-web/sec-5.htm
>
> they state that they can sample a 1.2GHz signal using pseudo-random
> sampling instants with an average rate of 80MHz (in the last line of
> section "5.2 Aliasing, how to avoid it").
>
> I know that for nonuniform sampling, a generalization of the Sampling
> Theorem has been proved, which states that a bandlimited signal
> may be reconstructed from nonuniformly spaced samples if the "average"
> sampling rate is higher than twice the bandwidth of the signal.
>
> This doesn't immediately contradict the claim above - it just says
> that if the average sampling rate exceeds a certain limit, it can be
> shown that the samples are sufficient for reconstruction. It might
> well be that if the average rate is below the limit, reconstruction is
> still possible. However, the claim still seems like magic to me,
> especially given that the sampled signals are subject to no
> restrictions (apart from the 1.2GHz bandwidth).
>
>
> Regards,
> Andor
>
>  F. J. Beutler, "Error-free recovery of signals from irregularly
> spaced samples," SIAM Rev., vol. 8, pp. 328-335, July 1966.

Thanks all for your interesting replies!

What made me really suspicious are the two figures Fig. 5.1 and
Fig. 5.2. They show four equidistant and four randomly-spaced samples
of a sine wave, respectively. In the equidistant case, the baseband
sine wave and its images above Nyquist are shown to pass through the
sample points. In the randomly-spaced figure, the images of the sine
wave above Nyquist are also displayed, and lo!, they don't pass
through the sample points.

However, it is obvious that one can still find sine waves at other
(higher) frequencies that _will_ pass through those points in Fig. 5.2
(I calculated this myself, using randomly selected samples of a sine
wave). It seems like humbug to display the same sine waves again as in
the equidistant case and say: "see, those sine waves don't pass
through those points, so we don't have aliasing".

It would really be interesting to see exactly how the pseudo-random
sample times are spaced to fully exclude aliasing up to 30 times the
Nyquist rate (as was the claim).

Regards,
Andor

```
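Andor's claim above (that other, higher-frequency sine waves still pass through randomly spaced sample points) is easy to check numerically. The sketch below is plain Python; the sample times and the frequency scan range are arbitrary illustrative choices, not values from the linked page. Three nonuniform samples of a 1 Hz sine lie in the span of the sin/cos vectors at frequency f exactly when a 3x3 determinant vanishes, so locating a sign change and bisecting yields a higher frequency whose sinusoid interpolates all three points.

```python
import math

# Three nonuniformly spaced sample times (arbitrary illustrative values)
# and the corresponding samples of a 1 Hz sine wave.
t = [0.13, 0.41, 0.87]
s = [math.sin(2 * math.pi * 1.0 * ti) for ti in t]

def det3(f):
    """The samples lie in span{sin(2 pi f t), cos(2 pi f t)} exactly when
    this 3x3 determinant (columns: sin vector, cos vector, samples) is 0."""
    a = [math.sin(2 * math.pi * f * ti) for ti in t]
    b = [math.cos(2 * math.pi * f * ti) for ti in t]
    return (a[0] * (b[1] * s[2] - b[2] * s[1])
            - b[0] * (a[1] * s[2] - a[2] * s[1])
            + s[0] * (a[1] * b[2] - a[2] * b[1]))

# Scan well above 1 Hz for a sign change of det3, then bisect to a root.
f_alias = None
f, step = 5.0, 0.01
while f < 30.0:
    if det3(f) * det3(f + step) < 0:
        lo, hi = f, f + step
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if det3(lo) * det3(mid) <= 0:
                hi = mid
            else:
                lo = mid
        f_alias = 0.5 * (lo + hi)
        break
    f += step

# Least-squares fit of a*sin + b*cos at f_alias (2x2 normal equations);
# at a root of det3 the fitted sinusoid passes through all three samples.
av = [math.sin(2 * math.pi * f_alias * ti) for ti in t]
bv = [math.cos(2 * math.pi * f_alias * ti) for ti in t]
saa = sum(x * x for x in av)
sbb = sum(x * x for x in bv)
sab = sum(x * y for x, y in zip(av, bv))
sas = sum(x * y for x, y in zip(av, s))
sbs = sum(x * y for x, y in zip(bv, s))
den = saa * sbb - sab * sab
a = (sas * sbb - sbs * sab) / den
b = (saa * sbs - sab * sas) / den
max_err = max(abs(a * x + b * y - z) for x, y, z in zip(av, bv, s))
```

With finitely many samples there are infinitely many such interpolating frequencies; random sample times destroy the exact periodic aliases of uniform sampling, but they do not rule out other sinusoids passing through the same points - which is exactly the objection to Fig. 5.2.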
```Andor wrote:
(snip on non-uniform sampling and aliasing)

> However, it is obvious that one can still find sine waves at other
> (higher) frequencies that _will_ pass through those points in Fig. 5.2
> (I calculated this myself, using randomly selected samples of a sine
> wave). It seems like humbug to display the same sine waves again as in
> the equidistant case and say: "see, those sine waves don't pass
> through those points, so we don't have aliasing".

One that I was going to mention previously, but didn't, is that
with non-uniform sampling you need to pass along the information
about the sampling times as well as the sample values.

> It would really be interesting to see exactly how the pseudo-random
> sample times are spaced to fully exclude aliasing up to 30 times the
> Nyquist rate (as was the claim).

30 sounds pretty high.  As has been indicated, the result is an
increase in noise.   If the noise is going to increase, one might as
well use fewer bits/sample and increase the number of samples.
I suppose, then, that there is a balance somewhere between
the two.

Considering the problem further, it would seem to be significant
only in signals that have a large amount of energy in some
narrow frequency ranges.  In that case, a compression method
that recognizes the spectral content of the signal and stores
it appropriately would seem a better use of bits.

Also, this reminds me of my favorite deconvolution book
(by Jansson).  In spectroscopy problems, you have a diffraction
grating and two slits that determine the spectral width and
amount of light that goes through.  Wider slits allow more light
through but also blur out the spectrum.  More light means a higher
signal/noise ratio.  If I remember it right, there is a problem
where the S/N depends on the fifth power of the intensity such
that wider slits and a non-linear deconvolution algorithm gives
the best results.  That is, the extra S/N more than makes up for
the loss in the deconvolution.

-- glen

```
```"glen herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
news:46SdnZyaId6hKzrbnZ2dnUVZ_sSlnZ2d@comcast.com...

(snip)

> With non-uniform sampling the reconstruction isn't a sinc,
> but if the sample points are rational then a finer grid will exist.
> If they are irrational then no such grid exists, but non-uniform
> sampling still works the same way.

Glen,

I don't see why not.  Unless you're talking about periodic epochs and
Dirichlets.
If you pass the non-uniform samples through a lowpass filter (i.e.,
convolve with a suitably structured sinc), that would seem to work fine.

Now I can imagine having a different sinc for each sample, but then what
would be its related lowpass?  You'd have to relate at least one sample to
another, etc.  And, in the end, the sincs are of infinite extent and so
forth.  So, I think they all have to be of the same periodicity.  Now, there
*are* other basis sets, but I believe they can always be decomposed into a
basis of sincs as long as the underlying signal space is bandlimited.  And
that's a condition we accept perforce, isn't it?  That is, once sampled you
can't tell the difference.

One could add that a perfect lowpass filter basis isn't necessary if one has
a basis that meets the Gibby-Smith criterion for filter shape (antisymmetric
rolloff about the band edge).

Can you illuminate?

Fred

```
```Fred Marshall wrote:
(snip)

> I don't see why not.  Unless you're talking about periodic epochs and
> Dirichlets.
> If you pass the non-uniform samples through a lowpass filter (i.e.,
> convolve with a suitably structured sinc), that would seem to work fine.

Each basis function should be one at its sample point, and zero at all
the other sample points.  I do agree that it isn't obvious that this is
consistent with a Nyquist frequency.

> Now I can imagine having a different sinc for each sample, but then what
> would be its related lowpass?  You'd have to relate at least one sample to
> another, etc.  And, in the end, the sincs are of infinite extent and so
> forth.  So, I think they all have to be of the same periodicity.  Now, there
> *are* other basis sets, but I believe they can always be decomposed into a
> basis of sincs as long as the underlying signal space is bandlimited.  And
> that's a condition we accept perforce, isn't it?  That is, once sampled you
> can't tell the difference.

I am trying to remember the way it is done in crystallography.
(Crystallography is pretty much sampling theory in three dimensions.)
In that case, more complicated structures are described using
a simpler lattice with a basis, where the basis is the combination
of atoms in the unit cell.  See:

http://en.wikipedia.org/wiki/Structure_factor

To me, the basis that you want is the basis that is one at the
corresponding sample point and zero at the other sample points.
I think, though, that it will not be orthogonal in continuous space.
It is the right one, though, if you consider reconstruction as a
sum over basis functions multiplied by the sample values.

-- glen

```
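The "one at its own sample point, zero at the others" basis Glen describes can be built explicitly once a finite-dimensional stand-in for the bandlimited space is fixed. A toy Python sketch (the sample times and the three-parameter trigonometric model are hypothetical illustrative choices, not anything from the thread): each basis function solves an interpolation system with a Kronecker-delta right-hand side.

```python
import math

# Three nonuniform sample times (arbitrary illustrative values) and a
# 3-parameter model c0 + c1*cos(2 pi t) + c2*sin(2 pi t) standing in
# for a bandlimited signal space.
t_n = [0.10, 0.37, 0.81]

def row(t):
    return [1.0, math.cos(2 * math.pi * t), math.sin(2 * math.pi * t)]

def det(A):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

def solve3(M, y):
    """Cramer's rule for a small 3x3 linear system."""
    D = det(M)
    out = []
    for c in range(3):
        Mc = [r[:] for r in M]
        for i in range(3):
            Mc[i][c] = y[i]
        out.append(det(Mc) / D)
    return out

M = [row(t) for t in t_n]

def basis(j):
    """Model function equal to 1 at t_n[j] and 0 at the other samples."""
    c = solve3(M, [1.0 if i == j else 0.0 for i in range(3)])
    return lambda t: sum(ci * v for ci, v in zip(c, row(t)))

phi0 = basis(0)
vals = [phi0(t) for t in t_n]   # approximately [1, 0, 0]
```

Reconstruction is then sum_j s_j * basis(j)(t). As Glen notes, these basis functions are generally not orthogonal in continuous time, but they are the right ones for writing reconstruction as sample values times basis functions.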
```On 26 Jul., 15:00, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> Andor wrote:
>
> (snip on non-uniform sampling and aliasing)
>
> > However, it is obvious that one can still find sine waves at other
> > (higher) frequencies that _will_ pass through those points in Fig. 5.2
> > (I calculated this myself, using randomly selected samples of a sine
> > wave). It seems like humbug to display the same sine waves again as in
> > the equidistant case and say: "see, those sine waves don't pass
> > through those points, so we don't have aliasing".
>
> One that I was going to mention previously, but didn't, is that
> with non-uniform sampling you need to pass along the information
> about the sampling times as well as the sample values.

Continuing this point: once you are given the sampling times {t_n} and
the samples {s_n}, how are you going to process them further?
For example, an important algorithm would resample the nonuniform data
{t_n, s_n} to a uniformly sampled sequence {t_n', s_n'}. How?

I know how to do it perfectly if the average sample rate is twice the
Nyquist rate. But how can you do it if the average sample rate is
about 1/30 the Nyquist rate? What interpolation kernels do you use for
reconstruction?

>
> > It would really be interesting to see exactly how the pseudo-random
> > sample times are spaced to fully exclude aliasing up to 30 times the
> > Nyquist rate (as was the claim).
>
> 30 sounds pretty high.  As has been indicated, the result is an
> increase in noise.   If the noise is going to increase, one might as
> well use fewer bits/sample and increase the number of samples.
> I suppose, then, that there is a balance somewhere between
> the two.
>
> Considering the problem further, it would seem to be significant
> only in signals that have a large amount of energy in some
> narrow frequency ranges.

Because this then would correspond to bandpass sampling with random
sampling points where the average sampling rate is high enough. Old
hat (at least as old as Cauchy, who turned 218 this year).

> In that case, a compression method
> that recognizes the spectral content of the signal and stores
> it appropriately would seem a better use of bits.

Nonuniform sampling is numerically less stable for reconstruction than
uniform sampling. As soon as the sampling points deviate from uniform
spacing, the maximum of the interpolation kernels moves away from the
sampling points (the maximum of the sinc is right on the sampling
point). The less uniform the spacing, the higher this maximum becomes,
and thus more accuracy is required from the samples to achieve the same
reconstruction error energy as with uniform sampling.

In other words, if you have the option of uniform sampling, use it
rather than nonuniform sampling. You get the most out of a given
number of quantization bits that way.

>
> Also, this reminds me of my favorite deconvolution book
> (by Jansson).  In spectroscopy problems, you have a diffraction
> grating and two slits that determine the spectral width and
> amount of light that goes through.  Wider slits allow more light
> through but also blur out the spectrum.  More light means a higher
> signal/noise ratio.  If I remember it right, there is a problem
> where the S/N depends on the fifth power of the intensity such
> that wider slits and a non-linear deconvolution algorithm gives
> the best results.  That is, the extra S/N more than makes up for
> the loss in the deconvolution.

Sheesh, physicists never cease to amaze me :-).

Regards,
Andor

```
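One concrete answer to the resampling question, for the special case where the signal lives in a known finite-dimensional bandlimited space: fit the model to the nonuniform samples by solving a linear system, then evaluate the fitted model on the uniform grid. A toy Python sketch (the signal, the sample times, and the two-harmonic model are all hypothetical illustrative choices):

```python
import math

def solve(M, y):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(y)
    A = [row[:] + [y[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

# Hypothetical 1-periodic "bandlimited" signal: harmonics 1 and 2 only.
def sig(t):
    return 0.7 * math.sin(2 * math.pi * t) + 0.3 * math.cos(4 * math.pi * t)

# Model: a0 + sum_k (a_k cos(2 pi k t) + b_k sin(2 pi k t)), k = 1, 2
# -> 5 unknowns, so 5 nonuniform samples pin the signal down exactly.
def row(t):
    return [1.0,
            math.cos(2 * math.pi * t), math.sin(2 * math.pi * t),
            math.cos(4 * math.pi * t), math.sin(4 * math.pi * t)]

t_n = [0.05, 0.22, 0.41, 0.63, 0.88]        # nonuniform sample times
s_n = [sig(t) for t in t_n]

coef = solve([row(t) for t in t_n], s_n)    # fit the model

def recon(t):
    return sum(c * v for c, v in zip(coef, row(t)))

# Resample onto a uniform grid and compare with the true signal.
uniform = [j / 8.0 for j in range(8)]
max_err = max(abs(recon(t) - sig(t)) for t in uniform)
```

With as many samples as model parameters and well-spread sample times, the system is square and well-conditioned, so the recovery is exact up to rounding. As Andor notes above, the more the t_n cluster away from uniform spacing, the worse the conditioning of this system becomes.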
```Andor wrote:

(snip)

> Nonuniform sampling is numerically less stable for reconstruction than
> uniform sampling. As soon as the sampling points deviate from uniform
> spacing, the maximum of the interpolation kernels moves away from the
> sampling points (the maximum of the sinc is right on the sampling
> point). The less uniform the spacing, the higher this maximum becomes,
> and thus more accuracy is required from the samples to achieve the same
> reconstruction error energy as with uniform sampling.

This is true, but if you don't move very far from uniform and
have a complex pattern of shift with a very long period, you can
greatly reduce the single-frequency alias from a single-frequency
source.  That is pretty much what dithered clocks for processors do.
You don't want the clock period to vary by a large amount,
and you never want it less than the minimum clock period.

-- glen

```
```On 27 Jul., 11:30, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> Andor wrote:
>
> (snip)
>
> > Nonuniform sampling is numerically less stable for reconstruction than
> > uniform sampling. As soon as the sampling points deviate from uniform
> > spacing, the maximum of the interpolation kernels moves away from the
> > sampling points (the maximum of the sinc is right on the sampling
> > point). The less uniform the spacing, the higher this maximum becomes,
> > and thus more accuracy is required from the samples to achieve the same
> > reconstruction error energy as with uniform sampling.
>
> This is true, but if you don't move very far from uniform and
> have a complex pattern of shift with a very long period, you can
> greatly reduce the single-frequency alias from a single-frequency
> source.  That is pretty much what dithered clocks for processors do.
> You don't want the clock period to vary by a large amount,
> and you never want it less than the minimum clock period.

I know of a CD player that dithers the read-out clock of the CD. In
audio, it is generally accepted that correlated noise (showing up as
lines in the spectrum) is perceptually more disturbing than
uncorrelated wideband noise - dithering the clock transforms the
spectral lines caused by jitter into wideband noise. However, this
noise is not passed on as specific nonuniform sampling times {t_n} -
sampling is still assumed to be uniform. If the sampling times were
passed on, they could be corrected for, at which point it wouldn't
matter whether the noise is correlated or not - only the amplitude of
the noise would matter (because it reduces numerical stability, as
described above).

Why is clock jitter dithering used for processors?

Regards,
Andor

```
```On Jul 27, 5:02 am, Andor <andor.bari...@gmail.com> wrote:
> On 27 Jul., 11:30, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
>
> > Andor wrote:
>
> > (snip)
>
> > > Nonuniform sampling is numerically less stable for reconstruction than
> > > uniform sampling. As soon as the sampling points deviate from uniform
> > > spacing, the maximum of the interpolation kernels moves away from the
> > > sampling points (the maximum of the sinc is right on the sampling
> > > point). The less uniform the spacing, the higher this maximum becomes,
> > > and thus more accuracy is required from the samples to achieve the same
> > > reconstruction error energy as with uniform sampling.
>
> > This is true, but if you don't move very far from uniform and
> > have a complex pattern of shift with a very long period, you can
> > greatly reduce the single-frequency alias from a single-frequency
> > source.  That is pretty much what dithered clocks for processors do.
> > You don't want the clock period to vary by a large amount,
> > and you never want it less than the minimum clock period.
>
> I know of a CD player that dithers the read-out clock of the CD. In
> audio, it is generally accepted that correlated noise (showing up as
> lines in the spectrum) is perceptually more disturbing than
> uncorrelated wideband noise - dithering the clock transforms the
> spectral lines caused by jitter into wideband noise. However, this
> noise is not passed on as specific nonuniform sampling times {t_n} -
> sampling is still assumed to be uniform. If the sampling times were
> passed on, they could be corrected for, at which point it wouldn't
> matter whether the noise is correlated or not - only the amplitude of
> the noise would matter (because it reduces numerical stability, as
> described above).
>
> Why is clock jitter dithering used for processors?
>
> Regards,
> Andor

Hello Andor,

Clock dithering is sometimes used in processors to keep their emitted
radiation down. A processor I like to use, the Rabbit 4000, definitely
has this feature. Computer clocks tend to be a horrible source of
noise, so anything that can reduce it is welcome.

Clay

```
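Clay's EMI point can be illustrated numerically: adding a small periodic phase dither to a carrier spreads its energy into modulation sidebands, lowering the spectral line at the carrier itself (for sinusoidal phase modulation, by roughly the Bessel factor J0(beta)). A Python sketch with arbitrary illustrative parameters:

```python
import math

N = 256      # DFT length
k0 = 32      # carrier bin
km = 8       # dither (modulation) rate, in bins
beta = 1.0   # phase-dither depth, radians

def dft_mag(x, k):
    """Magnitude of bin k of the length-N DFT, computed directly."""
    re = sum(x[n] * math.cos(2 * math.pi * k * n / N) for n in range(N))
    im = sum(x[n] * math.sin(2 * math.pi * k * n / N) for n in range(N))
    return math.hypot(re, im)

# Undithered carrier vs. the same carrier with a small periodic phase
# dither (a crude model of a spread-spectrum / dithered processor clock).
plain = [math.cos(2 * math.pi * k0 * n / N) for n in range(N)]
dith = [math.cos(2 * math.pi * k0 * n / N
                 + beta * math.sin(2 * math.pi * km * n / N))
        for n in range(N)]

# The dithered clock's spectral line at the carrier bin is suppressed.
ratio = dft_mag(dith, k0) / dft_mag(plain, k0)
```

Here ratio comes out near J0(1), about 0.77; pushing beta toward 2.40 (the first zero of J0) suppresses the carrier line almost completely, at the cost of taller sidebands at k0 +/- km, +/- 2km, and so on. The total energy is unchanged - it is only moved off the single line, which is what narrowband EMI limits care about.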
```Andor wrote:

...

> Why is clock jitter dithering used for processors?