
Beating Nyquist?

Started by Andor July 25, 2007
On 31 Jul, 09:58, Andor <andor.bari...@gmail.com> wrote:
> On 31 Jul., 02:29, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
>> Fred Marshall wrote:
>> (snip)
>>> It can be shown that a sinc is just a sum of sinusoids and that sines and
>>> cosines form an equally valid temporal basis set for a finite and regularly
>>> discrete spectrum. Just being finite and discrete makes it pretty obvious,
>>> eh?
>>
>> Any linear, and linearly independent, combination of basis functions
>> can be used as a new basis. The sinc basis is convenient for uniform
>> spaced samples, as each sample is represented by one basis function
>> in the reconstruction.
>>
>> Similarly, there are basis functions that represent the reconstructed
>> samples of a non-linearly sampled signal. Those are not sinc.
>
> Glen, can you tell me more about those basis functions for
> non-linearly sampled signals?
I would be very surprised if simple analytical expressions exist for such functions. The statement "a set of basis functions exists" is a far cry from "this is the set of basis functions." The statement "a linear signal can be reconstructed from any complete set of orthogonal basis functions" can be proved using the mathematical tools covered in an intro course on real analysis. However, the set of sinc(x) functions is the one set of such functions that has the property that the contribution to the reconstructed signal from one sample is represented by exactly one of the basis functions.

As others have already mentioned, once you throw uniform sampling out the window, you throw all the convenient, well-understood, easy-to-use tools with it.

Rune
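[A quick numerical sketch of the uniform-sampling case Rune describes, where each sample contributes exactly one shifted sinc (Whittaker-Shannon interpolation). Python with NumPy assumed; the test signal and sample count are only illustrative.]

```python
import numpy as np

def sinc_reconstruct(t, sample_times, samples):
    """Whittaker-Shannon interpolation for uniformly spaced samples:
    each sample contributes exactly one shifted sinc basis function."""
    T = sample_times[1] - sample_times[0]   # uniform spacing
    # np.sinc(x) = sin(pi x)/(pi x), the classical interpolation kernel
    return np.sum(samples * np.sinc((t - sample_times) / T))

# band-limited test signal, sampled well above its Nyquist rate
times = np.arange(-50.0, 51.0)              # T = 1 -> bandwidth limit 0.5
signal = lambda t: np.cos(2 * np.pi * 0.1 * t)
samples = signal(times)

# exact at the sample instants, accurate in between (up to truncation
# of the infinite sum to 101 terms)
on_grid = sinc_reconstruct(3.0, times, samples)
off_grid = sinc_reconstruct(0.25, times, samples)
```

The slowly decaying 1/x tails of the sinc are why the sum must be truncated carefully in practice; the off-grid value here is accurate only because the tail terms largely cancel.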
Rune Allnor wrote:
(snip)

>> Glen, can you tell me more about those basis functions for
>> non-linearly sampled signals?
> I would be very surprised if simple analytical expressions
> exist for such functions. The statement "a set of basis
> functions exists" is a far cry from "this is the set of
> basis functions."
Simpler sample spacing will make for simpler basis functions. Next to uniform sampling, shifting every other sample over (that is, uniformly spaced pairs) shouldn't be so hard. Uniformly spaced triplets will be a little harder.
> The statement "a linear signal can be reconstructed from
> any complete set of orthogonal basis functions" can be
> proved using the mathematical tools covered in an intro
> course on real analysis.
(snip)

-- glen
On 31 Jul., 22:08, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> Rune Allnor wrote:
> (snip)
>>> Glen, can you tell me more about those basis functions for
>>> non-linearly sampled signals?
>> I would be very surprised if simple analytical expressions
>> exist for such functions. The statement "a set of basis
>> functions exists" is a far cry from "this is the set of
>> basis functions."
>
> Simpler the sample spacing will make simpler basis
> functions. Next to uniform sampling, shifting every
> other sample over, (that is, uniformly spaced pairs)
> shouldn't be so hard.
What exactly would they look like? How would you go about computing them? Regards, Andor
"Andor" <andor.bariska@gmail.com> wrote in message 
news:1186048818.648597.28730@d55g2000hsg.googlegroups.com...
> On 31 Jul., 22:08, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
>> Rune Allnor wrote:
>> (snip)
>>>> Glen, can you tell me more about those basis functions for
>>>> non-linearly sampled signals?
>>> I would be very surprised if simple analytical expressions
>>> exist for such functions. The statement "a set of basis
>>> functions exists" is a far cry from "this is the set of
>>> basis functions."
>>
>> Simpler the sample spacing will make simpler basis
>> functions. Next to uniform sampling, shifting every
>> other sample over, (that is, uniformly spaced pairs)
>> shouldn't be so hard.
>
> What exactly would they look like? How would you go about computing
> them?
>
> Regards,
> Andor
I'm a bit unclear on Glen's specifications for such a basis set.

Here's a guess:
The basis set has to be 1.0 at the intended sample time *and* zero at all the other known sample times (?).

So, knowing the sample times, one might think one could construct a polynomial that fits all those points, I suppose. I'd expect to see some weird functions if the sample times are bunched and sparse. In fact, I expect it can be proven that zeros can't be bunched too closely together without the functions "blowing up" outside. Bandwidth limitations limit the regular spacing of zeros ... as with the sinc. This makes me think that my "guess" above isn't what Glen had in mind.

Fred
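[Fred's guessed criterion - 1.0 at the intended sample time, zero at the others - is exactly what a Lagrange basis polynomial does for finitely many sample times. A small sketch; the sample times here are made up to show the "bunched and sparse" blow-up he predicts.]

```python
def lagrange_basis(sample_times, j):
    """Return the polynomial that is 1 at sample_times[j] and 0 at
    every other sample time -- works for irregular spacing, but only
    for finitely many sample instants."""
    tj = sample_times[j]
    others = [tk for k, tk in enumerate(sample_times) if k != j]

    def basis(t):
        p = 1.0
        for tk in others:
            p *= (t - tk) / (tj - tk)
        return p
    return basis

# irregular ("bunched and sparse") sample times
times = [0.0, 0.1, 0.15, 1.0, 3.0]
b2 = lagrange_basis(times, 2)

# 1 at its own instant, 0 at the rest ...
vals = [b2(t) for t in times]
# ... but it grows violently outside the sampled span, as Fred suspects
far = b2(5.0)
```

With bunched zeros near t = 0.15, `far` ends up in the thousands: the polynomial satisfies the interpolation criterion but "blows up" away from the samples, which is why it can't serve as a bandlimited reconstruction basis.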
Fred Marshall wrote:

   ...

> Here's a guess:
> The basis set has to be 1.0 at the intended sample time *and* zero at all
> the other known sample times (?).
Suppose that two of the manufactured basis functions were each .5 at one of the sample points? Suppose that instead of being individually zero at all other sample points, the *sum* of all functions not specific to a point is zero? Suppose, suppose.... It makes my head spin.

Jerry
--
Engineering is the art of making what you want from things you can get.
"Jerry Avins" <jya@ieee.org> wrote in message 
news:JuidndhNMfkEiS_bnZ2dnUVZ_uKpnZ2d@rcn.net...
> Fred Marshall wrote:
>
> ...
>
>> Here's a guess:
>> The basis set has to be 1.0 at the intended sample time *and* zero at all
>> the other known sample times (?).
>
> Suppose that two of the manufactured basis functions were each .5 at one
> of the sample points? Suppose that instead of being individually zero at
> all other sample points, the *sum* of all functions not specific to a
> point is zero? Suppose, suppose.... It makes my head spin.
>
> Jerry
Jerry,

There aren't that many choices for criteria. I guessed that Glen wanted zero at each "other" sample instant and 1.0 at the "designated" sample instant - the same criterion as when one uses a sinc on a regular grid. Otherwise, e.g. when using sinusoids, the only criterion is that the sums equal the sample values at the sample points. How's that for a general requirement? errrr... that's the Fourier Transform or Series, depending.....

But, since you mentioned it: if one uses the sum of three adjacent sincs with weights .5, 1.0, .5 as the basis set, then the summation is a bit more involved, as three samples determine the weight for each basis function, and the payoff is that the "tails" decay much more rapidly than a sinc's. Sometimes that's handy. In the frequency domain it's the difference between a gate and a raised cosine.

Forcing the sum to be zero instead of forcing the basis functions to be individually zero is doable as long as the number of such points is limited. For example, if doing a minimax approximation as Parks-McClellan does, one can force zeros (or other equalities) in the approximant. So, with the Remez algorithm, doing this causes the error to skip a sign alternation, as it uses up one degree of freedom for each such point. Specifically, the zeros need to be in the error, so they would also be response zeros in a stopband. Passband forced-zero error points would be where the approximant is equal to the desired passband response.

Fred
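[Fred's claim about the .5, 1.0, .5 composite kernel can be checked numerically: in the frequency domain it is a raised cosine rather than a gate, so its time-domain tails die off much faster than a lone sinc's. A sketch with NumPy; the evaluation point 20.5 is just an illustrative spot far out on the tail.]

```python
import numpy as np

def sinc_kernel(x):
    # np.sinc(x) = sin(pi x)/(pi x): tails decay only like 1/x
    return np.sinc(x)

def raised_kernel(x):
    # Fred's composite basis: three adjacent sincs weighted .5, 1.0, .5.
    # The three 1/x tails nearly cancel, leaving much faster decay.
    return 0.5 * np.sinc(x - 1) + np.sinc(x) + 0.5 * np.sinc(x + 1)

x = 20.5  # far out on the tail, at a half-integer where sinc peaks locally
tail_sinc = abs(sinc_kernel(x))
tail_composite = abs(raised_kernel(x))
```

At x = 20.5 the plain sinc tail is about 0.016 while the composite's is orders of magnitude smaller - the price, as Fred notes, is that each composite's weight is determined by three samples rather than one.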
Fred Marshall wrote:
(snip of discussion on basis function for non-uniform sampling)

> Here's a guess:
> The basis set has to be 1.0 at the intended sample time *and* zero at all
> the other known sample times (?).
> So, knowing the sample times, one might think one could construct a
> polynomial that fits all those points I suppose. I'd expect to see some
> weird functions if the sample times are bunched and sparse. In fact, I
> expect it can be proven that zeros can't be bunched too closely together
> without the functions "blowing up" outside. Bandwidth limitations limit the
> regular spacing of zeros ... as with the sinc.
If by "blowing up" you mean infinite, no, it shouldn't do that. As the spacing gets less uniform, though, the basis functions can get large, which is the cause for the sensitivity of the reconstruction to the accuracy of the samples, including any noise.

Consider sampling at points ..., -3, -3+a, -2, -2+a, -1, -1+a, 0, a, 1, 1+a, 2, 2+a, ... That is, all integers and integers plus a constant a. What do the functions look like when a=0.5? They must be sinc(2x). But sinc(2x) = sin(2 pi x)/(2 pi x) = cos(pi x) sin(pi x)/(pi x), where the cos(pi x) adds the extra zeros where they are needed.

If you put two samples between each integer, at a and b, and have a=1/3 and b=2/3, then

   sinc(3x) = sin(3 pi x)/(3 pi x) = (4 cos(pi x)**2 - 1) sin(pi x)/(3 pi x)
            = (2 cos(pi x) - 1) (2 cos(pi x) + 1) sin(pi x)/(3 pi x)

where the cos terms supply the new zeros. This gives some hint as to what the functions will look like for other a and b in factored form.

For samples at integers and integers+a, the function for x=0 (and shifted, for other integers) is

   f(x) = sin(pi x) sin(pi (x-a)) / sin(-pi a) / (pi x)

The function for the sample at a will be the mirror image around a, g(x) = f(a-x):

   g(x) = sin(pi (a-x)) sin(-pi x) / sin(-pi a) / (pi (a-x))

As a gets close to 0 or 1, the peak gets larger as abs(sin(pi a)) gets smaller.

-- glen
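[Glen's f(x) for the integers-and-integers-plus-a grid can be verified numerically: it is 1 at x = 0, 0 at every other sample instant, and its between-sample peak grows as a shrinks. A sketch with NumPy; the particular values of a and the probe points are only illustrative.]

```python
import numpy as np

def f(x, a):
    """Glen's basis function for samples at the integers and at
    integers + a: equals 1 at x = 0 and 0 at every other sample
    instant.  np.sinc(x) = sin(pi x)/(pi x) supplies the integer
    zeros; sin(pi (x - a)) supplies the zeros at integers + a."""
    return np.sinc(x) * np.sin(np.pi * (x - a)) / np.sin(-np.pi * a)

a = 0.3
# 1 at its own sample point (evaluated just off 0 to dodge the 0/0) ...
at_zero = f(1e-12, a)
# ... and 0 at the other sample instants
at_others = [f(t, a) for t in (1.0, -2.0, a, 1 + a)]

# the peak between samples grows as a -> 0, since |sin(pi a)| shrinks:
peak_mild = abs(f(0.65, 0.3))
peak_severe = abs(f(0.55, 0.05))
```

The growing peak for small a is the quantitative form of the noise-sensitivity Glen mentions: a sample pair squeezed together amplifies any error in those samples.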
glen herrmannsfeldt wrote:
> Fred Marshall wrote:
>
> (snip of discussion on basis function for non-uniform sampling)
>
>> Here's a guess:
>> The basis set has to be 1.0 at the intended sample time *and* zero at all
>> the other known sample times (?).
>> So, knowing the sample times, one might think one could construct a
>> polynomial that fits all those points I suppose.
In general, there are an infinite number of sample points, and the polynomial becomes a power series. But the idea is correct.
>> I'd expect to see some
>> weird functions if the sample times are bunched and sparse. In fact, I
>> expect it can be proven that zeros can't be bunched too closely together
>> without the functions "blowing up" outside. Bandwidth limitations limit the
>> regular spacing of zeros ... as with the sinc.
>
> If by "blowing up" you mean infinite, no, it shouldn't do that.
If by "blowing up" Fred means growing without bound, then that seems very good intuition to me. You can see it in your functions below if you let a->0 or 1.
> As the spacing gets less uniform, though, the basis functions can
> get large, which is the cause for the sensitivity of the reconstruction
> to the accuracy of the samples, including any noise.
>
> Consider sampling at points ..., -3, -3+a, -2, -2+a, -1, -1+a, 0, a, 1,
> 1+a, 2, 2+a, ... That is, all integers and integers plus a constant a.
> What do the functions look like when a=0.5? They must be sinc(2x).
> But sinc(2x) = sin(2 pi x)/(2 pi x) = cos(pi x) sin(pi x)/(pi x), where
> the cos(pi x) adds the extra zeros where they are needed.
>
> If you put two samples between each integer, at a and b, and
> have a=1/3 and b=2/3, then
>
>    sinc(3x) = sin(3 pi x)/(3 pi x) = (4 cos(pi x)**2 - 1) sin(pi x)/(3 pi x)
>             = (2 cos(pi x) - 1) (2 cos(pi x) + 1) sin(pi x)/(3 pi x)
>
> where the cos terms supply the new zeros.
>
> This gives some hint as to what the functions will look like
> for other a and b in factored form.
>
> For samples at integers and integers+a, the function
> for x=0 (and shifted, for other integers) is
>
>    f(x) = sin(pi x) sin(pi (x-a)) / sin(-pi a) / (pi x)
I'm not quite sure how you arrived at that function. Does that follow from what you wrote above?
> the function for the sample at a will be the mirror image around a,
> g(x) = f(a-x)
>
>    g(x) = sin(pi (a-x)) sin(-pi x) / sin(-pi a) / (pi (a-x))
If you compare g and f with the kernels given in [1], these certainly seem to be the correct interpolation functions for the case where every second sample is shifted.
> as a gets close to 0 or 1 the peak gets larger as
> abs(sin(pi a)) gets smaller.
Yup.

Regards,
Andor

[1] J. L. Yen, "On Nonuniform Sampling of Bandwidth-Limited Signals," IRE Trans. Circuit Theory, vol. 3, pp. 251-257, Dec. 1956.
"Andor" <andor.bariska@gmail.com> wrote in message 
news:1186154018.682574.177520@22g2000hsm.googlegroups.com...
(snip)
Andor and Glen,

Well, it wasn't *entirely* intuition, but close enough. I was thinking of supergained functions, which "blow up" in the sense that the approximated function gets very large - perhaps not infinite - outside the region of interest (and sometimes in the "invisible" region, to use antenna pattern language). This occurs when you push the approximant between zero and 2pi without bounding what happens beyond 2pi. A limiting case for supergaining is the van der Maas antenna pattern function, which has perfectly flat sidelobes / sinc-like functions with non-decaying tails / extending beyond 2pi and to infinity (this is seen if the independent variable is taken as the angle and not the cosine of the angle), and has a window transform that *is* necessarily infinite at the edges to produce the never-ending sinusoidal component.

Andor interprets what I called "bunched up" as the separation between samples approaching zero. That's a good way of looking at it. You might ask: what happens to the implied bandwidth when that happens? I think it must go up.

There's another way of looking at this: assume a strictly bandlimited function is sampled irregularly. Reconstruction can happen by convolving the samples with a sinc, i.e. passing them through a perfect lowpass filter. Looking at it that way, the basis set doesn't change from the most familiar one - the reconstruction expressions are just more complicated, is all. Glen's construction is interesting nonetheless!

Fred