DSPRelated.com
Forums

Beating Nyquist?

Started by Andor July 25, 2007
Friends,

just now I stumbled upon this webpage:

http://www.edi.lv/dasp-web/

Specifically, in this chapter

http://www.edi.lv/dasp-web/sec-5.htm

they state that they can sample a 1.2GHz signal using pseudo-random
sampling instants with an average rate of 80MHz (in the last line of
section "5.2 Aliasing, how to avoid it").

I know that for nonuniform sampling, a generalization of the Sampling
Theorem has been proved [1], which states that a bandlimited signal
may be reconstructed from nonuniformly spaced samples if the "average"
sampling rate is higher than twice the bandwidth of the signal.

This doesn't immediately contradict the claim above - it just says
that if the average sampling rate exceeds a certain limit, the samples
are guaranteed to be sufficient for reconstruction. It might well be
that reconstruction is still possible when the average rate is below
that limit. However, the claim still seems like magic to me,
especially in light of the fact that the sampled signals are subject
to no restrictions (apart from the 1.2GHz bandwidth).
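
To get a feel for the nonuniform sampling theorem itself, I put
together a toy numpy sketch (entirely my own, not taken from [1] or
from the EDI pages): it reconstructs a bandlimited, T-periodic test
signal from randomly spaced samples by least-squares fitting its
Fourier coefficients, with the average rate a bit above twice the
bandwidth. Of course this only exercises the "safe" regime of [1], not
the sub-Nyquist regime that the EDI page claims:

import numpy as np

rng = np.random.default_rng(0)

T = 1.0                  # observation window; signal assumed T-periodic
K = 20                   # highest harmonic -> one-sided bandwidth B = K/T Hz
n = int(2.5 * 2 * K)     # number of samples: average rate ~2.5x the 2B bound

t = np.sort(rng.uniform(0.0, T, n))          # nonuniform sampling instants
coef_true = rng.standard_normal(2 * K + 1)   # [DC, cosines 1..K, sines 1..K]

def basis(times):
    """Rows of [1, cos(2*pi*k*t/T)..., sin(2*pi*k*t/T)...] for k = 1..K."""
    k = np.arange(1, K + 1)
    arg = 2 * np.pi * np.outer(times, k) / T
    return np.hstack([np.ones((len(times), 1)), np.cos(arg), np.sin(arg)])

x = basis(t) @ coef_true                     # the nonuniform samples

# least-squares fit of the Fourier coefficients from the samples alone
coef_est, *_ = np.linalg.lstsq(basis(t), x, rcond=None)

# compare the reconstruction with the true signal on a dense uniform grid
tu = np.linspace(0.0, T, 2000, endpoint=False)
err = np.max(np.abs(basis(tu) @ (coef_est - coef_true)))
print(f"max reconstruction error on a dense grid: {err:.2e}")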

Comments?

Regards,
Andor

[1] F. J. Beutler, "Error-free recovery of signals from irregularly
spaced samples," SIAM Rev., vol. 8, pp. 328-335, July 1966.

On 25 Jul, 11:24, Andor <andor.bari...@gmail.com> wrote:
> just now I stumbled upon this webpage:
>
> http://www.edi.lv/dasp-web/
>
> Specifically, in this chapter
>
> http://www.edi.lv/dasp-web/sec-5.htm
>
> they state that they can sample a 1.2GHz signal using pseudo-random
> sampling instants with an average rate of 80MHz (in the last line of
> section "5.2 Aliasing, how to avoid it").
> [...]
I don't like "random" algorithms which work "on average." Maybe the claim that random sampling works "on average" can be formally justified; I don't have the competence to comment on such a *formal* claim either way.
From a practical perspective, assume you have a signal x(t) which
comprises one transient of the type of signal I prefer to call an
"energy signal", i.e.

  int_{-inf}^{+inf} |x(t)|^2 dt < inf                        [1]

As you know, one of the consequences of [1] is that the magnitude of
x(t) is "significantly larger than 0", |x(t)| > eps, only inside some
finite domain, say, a < t < b.

This means that you *can* get the "random" sampling to work, beating
Nyquist, provided most of the sampling points are "spent" inside the
domain [a,b]. On the other hand, you are screwed if the converse
happens to be the case, and no sampling instants fall inside the
domain [a,b]. This is a perfectly valid case, as the properties of the
random sampling scheme are defined on average, not per instance.

Basically, by employing a randomized sampling scheme you exchange a
global scheme with well-defined, well-understood, guaranteed
properties for a random scheme which might be better "on average", but
where the worst-case scenario is that you lose the properties of the
signal you want.

It's a matter of economy and damage control: Is it acceptable that the
properties of a particular instance of a sampled signal cannot be
guaranteed? Would your project sponsors accept it? Or does the
application require deterministic, if poor, performance
characteristics?

Rune
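
P.S. A back-of-the-envelope check of the "every sampling point misses
[a,b]" scenario, assuming purely Poisson sampling at the average rate.
The EDI scheme is pseudo-random and may well bound the gaps, so treat
this as an illustration only; the 50 ns transient length is my own
choice:

import numpy as np

rng = np.random.default_rng(1)

rate = 80e6          # average sampling rate, 80 MHz
width = 50e-9        # length of the interval [a, b]: a 50 ns transient
trials = 200_000

# number of Poisson sampling points that land inside the transient, per trial
counts = rng.poisson(rate * width, size=trials)
print(f"empirical   P(no sample in [a,b]) = {np.mean(counts == 0):.4f}")
print(f"theoretical exp(-rate*width)      = {np.exp(-rate * width):.4f}")

With these numbers the miss probability comes out near exp(-4), about
2%, which may or may not be acceptable - which I suppose is the point.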
On Jul 25, 5:24 am, Andor <andor.bari...@gmail.com> wrote:
> Friends,
>
> just now I stumbled upon this webpage:
>
> http://www.edi.lv/dasp-web/
>
> Specifically, in this chapter
>
> http://www.edi.lv/dasp-web/sec-5.htm
>
> they state that they can sample a 1.2GHz signal using pseudo-random
> sampling instants with an average rate of 80MHz (in the last line of
> section "5.2 Aliasing, how to avoid it").
I agree with Rune's comments too, but the thing that kind of sticks
out to me is that you're throwing away most of the basic tools of DSP
(like linear filtering), as they have the implicit assumption that the
data is periodically sampled. I've never studied nonuniform sampling
techniques, so there may be a way around this, but it seems like you
would have to exert a lot more effort to account for the times at
which each sample was taken.

Jason
cincydsp@gmail.com wrote in news:1185365183.924874.258340
@b79g2000hse.googlegroups.com:

> On Jul 25, 5:24 am, Andor <andor.bari...@gmail.com> wrote:
>> [...]
>> they state that they can sample a 1.2GHz signal using pseudo-random
>> sampling instants with an average rate of 80MHz (in the last line of
>> section "5.2 Aliasing, how to avoid it").
>
> I agree with Rune's comments too, but the thing that kind of sticks
> out to me is that you're throwing away most of the basic tools of DSP
> (like linear filtering), as they have the implicit assumption that the
> data is periodically sampled. I've never studied nonuniform sampling
> techniques, so there may be a way around this, but it seems like you
> would have to exert a lot more effort to account for the times at
> which each sample was taken.
>
> Jason
If the signal can be reconstructed, you can always resample, so you
throw away nothing (but we can argue about the "if").

--
Scott
Reverse name to reply
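
P.S. For the simplest case, "resample, then use the usual toolbox"
looks like this. A minimal numpy sketch of my own; the linear
interpolation step is only adequate because the samples are dense
here, and a real system would use a proper nonuniform reconstruction
instead:

import numpy as np

rng = np.random.default_rng(2)

fs = 1000.0                                   # target uniform rate, Hz
t_nu = np.sort(rng.uniform(0.0, 1.0, 3000))   # dense nonuniform sample times
x_nu = np.sin(2 * np.pi * 10 * t_nu) + 0.3 * np.sin(2 * np.pi * 200 * t_nu)

# 1) put the samples back on a uniform grid (crude: linear interpolation)
t_u = np.arange(0.0, 1.0, 1.0 / fs)
x_u = np.interp(t_u, t_nu, x_nu)

# 2) now any standard uniform-rate tool applies, e.g. a windowed-sinc lowpass
taps = np.arange(-64, 65)
h = np.sinc(2 * 50.0 / fs * taps) * np.hamming(len(taps))   # ~50 Hz cutoff
h /= h.sum()
y = np.convolve(x_u, h, mode="same")   # keeps the 10 Hz tone, removes 200 Hz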
On Jul 25, 5:24 am, Andor <andor.bari...@gmail.com> wrote:
> [...]
> they state that they can sample a 1.2GHz signal using pseudo-random
> sampling instants with an average rate of 80MHz (in the last line of
> section "5.2 Aliasing, how to avoid it").
> [...]
It actually says:

"... for fully digital analysis of RF signals in Time, Frequency and
Modulation Domains in the frequency range from dc up to 1.2 GHz ..."

So the RF carrier frequencies can be DC to 1.2GHz, but it does not say
anything about the instantaneous bandwidth of those signals. I suspect
this is simple sub-sampling.

Mark
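
P.S. If that is what is going on, it is ordinary bandpass (sub-Nyquist)
sampling, which even a plain uniform clock can do as long as the
instantaneous bandwidth fits inside fs/2. A toy numpy sketch with
numbers of my own choosing (not from the EDI pages):

import numpy as np

fs = 80e6               # uniform sampling rate
fc = 1.01e9             # RF carrier, far above fs/2
N = 4096

n = np.arange(N)
x = np.cos(2 * np.pi * fc * n / fs)            # sub-sampled carrier

spec = np.abs(np.fft.rfft(x * np.hanning(N)))
f_peak = np.argmax(spec) * fs / N
f_expected = abs(fc - round(fc / fs) * fs)     # folding prediction: 30 MHz here
print(f"peak at {f_peak/1e6:.1f} MHz, predicted alias at {f_expected/1e6:.1f} MHz")

With these numbers the 1.01 GHz carrier folds down to 30 MHz, so a
narrowband signal around that carrier would simply be analysed at the
folded frequency.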
Rune Allnor wrote:
> I don't like "random" algorithms which work "on average."
> Maybe the claim that random sampling works "on average"
> can be formally justified; I don't have the competence
> to comment on such a *formal* claim either way.
Aren't algorithms that work statistically the very essence of signal
processing? :-\

Steve
On Wed, 25 Jul 2007 06:27:47 -0700, Mark wrote:

> On Jul 25, 5:24 am, Andor <andor.bari...@gmail.com> wrote:
>> [...]
>> they state that they can sample a 1.2GHz signal using pseudo-random
>> sampling instants with an average rate of 80MHz (in the last line of
>> section "5.2 Aliasing, how to avoid it").
>> [...]
>
> It actually says:
>
> "... for fully digital analysis of RF signals in Time, Frequency and
> Modulation Domains in the frequency range from dc up to 1.2 GHz ..."
>
> So the RF carrier frequencies can be DC to 1.2GHz, but it does not say
> anything about the instantaneous bandwidth of those signals. I suspect
> this is simple sub-sampling.
>
> Mark
If that is, indeed, what they mean, this could be a dandy way to take
a signal at a single frequency and smear it over a spectrum so that it
gets lost in the noise. It wouldn't really be _anti_ aliasing, but it
may still be a good thing.

--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
On 25 Jul, 15:57, Steve Underwood <ste...@dis.org> wrote:
> Rune Allnor wrote:
> > I don't like "random" algorithms which work "on average."
> > Maybe the claim that random sampling works "on average"
> > can be formally justified; I don't have the competence
> > to comment on such a *formal* claim either way.
>
> Aren't algorithms that work statistically the very essence of signal
> processing? :-\
No. They play a large part -- adaptive DSP and model-based DSP based
on parameter estimation come to mind -- but all suffer from the same
weakness: they only work to whatever extent the statistical model fits
reality. Or, alternatively, to the extent the signal fits the model.

As long as there is a certain degree of match between signal and
statistical model, OK, the algorithms work. The problems occur when
the mismatch becomes large and goes undetected. Are those instances
acceptable "on average", or does the application require *guaranteed*
worst-case behaviour?

Rune
"Mark" <makolber@yahoo.com> wrote in message 
news:1185370067.325274.266580@22g2000hsm.googlegroups.com...
> It actually says:
>
> "... for fully digital analysis of RF signals in Time, Frequency and
> Modulation Domains in the frequency range from dc up to 1.2 GHz ..."
>
> So the RF carrier frequencies can be DC to 1.2GHz, but it does not say
> anything about the instantaneous bandwidth of those signals. I suspect
> this is simple sub-sampling.
>
> Mark
Yes, it does say that, but I'm persuaded to believe that he meant the
bandwidth.

Without studying it too hard, it looks to me like the proposed
approach tends to spread the spectrum of the aliases - generating what
looks more like noise. Thus creating a tradeoff between having aliases
and noise. It seems like it would depend a lot on the nature of the
signal for it to work. Periodic waveforms might work fine while more
random waveforms might not - because how do you differentiate between
"signal" and "noise" in that latter case? I'm not endorsing the claim,
just pondering it.

Knowing the sample times precisely only positions the samples for
reconstruction. Pure sinc reconstruction (or lowpass filtering of
precisely-placed samples) works fine for the fundamental components.
But, for higher frequency components that would alias, the dithering
might spread their spectrum, wouldn't it?

Here's another way of looking at it - a thought experiment:

Assume a really high sample rate that is equivalent to the temporal
resolution of the samples to be taken "randomly". Surely there is a
time grid that must underlie the method - for example, on which to
place the reconstructing waveforms (e.g. sincs). This creates a sample
rate that is (by definition) "high enough" for the actual bandwidth
and Nyquist. Sample at this rate. You know what the spectrum will look
like if you know the signal being sampled.

Then, decimate, but randomly. That is, decimate (but not regularly) so
that the average sample rate is what you wanted in the first place.
Note that this is a nonlinear operation, so familiar methods of
analysis don't work.

Now, use a test signal that is above the "new" fs/2. It seems to me
that the randomized sample points will modulate the heck out of a
higher-frequency sinusoid - turning its samples into what looks like
random noise - because of the rather drastic phase hops between
samples. Thus, no "alias" tonals in the result - but more noise. If
you switch the input to a test signal that is below the "new" fs/2,
then there will be no modulation (spectral spreading), because of the
careful temporal registration of the samples and the reconstructing
sincs.

Well, that's an arm-waving description and I'm sure others can do a
better, more thoughtful job of explaining why this might be. I think
that's what the author is driving at without being very clear about
it. So, no free lunch and no cheating Nyquist, just a method to trade
tonal aliases for added noise, with a nonlinear step in the
"constructed" process above.

*****

OK - I thought about it some more...

The "nonlinear" step above can be replaced with a linear step. Create
the desired sample points in time. By definition they align on the
fine temporal grid. Note that the gross view of these samples looks
like they are regularly spaced. But they aren't, of course - on
purpose. Because the temporal location will be dithered at decimation,
using a new set of sample points, the spectrum of this lower-frequency
unit sample train is not the typical picket fence repeating at fs.
Rather, there's a broader clump at fs - the width determined by how
much temporal deviation is built in between the samples.

Now, multiply the high-frequency signal samples by the lower-frequency
dithered unit samples. This linear operation zeros out all the
unwanted samples from the higher-fs sampled set and results in a set
of signal samples with dithered sample times. At the same time, it
convolves their spectra.

I don't know.... there's something about the reconstruction that
should work for the signals below fs/2 just fine. Since the
reconstruction steps are linear, it should be easy to figure out.

Fred
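
P.S. The thought experiment is easy to try numerically. A rough numpy
sketch with toy parameters of my own (nothing from the EDI pages):
multiply a fine-grid tone by either a regular 1-in-M unit sample train
or a randomly thinned train with the same average density, then look
at the strongest line that lands inside the "new" baseband:

import numpy as np

rng = np.random.default_rng(3)

N = 1 << 16
M = 16                 # average decimation factor -> "new" Nyquist = 1/(2M)
f_hi = 0.30            # test tone, cycles per fine-grid sample (above 1/(2M))

n = np.arange(N)
x = np.cos(2 * np.pi * f_hi * n)

mask_reg = (n % M == 0).astype(float)                # regular 1-in-M train
mask_rnd = (rng.random(N) < 1.0 / M).astype(float)   # randomly thinned train

def spec_db(v):
    w = np.hanning(N)
    return 20 * np.log10(np.abs(np.fft.rfft(v * w)) / (w.sum() / 2) + 1e-15)

f = np.fft.rfftfreq(N)                  # 0 .. 0.5 cycles per fine-grid sample
low = (f > 0) & (f < 1.0 / (2 * M))     # the "new" baseband

print("strongest line inside the new baseband:")
print(f"  regular grid: {spec_db(x * mask_reg)[low].max():6.1f} dB (folded alias)")
print(f"  random grid : {spec_db(x * mask_rnd)[low].max():6.1f} dB (noise floor)")

The regular train produces a folded line at roughly 1/M of the
original amplitude, while the random train leaves only a broadband
floor in that band - which matches the "aliases traded for noise"
picture above.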
Rune Allnor wrote:
> On 25 Jul, 15:57, Steve Underwood <ste...@dis.org> wrote:
>> Rune Allnor wrote:
>>> I don't like "random" algorithms which work "on average."
>>> Maybe the claim that random sampling works "on average"
>>> can be formally justified; I don't have the competence
>>> to comment on such a *formal* claim either way.
>> Aren't algorithms that work statistically the very essence of signal
>> processing? :-\
>
> No. They play a large part -- adaptive DSP and model-based DSP based
> on parameter estimation come to mind -- but all suffer from the same
> weakness: they only work to whatever extent the statistical model fits
> reality. Or, alternatively, to the extent the signal fits the model.
>
> As long as there is a certain degree of match between signal and
> statistical model, OK, the algorithms work. The problems occur when
> the mismatch becomes large and goes undetected. Are those instances
> acceptable "on average", or does the application require *guaranteed*
> worst-case behaviour?
Pretty much anything in comms is statistical. Pretty much anything in
sensing is statistical. I thought you'd worked in acoustic sensing.
That's a very statistical area.

Steve