
conventional wisdom how to upsample very large arrays, accurately?

Started by all4dsp January 6, 2010
On Jan 8, 4:01 am, Andor <andor.bari...@gmail.com> wrote:
> On 7 Jan., 17:18, Andor <andor.bari...@gmail.com> wrote:
> > On 7 Jan., 16:48, "all4dsp" <all4...@comcast.net> wrote:
> > > Thanks everyone for your replies, you've given me a lot to think about.
> > > There are obviously a lot of tradeoffs in selecting a method.
> > >
> > > > I'm assuming that the initial FFT samples, X(m), are *VERY*
> > > > low in magnitude in the vicinity of Fs/2 Hz (Fs is the
> > > > original sample rate in Hz).
> > >
> > > This is true.
> > >
> > > > In those FFTs you're expecting the software to accurately
> > > > estimate the sine and cosine of angles in the range
> > > > of 360/2^30 degrees
> > >
> > > Wow, good point! It sounds like polyphase filtering might be the best
> > > choice.
> >
> > Since you are upsampling by 4 (an integer factor) you don't even need
> > to implement a polyphase resampler. All you need is a filterbank with
> > three filters. As I wrote in another post, I would suggest a two-step
> > procedure:
> >
> > 1. Interpolate by 4 with polynomial FIR filters (very small kernels,
> > e.g. 4 taps for third-order polynomial interpolation):
> >
> >     x(n) ---+-----------------> x(n)
> >             |
> >             |   +---------+
> >             |-->|  H1(z)  |---> x1(n)
> >             |   +---------+
> >             |
> >             |   +---------+
> >             |-->|  H2(z)  |---> x2(n)
> >             |   +---------+
> >             |
> >             |   +---------+
> >             +-->|  H3(z)  |---> x3(n)
> >                 +---------+
> >
> > H1, H2 and H3 are the interpolation filters (polynomial filters are
> > simple to design from scratch), and your upsampled sequence will be:
> >
> > (... x(n), x1(n), x2(n), x3(n), x(n+1), x1(n+1), x2(n+1), x3(n+1), ... )
> >
> > Using a third-order polynomial filter requires 3*4*n MACS to resample
> > by 4. For n = 1e9 you have 12e9 MACS, which should take a couple of
> > seconds on a normal PC.
> >
> > 2. Use a simple method to find zero-crossings in the upsampled data
> > and then "polish" the zero-crossing location with more accurate
> > interpolation - this is outlined here:
> > http://www.claysturner.com/dsp/DSPinterpolation.pdf
>
> hmm. For fun I implemented this scheme and applied it to the samples
> of the signal
>
>     x(t) = sin(pi f t) + sin(pi (1-f) t) + eps s((1-f)/2 t)
>
> for 0.5 < f < 1.0, eps > 0 and s(t) the square wave function with
> period 1, sampled with sampling interval T = 1. The sample sequence
> x[n] = x(n) has the property that all samples from the time interval
>
>     0 <= t < 1/(1-f)
>
> are positive, and all samples from the interval
>
>     1/(1-f) <= t < 2/(1-f)
>
> are negative. For small eps, the number of zero-crossings of x(t) in
> both intervals is about 1/(1-f)/f.
>
> Unfortunately, upsampling by a factor of 4 with a third-order polynomial
> FIR did not change the fact that the first half of the samples are all
> positive and the second half are all negative (meaning we can't find
> the zero-crossings from the sampled signal using the "multiply-two-
> consecutive-samples-and-check-the-sign" trick).
>
> So I think the OP might not get around using high-order
> sinc-interpolation filters for the factor-4 upsampling on the whole
> data set to find the zero-crossings.
>
> Regards,
> Andor
A question about pathological signals is: do they represent a common or rare occurrence for what the OP is looking for? Certainly things like this have to be tested.

Clay
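[A minimal sketch, in Python with NumPy, of the cubic (third-order) Lagrange interpolation filterbank described in the quoted post above. This is not code from the thread; the function names are made up, and edge handling and performance are deliberately ignored.]

    import numpy as np

    def cubic_lagrange_taps(mu):
        # 4-tap Lagrange (cubic) interpolation coefficients for a point lying a
        # fraction mu (0 < mu < 1) past sample x[n], using x[n-1], x[n], x[n+1], x[n+2].
        pts = np.array([-1.0, 0.0, 1.0, 2.0])
        taps = np.ones(4)
        for k in range(4):
            for m in range(4):
                if m != k:
                    taps[k] *= (mu - pts[m]) / (pts[k] - pts[m])
        return taps

    def upsample_by_4(x):
        # Interleave x[n] with the three interpolated phases x1, x2, x3
        # (mu = 1/4, 1/2, 3/4), mirroring the H1/H2/H3 filterbank diagram.
        # The first and last few output samples are left at zero for brevity.
        x = np.asarray(x, dtype=float)
        n = len(x)
        y = np.zeros(4 * n)
        y[0::4] = x
        banks = [cubic_lagrange_taps(mu) for mu in (0.25, 0.5, 0.75)]
        for i in range(1, n - 2):
            window = x[i - 1:i + 3]      # x[n-1] ... x[n+2]
            for phase, taps in enumerate(banks, start=1):
                y[4 * i + phase] = taps @ window
        return y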
On 8 Jan., 22:08, Clay <c...@claysturner.com> wrote:
> A question about pathological signals is: do they represent a common or
> rare occurrence for what the OP is looking for?
Clearly. I was just trying to counter the widely accepted notion that zero-crossings of sampled signals can easily be found from the original (non-upsampled) data.
> Certainly things like this have to be tested.
In the end I had to use 7th-order polynomial interpolation FIR filters on my test function to actually catch the zero-crossings using upsampling by a factor of 4 (5th order was not enough). This can still be implemented efficiently (8*3*1e9 = 24e9 MACS, which should easily be computable in under a minute on a normal PC).

Regards,
Andor
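[To make the filter sizes concrete, a hedged sketch of how the 8-tap (7th-order) Lagrange coefficients and the sign-change test discussed in the thread might look; again Python/NumPy, not code from the thread, and the helper names are invented.]

    import numpy as np

    def lagrange_taps(mu, order=7):
        # (order+1)-tap Lagrange interpolation coefficients for a point a fraction
        # mu (0 < mu < 1) past sample x[n]; order=7 gives 8-tap kernels that use
        # x[n-3] ... x[n+4].
        half = (order - 1) // 2                                  # 3 taps left of x[n]
        pts = np.arange(-half, order - half + 1, dtype=float)    # [-3, ..., 4] for order 7
        taps = np.ones(order + 1)
        for k in range(order + 1):
            for m in range(order + 1):
                if m != k:
                    taps[k] *= (mu - pts[m]) / (pts[k] - pts[m])
        return taps

    def count_sign_changes(y):
        # The "multiply-two-consecutive-samples-and-check-the-sign" test.
        y = np.asarray(y, dtype=float)
        return int(np.sum(y[:-1] * y[1:] < 0))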
Andor wrote:

   ...

> In the end I had to use 7th-order polynomial interpolation FIR filters
> on my test function to actually catch the zero-crossings using
> upsampling by a factor of 4 (5th order was not enough). This can still
> be implemented efficiently (8*3*1e9 = 24e9 MACS, which should easily be
> computable in under a minute on a normal PC).
I've been puzzling over this for a few days now. Set aside the search for actual locations of the zero crossings. If there are at least two samples in the period of the highest frequency present in the sampled signal, how can any zero crossing fail to be among those counted?

Jerry
--
Engineering is the art of making what you want from things you can get.
On Jan 7, 9:59 am, Andor <andor.bari...@gmail.com> wrote:

> I think the main problem is to find the rough location of the zero-
> crossing in the sampled data. As my example shows, a zero-crossing of
> the underlying sampled function will not necessarily show up as a
> "zero-crossing" in the samples. I would opt for Ron's idea to use
> quick polynomial interpolation with order >= 3 (can be implemented
> with very small kernel FIR filters) to oversample the data by 4 or so
> to find "zero-crossings" in the samples and then use the method of
> sinc-interpolation outlined in your paper to find the exact location.
>
> This reminds me of the old comp.dsp debate on the maximum number of
> zero-crossings that a bandlimited function may have on any given
> interval. As it turned out (I think Matt Timmermans came up with the
> example of the prolate spheroidal wave functions), the number of zero-
> crossings is not limited ...
But if there could be infinitely many zero-crossings, then no matter how much we oversample, we could never be sure that we have found them all, could we?

If a (low-pass) signal is truly band-limited to W Hz, meaning that X(f) = 0 for |f| > W, then the magnitude of its derivative is bounded by 2\pi W max |x(t)| (cf. Temes, Barcilon, and Marshall, The optimization of bandlimited systems, Proc. IEEE, Feb. 1973). So, depending on the sampling rate and the value of two successive positive samples, we can be sure in *some* cases that there is no zero-crossing between the two samples: the bound on how quickly x(t) can change does not allow for the possibility that the signal could dip down below zero in between the two positive samples.

--Dilip Sarwate
dvsarwate wrote:
> But if there could be infinitely many zero-crossings, then no matter
> how much we oversample, we could never be sure that we have found them
> all, could we?
>
> If a (low-pass) signal is truly band-limited to W Hz, meaning that
> X(f) = 0 for |f| > W, then the magnitude of its derivative is bounded
> by 2\pi W max |x(t)| (cf. Temes, Barcilon, and Marshall, The
> optimization of bandlimited systems, Proc. IEEE, Feb. 1973). So,
> depending on the sampling rate and the value of two successive positive
> samples, we can be sure in *some* cases that there is no zero-crossing
> between the two samples: the bound on how quickly x(t) can change does
> not allow for the possibility that the signal could dip down below zero
> in between the two positive samples.
But doesn't that imply some frequency well above half the sample rate?

Jerry
--
Engineering is the art of making what you want from things you can get.
On Jan 11, 12:37 pm, Jerry Avins <j...@ieee.org> wrote:
> But doesn't that imply some frequency well above half the sample rate?
I think Jerry and I are talking at cross-purposes. Suppose we have two samples spaced T seconds apart from a signal x(t) that is strictly band-limited to have bandwidth W; X(f) = 0 for |f| > W. Suppose also (for simplicity) that |x(t)| < 1, so that the magnitude of the derivative is bounded by 2\pi W. If x(0) = a > 0, then the signal cannot decrease to 0 earlier than t = a/(2\pi W), because the rate of change of x(t) is bounded. Similarly, if x(T) = b > 0 is the next sample (note that we are not assuming any relation between T and W), then the signal must be above zero during the time interval (T - b/(2\pi W), T), since otherwise it cannot change fast enough to reach b at t = T. Thus, if a/(2\pi W) > T - b/(2\pi W), then there is no zero-crossing between 0 and T.

Note that the stated criterion is equivalent to a + b > 2\pi W T, and of course, since a + b = x(0) + x(T) < 2, the criterion will never be satisfied if 2\pi W T > 2. But, assuming that \pi W T < 1, it is possible that we can deduce for *some* adjacent positive samples that x(t) does not cross zero between the samples. If T is very small (some upsampling may be necessary to get this), then maybe we can say that for *most* positive adjacent sample values a and b, there is no zero-crossing between the samples.

Hope this helps

--Dilip Sarwate
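[The criterion above translates directly into a one-line test. The following sketch (Python; the function name and the numbers are invented for illustration) only paraphrases the argument, it is not anything posted in the thread.]

    import math

    def no_crossing_guaranteed(a, b, W, T, xmax=1.0):
        # Sufficient condition: two positive samples a = x(0) and b = x(T) of a
        # signal bandlimited to W Hz with |x(t)| <= xmax cannot bracket a
        # zero-crossing if a + b > 2*pi*W*xmax*T, because |x'(t)| <= 2*pi*W*xmax.
        return a > 0 and b > 0 and a + b > 2 * math.pi * W * xmax * T

    # Illustration: samples 0.8 and 0.7 with W = 0.4 Hz.
    print(no_crossing_guaranteed(0.8, 0.7, W=0.4, T=1.0))   # False: 1.5 < 2*pi*0.4 ~ 2.51
    print(no_crossing_guaranteed(0.8, 0.7, W=0.4, T=0.25))  # True: 1.5 > 2*pi*0.4*0.25 ~ 0.63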
On 11 Jan., 16:16, Jerry Avins <j...@ieee.org> wrote:
> Andor wrote:
> >    ...
> >
> > In the end I had to use 7th-order polynomial interpolation FIR filters
> > on my test function to actually catch the zero-crossings using
> > upsampling by a factor of 4 (5th order was not enough). This can still
> > be implemented efficiently (8*3*1e9 = 24e9 MACS, which should easily be
> > computable in under a minute on a normal PC).
>
> I've been puzzling over this for a few days now. Set aside the search
> for actual locations of the zero crossings. If there are at least two
> samples in the period of the highest frequency present in the sampled
> signal, how can any zero crossing fail to be among those counted?
Like this:

http://www.zhaw.ch/~bara/files/zero-crossings_example.png

Plot of the function described above with f = 0.98. The function has 80 zero-crossings in the displayed time window, the samples of the function have exactly one zero-crossing.

Regards,
Andor
Andor wrote:
> On 11 Jan., 16:16, Jerry Avins <j...@ieee.org> wrote:
> > I've been puzzling over this for a few days now. Set aside the search
> > for actual locations of the zero crossings. If there are at least two
> > samples in the period of the highest frequency present in the sampled
> > signal, how can any zero crossing fail to be among those counted?
>
> Like this:
>
> http://www.zhaw.ch/~bara/files/zero-crossings_example.png
>
> Plot of the function described above with f = 0.98. The function has 80
> zero-crossings in the displayed time window, the samples of the
> function have exactly one zero-crossing.
I see now, thanks. Very nice!

What procedures fail if we count zero touchings as zero crossings? Does it come down to the difference between open and closed intervals?

Maybe I don't see after all. Are the samples close to but not actually on the axis? I see only a graph, no equation. How does 0.98 come in?

Jerry
--
Engineering is the art of making what you want from things you can get.
On 12 Jan., 19:48, Jerry Avins <j...@ieee.org> wrote:
> I see now, thanks. Very nice!
>
> What procedures fail if we count zero touchings as zero crossings? Does
> it come down to the difference between open and closed intervals?
>
> Maybe I don't see after all. Are the samples close to but not actually
> on the axis? I see only a graph, no equation. How does 0.98 come in?
Yup, the samples don't touch the axis. The equation for this function can be found higher up in the thread. For convenience:

    x(t) = sin(pi f t) + sin(pi (1-f) t) + eps s((1-f)/2 t)

where s(t) is the square-wave function with period 1 (replace it with a truncated Fourier series to make x(t) bandlimited). x(t) is sampled with sampling period T = 1. The samples close to the axis have value eps (in this example, eps = 0.02) and f = 0.98.

Regards,
Andor
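[For anyone who wants to reproduce the effect, a small sketch (Python/NumPy, not from the thread) that samples this function and counts sign changes. The square-wave sign convention (+1 on the first half period) and the dense grid standing in for continuous time are assumptions.]

    import numpy as np

    f, eps = 0.98, 0.02

    def square(t):
        # Square wave with period 1; +1 on [0, 0.5), -1 on [0.5, 1) (assumed convention).
        return np.where((t % 1.0) < 0.5, 1.0, -1.0)

    def x(t):
        return (np.sin(np.pi * f * t) + np.sin(np.pi * (1 - f) * t)
                + eps * square((1 - f) / 2 * t))

    def crossings(y):
        return int(np.sum(y[:-1] * y[1:] < 0))

    t_sampled = np.arange(0.0, 100.0)           # sampling period T = 1
    t_dense = np.arange(0.0, 100.0, 1.0 / 64)   # stand-in for the continuous-time function

    print(crossings(x(t_sampled)))   # 1: only one sign change shows up in the samples
    print(crossings(x(t_dense)))     # a large count: the function itself crosses zero many times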
On 11 Jan., 19:17, dvsarwate <dvsarw...@gmail.com> wrote:
> But if there could be infinitely many zero-crossings, then no matter
> how much we oversample, we could never be sure that we have found them
> all, could we?
>
> If a (low-pass) signal is truly band-limited to W Hz, meaning that
> X(f) = 0 for |f| > W, then the magnitude of its derivative is bounded
> by 2\pi W max |x(t)| (cf. Temes, Barcilon, and Marshall, The
> optimization of bandlimited systems, Proc. IEEE, Feb. 1973). So,
> depending on the sampling rate and the value of two successive positive
> samples, we can be sure in *some* cases that there is no zero-crossing
> between the two samples: the bound on how quickly x(t) can change does
> not allow for the possibility that the signal could dip down below zero
> in between the two positive samples.
Yes, nice Popper-esque argument :-).

Regards,
Andor