
CAUTION! was "What is the advantage on high-sampling rate ?"

Started by Rick Lyons April 23, 2004
glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote in message news:<H6Ujc.7709$lz5.854606@attbi_s53>...
> If the function is not periodic, then you should not use a DFT.
The DFT is a perfectly reasonable, completely orthonormal linear-algebraic transform that operates on 'n' samples, in isolation, period, no if, no and, and no but. In other words, Glen, BUSHWHA! Now, you might want to argue that you might choose some other transform and get better results, and you MIGHT be right, and you might not, depending on just whatever you needed to know about the data.

Any function is accurately analyzed into a set of orthonormal basis vectors by a DFT or FFT. Any. Period, well, unless the data contains infinities. So what ARE you talking about? You're easily shown to be completely wrong here, simply because a DFT is known to be orthonormal, or a tight frame, or whatever set of words one wants to use. There's nothing wrong with using a DFT on non-periodic data IF WHAT YOU NEED TO KNOW IS WHAT THE DFT OF THE NON-PERIODIC DATA IS.

The example of fast convolution, in and of itself, utterly destroys your assertion, providing a perfect counterexample where fast convolution, using overlap-add if necessary, can trivially convolve any two arbitrary non-infinite signals. Have a nice day, Glen.
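A minimal NumPy sketch of that counterexample (the signal lengths and the random seed are arbitrary choices for illustration): zero-pad both non-periodic signals to the length of their linear convolution, multiply the DFTs, and the result matches direct convolution exactly.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)      # arbitrary, decidedly non-periodic data
h = rng.standard_normal(31)       # arbitrary FIR-like sequence

n = len(x) + len(h) - 1           # length of the linear convolution
fast = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real
slow = np.convolve(x, h)          # direct ("hard way") linear convolution

assert np.allclose(fast, slow)    # identical, periodic data or not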
Bob Cain wrote:
(snip regarding periodicity and DFT)
(someone wrote)

>> As long as you agree that we have a set of discrete values in frequency
>> spaced evenly apart (i.e., of the form \sum_{k=-\infty}^{+\infty}
>> a[k]\delta[f-kF]) it doesn't have to have anything to do with the DFT
>> definition.
> And if I don't agree with that? All I see after a DFT is the weights to
> be applied to a set of harmonic functions (sinusoids and cosinusoids) that
> are defined only in the same interval as that of the set of time domain
> points. That set is comprised of all those harmonics which have an
> integral number of cycles within the interval with frequencies up to the
> Nyquist limit.
> Nothing in the DFT definition is assumed about their value outside that
> interval. Infinity only appears if you define those sinusoids and
> cosinusoids to extend to +/- infinity, in which case you will obviously
> get the periodicity you can't shake, but there is nothing in the DFT
> definition which makes that extension to the infinities explicit. That
> falls out of your assumption about the basis set.
Say, for example, that you use the DFT to implement a low pass filter. You might apply the DFT to some data, reduce the high frequency terms by some amount, and then IDFT to get a filtered time series. A low pass filter smoothes the data so that X(i) is closer to X(i+1) than it might have otherwise been. Because of the periodic boundary conditions, the DFT filter will also smooth the wrap-around transition between X(N-1) and X(0), where N is the number of data points. For a non-periodic time series there is no reason a filter should apply that condition. The DFT boundary condition is that both the function and its derivative are periodic.

Now, consider the DST and DCT. The DST boundary condition is that the function value is 0 at both ends, X(0) = X(N) = 0, while the DCT boundary condition is that the derivative is 0 at both ends, X'(0) = X'(N) = 0. Even though they use periodic basis functions they don't force periodic boundary conditions.

Now, I am going to go read the Numerical Recipes description again, as I am sure that I still have more to learn on this subject, as I have learned much reading other posts in comp.dsp.

-- glen
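A small NumPy illustration of that boundary effect (the ramp signal and the cutoff bin are arbitrary choices): low-pass filtering a ramp by zeroing high DFT bins also smooths the jump from X(N-1) back to X(0), so both ends of the filtered series are pulled toward each other.

import numpy as np

N = 64
x = np.linspace(0.0, 1.0, N)      # a ramp: non-periodic data
X = np.fft.rfft(x)
X[8:] = 0.0                       # crude low-pass: keep only the lowest bins
y = np.fft.irfft(X, N)

print(x[0], x[-1])                # 0.0 and 1.0
print(y[0], y[-1])                # both dragged toward the middle by the
                                  # smoothing of the X(N-1) -> X(0) wrap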

glen herrmannsfeldt wrote:

> Say, for example, that you use the DFT to implement a low pass filter.
> You might apply the DFT to some data, reduce the high frequency terms
> by some amount, and then IDFT to get a filtered time series.
To do that I'd use long, fast convolution by FFT overlap add/save of the data with the FIR of the filter. In this approach the result length would always be the sum of the lengths of the operands (less one) and so periodicity wouldn't enter the picture. The result would be identical to having performed the convolution the hard way, and I hope no one feels that periodicity enters that kind of calculation.

I just know that once I purged the weirdness of periodicity from my thinking and began to consider what most people mean by it as just a buffer wrap-around effect to be avoided by obvious means, things became much easier to work with and I consistently got results that were intuitively clearer.

Bob
--
"Things should be described as simply as possible, but no simpler."  A. Einstein
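A minimal NumPy sketch of that overlap-add scheme (the block size, filter length, and seed here are arbitrary illustration, not a tuned implementation):

import numpy as np

def overlap_add(x, h, block=128):
    """Linear convolution of x with FIR h via blockwise FFTs (overlap-add)."""
    nfft = block + len(h) - 1               # room for each block's full linear conv
    H = np.fft.fft(h, nfft)
    y = np.zeros(len(x) + len(h) - 1)       # result length = len(x) + len(h) - 1
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        conv = np.fft.ifft(np.fft.fft(seg, nfft) * H).real
        m = len(seg) + len(h) - 1           # valid part (last block may be short)
        y[start:start + m] += conv[:m]      # overlapping tails add up
    return y

rng = np.random.default_rng(1)
x, h = rng.standard_normal(1000), rng.standard_normal(63)
assert np.allclose(overlap_add(x, h), np.convolve(x, h))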
Bob Cain wrote:


> glen herrmannsfeldt wrote:
>> Say, for example, that you use the DFT to implement a low pass filter.
>> You might apply the DFT to some data, reduce the high frequency terms
>> by some amount, and then IDFT to get a filtered time series.
> To do that I'd use long, fast convolution by FFT overlap add/save of the
> data with the FIR of the filter. In this approach the result length
> would always be the sum of the lengths of the operands (less one) and so
> periodicity wouldn't enter the picture. The result would be identical
> to having performed the convolution the hard way, and I hope no one feels
> that periodicity enters that kind of calculation.
From Numerical Recipes: "The discrete convolution theorem is this: If a signal s(j) is periodic with period N, so that it is completely determined by the N values s(0), ..., s(N-1), then its discrete convolution with a response function of finite duration N is a member of the discrete Fourier transform pair" (the usual convolution vs. multiplication transform pair). But yes, if you extend the series with the appropriate number of zeros you can remove the effect of periodicity.
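A tiny NumPy illustration of that point (the numbers are arbitrary): with same-length transforms the spectral product gives the circular (wrapped) convolution, and zero-extending both sequences to the full linear-convolution length removes the effect.

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 0.0, 0.0])       # short "response function", padded to N with zeros

circ = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real       # wraps around
n = len(x) + len(h) - 1
lin = np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)).real  # zero-extended

print(circ)        # [5. 3. 5. 7.]           -- x[3] wraps into the first output
print(lin)         # [1. 3. 5. 7. 4. 0. 0.]  == np.convolve(x, h)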
> I just know that once I purged the weirdness of periodicity from my
> thinking and began to consider what most people mean by it as just a
> buffer wrap-around effect to be avoided by obvious means, things became
> much easier to work with and I consistently got results that were
> intuitively clearer.
There are systems that are naturally periodic. For those that aren't, as you say, the problem can be fixed.

-- glen
Randy Yates wrote:
> an2or@mailcircuit.com (Andor) writes:
>
> > Ronald H. Nicholson Jr. wrote:
...
> > > A DFT is only an operator which munges one finite set of bits into
> > > another.
> >
> > That's what I would say also. The DFT is a unitary linear map on a
> > Hilbert space:
> >
> > DFT: CC^n -> CC^n
...
> Tell you what - you go ahead and constrain yourselves to thinking of
> the FFT output as nothing more than the result of a unitary linear
> operator operating on a set of vectors in a Hilbert space.
It is you who are constraining yourself. The DFT (as well as the FT) is a transform with certain properties. If you think of it only in the time-frequency domain you are missing plenty of applications in areas where time and frequency are not the applicable terms.
> I much prefer to use it as a tool to operate on real-world data and
> real-world problems to accomplish real-world goals, and thus to have
> real-world significance.
>
> I'm just as capable of abstracting the math into meaninglessness as
> the rest of you - I choose instead to use it as a tool.
It is very important to be able to map mathematical theorems into real-world applications. But by keeping the abstract core of these theorems in the back of your head you may be able to find new and innovative solutions to new real-world problems.

Regards,
Andor
In article d61ab25b.0405011423.2a066928@posting.google.com, Mark Ovchain at
mark_ovchain@yahoo.com wrote on 05/01/2004 18:23:

> glen herrmannsfeldt <gah@ugcs.caltech.edu> wrote in message
> news:<H6Ujc.7709$lz5.854606@attbi_s53>...
>
>> If the function is not periodic, then you should not use a DFT.
...
> The example of fast convolution, in and of itself, utterly destroys
> your assertion, providing a perfect counterexample where fast
> convolution, using overlap-add if necessary, can trivially convolve
> any two arbitrary non-infinite signals.
i've already gone on record here opining that Glen goes a little too far with the above quoted statement even though i think that Glen and i are on the same page (or same side of the debate).

the example of fast convolution is good. what makes it "fast" is only that a certain kind of implementation of the DFT (often called the "FFT") is used to perform a certain component of the end task of linear convolution. that component task that the fast DFT is performing is "circular convolution", an operation naturally performed by use of the DFT, iDFT, and element-by-element array multiplication. another necessary component is either one of the "overlap-add" or "overlap-save" techniques, which is necessary to get a tool that only performs circular convolution to successfully contribute to the task of linear convolution.

it's like supposing you don't have access to a backhoe (or similar power shovel) and what you want to do is dig a trench about 4 feet deep for some longer length. you've got spades and shovels and you've got a post-hole-digger (an auger). now post-hole-diggers are well suited for digging holes and are not designed for digging trenches, but it's not inconceivable how one might use the same to dig a trench. if it were a *manual* post-hole-digger (the non-fast DFT), it might not be worth it to use it for digging a trench. may as well use the spades. but suppose it were a powered post-hole-digger that could dig a 4-foot-deep hole in seconds? (that would be like the FFT.) even though it's fitting a tool to a task it isn't naturally suited to, it might well be worth it to use it to do so. but fitting that round tool to the linear task requires some workaround. that is what the overlap-add or overlap-save techniques are for.

Jerry can probably come up with a better practical metaphorical example. it's the best i could do at the time. fast convolution is sorta an example of the truism that "if all you have in your toolbox is a hammer, then every problem looks like a nail" or something like that.
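A minimal NumPy sketch of the overlap-save variant of that workaround (the FFT size, filter length, and seed are arbitrary illustration): each block's circular convolution is computed with the FFT, and the first M-1 outputs, the ones the wrap-around corrupts, are simply discarded.

import numpy as np

def overlap_save(x, h, nfft=256):
    """Linear convolution of x with FIR h via FFTs, discarding the wrapped samples."""
    M = len(h)
    hop = nfft - (M - 1)                      # new output samples per block
    H = np.fft.fft(h, nfft)
    n_out = len(x) + M - 1
    xp = np.concatenate([np.zeros(M - 1), x, np.zeros(nfft)])   # history + tail padding
    y = np.zeros(n_out)
    for pos in range(0, n_out, hop):
        block = xp[pos:pos + nfft]
        conv = np.fft.ifft(np.fft.fft(block) * H).real
        good = conv[M - 1:]                   # first M-1 outputs are circularly wrapped
        take = min(hop, n_out - pos)
        y[pos:pos + take] = good[:take]
    return y

rng = np.random.default_rng(2)
x, h = rng.standard_normal(500), rng.standard_normal(33)
assert np.allclose(overlap_save(x, h), np.convolve(x, h))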
In article c6u3pd0bb6@enews4.newsguy.com, Bob Cain at
arcane@arcanemethods.com wrote on 04/30/2004 13:52:

> robert bristow-johnson wrote:
>
>>> You can DFT a pile of random numbers if you want.
>>
>> and that pile of (i.e. a finite set of) random numbers would be periodically
>> extended to infinity in both directions by the DFT.
>
> And this is the crux of our disagreement.
ya. i think so.
> You get the
> original result if you assume that all the sinusoids and
> cosinusoids involved in the inner products of the DFT (the
> basis functions) are tone bursts that only last for the
> duration of the interval. Under that assumption you can
> perform the DFT integral ...
i think you mean "DFT summation", no?
> ... from negative to positive infinity
but if the limits are -inf to +inf, it ain't what we call "the DFT", is it? i think that would be what we call a DTFT and the result ain't N numbers but a continuous and periodic function in frequency with period 2*pi. i want to debate specifically whether or not the DFT and iDFT as defined in the normal compact manner (e.g. Eqs. 8.61 and 8.62 in O&S 1989) are "inherently periodic" or "naturally periodic" with period N. by that i mean that the DFT "understands" that the N samples supplied to it are one complete and contiguous period of a periodic sequence with period length of N. that is what i mean by saying that "the DFT periodically extends the data given to it". and same for the iDFT.
> and the inverse will give back the original signal interval
> with it extended by zeros in both directions outside the
> interval.
only with the inverse DTFT (which *does* require an integral from -pi to +pi) do you get back a non-periodic sequence x[n]. that non-periodic sequence could be zero extended if what goes into the iDTFT integral is a set of periodic sinc-like functions, but what comes out of a normal iDTFT need not be some zero extended thing, but generally just a non-periodic thingie.

but i wanna debate the alleged periodic nature of the DFT:

           N-1
    X[k] = SUM{ x[n] * exp(-j*2*pi*n*k/N) }
           n=0

and the iDFT:

                 N-1
    x[n] = 1/N * SUM{ X[k] * exp(+j*2*pi*n*k/N) }
                 k=0

r b-j
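One way to make that concrete in NumPy (the size and seed are arbitrary): evaluate the iDFT synthesis sum above at integers n outside 0..N-1 and you get back x[n mod N], i.e. the periodic extension, not zeros.

import numpy as np

rng = np.random.default_rng(3)
N = 8
x = rng.standard_normal(N)                # a "pile of random numbers"
X = np.fft.fft(x)

def synth(n):
    """The iDFT synthesis sum, evaluated at an arbitrary integer n."""
    k = np.arange(N)
    return (X * np.exp(2j * np.pi * k * n / N)).sum().real / N

for n in (-3, 5, 11, 20):
    assert np.isclose(synth(n), x[n % N])  # periodic extension, not zero extension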
Randy Yates wrote:
> Bob Cain <arcane@arcanemethods.com> writes:
>
>> robert bristow-johnson wrote:
>>
>>>> You can DFT a pile of random numbers if you want.
>>>
>>> and that pile of (i.e. a finite set of) random numbers would be
>>> periodically extended to infinity in both directions by the DFT.
>>
>> And this is the crux of our disagreement. You get the original result
>> if you assume that all the sinusoids and cosinusoids involved in the
>> inner products of the DFT (the basis functions) are tone bursts that
>> only last for the duration of the interval. Under that assumption you
>> can perform the DFT integral from negative to positive infinity and
>> the inverse will give back the original signal interval with it
>> extended by zeros in both directions outside the interval.
>>
>> This is how I think about impulse response transforms.
>
> Bob,
>
> You're wrong. I don't mean to be disrespectful, and Lord knows
> I'm not tactful, but you are flat-out wrong. Here's why.
>
> The DFT, BY DEFINITION, produces a *FINITE* set of values as output.
> These are, BY DEFINITION, frequency-domain output samples. It is
> an absolute IRREFUTABLE property that discrete-time samples in the
> frequency domain REQUIRE a periodic time-domain function. Period.
> End of argument.
The first paragraph of the following paper is an explicit presentation of the kth bin of a DFT (FFT) as a linear filter:

Bruce, J., "Discrete Fourier transforms, linear filters, and spectrum weighting," IEEE Transactions on Audio and Electroacoustics, Vol. 16, No. 4, Dec. 1968, pp. 495-499.

The point is that historically the DFT has been derived from a number of different viewpoints. I don't see how your "DEFINITION" is the most fundamental, precisely because you have used it to preclude other interpretations which the mathematics clearly permits.
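A short NumPy check of that filter view (the bin index, lengths, and seed are arbitrary): an N-tap FIR whose output at time n is exactly the k-th DFT bin of the most recent N input samples.

import numpy as np

N, k = 16, 3
rng = np.random.default_rng(4)
x = rng.standard_normal(64)

m = np.arange(N)
h = np.exp(-2j * np.pi * k * (N - 1 - m) / N)   # FIR taps that compute the k-th bin

y = np.convolve(x, h)                           # run the "filter" over the whole record
for n in range(N - 1, len(x)):
    bin_k = np.fft.fft(x[n - N + 1:n + 1])[k]   # DFT of the N samples ending at n
    assert np.isclose(y[n], bin_k)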
Bob Cain wrote:
> Ronald H. Nicholson Jr. wrote:
>
>> Only if you make certain assumptions about the input. You've gone
>> in a complete circle.
>
> Exactly.
>
>> A DFT is only an operator which munges one finite set of bits into
>> another.
>
> Period. End of story.
>
> Bob
Actually the DFT doesn't really require quantized bits.

Stan Pawlukiewicz wrote:


> The point is that historically the DFT has been derived from a number of
> different viewpoints. I don't see how your "DEFINITION" is the most
> fundamental, precisely because you have used it to preclude other
> interpretations which the mathematics clearly permits.
That isn't my intent, Stan, as should be evident from my other posts on the subject. It's quite the opposite. I don't think it necessary to work with the periodic assumption to learn or apply the DFT, yet many are religious about its assumption. Of course in certain problem domains it can be a useful extension, but in mine it is merely cumbersome and belies the reality of my signals.

Bob
--
"Things should be described as simply as possible, but no simpler."  A. Einstein
Bob, i finally got around to responding Sunday night.  did you see it?  (it
shows up on Google groups, so i didn't think it got lost into the USENET bit
bucket.)

In article c78fao0j1a@enews2.newsguy.com, Bob Cain at
arcane@arcanemethods.com wrote on 05/04/2004 12:11:

> Stan Pawlukiewicz wrote:
>
>> The point is that historically the DFT has been derived from a number of
>> different viewpoints. I don't see how your "DEFINITION" is the most
>> fundamental, precisely because you have used it to preclude other
>> interpretations which the mathematics clearly permits.
>
> That isn't my intent, Stan, as should be evident from my
> other posts on the subject. It's quite the opposite.
Stan was responding to Randy's post that was responding to yours, Bob.
> I don't think it necessary to work with the periodic
> assumption to learn or apply the DFT, yet many are religious
> about its assumption.
it's necessary if you apply any operation that involves shifting or delaying of the data in one domain or the other:

    DFT{ exp(j*2*pi*k0*n/N) * x[n] }  =  X[k-k0]

                                          N-1
    iDFT{ H[k] * X[k] }  =  h(*)x[n]  =   SUM{ h[m]*x[n-m] }
                                          m=0

    ( " (*) " means "discrete convolution" )
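A quick NumPy check of the first identity (N, k0, and the seed are arbitrary): multiplying x[n] by the complex exponential circularly shifts the DFT, with the shift wrapping modulo N.

import numpy as np

N, k0 = 8, 3
rng = np.random.default_rng(5)
x = rng.standard_normal(N)
n = np.arange(N)

X = np.fft.fft(x)
Xshift = np.fft.fft(np.exp(2j * np.pi * k0 * n / N) * x)

assert np.allclose(Xshift, np.roll(X, k0))   # X[k-k0], with the index taken modulo N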
> Of course in certain problem domains it can be a useful extension ...
it is *useful* in very few problem domains. the inherent periodicity usually gets in the way (as it does in fast convolution) requiring a workaround (overlap-add or overlap-save or zero-padding, etc). the only problem domain i can think of where this inherent periodic property *is* useful is in dealing with periodic or quasi-periodic functions that have exactly an integer N samples per period (or can be resampled to have exactly N samples per period). that's what was done in that Wavetable-101 paper. but it's a small problem domain. (BTW, i like that term "problem domain". i'm gonna have to use it in the future.)
> but in mine it is merely cumbersome and belies the reality of my signals.
are all of your signals periodic?! i sorta doubt it. then, if they ain't periodic, you need to worry about the inherent periodic property of the DFT so you don't screw things up.

these are the only DEFINITIONs (as long as we're capitalizing it) that i am working on:

                             N-1
    DFT{ x[n] }  =  X[k]  =  SUM{ x[n] * exp(-j*2*pi*n*k/N) }
                             n=0

                                   N-1
    iDFT{ X[k] }  =  x[n]  = 1/N * SUM{ X[k] * exp(+j*2*pi*n*k/N) }
                                   k=0

i must confess, as long as we're talking "definition", that i wish that they put the 1/N factor in the DFT and not in the iDFT. from a conceptual POV, that is where it belongs. (some like having 1/sqrt(N) in both the DFT and iDFT summations, which has some use for fixed-point radix-4 implementations of the FFT.)

i am specifically not using either of the following two definitions:

           {  N-1
           {  SUM{ x[n] * exp(-j*2*pi*n*k/N) }            for 0 <= k < N
    X[k] = {  n=0
           {
           {  0                                            otherwise

           {        N-1
           {  1/N * SUM{ X[k] * exp(+j*2*pi*n*k/N) }       for 0 <= n < N
    x[n] = {        k=0
           {
           {  0                                            otherwise

nor

           {  N-1
           {  SUM{ x[n] * exp(-j*2*pi*n*k/N) }            for 0 <= k < N
    X[k] = {  n=0
           {
           {  undefined                                    otherwise

           {        N-1
           {  1/N * SUM{ X[k] * exp(+j*2*pi*n*k/N) }       for 0 <= n < N
    x[n] = {        k=0
           {
           {  undefined                                    otherwise

... because there is nothing to be gained by those definitions. to contrive those "definitions" not only requires more ink (or keystrokes) in expressing them, but moreover requires the shifting and convolution theorems to be further crapped up, requiring the use of modulo arithmetic in order for them to be correct:

    DFT{ exp(j*2*pi*k0*n/N) * x[n] }  =  X[modulo(k-k0, N)]

                                          N-1
    iDFT{ H[k] * X[k] }  =  h(*)x[n]  =   SUM{ h[m]*x[modulo(n-m, N)] }
                                          m=0

    ( " (*) " means "discrete circular convolution" and

      modulo(n, N) = n - N*floor(n/N)

      and floor(.) is what the C or MATLAB implementations say it is. )

now how can that be more natural or useful or safe or real or less contrived or less cumbersome than just understanding that

                             N-1
    DFT{ x[n] }  =  X[k]  =  SUM{ x[n] * exp(-j*2*pi*n*k/N) }
                             n=0

                                    N-1
    iDFT{ X[k] }  =  x[n]  =  1/N * SUM{ X[k] * exp(+j*2*pi*n*k/N) }
                                    k=0

? which clearly shows that the DFT periodically extends the N samples given to it, for better or for worse.

r b-j
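A last NumPy check of the convolution identity above (lengths and seed are arbitrary): the inverse DFT of the spectral product equals the sum with modulo-indexed x, which is exactly the periodic extension in action.

import numpy as np

N = 8
rng = np.random.default_rng(6)
x = rng.standard_normal(N)
h = rng.standard_normal(N)

lhs = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)).real
rhs = np.array([sum(h[m] * x[(n - m) % N] for m in range(N)) for n in range(N)])

assert np.allclose(lhs, rhs)    # spectral product == circular convolution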