
CAUTION! was "What is the advantage on high-sampling rate ?"

Started by Rick Lyons April 23, 2004
In article c7dp1a016pj@enews3.newsguy.com, Bob Cain at
arcane@arcanemethods.com wrote on 05/06/2004 12:27:
 
> Yes, the basis functions employed are of infinite extent.
> However, they are only evaluated in a finite region of that
> extent within the DFT
not if you do the shifting or convolution operations.
> and as such their value outside that
> region need not be assumed periodic unless the problem
> domain is such that one knows that the region is exactly one
> period of an infinitely repeating sequence.
NO! even if you *know* you yanked those N samples from some non-repeating sequence, you gotta COUNT ON the DFT thinking that they came from a periodic sequence of period N. this is why we have to worry about workarounds like zero-padding (which is used in overlap-add) or discarding bad samples (used in overlap-save). *you* may know those N samples to be from a non-periodic input, but the DFT thinks they be from a periodic input. (again, Rick, please forgive the anthropomorphizing.)

there may be a few trivial examples (like multiplying the spectrum by a constant) where it makes no difference whether the input is periodically extended or not, but there is *no* example where the "assumption" of periodic extension gets you in trouble.
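A minimal numpy sketch of that point (my own illustration, not code from the thread): multiplying two N-point DFTs gives the wrapped, circular result, and zero-padding to at least N + M - 1 before transforming is the overlap-add-style workaround that recovers ordinary linear convolution.

    import numpy as np

    # N samples "yanked" from longer, non-periodic data, and a short FIR
    x = np.array([1.0, 2.0, 3.0, 4.0])      # N = 4
    h = np.array([1.0, -1.0, 0.5])          # 3-tap FIR
    N = len(x)

    # multiply DFTs at the original length N: the DFT treats x as one
    # period of a periodic sequence, so the product gives *circular*
    # convolution -- the tail wraps around into the first samples
    circ = np.real(np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)))

    # zero-pad both to at least N + len(h) - 1 before transforming:
    # now one "period" is long enough that nothing wraps, and the
    # result matches ordinary linear convolution
    M = N + len(h) - 1
    lin = np.real(np.fft.ifft(np.fft.fft(x, M) * np.fft.fft(h, M)))

    print(circ)                                   # wrapped (time-aliased) result
    print(lin)                                    # matches np.convolve(x, h)
    print(np.allclose(lin, np.convolve(x, h)))    # True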
> If that is not the case, no value obtains from extending the
> definition of the basis functions as repeating outside the
> region. If I define the basis functions to be zero outside
> the region of evaluation in the DFT the result is identical.
but if you multiply the DFT result by some H[k], which only does *circular* convolution, you will not get the same results. if you decide to perform "wrap-around" or modulo arithmetic to do that, it is identical in concept and practice to periodic extension. for me, that means identical in definition. wrap-around or modulo N arithmetic on the indices of x[n] or X[k] is one and the same with periodic extension.
> What isn't is the presumed behavior of the transformed
> function outside the region. One should presume the
> behavior of the basis outside the region to match the
> problem under attack rather than take a fixed view of it.
i don't know how you come to that!?? the behavior of the basis functions outside of whatever region (i presume that means outside of [0 .. N-1]) is determined solely by the nature of the basis functions. last time i checked, sinusoidal functions are periodic. now we *do* choose to match the family of basis functions to the problem under attack (e.g. given N samples we *could* match an (N-1)th order polynomial to them and beyond that region those N samples would be extended as such) and perhaps, for some reason, periodic extension outside of that region is not appropriate. but then don't choose the DFT (here is where i agree with Glen H).
> In finite systems the presumption of zero outside the region
> fits the physical reality so that any consideration of
> infinite periodicity are simply wrong and give results that
> don't match the system.
that may be so, and then the DFT *might* give you wrong results. unless, perhaps, you align your POV of the data to that of the DFT. when you do that, then workarounds are possible such as zero-padding to prevent overlap in the convolution or simply tossing bad output samples when overlap occurs.
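A small sketch of the "tossing bad output samples" side of that workaround (an illustration under assumed block and FIR lengths, not from the original post): for a block of N samples and an M-tap FIR, only the first M-1 circular-convolution outputs are contaminated by wrap-around; the remaining ones already agree with linear convolution, which is exactly what overlap-save relies on.

    import numpy as np

    x = np.random.randn(16)                  # one block of N input samples
    h = np.array([0.5, 0.3, -0.2, 0.1])      # M = 4 tap FIR
    N, M = len(x), len(h)

    # circular convolution at the block length: the first M-1 outputs
    # are the "bad" ones -- they contain wrapped-around contributions
    y_circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N)))

    # the remaining N-M+1 outputs agree with ordinary linear convolution,
    # so overlap-save simply throws the first M-1 samples away
    y_lin = np.convolve(x, h)
    print(np.allclose(y_circ[M-1:], y_lin[M-1:N]))   # True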
> All I am saying is that it is not productive to limit the
> definition of the regions that are not evaluated within the
> DFT to any particular set of values as is done by the people
> who claim that the infinitely repeating basis is the only
> legitimate view.
but if you don't "limit" (i would use the term "recognize") your understanding of the inferred nature of those regions of dispute, then you run into trouble. when you impose modulo N arithmetic on the indices of your DFT or iDFT data, that is a de facto recognition of the periodicity of the data. if you didn't do that, you would get wrong results in practically any non-trivial situation.

I think O&S agree, Chapter 8 (1989), at the end of the intro (p. 515): "This [periodic-sequence] approach to the DFT emphasizes the fundamental inherent periodicity of the DFT representation and ensures that this periodicity is not overlooked in applications of the DFT." that is why we look at it that way.

for historical purposes, here are google links to where this has been hashed out in the past (little has changed):

http://groups.google.com/groups?threadm=B9743716.64B7%25robert%40wavemechanics.com

http://groups.google.com/groups?threadm=3ki7dg%24g75%40homesick.cs.unlv.edu

BTW, Bob, you *don't* have to assume both your FIR length and zero-padded data are 1/2 of the "container" size (N). just as long as they add to no more than 1 sample more than the container size:

http://groups.google.com/groups?selm=383650AD.13EE%40viconet.com

r b-j
Bob Cain <arcane@arcanemethods.com> wrote:
> > Yes, the basis functions employed are of infinite extent.
Right.
> However, they are only evaluated in a finite region
Right.
> of that
> extent within the DFT and as such their value outside that
> region need not be assumed periodic
No! For example, let's consider two signals of length N, where N is your transform size. Both signals could be said to be defined inside that region and zero everywhere else. If you convolve them by multiplying their Fourier coefficients you get a result that can only be explained if you accept that the transform is periodic. In fact, if you were to use the DFT to compute a time shift you would soon see that there is no way to consider the bases to be zero outside the transform region.
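A short numpy illustration of that time-shift example (my own sketch, not from the original post): the frequency-domain phase ramp produces a shift that wraps around the ends of the block, which only makes sense if the transform is treated as periodic.

    import numpy as np

    x = np.arange(8.0)            # N = 8 samples of clearly non-periodic data
    N = len(x)
    n0 = 3                        # delay by 3 samples

    k = np.arange(N)
    X = np.fft.fft(x)

    # "time shift" done in the frequency domain: multiply by a phase ramp
    x_shifted = np.real(np.fft.ifft(X * np.exp(-2j * np.pi * n0 * k / N)))

    # the samples pushed off the right-hand end reappear at the left:
    # exactly what a periodic (circular) view of the data predicts
    print(x_shifted)                                 # [5. 6. 7. 0. 1. 2. 3. 4.]
    print(np.allclose(x_shifted, np.roll(x, n0)))    # True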
> unless the problem
> domain is such that one knows that the region is exactly one
> period of an infinitely repeating sequence.
With a DFT, the region is *always* a multiple of a period of an infinitely repeating sequence.
> If that is not the case, no value obtains from extending the
> definition of the basis functions as repeating outside the
> region. If I define the basis functions to be zero outside
> the region of evaluation in the DFT the result is identical.
No, that's just not true! Let's put it another way: You are effectively saying that the Fourier basis functions are the complex exponentials inside your region of evaluation, and zero everywhere else. You see, the Balian-Low theorem does not allow such windowed exponentials to be orthogonal and at the same time localized in time and frequency - their Heisenberg product would be infinite. In other words: if you were to define the DFT in such a way, your coefficients of this transform would no longer be useful. To say it again: if you don't accept the periodicity, which means the basis functions are not limited in time, your transform would not only *not* be a DFT, it wouldn't even work!
> What isn't is the presumed behavior of the transformed
> function outside the region. One should presume the
> behavior of the basis outside the region to match the
> problem under attack rather than take a fixed view of it.
I'd say one should presume the basis to be defined exactly as it is. For the DFT, we're talking about non-windowed, non-time-localized complex exponentials. You can either view this as the transform being periodic, or by thinking of the bases as extending outside the transform indefinitely. If you have a peak at a certain location, you have the exact same peak at a multiple of the transform size everywhere else.
> In finite systems the presumption of zero outside the region
> fits the physical reality..
I don't think that this kind of generalization is possible!
> ..so that any consideration of
> infinite periodicity are simply wrong and give results that
> don't match the system.
On the contrary - the consideration of infinite periodicity is neither wrong, nor can it be avoided if you are to understand and use the DFT!
> All I am saying is that it is not productive to limit the
> definition of the regions that are not evaluated within the
> DFT to any particular set of values as is done by the people
> who claim that the infinitely repeating basis is the only
> legitimate view.
I can't comment on the general productivity of such considerations. I can, however, say that for the DFT the infinitely repeating basis *is* in fact the only legitimate view. The DFT is a mathematical tool. You can't simply generalize from experiments you might have seen in physics that "zero outside the region of evaluation" is a valid assumption here, too. While this might in some cases be possible, for the DFT it's not.

--smb

"Stephan M. Bernsee" wrote:
> To say it again: if you don't accept the periodicity, which means the
> basis functions are not limited in time, your transform would not only
> *not* be a DFT, it wouldn't even work!
Oh come on. To make a DFT work all you need to know is how to multiply and add. Next you will be telling us you have to sprinkle chicken blood and frog guts around the perimeter of your computer or the answer will come out wrong.

-jim
In article 409b7cab_7@corp.newsgroups.com, jim at "N0sp"@m.sjedging@mwt.net
wrote on 05/07/2004 08:13:

> > > "Stephan M. Bernsee" wrote: >> > >> >> To say it again: if you don't accept the periodicity, which means the >> basis functions are not limited in time, your transform would not only >> *not* be a DFT, it wouldn't even work! >> > Oh come on. To make a DFT work all you need to know is how to multiply and > add.
no. to use the DFT in anything but the most trivial applications, you also need to know how to either conceptually do modulo N arithmetic on the indices of x[n] and X[k] or to conceptually periodically extend x[n] and X[k] beyond the original limits of [0 .. N-1]. i contend these are the same thing and that if you can't do them, you cannot make the DFT work in applications requiring shifting (multiplying X[k] by exp(-j*2*pi*n0*k/N)) or convolution (multiplying X[k] by H[k]).

r b-j
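For the record, a tiny sketch (illustrative only; the helper name is mine) showing that modulo-N index arithmetic in the time domain and multiplying DFTs in the frequency domain are the same circular convolution:

    import numpy as np

    def circular_convolution(x, h):
        """Time-domain circular convolution using modulo-N index arithmetic,
        i.e. treating x as periodically extended with period N."""
        N = len(x)
        y = np.zeros(N)
        for n in range(N):
            for m in range(N):
                y[n] += h[m] * x[(n - m) % N]   # the modulo *is* the periodic extension
        return y

    x = np.random.randn(8)
    h = np.random.randn(8)

    # multiplying the DFTs gives exactly the same (circular) result
    via_dft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
    print(np.allclose(circular_convolution(x, h), via_dft))   # True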

robert bristow-johnson wrote:

> In article c7dp1a016pj@enews3.newsguy.com, Bob Cain at
> arcane@arcanemethods.com wrote on 05/06/2004 12:27:
>
>> Yes, the basis functions employed are of infinite extent.
>> However, they are only evaluated in a finite region of that
>> extent within the DFT
>
> not if you do the shifting or convolution operations.
How does that change the region of evaluation of the basis?
>> and as such their value outside that
>> region need not be assumed periodic unless the problem
>> domain is such that one knows that the region is exactly one
>> period of an infinitely repeating sequence.
>
> NO! even if you *know* you yanked those N samples from some non-repeating
> sequence, you gotta COUNT ON the DFT thinking that they came from a periodic
> sequence of period N.
It doesn't think.
> this is why we have to worry about workarounds like
> zero-padding (which is used in overlap-add) or discarding bad samples (used
> in overlap-save). *you* may know those N samples to be from a non-periodic
> input, but the DFT thinks they be from a periodic input. (again, Rick,
> please forgive the anthropomorphizing.)
I just don't see where you get this. Zero padding is not a workaround, it merely provides enough space for a result that is not aliased in the time domain. If you want to use the term periodicity instead of aliasing, ok, but I don't.
> there may be a few trivial examples (like multiplying the spectrum by a
> constant) where it makes no difference whether the input is periodically
> extended or not, but there is *no* example where the "assumption" of
> periodic extension gets you in trouble.
Would you agree that, if I filter my music with a repeating FIR rather than a single instance, I get troublesome results? If the FIR has compact support, which it does by definition, why should the size of the region I chose to enclose it in make a difference to the end result of convolving it with something else?
>> If that is not the case, no value obtains from extending the
>> definition of the basis functions as repeating outside the
>> region. If I define the basis functions to be zero outside
>> the region of evaluation in the DFT the result is identical.
>
> but if you multiply the DFT result by some H[k], which only does *circular*
> convolution, you will not get the same results. if you decide to perform
> "wrap-around" or modulo arithmetic to do that, it is identical in concept
> and practice to periodic extension. for me, that means identical in
> definition. wrap-around or modulo N arithmetic on the indices of x[n] or
> X[k] is one and the same with periodic extension.
I dunno from modular arithmetic, at least in the framework of the definition of the DFT. I can define the basis functions a priori to be compact tone bursts instead of the usual complex exponentials and the result of the operation will be identical. What more need be said regarding the relevance of what is outside the region?
>> What isn't is the presumed behavior of the transformed
>> function outside the region. One should presume the
>> behavior of the basis outside the region to match the
>> problem under attack rather than take a fixed view of it.
>
> i don't know how you come to that!?? the behavior of the basis functions
> outside of whatever region (i presume that means outside of [0 .. N-1]) is
> determined solely by the nature of the basis functions. last time i
> checked, sinusoidal functions are periodic. now we *do* choose to match the
> family of basis functions to the problem under attack (e.g. given N samples
> we *could* match an (N-1)th order polynomial to them and beyond that region
> those N samples would be extended as such) and perhaps, for some reason,
> periodic extension outside of that region is not appropriate. but then
> don't choose the DFT (here is where i agree with Glen H).
I maintain that the DFT is the same operation whether you use Fourier's infinitely repeating basis functions or others which merely have the same values in the region of evaluation. History should not limit our view when a better generalization is available. Functional analysis and decomposition allow for variation of the basis which does not change the result of the algorithm. Would you agree that in wavelet decomposition no basis periodicity is presumed?
> BTW, Bob, you *don't* have to assume both your FIR length and zero-padded
> data are 1/2 of the "container" size (N). just as long as they add to no
> more than 1 sample more than the container size:
Robert! Do you really think I don't know that? :-)

Nonetheless, it was worth pointing out.

Bob
--
"Things should be described as simply as possible, but no simpler."

                                              A. Einstein
Robert, it looks like we've arrived at our yearly impasse 
yet again.  I've gotta say, though, that every time we come 
back to it, my understanding of what goes on has deepened 
over the year in between in response to your arguments. 
Maybe next year I can give you the irrefutable argument. 
You know I'll be thinking about it even while sleeping.  :-)


Bob
-- 

"Things should be described as simply as possible, but no 
simpler."

                                              A. Einstein
Bob Cain wrote:

> ... I can define the basis functions a priori to be
> compact tone bursts instead of the usual complex exponentials and the
> result of the operation will be identical. What more need be said
> regarding the relevance of what is outside the region? ...
Maybe you can define the basis functions as tone bursts, but you don't. You use them as complex exponentials (now made famous in song) of form exp(j*2*pi*k0*n/N). I don't see a tone burst there.

Jerry
--
Engineering is the art of making what you want from things you can get.
Bob Cain wrote:
> Stephan M. Bernsee wrote:
>
>> The DFT is inherently periodic. The complex exponentials have no
>> localization in the time domain, and a single time domain sample has
>> no localization in the frequency domain. No localization means that it
>> is totally irrelevant if you're right here, or one period away - it's
>> exactly the same world. This is practically the definition of
>> "periodic".
> Yes, the basis functions employed are of infinite extent. However, they
> are only evaluated in a finite region of that extent within the DFT and
> as such their value outside that region need not be assumed periodic
> unless the problem domain is such that one knows that the region is
> exactly one period of an infinitely repeating sequence.
> If that is not the case, no value obtains from extending the definition
> of the basis functions as repeating outside the region. If I define the
> basis functions to be zero outside the region of evaluation in the DFT
> the result is identical.
(snip, hopefully not too much)

It is both the basis functions and the boundary condition that need to be considered. Non-periodic basis functions with periodic boundary conditions would have a periodic (and probably very strange) result.

Now, DST and DCT, for a transform of length L, have solutions that are periodic with length 2L, which allows them to match functions that are not periodic over length L.

There are a number of physics problems where the solution comes out in the right form for applying Fourier transforms. Vibrational waves on a string of constant mass per unit length, sound waves in a cylindrical tube, or voltage (or current) waves on a coaxial cable. Applying boundary conditions, such that the function or its derivative are zero at certain points, chooses which of the possible solutions are allowed. The solution is then periodic with period 2L or 4L for the usual boundary conditions where the function or its derivative are zero at the ends.

-- glen
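A sketch of glen's 2L point for the DCT (my own illustration, with the DCT-II analysis and synthesis written out directly rather than taken from a library, so the normalization is explicit): evaluating the cosine-series synthesis outside 0..N-1 gives a mirror-image extension with period 2N, not N.

    import numpy as np

    N = 6
    x = np.random.randn(N)
    n = np.arange(N)
    k = np.arange(N)

    # forward DCT-II, written out: C[k] = sum_n x[n] cos(pi*(n+1/2)*k/N)
    C = np.array([np.sum(x * np.cos(np.pi * (n + 0.5) * kk / N)) for kk in k])

    def synth(m):
        """Evaluate the DCT-II cosine-series synthesis at any integer index m,
        inside or outside the analysis region 0..N-1."""
        return C[0] / N + (2.0 / N) * np.sum(C[1:] * np.cos(np.pi * (m + 0.5) * k[1:] / N))

    # inside the region it reproduces the samples
    print(np.allclose([synth(m) for m in range(N)], x))       # True

    # outside the region the cosine basis dictates a mirror-image extension:
    # even symmetry about m = -1/2 ...
    print(np.isclose(synth(-3), synth(2)))                     # True (m and -1-m)

    # ... and periodicity with period 2N, not N
    print(np.isclose(synth(4 + 2 * N), synth(4)))              # True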
In article c7gd5f0292p@enews4.newsguy.com, Bob Cain at
arcane@arcanemethods.com wrote on 05/07/2004 12:23:

> Robert, it looks like we've arrived at our yearly impasse
> yet again.
we can let it rest.
> I've gotta say, though, that every time we come
> back to it, my understanding of what goes on has deepened
> over the year in between in response to your arguments.
cool.
> Maybe next year I can give you the irrefutable argument.
i'll try to be open minded. (maybe i'll even succeed.)
> You know I'll be thinking about it even while sleeping. :-)
sounds like a nightmare.

In article c7gc4s02i4p@enews1.newsguy.com, Bob Cain at
arcane@arcanemethods.com wrote on 05/07/2004 12:05:
>> BTW, Bob, you *don't* have to assume both your FIR length and zero-padded
>> data are 1/2 of the "container" size (N). just as long as they add to no
>> more than 1 sample more than the container size:
>
> Robert! Do you really think I don't know that? :-)
i thought you did. however i was reacting to this:

In article c7b8rk0k55@enews4.newsguy.com, Bob Cain at
arcane@arcanemethods.com wrote on 05/05/2004 13:39:
> It is just a whole lot more straightforward to keep in mind
> that convolution requires a double length result container
> to deal with finite systems than to try and keep track of
> periodicity considerations.
i didn't understand why "double" length in particular. i know there are a few in the music-dsp community (not necessarily the list) who, when they are doing fast convolution, always choose the FFT length to be twice the buffer length (which is also the hop length), which allows them to have an FIR length of the same size (1/2 of the FFT). they wouldn't have to do it that way. given an FIR length and knowledge of the processing costs of the FFT and iFFT, there is a more optimal selection of the FFT length and the audio buffer size can be the FFT length less the FIR (plus 1). i figgered you knew that, but still wondered where "double" came from.

mucho apologies to Jerry or anyone else i may have pissed off in this sorta pedantic argument. i would have considered this to have more practical use to neophytes if it weren't for the fact that Bob (and Adrian Hey, in the past) and i (and some others like Glen and Randy, not sure exactly where Eric J sits on this now) disagree as diametrically as we do. if one of us says "always think of the DFT as periodically extending the data given it, don't do that and you might run into trouble." and the other says "no! the DFT does not do that." what's a newbie to think? it's not really a "how many angels are dancing on the head of a pin" kind of argument. at least not IMO.

there was another O&S quote that i was looking for earlier and only found now. (Eric J once likened this to "quoting O&S scripture", but i don't care.) it's on p. 532 of the 1989 edition of O&S, right after Eq. (8.63):

{my editorial comments are in braces}

"In recasting Eqs. (8.11) and (8.12) {the DFS} in the form of Eqs. (8.61) and (8.62) {the DFT which appear functionally identical to the DFS} for finite-duration sequences, we have not eliminated the inherent periodicity. As with the DFS, the DFT X[k] is equal to samples of the periodic Fourier transform X(e^jw), and if Eq. (8.62) {the iDFT} is evaluated for values of n outside the interval 0 <= n <= N-1, the result will not be zero but rather a periodic extension of x[n]. The inherent periodicity is always present. Sometimes it causes us difficulty and sometimes we can exploit it, but to totally ignore it is to invite trouble. In defining the DFT representation we are simply recognizing that we are *interested* in values of x[n] only in the interval 0 <= n <= N-1 because x[n] is really zero outside that interval, and we are *interested* in values of X[k] only in the interval 0 <= k <= N-1 because these are the only values needed in Eq. (8.62)."

{Example 8.5 deleted}

"The distinction between the finite-duration sequence x[n] and the periodic sequence ~x[n] related through Eqs. (8.51) and (8.54) {explicitly spelling out the periodic extension of x[n]} may seem minor {how about totally obviated?}, since by using these equations it is straightforward to construct one from the other. However, the distinction becomes important in considering properties of the DFT and in considering the effect on x[n] of modifications to X[k] {such as multiplying by H[k] or exp(-j*2*pi*n0*k/N)}. This will become evident in the next section, where we discuss the properties of the DFT representation."

{end of quoted O&S}

for those without the book, *both* Eqs. (8.11) and (8.12) {the Discrete Fourier Series, DFS} and Eqs. (8.61) and (8.62) {the DFT} look like:

           N-1
    X[k] = SUM{ x[n] * exp(-j*2*pi*n*k/N) }
           n=0

                 N-1
    x[n] = 1/N * SUM{ X[k] * exp(+j*2*pi*n*k/N) }
                 k=0

but the DFS equations have a little squiggle on top of the x[] and X[].
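A rough sketch of that FFT-length selection (the Nfft*log2(Nfft) cost model and the best_fft_size name are assumptions for illustration, not anything from the thread): each block of length Nfft yields Nfft - FIR + 1 usable output samples, and the cheapest size per output sample is usually well above twice the FIR length.

    import numpy as np

    def best_fft_size(fir_len, max_pow=20):
        """Pick a power-of-two FFT size for overlap-save/-add fast convolution.

        Cost model (an assumption for illustration): one forward FFT, one
        spectrum multiply and one inverse FFT per block, ~ Nfft*log2(Nfft)
        each, amortized over the hop of Nfft - fir_len + 1 output samples.
        """
        best = None
        for p in range(int(np.ceil(np.log2(fir_len))), max_pow):
            nfft = 2 ** p
            hop = nfft - fir_len + 1                  # usable samples per block
            cost_per_sample = (2 * nfft * np.log2(nfft) + nfft) / hop
            if best is None or cost_per_sample < best[1]:
                best = (nfft, cost_per_sample)
        return best

    fir_len = 1000
    nfft, cost = best_fft_size(fir_len)
    print(nfft, nfft - fir_len + 1)   # under this toy model: 8192 with a hop of 7193,
                                      # not the 2048/1024 "double length" choice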
now here is where i *both* draw support for my POV *and* where i take pedagogical issue with O&S (which, i think, Bob and/or Adrian Hey and whomever else might draw support if they wanted it).

I SAY there is NOTHING to gain, conceptually, pedagogically, or practically to differentiate between the DFS and the DFT. nothing. and there is some risk of screwing things up if you *do* make that differentiation. if you *do* make that differentiation you better *always* think of performing modulo N arithmetic on the indices of x[n] and X[k].

O&S go through all this rigmarole defining

           { ~x[n]   for 0 <= n <= N-1
    x[n] = {
           { 0       otherwise

and undoing it

    ~x[n] = x[modulo(n,N)]

AND IT GAINS NOTHING!!! there is no benefit, conceptually nor practically, to do it. it *does* essentially define the DFT and iDFT to be:

           {  N-1
           {  SUM{ x[n] * exp(-j*2*pi*n*k/N) }          for 0 <= k < N
    X[k] = {  n=0
           {
           {  0                                         otherwise

           {        N-1
           {  1/N * SUM{ X[k] * exp(+j*2*pi*n*k/N) }    for 0 <= n < N
    x[n] = {        k=0
           {
           {  0                                         otherwise

WHICH HAS NO BENEFIT over the previous (and simpler) definition.

so after O&S go through all that rigmarole, then to explore the properties of the DFT (you know, linearity, shifting, convolution, etc.), they have to pass through MORE rigmarole by cluttering up the indices of x[n] and X[k] in any property that can potentially shift those indices with modulo N notation. WHAT GOOD DOES THAT DO?

if you compare the DFS properties on Table 8.1 (p. 525), to the DFT properties on Table 8.2 (p. 547) there is absolutely no difference save for two things: the DFS has little squiggles on the x[n] and X[k] sequences and the DFT has modulo N notation in all properties except for linearity. WHY!!! why waste the pages redefining the DFT as they have when all you're doing is hacking off the periodic extension some times and re-attaching the periodic extension other times when you need it? it's like running the car heater *and* the A/C at the same time. (actually that *does* have a use in defogging windows sometimes.) but there is nothing to be gained with it in the DFT.

what is *much* easier is just to remember that:

"The DFT and the DFS are one and the same. The DFT and iDFT always periodically extend the data passed to it (assuming it to be one period of a periodic sequence) and always return data that is periodic (one period of another periodic sequence). The DFT maps one periodic sequence of period N to another periodic sequence of the same period and the iDFT maps it back."

if you keep that in mind, you will never go wrong. (at least you will never err due to that concept.) this is not radical. this is a very moderate or conservative approach to the issue. it is mathematically safe and compact.

(i did not plan to type as much as i did here. c'est la vie.)

r b-j
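A small numpy check of the O&S statement quoted above (an illustration of the point, not code from the thread): evaluating the iDFT synthesis sum at indices outside 0 <= n <= N-1 returns the periodic extension x[n mod N], not zero.

    import numpy as np

    N = 8
    x = np.random.randn(N)
    X = np.fft.fft(x)
    k = np.arange(N)

    def idft_at(n):
        """Evaluate the inverse-DFT synthesis sum at an arbitrary integer n,
        not just 0 <= n <= N-1."""
        return np.real(np.sum(X * np.exp(2j * np.pi * n * k / N)) / N)

    # outside the analysis interval the synthesis does not give zero;
    # it gives the periodic extension x[n mod N]
    for n in (-3, 11, 23):
        print(np.isclose(idft_at(n), x[n % N]))     # True, True, True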

robert bristow-johnson wrote:


>> You know I'll be thinking about it even while sleeping. :-)
>
> sounds like a nightmare.
Nah, I love thinking about this stuff. It's when I start babbling it aloud around people who haven't a clue that it gets a bit nightmarish. :-)
> i didn't understand why "double" length in particular.
It comes out of my unstated assumption that the low level chunks we convolve are of equal length. Yeah, I know about the double less one.
> i know there are a
> few in the music-dsp community (not necessarily the list) who, when they are
> doing fast convolution, always choose the FFT length to be twice the buffer
> length (which is also the hop length), which allows them to have an FIR
> length of the same size (1/2 of the FFT). they wouldn't have to do it that
> way. given an FIR length and knowledge of the processing costs of the FFT
> and iFFT, there is a more optimal selection of the FFT length and the audio
> buffer size can be the FFT length less the FIR (plus 1). i figgered you
> knew that, but still wondered where "double" came from.
You're right, I didn't know that. How does one determine those optimal sizes?
> mucho apologies to Jerry or anyone else i may have pissed off in this sorta
> pedantic argument. i would have considered this to have more practical use
> to neophytes if it weren't for the fact that Bob (and Adrian Hey, in the
> past) and i (and some others like Glen and Randy, not sure exactly where
> Eric J sits on this now) disagree as diametrically as we do. if one of us
> says "always think of the DFT as periodically extending the data given it,
> don't do that and you might run into trouble." and the other says "no! the
> DFT does not do that." what's a newbie to think? it's not really a "how
> many angels are dancing on the head of a pin" kind of argument. at least
> not IMO.
Mine either, obviously. :-) I do appreciate that such opposed views can be discussed without recourse to angry name calling or "you're just wrong" kinds of dead ends. Well, mostly without those dead ends.
> there was another O&S quote that i was looking for earlier and only found
> now. (Eric J once likened this to "quoting O&S scripture", but i don't
> care.) it's on p. 532 of the 1989 edition of O&S, right after Eq. (8.63):
>
> {my editorial comments are in braces}
>
> "In recasting Eqs. (8.11) and (8.12) {the DFS} in the form of Eqs. (8.61)
> and (8.62) {the DFT which appear functionally identical to the DFS} for
> finite-duration sequences, we have not eliminated the inherent periodicity.
> As with the DFS, the DFT X[k] is equal to samples of the periodic Fourier
> transform X(e^jw), and if Eq. (8.62) {the iDFT} is evaluated for values of n
> outside the interval 0 <= n <= N-1, the result will not be zero but rather a
> periodic extension of x[n].
Therein lies the big if. It is not a requirement, it's a choice.
> The inherent periodicity is always present.
If you impose that on the basis function.
> However, the distinction becomes important in
> considering properties of the DFT and in considering the effect on x[n] of
> modifications to X[k] {such as multiplying by H[k] or exp(-j*2*pi*n0*k/N)}.
> This will become evident in the next section, where we discuss the
> properties of the DFT representation."
This is where I need to jump into O&S, which I'm chagrined to say I don't own. I may well find that certain of the "properties" are the properties of a DFT with basis functions extended +/- forever. That's what I want to understand next.
> now here is where i *both* draw support for my POV *and* where i take
> pedagogical issue with O&S (which, i think, Bob and/or Adrian Hey and
> whomever else might draw support if they wanted it).
>
> I SAY there is NOTHING to gain, conceptually, pedagogically, or practically
> to differentiate between the DFS and the DFT.
>
> nothing.
What it does that floats my boat is remove the infinity from what are otherwise finite systems. I do understand what you said in what follows the above and that's why I want to study the properties more deeply to see if there is in fact a pedagogically simple way to keep infinity out of finite systems when considering those properties.

Back next year. Maybe sooner.

Bob
--
"Things should be described as simply as possible, but no simpler."

                                              A. Einstein