Forums

Low freq "analog" of Nyquist? ( possibly naive question )

Started by Richard Owlett July 2, 2003
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:cKuOa.2486$Jk5.1496296@feed2.centurytel.net...

(snip)

> I feel we've been sucked into a long discussion about nothing.
>
> Glen's assertion:
>
> "Consider a signal, f, sampled over time T, from t=0 to t=T. Assume that
> f(0)=f(T)=0 for now. All the components must be sine with periods that
> are multiples of 2T."
>
> was what started this lengthy discussion. What preceded this was a
> question about what happens to spectral analysis if you take a relatively
> short temporal sample length.
>
> I disagreed with the assertion above and mostly ignored the introduction
> of solutions of differential equations. I disagree that "all of the
> components must be sine with periods that are multiples of 2T" - don't
> you? I would rather say that f can be "anything reasonable" and unrelated
> to T in general. That's the situation one has in normal DSP practice.
>
> There are counter examples given earlier in the thread ....... including
> sin(t) + sin(pi*t)
>
> It's a circular discussion to now change and say: but gee, we were talking
> about solutions to differential equations in the first place - because we
> weren't. So, defending the assertion on the basis that it matches up with
> solutions to differential equations is not to the point.
>
> Perhaps we're simply talking about two different things.....
I made that assertion after the restriction that f(0)=f(T)=0. Without that
restriction, sin(t) + sin(pi*t) could be a solution.

If one knew the solution was of the form f(t)=A sin(t) + B sin(pi t), only
two sample points (with a few restrictions) are needed to solve for A and B.

As far as I know (yes, I should find the reference), the Fourier series
started as a solution to a differential equation. sin() and cos() are the
preferred basis set for these problems because they are the solutions to
the appropriate differential equation. Now, once you have the abstraction
you can use it, in the appropriate instances, without going back to the
original problem, but, fundamentally, these are solutions to differential
equations.

-- glen
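Glen's remark that two sample points suffice to recover A and B in a known model can be checked numerically. A minimal sketch, with made-up values for A, B and the sample instants, solving the 2x2 system by Cramer's rule:

```python
import math

# Glen's example: f(t) = A*sin(t) + B*sin(pi*t).  If the model form is
# known, two samples (at instants making the system non-singular) determine
# A and B.  A_true, B_true, t1, t2 are made up for illustration.
A_true, B_true = 2.0, -1.5
f = lambda t: A_true * math.sin(t) + B_true * math.sin(math.pi * t)

t1, t2 = 0.4, 1.1
# Solve [sin(t1) sin(pi*t1); sin(t2) sin(pi*t2)] [A; B] = [f(t1); f(t2)]
a11, a12 = math.sin(t1), math.sin(math.pi * t1)
a21, a22 = math.sin(t2), math.sin(math.pi * t2)
det = a11 * a22 - a12 * a21            # nonzero for this choice of t1, t2
A = (f(t1) * a22 - a12 * f(t2)) / det  # Cramer's rule
B = (a11 * f(t2) - f(t1) * a21) / det
```

The "few restrictions" Glen mentions correspond to choosing t1, t2 so the determinant is nonzero.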
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message news:<cKuOa.2486$Jk5.1496296@feed2.centurytel.net>...

> I feel we've been sucked into a long discussion about nothing.
> I disagreed with the assertion above and mostly ignored the introduction
> of solutions of differential equations.
> It's a circular discussion to now change and say: but gee, we were talking
> about solutions to differential equations in the first place - because we
> weren't. So, defending the assertion on the basis that it matches up with
> solutions to differential equations is not to the point.
Fred,

I enjoy reading your posts and have learned quite a bit from you in the
past, but I'll have to disagree with you here. Differential equations are
precisely the point. I would even go so far as to say that they are at the
very core of the problem.

To simplify the argument, let's start with continuous functions of the type
f(t)=A*exp(iwt) (where the sin(wt) and cos(wt) functions can be generated
as linear combinations).

Why is it that the exp function is so useful in Fourier theory? Why do
mathematicians express their functions in terms of exp functions? Why do
DSPers express their signals in terms of sinusoids and not, say, the Taylor
formula?

The reason is that the exp function is a solution to a differential
equation. Yes, I know, the more "applied" or "physical" argument is that
sinusoids are "natural". Those who prefer that sort of argument usually
refer to various aspects of sound. It's only a detour to get to the same
answer, as sound is governed by differential equations.

If you take the function f(t)=A*exp(iwt), you can integrate and
differentiate it very easily:

f'(t) = iwA*exp(iwt)      [1]
F(t)  = A/(iw)*exp(iwt)   [2]

Now take a closer look at those equations, and you will find that they have
a most peculiar property, namely that f'(t) and F(t) can be written in
terms of f(t) itself as a factor and *without* the differential operator D
and integration operator I:

f'(t) = D{f(t)} = iw*f(t)       [3]
F(t)  = I{f(t)} = 1/(iw)*f(t)   [4]

How many functions have that property? Not very many. In fact, only those
functions that are "eigenfunctions of differential operators". Note the
similarity between [3] and [4] and the eigenvector and eigenvalue of a
matrix A:

Ax = kx   [5]

Similarly, one can say that iw is an eigenvalue of the differential
operator in [3] and 1/(iw) is an eigenvalue of the integral operator
in [4].
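The eigenfunction property [3] can be verified numerically: a centred finite difference of f(t) = A*exp(iwt) should match iw*f(t). A quick sketch with made-up values for A, w and t:

```python
import cmath

# Numerical check of [3]: for f(t) = A*exp(i*w*t), the derivative equals
# i*w * f(t).  Approximate D{f} with a centred finite difference.
A, w, t = 1.0, 3.0, 0.7   # illustrative values
h = 1e-6                  # small step for the finite difference
f = lambda t: A * cmath.exp(1j * w * t)

deriv = (f(t + h) - f(t - h)) / (2 * h)  # approximates D{f}(t)
eigen = 1j * w * f(t)                    # the claimed i*w * f(t)
```

The two agree to within the finite-difference truncation error, illustrating why exp(iwt) turns differentiation into multiplication by a scalar.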
As one looks into various geometrical coordinate systems and boundary
conditions, the differential operator D needs to be modified a bit from the
familiar df/dt form, but still, Bessel functions, Legendre polynomials and
normal modes all share the property that their derivatives and integrals
can be expressed in forms similar to [3] and [4].

If you go back to linear systems theory, you will find that all systems of
interest are described in terms of differential equations. This is,
unfortunately, a point that is not always emphasized in texts on DSP, but
it is a very important prerequisite in control theory. Then, all of a
sudden, signals f(t) that can be written in the forms [3] and [4] become
very useful, simply because they allow you to separate the signal itself
from the effects of the system. Indeed, this is the very reason it makes
sense to speak of a "transfer function" without having to include anything
about the signal to be fed into the system.

As for the discrete, finite-length case, the systems are no longer
described in terms of differential equations; they are described in terms
of difference equations. I think that's how one would prefer to implement
IIR filters, as opposed to convolving with impulse responses or multiplying
with transfer functions in the frequency domain. Again, note the close link
between the mathematical basis of your analysis and the options you have:
difference equation, spectrum or impulse response. You can choose among
three different formulations for the one that serves you best. The reason
you can do that is the close link between the Fourier basis and the
governing differential equations.

The exp(i2pi kn/N) function is an eigenfunction of the difference operator.
However, the analysis becomes more involved, as one needs to find the
correct formulation to see what's going on.
For finite-length sequences it usually pays off to choose a matrix
formulation and study the various eigenvectors of the matrices involved,
but this comes at the cost of no obvious link between eigenvectors and
analytical expressions. The Karhunen-Loeve transform is one example of
general analysis based on eigenvectors; the MUSIC algorithm is based on the
(missing) link between analytical expressions and eigenvectors of the data
covariance matrix.

Back to your post: I think understanding the properties of solutions to
differential or difference equations is crucial for understanding what can
and cannot be done in terms of the DFT, simply because DEs are the very
basis of the DFT, whether it's said explicitly or not.

Rune
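Rune's claim that exp(i2pi kn/N) is an eigenfunction of the difference operator can be checked directly: under a circular first difference, the DFT basis vector is only scaled, by exp(i2pi k/N) - 1. A small sketch with made-up N and k:

```python
import cmath

# The DFT basis vector e_k(n) = exp(i*2*pi*k*n/N) is an eigenvector of the
# circular first-difference operator x(n) -> x(n+1 mod N) - x(n), with
# eigenvalue exp(i*2*pi*k/N) - 1.  N and k are illustrative.
N, k = 8, 3
e = [cmath.exp(2j * cmath.pi * k * n / N) for n in range(N)]

diff = [e[(n + 1) % N] - e[n] for n in range(N)]  # circular difference of e
lam = cmath.exp(2j * cmath.pi * k / N) - 1        # claimed eigenvalue

err = max(abs(d - lam * v) for d, v in zip(diff, e))
```

The residual err is at floating-point level, confirming the eigenrelation for this k (and the same holds for every k = 0,...,N-1).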
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message news:<cKuOa.2486$Jk5.1496296@feed2.centurytel.net>...

> Rune,
>
> I feel we've been sucked into a long discussion about nothing.
>
> Glen's assertion:
>
> "Consider a signal, f, sampled over time T, from t=0 to t=T. Assume that
> f(0)=f(T)=0 for now. All the components must be sine with periods that are
> multiples of 2T."
In principle, Glen's assertions make sense, provided "All the components"
means "All the Fourier components". Let's see what this means. For
simplicity, I assume there are N+1 samples in the sequence and that the
sampling period T' is

T' = T/N.   [1]

Define the sampling frequency F_s as

F_s = 1/T' = N/T.   [2]

The Fourier basis frequencies fall, under the above conditions, on the
angular frequencies

w_k = 2pi k/N * F_s,  k=0,1,...,N.   [3]

By [2],

w_k = 2pi k/T,  k=0,1,...,N.   [4]

This is a very interesting result. Consider two different sampling
frequencies F_s1 and F_s2 that yield two different numbers of samples, N_1
and N_2, such that N_1 < N_2, when observing the signal
*over*the*same*time*window* T.

What [4] tells me is that no matter the sampling rate, the first N_1
frequencies are mutual between the two spectra. Because the sampling
frequencies are different, there are more lines in the 2nd series, but once
T is specified, the first non-DC spectrum line falls on w_1 = 2pi/T.

So we have established that the location of the Fourier components is
independent of sampling, but *does* depend on the observation window.

As for the boundary conditions at the ends, the periodic assumption in the
DFT translates to

x(-1) = x(N)     [5]
x(0)  = x(N+1)   [6]

where x(-1) and x(N+1) are the first samples in the hypothetical extensions
towards the left and right, respectively. Of course, the actual values of
the boundary conditions are left unspecified for now, but something must be
left to the data to decide.
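Result [4] can be illustrated in a few lines: for a fixed window T, the Fourier bin frequencies k/T do not depend on the sampling rate; a higher rate only appends bins. T, N_1 and N_2 below are made up:

```python
# Sketch of [4]: over the same window T, two different sample counts N1 < N2
# (i.e. two different sampling rates) produce spectra whose first N1 bin
# frequencies coincide, all falling on integer multiples of 1/T.
T = 2.0            # observation window in seconds (illustrative)
N1, N2 = 8, 20     # sample counts for the two sampling rates

bins1 = [k / T for k in range(N1)]  # bin frequencies in Hz, f_k = k/T
bins2 = [k / T for k in range(N2)]

shared = bins2[:N1]  # the finer grid starts with exactly the coarser grid
```

The first non-DC line sits at 1/T in both cases, matching the statement that component locations depend on the window, not on the sampling rate.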
> I disagreed with the assertion above and mostly ignored the introduction
> of solutions of differential equations. I disagree that "all of the
> components must be sine with periods that are multiples of 2T" - don't
> you? I would rather say that f can be "anything reasonable" and unrelated
> to T in general. That's the situation one has in normal DSP practice.
I will not unconditionally say "multiples of 2T" as opposed to any other
specific number, but I have made an effort to show that this type of
argument is correct. The periods of the Fourier components are intimately
related to the length of the observation window.

Since there are limitations on where the Fourier components fall in the
spectrum, there are limitations on what information can be extracted from
the spectrum. That's basically what the uncertainty principle says.

One can, of course, express a sinusoid at any frequency below F_s/2 in
terms of these Fourier components. But that's another discussion.

Rune
"Rune Allnor" <allnor@tele.ntnu.no> wrote in message
news:f56893ae.0307090040.1b9598dc@posting.google.com...
> "Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:<cKuOa.2486$Jk5.1496296@feed2.centurytel.net>...
> > > I feel we've been sucked into a long discussion about nothing.
>
> Fred,
>
> I enjoy reading your posts and have learned quite a bit from you in the
> past, but I'll have to disagree with you here. Differential equations
> are precisely the point.

(big snip)
Rune,

I *was* getting grumpy when I wrote that.....

Very nice treatment of a lot of good stuff! Much better than my brief post
recently in this thread regarding linear systems, etc. There's nothing I
see to disagree with.

However, in the context of the discussion, we were talking about arbitrary
time-limited signals and not systems - if that distinction matters. So,
even though the exponential form may be handy, and indeed it is, I was
having trouble making the connection between Fourier Series and arbitrary
time-limited functions. Fourier Transform, no trouble. Fourier Series, yes.

I hasten to add that I have not been limiting the discussion to discrete
representations because that wasn't the crux of the issue - as I view how
one might parse the problem and help illuminate an answer.

I will post in response to your next one that gets into this more.....
Don't know what that's going to look like yet!

Please, yes, disagree with me. I'm doing this to learn as much as to help.
If I acquiesce then I'm not going to learn anything.

Here is the question: Given an arbitrary time-limited function, how do you
relate this function to the solutions to differential equations *as such*
(in a meaningful way) without applying some simplifications such as
sampling, assumptions of periodicity, etc.? The "relation" should not
simply be that the same mathematical tools are used in both cases. There
needs to be more of a connection than that.

I'm obviously missing something here.

Fred
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:hVXOa.2504$Jk5.1579249@feed2.centurytel.net...

(big snip)

> Here is the question:
> Given an arbitrary time-limited function, how do you relate this function
> to the solutions to differential equations *as such* (in a meaningful way)
> without applying some simplifications such as sampling, assumptions of
> periodicity, etc?
> The "relation" should not simply be that the same mathematical tools are
> used in both cases. There needs to be more of a connection than that.
I think there are about as many that disagree as agree with me on this one.

To me, a time-limited function means that I only care about it for a
certain range of time, the length of a concert, for example. Since this is
a DSP group, the idea of a Don't Care state in optimizing digital logic
should be common. (Karnaugh maps are currently discussed in a different
thread.) So, I would ask for the simplest representation that supplied the
desired values over the time of interest. In many cases this is the signal
that is periodic with a period equal to the time of interest. This allows
an arbitrary function over the time period of interest.

Next, I would set the function to zero outside the period of interest,
which then allows the Fourier series representation to be easily computed.
According to Fourier, we have not restricted the arbitrary time-limited
function at all. (Consider setting it to zero epsilon outside the region,
in the limit as epsilon goes to zero. Most concerts start quiet and end
quiet, anyway.)

I expect some to agree and some to disagree, but that is the way I see it.

-- glen
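Glen's recipe, treating the window as one period of a periodic signal, can be sketched with a naive DFT: the resulting Fourier series reproduces the arbitrary data exactly at the sample points, and what happens outside the window is Don't Care. The sample values below are made up:

```python
import cmath

# Treat arbitrary samples over the window as one period, take the DFT, and
# resynthesize: the series matches the data exactly at the sample points.
x = [0.0, 1.0, 3.0, 2.0, -1.0, 0.5, 0.0, -2.0]  # arbitrary window values
N = len(x)

# Forward DFT (Fourier series coefficients of the periodic extension)
X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
     for k in range(N)]

# Inverse DFT: evaluate the series back at the sample points
recon = [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
             for k in range(N)).real / N
         for n in range(N)]

err = max(abs(a - b) for a, b in zip(x, recon))
```

This is only the discrete analogue of Glen's continuous argument, but it shows the mechanism: periodicity is an assumption we are free to make because we don't care about values outside the window.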
"Rune Allnor" <allnor@tele.ntnu.no> wrote in message
news:f56893ae.0307090535.60d64026@posting.google.com...
> "Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:<cKuOa.2486$Jk5.1496296@feed2.centurytel.net>...
> > > "Consider a signal, f, sampled over time T, from t=0 to t=T. Assume
> > > that f(0)=f(T)=0 for now. All the components must be sine with
> > > periods that are multiples of 2T."
>
> In principle, Glen's assertions make sense, provided "All the components"
> means "All the Fourier components". Let's see what this means.

(big snip)
Very nice exposition. Maybe I've come around a little.

What got me going on this was when Glen said:

"Consider a signal, f, sampled over time T, from t=0 to t=T. Assume that
f(0)=f(T)=0 for now. All the components must be sine with periods that are
multiples of 2T. If it is known to have a maximum frequency component < Fn
then the number of possible frequency components is 2 T Fn. A system with
2 T Fn unknowns needs 2 T Fn equations, so 2 T Fn sampling points. 2 T Fn
sampling points uniformly distributed over time T are 2 Fn apart."

This seems to have a couple of errors - which may have contributed to my
misunderstanding:

It should say: frequencies that are integer multiples of (1/T) instead of
periods that are multiples of 2T. The frequencies need to get larger in
integer multiples, not the periods.

It should say sampling points (in time) that are 1/(2Fn) apart. Time is
divided into seconds, not Hz.

Then, I missed that Glen was talking about sampling in time - which is only
introduced at the end. I was focused on "must be sine with periods". In the
meantime, I was talking about "sampled over time T" as in "taking a
temporal epoch of length T as a sample".

Here's an interesting set of assertions:

If you discretely sample in time with interval T', there will be a period
in frequency that is 1/T'.

If you sample in frequency with interval F', there will be a period in time
that is 1/F'.

If there's a period in frequency, then the frequency "band limit" or region
of uniqueness is 2*(fs/2) = fs = 1/T'. The frequency function, being
periodic, isn't band limited.

If there's a period in time, then the "time limited" or region of
uniqueness is T = 1/F'. The time function, being periodic, isn't time
limited.

So, we have functions that are neither band limited nor time limited but
are periodic - and we can limit our investigation to a limited time span or
limited frequency span. This is a very normal analytical framework.
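Fred's corrected reading - components at integer multiples of 1/T - can be checked numerically: a sinusoid whose frequency is an integer multiple of 1/T lands exactly on one DFT bin. T, N and the bin index below are made up:

```python
import math
import cmath

# A sinusoid at frequency 3/T, observed over window T with N samples, puts
# all its energy on DFT bin k = 3 (plus the conjugate mirror bin N-3).
T, N = 1.0, 16  # illustrative window and sample count
x = [math.sin(2 * math.pi * 3 * n / N) for n in range(N)]  # frequency 3/T

X = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
     for k in range(N)]
mags = [abs(Xk) for Xk in X]

# Peak over the non-negative-frequency half of the spectrum
peak = max(range(N // 2), key=lambda k: mags[k])
```

All other bins in the lower half are at floating-point noise level, illustrating that the bin grid k/T is exactly where such components live.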
Now, if we go back to the analog / continuous representation of some time
function, and we assume that this time function is time-limited, then,
analytically speaking, the function has no band limit or region of
uniqueness as above - except in the special case that the continuous
function happened to have an infinite, periodic description and we happened
to capture an integer number of periods in creating the time-limited
function.

If the function has no band limit, then it can't be discrete-time sampled
without aliasing. If we go ahead and sample it anyway, incurring the
aliasing, then a new function is defined by the sequence we obtain. It's
not the same function that was evident in the continuous representation.

But let's not sample it in time. Let's consider the time-limited function
to be periodic instead. Then there's a sampled, infinite spectrum and
there's no aliasing. In this case, a version of what Glen said is correct:

"Consider a continuous signal, f(t), taken over time T, from t=0 to t=T.
Assume further that a new function g(t) is periodic with period T and g(t)
has values in each period equal to f(t) taken in the region t=0 to t=T.
Since g(t) is periodic in time, it is discrete (sampled) in frequency. All
the frequency components of G(w)=F[g(t)] must be integral multiples of 1/T
and, except in special cases, G(w) is of infinite extent."

This means that

g(t) = sum over n [an*cos(n*pi*t/T) + bn*sin(n*pi*t/T)]

Now, since Glen introduced it, assume that f(0)=f(T)=0. What does this do?
I can't tell that it does anything in particular. I can see if we say
something different: for *all* f(t) such that f(0)=f(T)=0, then:

f(t) = sum over n [bn*sin(n*pi*t/T) + an*cos(n*pi*t/T)]
an = 0 for n even
and
(sum over n of an for n odd) = 0

which may be interesting but I'm not tuned in.... At least this isn't what
I'd call "all components must be sine", because there are functions with
nonzero elements that are cosine. Perhaps that wasn't what was meant.
Still doesn't get my attention. Either of these temporal functions is known
to have infinite spectral extent, so they *can't* have a maximum frequency
component < Fn and we can't say "the number of possible frequency
components is 2 T Fn". That would be the case only if there were temporal
sampling and a periodic spectrum - which isn't a point we've reached, and
can't reach without changing the definition of the original function. That
doesn't mean that such functions don't exist; it only means that we can't
get there starting from an *arbitrary* time-limited function.

It bothered me to think that a continuous spectrum could be expressed as a
discrete spectrum. But, with time limiting, and after considering that the
temporal record could be made periodic with no loss of information, I can
see how this is OK in an analytical context.

Solutions to differential equations don't have to be introduced to take
this analytical trip, do they? That continues to elude me. Saying that
sin(t) is the solution to a particular differential equation is interesting
but not compelling - why should it be when discussing an arbitrary f(t)?

Fred
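The aliasing Fred describes is easy to exhibit: at sampling rate fs, a component above fs/2 produces exactly the same samples as its alias at f - fs. The rate and frequencies below are made up:

```python
import math

# At fs = 10 Hz, a 9 Hz sine yields exactly the same sample values as a
# -1 Hz sine (i.e. a sign-flipped 1 Hz sine): the two are indistinguishable
# from the samples alone.
fs = 10.0
ts = [n / fs for n in range(20)]

hi = [math.sin(2 * math.pi * 9.0 * t) for t in ts]         # 9 Hz component
lo = [math.sin(2 * math.pi * (9.0 - fs) * t) for t in ts]  # alias at -1 Hz

err = max(abs(a - b) for a, b in zip(hi, lo))
```

Since a time-limited function's spectrum extends past any fs/2, some of its content always folds down like this once we sample, which is why the sampled sequence defines a new function.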
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message news:<hVXOa.2504$Jk5.1579249@feed2.centurytel.net>...
[snip]
> Very nice treatment of a lot of good stuff
Thanks.
> However, in the context of the discussion, we were talking about arbitrary
> time-limited signals and not systems - if that distinction matters. So,
> even though the exponential form may be handy, and indeed it is, I was
> having trouble making the connection between Fourier Series and arbitrary
> time-limited functions. Fourier Transform, no trouble. Fourier Series,
> yes.
>
> I hasten to add that I have not been limiting the discussion to discrete
> representations because that wasn't the crux of the issue - as I view how
> one might parse the problem and help illuminate an answer.
>
> I will post in response to your next one that gets into this more.....
> Don't know what that's going to look like yet!
>
> Please yes disagree with me. I'm doing this to learn as much as to help.
> If I acquiesce then I'm not going to learn anything.
>
> Here is the question:
> Given an arbitrary time-limited function, how do you relate this function
> to the solutions to differential equations *as such* (in a meaningful way)
> without applying some simplifications such as sampling, assumptions of
> periodicity, etc?
Eh... I like to think of these matters as two different questions.

The first is "is it possible to express an arbitrary function in terms of
an arbitrary set of basis functions in a meaningful way?", and the answer
is "yes". Most people know this from linear algebra, as a theorem regarding
basis-shift matrices. Any discrete sequence of finite length (a vector) can
be expressed in any basis that is complete, i.e. where the collection of
basis vectors, the basis matrix, is of full rank. The basis vectors need
not be orthogonal; it suffices that they are linearly independent. Similar
arguments apply to discrete sequences of infinite length, and to both
finite and infinite continuous signals. That's what real analysis and
Hilbert space theory are all about.

Now, I find it very hard to imagine that people would start expanding
functions or sequences into all sorts of bases just for fun. There is
usually some sort of purpose to the exercise. As Glen mentioned, orthogonal
bases are easier to handle than merely linearly independent ones, so let's
minimize drudgery and start with orthogonal bases. Take, for instance, the
naive basis that consists of N-length vectors e_n, n=1,...,N, such that all
coefficients in e_n are 0, except for the n'th, which is 1. The basis
matrix is the identity matrix. Easy to handle, isn't it? A sequence x(n) is
easily reconstructed as

        N
x(n) = sum x_n*e_n   [1]
       n=1

where x_n is the n'th sample in x(n). But why would anyone use this basis?
What does this representation of the sequence tell you that the raw
sequence does not? I can't see any benefit from using the formulation [1].

OK, so such claims that "some bases are more useful than others" appear to
make sense. Which brings to attention the second question: "which is the
more useful basis?". To evaluate usefulness we need a purpose or use. Let's
look at history and see if we can find clues to why the Fourier basis and
differential equations (DEs) are so closely linked.
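Rune's first point - any full-rank basis, orthogonal or not, represents an arbitrary vector - in a 2-D sketch. The basis vectors and the target vector below are made up:

```python
# A non-orthogonal but linearly independent basis still expands any vector:
# solve c1*b1 + c2*b2 = x for the coefficients (2x2 Cramer's rule).
b1, b2 = (1.0, 0.0), (1.0, 1.0)  # linearly independent, not orthogonal
x = (3.0, 5.0)                   # arbitrary vector to expand

det = b1[0] * b2[1] - b2[0] * b1[1]       # nonzero <=> full rank
c1 = (x[0] * b2[1] - b2[0] * x[1]) / det
c2 = (b1[0] * x[1] - x[0] * b1[1]) / det

# Reconstruct x from the expansion coefficients
recon = (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1])
```

With an orthogonal basis the coefficients would come from simple inner products; here a linear solve is needed, which is the "drudgery" Rune alludes to.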
Until the last couple of decades, "signal processing" meant "analog signal
processing". The basic tools of the trade were the resistor, capacitor and
inductor (I know, the tubes and diodes and transistors, etc... please keep
those items out of the picture for now). Their behaviour is described in
terms of differential equations, so the properties of the Fourier transform
as solutions to DEs are used by means of necessity.

In the 60s/70s, when digital signal processing emerged as a field, it
appears that people transferred their "traditional" ways of thinking from
the analog domain, governed by physics (DEs), to the new digital domain,
where physics plays a less important role. Physics is out of the picture
already with the introduction of FIR filters. The Finite Impulse Response
filter, with its finite time duration, linear phase and symmetrical impulse
response, is a purely mathematical construct. It can't be realized in terms
of RLC networks, so there is nothing present that strictly demands that the
DE-based Fourier basis must or even should be used for analysis. In fact,
wavelets are a perfectly acceptable basis for representing FIR filters (or
any other filter, for that matter) that is not linked with physical DEs.
However, wavelets and Fourier series represent the data in different ways
and thus serve different purposes. Fourier analysis provides all the mental
hooks (frequency, amplitude, phase, pass band, stop band,...) that let the
analyst design a filter that does a job that makes "physical sense".

Another example is the subspace representations I have worked with. The
data are represented in terms of a covariance matrix, but instead of
computing a Fourier-based periodogram, I perform an eigendecomposition that
represents the data as a complete basis (the eigenvectors) and their
coefficients (the eigenvalues) that does not have any resemblance
whatsoever to any differential equations.
However, it's a perfectly valid vector space representation of the data;
mathematically it's no different from the periodogram, it's only less
intuitive.

As a matter of fact, I believe intuition is a main factor here. I am old
enough to... if not have learned analog SP properly, then at least have
learned from people who knew analog SP well. And yet, I'm young enough to
have maths-based DSP as part of my basic college/university training. I
have had many discussions with people older than me who insisted on
applying "analog" mind models to analysis where I was content with some
mathematical operation.
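A toy version of the eigendecomposition Rune describes, done by hand on a 2x2 symmetric matrix (the covariance matrix is made up; real subspace methods use estimated data covariances):

```python
import math

# Eigendecompose a toy 2x2 covariance matrix: the eigenvectors form a valid
# orthogonal basis for the data even though they come from the data itself,
# not from any differential equation.
R = [[2.0, 1.0], [1.0, 2.0]]  # illustrative covariance matrix

tr = R[0][0] + R[1][1]
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
disc = math.sqrt(tr * tr / 4 - det)
l1, l2 = tr / 2 + disc, tr / 2 - disc  # eigenvalues (characteristic roots)

v1 = (R[0][1], l1 - R[0][0])  # eigenvector for l1 (unnormalized)
v2 = (R[0][1], l2 - R[0][0])  # eigenvector for l2

# Check the eigenrelation R v1 = l1 v1, and orthogonality of v1, v2
Rv1 = (R[0][0] * v1[0] + R[0][1] * v1[1],
       R[1][0] * v1[0] + R[1][1] * v1[1])
dot = v1[0] * v2[0] + v1[1] * v2[1]
```

For a symmetric (covariance) matrix the eigenvectors are orthogonal, so the decomposition is a complete basis in exactly the vector-space sense Rune invokes, with no analytical expression attached to the basis vectors.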
> The "relation" should not simply be that the same mathematical tools are
> used in both cases.  There needs to be more of a connection than that.
Somehow, I think that may be all there is to it. Fourier methods have long
traditions based in the past of analog systems. The "analog" or "physical"
concepts and mindsets that are due to the DEs of physics carry over to the
maths of continuous functions as well as discrete sequences. However, a
purely mathematical "world", as inside the computer, provides a larger
degree of freedom than physical reality, so in the mathematical setting
our intuition based on physics may impose artificial limits in one way or
another.

People *prefer* Fourier transforms due to ease of use and intuition.
But where arbitrary functions and sequences are concerned, Fourier
transforms are not the only *possible* transforms.
> I'm obviously missing something here.
Rune
Fred Marshall wrote:
>
...
>
> Now, since Glen introduced it, assume that f(0)=f(T)=0.  What does this do?
> I can't tell that it does anything in particular.
> I can see if we say something different:
> for *all* f(t) such that f(0)=f(T)=0 then:
> f(t)=sum over n [bn*sin(n*pi*t/T) + an*cos(n*pi*t/T)]
> an=0 for n even
> and
> (sum over n of an for n odd)=0
> which may be interesting but I'm not tuned in....
> At least this isn't what I'd call "all components must be sine" because
> there are functions with nonzero elements that are cosine.  Perhaps that
> wasn't what was meant.  Still doesn't get my attention.
There could be cosine terms that cancel at f(0) and f(T). If not, if
cosine terms were really ruled out, half the sampling rate would suffice
to avoid aliasing. After all, half the unknowns would be known a priori!

...

Jerry
--
Engineering is the art of making what you want from things you can get.
"Jerry Avins" <jya@ieee.org> wrote in message
news:3F0CCC33.EE156A30@ieee.org...
> Fred Marshall wrote:
> > Now, since Glen introduced it, assume that f(0)=f(T)=0.  What does this
> > do?
> > I can't tell that it does anything in particular.
> > I can see if we say something different:
> > for *all* f(t) such that f(0)=f(T)=0 then:
> > f(t)=sum over n [bn*sin(n*pi*t/T) + an*cos(n*pi*t/T)]
> > an=0 for n even
> > and
> > (sum over n of an for n odd)=0
> > which may be interesting but I'm not tuned in....
> > At least this isn't what I'd call "all components must be sine" because
> > there are functions with nonzero elements that are cosine.  Perhaps that
> > wasn't what was meant.  Still doesn't get my attention.
>
> There could be cosine terms that cancel at f(0) and f(T). If not, if
> cosine terms were really ruled out, half the sampling rate would suffice
> to avoid aliasing. After all, half the unknowns would be known a priori!
It is a convenience of setting one of the boundary conditions at t=0. If
you use f(1)=f(T+1)=0, the same interval but shifted in time, then both
sin() and cos() terms will be required. Another common case is
f'(0)=f'(T)=0, where only cos() terms appear.

I was just remembering a demonstration of modes on a coax cable, with a
closed (shorted) or open-ended cable. At some point the demonstrator
realized that the results were coming out opposite from the way he was
expecting. So for the next lecture he came out with a current probe on
the oscilloscope, so that it would agree with the equation as written.

-- glen
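Glen's point about boundary conditions can be checked numerically. Here is a quick sketch (the test function and term counts are mine, just for illustration): a function with f(0) = f(T) = 0 really is captured by a half-range sine series alone, with no cosine terms needed.

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 401)
dt = t[1] - t[0]
f = t * (T - t)                       # satisfies f(0) = f(T) = 0

def sine_coeff(n):
    # b_n = (2/T) * integral_0^T f(t) sin(n*pi*t/T) dt, by a simple sum
    return (2.0 / T) * np.sum(f * np.sin(n * np.pi * t / T)) * dt

# Reconstruct from sine terms only; their periods are 2T/n
recon = sum(sine_coeff(n) * np.sin(n * np.pi * t / T) for n in range(1, 30))
err = np.max(np.abs(recon - f))
print(err)   # small: sines alone suffice under these boundary conditions
```

Shifting the interval, as Glen says, would force cosine terms back in, since the sines would no longer vanish at the endpoints.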
Hello Rune,
comments interspersed below:


"Rune Allnor" <allnor@tele.ntnu.no> wrote in message
news:f56893ae.0307091706.4a951d32@posting.google.com...
> "Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message
news:<hVXOa.2504$Jk5.1579249@feed2.centurytel.net>...
> [snip]
> > Very nice treatment of a lot of good stuff
>
> Thanks.
>
> > However, in the context of the discussion, we were talking about arbitrary
> > time-limited signals and not systems - if that distinction matters.  So,
> > even though the exponential form may be handy, and indeed it is, I was
> > having trouble making the connection between Fourier Series and arbitrary
> > time-limited functions.  Fourier Transform, no trouble.  Fourier Series,
> > yes.
> >
> > I hasten to add that I have not been limiting the discussion to discrete
> > representations because that wasn't the crux of the issue - as I view how
> > one might parse the problem and help illuminate an answer.
> >
> > I will post in response to your next one that gets into this more.....
> > Don't know what that's going to look like yet!
> >
> > Please yes disagree with me.  I'm doing this to learn as much as to help.
> > If I acquiesce then I'm not going to learn anything.
> >
> > Here is the question:
> > Given an arbitrary time-limited function, how do you relate this function to
> > the solutions to differential equations *as such* (in a meaningful way)
> > without applying some simplifications such as sampling, assumptions of
> > periodicity, etc?
>
> Eh... I like to think of these matters as two different questions.
>
> The first is "is it possible to express an arbitrary function in terms
> of an arbitrary set of basis functions in a meaningful way?" and the
> answer is "yes".  Most people know this from linear algebra, as a theorem
> regarding basis shift matrices.  Any discrete sequence of finite length
> (vector) can be expressed in any basis that is complete, i.e. that the
> collection of basis vectors, the basis matrix, is of full rank.  The
> basis vectors need not be orthogonal, it suffices that they are linearly
> independent.  Similar arguments apply to discrete sequences of infinite
> length, and to both finite and infinite continuous signals.  That's what
> Real Analysis and Hilbert space theory is all about.
>
> Now, I find it very hard to imagine that people would start expanding
> functions or sequences into all sorts of bases just for fun.  There is
> usually some sort of purpose to the exercise.  As Glen mentioned,
> orthogonal bases are easier to handle than merely linearly independent
> ones, so let's minimize drudgery and start with orthogonal bases.
As mentioned here, an orthogonal basis allows each coefficient in an
expansion to be found independently of the others.
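A small numerical sketch of that independence (the random basis and vector are mine, just for illustration): with an orthonormal basis, each coefficient is a single inner product, computed with no reference to the other basis vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8

# A random orthonormal basis: the columns of Q from a QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))

x = rng.standard_normal(N)

# Each coefficient is one inner product, found independently of the rest
coeffs = np.array([Q[:, n] @ x for n in range(N)])

# The expansion reconstructs x exactly
recon = sum(coeffs[n] * Q[:, n] for n in range(N))
print(np.allclose(recon, x))
```

With a merely linearly independent basis you would instead have to solve a full linear system for all the coefficients at once.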
>
> Take, for instance, the naive basis that consists of N-length vectors,
> e_n, n=1,...,N, such that all coefficients in e_n are 0, except for the
> n'th which is 1.  The basis matrix is the identity matrix.  Easy to handle,
> isn't it?  A sequence x(n) is easily reconstructed as
>
>         N
> x(n) = sum x_n*e_n          [1]
>        n=1
>
The DFT expressed as a matrix works out to be orthogonal (unitary, since
it is complex), so its inverse just uses the conjugate transpose of the
matrix followed by a simple scaling. If a factor of 1/sqrt(N) is put into
the matrix, then it becomes unitary, which is really convenient from a
linear algebra point of view.
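This is easy to verify directly. A quick sketch (the size N = 16 is arbitrary):

```python
import numpy as np

N = 16
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)   # unscaled DFT matrix

# Columns are orthogonal: F^H F = N * I, so inverse = conj(F).T / N
orth = np.allclose(F.conj().T @ F, N * np.eye(N))

# Scaling by 1/sqrt(N) makes it unitary
U = F / np.sqrt(N)
unit = np.allclose(U.conj().T @ U, np.eye(N))

# And the matrix agrees with the FFT
x = np.random.default_rng(1).standard_normal(N)
fft_ok = np.allclose(F @ x, np.fft.fft(x))
print(orth, unit, fft_ok)
```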
> where x_n is the n'th sample in x(n).  But why would anyone use this
> basis?  What does this representation of the sequence tell you that
> the raw sequence does not?  I can't see any benefit from using the
> formulation [1].
In the case of differential equations, the total solution is usually
written as a homogeneous part plus a particular part. The particular
part, if written in terms of the homogeneous solutions, allows for
physical insight. Remember doing MUC (the Method of Undetermined
Coefficients)? The theory of Green's functions, for instance, turns a
differential equation into an integral equation. And if the DiffEQ is a
Sturm-Liouville problem with appropriate boundary conditions, the set of
basis functions will be both orthogonal and complete. The Green's
function in this case is simply generated from this set. The advantage
here is that the solution is in terms of physical solutions.
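Here is a sketch of that eigenfunction idea on the simplest Sturm-Liouville problem I could pick (the problem and grid sizes are my choices, not from the thread): -u'' = f on [0, 1] with u(0) = u(1) = 0. The eigenfunctions sin(n*pi*x) are orthogonal and complete, and applying the Green's function mode by mode just divides each coefficient by its eigenvalue (n*pi)^2.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 401)
dx = x[1] - x[0]
f = np.ones_like(x)                   # constant forcing; exact u = x(1-x)/2

u = np.zeros_like(x)
for n in range(1, 200):
    phi = np.sin(n * np.pi * x)       # eigenfunction, eigenvalue (n*pi)^2
    fn = 2.0 * np.sum(f * phi) * dx   # coefficient of f in the sine basis
    u += (fn / (n * np.pi) ** 2) * phi

err = np.max(np.abs(u - x * (1.0 - x) / 2.0))
print(err)   # small: the eigenfunction sum matches the closed form
```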
>
> OK, so such claims that "some bases are more useful than others" appear
> to make sense.  Which brings to attention the second question, "which is
> the more useful basis?"  To evaluate usefulness we need a purpose or use.
> Let's look at history and see if we can find clues to why the Fourier basis
> and differential equations (DEs) are so closely linked.
Fourier was simply trying a new way to solve Laplace's heat flow
equation. And in the case where equilibrium was achieved, the
differential equation is a very simple 2nd-order one, so the basis
functions are the sines and cosines.
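Fourier's setting makes the "why sines" point concrete. A sketch (the constants are made up for illustration): for u_t = k*u_xx on [0, L] with the ends held at zero, each sine mode evolves independently, simply decaying as exp(-k*(n*pi/L)^2 * t).

```python
import numpy as np

L, k, t_end = 1.0, 0.1, 0.5
x = np.linspace(0.0, L, 201)
dx = x[1] - x[0]
u0 = np.sin(np.pi * x / L) + 0.5 * np.sin(3 * np.pi * x / L)

u = np.zeros_like(x)
for n in (1, 3):
    phi = np.sin(n * np.pi * x / L)
    bn = (2.0 / L) * np.sum(u0 * phi) * dx          # project onto mode n
    u += bn * np.exp(-k * (n * np.pi / L) ** 2 * t_end) * phi

# Mode 3 decays nine times faster than mode 1
expected = (np.exp(-k * np.pi ** 2 * t_end) * np.sin(np.pi * x / L)
            + 0.5 * np.exp(-k * 9 * np.pi ** 2 * t_end) * np.sin(3 * np.pi * x / L))
print(np.max(np.abs(u - expected)))
```

No other basis diagonalizes that equation so cleanly, which is the historical reason sines and cosines won.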
>
> Until the last couple of decades, "Signal Processing" meant "Analog
> Signal Processing".  The basic tools of the trade were the resistor,
> capacitor and inductor (I know, the tubes and diodes and transistors,
> etc... please keep those items out of the picture for now).  Their
> behaviour is described in terms of differential equations, so the
> properties of the Fourier transform as solutions to DEs are used out
> of necessity.  In the '60s and '70s, when Digital Signal Processing
> emerged as a field, it appears that people transferred their "traditional"
> ways of thinking from the analog domain, governed by physics (DEs),
> to the new digital domain, where physics plays a less important role.
>
> Physics is out of the picture already with the introduction of FIR
> filters.  The Finite Impulse Response filter, with its finite time
> duration, linear phase and symmetrical impulse response, is a purely
> mathematical construct.
Actually, reflections off multiple surfaces are well described by FIR
methods. The simple case is Bragg diffraction in crystals. But wait,
there's more: volume holograms are 3-dimensional FIR filters. Not only
are they frequency selective (that's why you can see the image in
color), they are also direction selective. A simple holographic mirror,
viewed in cross section, is a series of equally spaced Bragg planes
whose extent covers the thickness of the film's emulsion. Of course they
offer some tricks not easily had in DSP. A standard hologram works by
modulating the amplitude, just as the taps all have scale factors. If
you bleach the hologram, it will instead modulate the phase of the
signal. Imagine a signal that speeds up and slows down over and over as
it traverses the filter. The advantage is conservation of energy. The
disadvantage, as with angle modulation, is harmonic generation, which
results in veiling glare.
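The Bragg analogy can be put in FIR terms with a toy model (my sketch, with made-up numbers): M equally spaced, equally weighted reflections act like an FIR filter with M unit taps, whose frequency response is the sharply selective Dirichlet kernel.

```python
import numpy as np

M = 8                                  # number of "Bragg planes" / taps
n = np.arange(M)

def H(w):
    # Frequency response of M unit taps at normalized frequency w
    return np.sum(np.exp(-1j * w * n))

peak = abs(H(0.0))            # all M reflections add in phase: response = M
null = abs(H(2 * np.pi / M))  # M-th roots of unity cancel: response ~ 0
print(peak, null)
```

More planes (a thicker emulsion) sharpen the peak, which is exactly the wavelength selectivity Clay describes.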
> It can't be realized in terms of RLC networks,
> so there is nothing present that strictly demands that the DE-based
> Fourier basis must or even should be used for analysis.  In fact, wavelets
> are a perfectly acceptable basis for representing FIR filters (or any other
> filter, for that matter) that is not linked with physical DEs.  However,
> wavelets and Fourier series represent the data in different ways and
> thus serve different purposes.  Fourier analysis provides all the
> mental hooks (frequency, amplitude, phase, pass band, stop band, ...)
> that let the analyst design a filter that does a job that makes
> "physical sense".
This is a good point. It is very easy for us to understand the answer in
terms of these functions. However, there is some evidence suggesting
that our seeing and hearing are better modelled by wavelets, so this may
be an artefact of prior training. But another viewpoint is that the
Fourier transform allows us to analyze something from two opposite
viewpoints: we can think about something in time or in frequency. Yes,
our "uncertainty thread" lives on.
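For the wavelet side, a minimal sketch (the signal values are arbitrary): one level of the Haar transform splits a sequence into pairwise averages ("coarse") and differences ("detail"). Unlike a Fourier coefficient, each detail coefficient refers to a specific time location, yet the transform is still an orthogonal change of basis.

```python
import numpy as np

x = np.array([4.0, 6.0, 10.0, 12.0, 12.0, 12.0, 0.0, 2.0])

# One Haar level: scaled pairwise sums and differences
avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # coarse, "low frequency"
det = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail, localized in time

# Orthogonal: energy is preserved and the transform inverts exactly
energy_ok = np.allclose(np.sum(avg**2) + np.sum(det**2), np.sum(x**2))
xr = np.empty_like(x)
xr[0::2] = (avg + det) / np.sqrt(2.0)
xr[1::2] = (avg - det) / np.sqrt(2.0)
print(energy_ok, np.allclose(xr, x))
```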
>
> Another example is the subspace representations I have worked with.
> The data are represented in terms of a covariance matrix, but instead
> of computing a Fourier-based periodogram, I perform an eigendecomposition
> that represents the data as a complete basis (the eigenvectors) and their
> coefficients (the eigenvalues) that bears no resemblance whatsoever
> to any differential equations.  However, it's a perfectly valid vector
> space representation of the data; mathematically it's no different from
> the periodogram, only less intuitive.
This has been used as a basis for data compression. And as you said before, the best representation depends on the use.
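The compression use is easy to demonstrate with synthetic data (everything here is made up for illustration): when the data really live near a low-dimensional subspace, keeping only the dominant eigenvectors of the covariance matrix captures almost everything.

```python
import numpy as np

rng = np.random.default_rng(0)

# 500 samples in R^5 that actually live near a 2-D subspace, plus noise
latent = rng.standard_normal((500, 2))
mix = rng.standard_normal((2, 5))
data = latent @ mix + 0.01 * rng.standard_normal((500, 5))

cov = np.cov(data, rowvar=False)
evals, evecs = np.linalg.eigh(cov)       # eigenvalues in ascending order

# Keep only the two dominant eigenvectors: 2 coefficients per sample
top = evecs[:, -2:]
recon = (data @ top) @ top.T
rel_err = np.linalg.norm(recon - data) / np.linalg.norm(data)
print(rel_err)   # small: the subspace basis compresses 5 numbers to 2
```

Nothing in this basis came from a differential equation; it came from the data.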
>
> As a matter of fact, I believe intuition is a main factor here.  I am
> old enough to... if not have learned analog SP properly, so at least
> have learned from people who knew analog SP well.  And yet, I'm young enough
> to have maths-based DSP as part of my basic college/university training.
I agree with the intuition part. We tend to understand new things in terms of what we already know.
> I have had many discussions with people older than me who insisted
> on applying "analog" mental models to analysis where I was content with
> some mathematical operation.
I'm currently working on the opposite problem. I have a physics problem where I have a numerical solution and I'm trying to work out the solution in terms of physically understandable things. The complete physical model seems to escape intuition for now.
>
> > The "relation" should not simply be that the same mathematical tools are
> > used in both cases.  There needs to be more of a connection than that.
Well, the deep part of the math theory allows for a connection. It's
interesting that a great many physical processes are well described by
2nd-order differential equations. Earlier in this thread (or maybe
another) mention was made of the Wronskian. For 2nd-order systems that
don't have a 1st-derivative term, the Wronskian is always constant. This
is easily shown and is implicit in Abel's formula for the Wronskian. I
remember novel solutions to Schrodinger problems that only used the
Wronskian. But the Wronskian makes for a neat connection between
functions and differential equations. If the Wronskian for a set of
functions is nonzero over an interval, then that set of functions
satisfies a differential equation on that interval. This may be the
connection you and Fred are looking for.
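The constant-Wronskian fact is quick to check numerically. A sketch with the obvious pair: sin and cos solve y'' + y = 0, which has no first-derivative term, so by Abel's formula (W' = -p(t) W with p = 0) their Wronskian is constant.

```python
import numpy as np

t = np.linspace(0.0, 10.0, 200)
f, fp = np.sin(t), np.cos(t)      # f and its derivative f'
g, gp = np.cos(t), -np.sin(t)     # g and its derivative g'

# Wronskian W(f, g) = f*g' - f'*g; here it should be the constant -1
W = f * gp - fp * g
print(np.allclose(W, -1.0))
```

That it is nonzero everywhere is also the sign that sin and cos are genuinely independent solutions.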
>
> Somehow, I think that may be all there is to it.  Fourier methods have long
> traditions based in the past of analog systems.  The "analog" or "physical"
> concepts and mindsets that are due to the DEs of physics carry over to
> the maths of continuous functions as well as discrete sequences.  However,
> a purely mathematical "world", as inside the computer, provides a larger
> degree of freedom than physical reality, so in the mathematical setting
> our intuition based on physics may impose artificial limits in one way or
> another.
Well certainly the computer just works with numbers or symbols, and they don't have to be attached to reality. And this extra freedom should yield some neat things.
>
> People *prefer* Fourier transforms due to ease of use and intuition.
> But where arbitrary functions and sequences are concerned, Fourier
> transforms are not the only *possible* transforms.
There are certainly many transforms available. A large family fits into
the class of integral transforms, and we all learned to use the simplest
one in high school: namely, the logarithm.

Clay
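Clay's logarithm example is worth spelling out (numbers are arbitrary): the log maps multiplication in one domain to addition in the other, just as the Fourier transform maps convolution to multiplication.

```python
import math

a, b = 3.7, 12.5

# "Transform" (log), operate in the easier domain (add), "invert" (exp)
product_via_logs = math.exp(math.log(a) + math.log(b))
print(abs(product_via_logs - a * b) < 1e-9)   # the two products agree
```

That is the whole point of any transform: move to a domain where the operation you care about is cheap, then move back.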
>
> > I'm obviously missing something here.
>
> Rune