DSPRelated.com
Forums

Relationship between z and Fourier transforms

Started by commsignal July 30, 2013
On Monday, August 5, 2013 7:46:32 PM UTC-4, glen herrmannsfeldt wrote:
> clay@claysturner.com wrote:
> 
> (snip, I wrote)
> 
> >> I don't have anything against those that use the DFT for
> >> non-periodic signals.  It can be done.  But don't deny that
> >> the transform does have those boundary conditions.
> 
> > I look at this in an even simpler way.  If you are trying to
> > represent a function that is not oscillatory by a sum of
> > oscillatory functions, then either your fit is not very good
> > or you are going to need a large number of them to force
> > the fit.  Either way the suggestion is to use a different
> > basis set since you may be trying to fit round pegs
> > into square holes.
> 
> Hmm.  In that case, you wouldn't use Tchebychev polynomials
> to approximate monotonic functions, but they are pretty
> good at doing that.
> 
> -- glen
The Chebyshev polys are good in that they let you apply the Weierstrass
theorem for approximation by polynomials, and their maxima are well known,
so if one uses a finite sum of Chebyshevs, then it is relatively easy to
bound the approximation error.  But some easy-to-define functions don't
lend themselves to nice polynomial approximation.  For example, the Runge
function y = 1/(1+x^2) is a classic pain in the butt to get a nice
approximation to.  You will get a lot of ringing in a finite
approximation.  This is the polynomial-approximation analog of the Gibbs
phenomenon.  This is all about matching the tool to the job.

Clay
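Clay's Runge-ringing claim is easy to see numerically. Here is an illustrative Python sketch (not from the thread; the degree and node choices are just for demonstration) comparing a degree-10 polynomial fit on equispaced points against one on Chebyshev nodes:

```python
import numpy as np

def runge(x):
    """The Runge function y = 1/(1+x^2) on [-5, 5]."""
    return 1.0 / (1.0 + x**2)

deg = 10
x_fine = np.linspace(-5, 5, 1001)

# Equispaced interpolation nodes: the fit "rings" badly near the edges.
x_eq = np.linspace(-5, 5, deg + 1)
p_eq = np.polynomial.Polynomial.fit(x_eq, runge(x_eq), deg)
err_eq = np.max(np.abs(p_eq(x_fine) - runge(x_fine)))

# Chebyshev nodes cluster toward the endpoints and tame the ringing.
k = np.arange(deg + 1)
x_ch = 5 * np.cos((2*k + 1) * np.pi / (2 * (deg + 1)))
p_ch = np.polynomial.Polynomial.fit(x_ch, runge(x_ch), deg)
err_ch = np.max(np.abs(p_ch(x_fine) - runge(x_fine)))

print(err_eq, err_ch)  # equispaced error is far larger
```

The equispaced fit oscillates wildly near the interval ends while the Chebyshev-node fit stays well behaved, which is exactly the "matching the tool to the job" point.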
boy, i dunno what it is, Clay, but yours'es and Bob Adams'es posts come 
out as one big long line of text.  some mail or usenet client ain't 
putting in the LF characters.


On 8/6/13 7:47 AM, clay@claysturner.com wrote:
> The Chebyshev polys are good in that they let you apply the Weierstrass
> theorem for approximation by polynomials, and their maxima are well
> known, so if one uses a finite sum of Chebyshevs, then it is relatively
> easy to bound the approximation error.  But some easy-to-define
> functions don't lend themselves to nice polynomial approximation.  For
> example, the Runge function y = 1/(1+x^2) is a classic pain in the butt
> to get a nice approximation to.  You will get a lot of ringing in a
> finite approximation.  This is the polynomial-approximation analog of
> the Gibbs phenomenon.
i would say that the Tchebyshev polynomials are exactly the mapping of a 
finite sum of sinusoids to a finite sum of powers (a.k.a. a polynomial). 
i would say that this *is* Gibbs, not just an approximation to it.  it's 
precisely Gibbs mapped through the Tchebyshev thingie back into the 
polynomial domain.

-- 

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
On Tue, 06 Aug 2013 11:29:53 -0700, robert bristow-johnson
<rbj@audioimagination.com> wrote:

>boy, i dunno what it is, Clay, but yours'es and Bob Adams'es posts come
>out as one big long line of text.  some mail or usenet client ain't
>putting in the LF characters.
They look fine here. Your reader doesn't believe in applying a window to the data.
>
>On 8/6/13 7:47 AM, clay@claysturner.com wrote:
>> The chebyshevs polys are good in that they let you apply the
>> Weierstrauss theorem for approximation by polynomials and that their
>> maxima are well known, so if one uses a finite sum of chebyshevs, then
>> it is relatively easy to bound the approximation error. But some easy
>> to define functions don't lend themselves to nice polynomial
>> approximation. For example the Runge function y=1/(1+x^2) is a classic
>> pain in the butt to get a nice approximation to. You will get a lot of
>> ringing in a finite approximation. This is the polynomial approximation
>> analog of Gibb's phenomenon.
>
>i would say that the Tchebyshev polynomials are exactly the mapping of a
>finite sum of sinusoids to a finite sum of powers (a.k.a. a polynomial).
>i would say that this *is* Gibbs, not just an approximation to it.
>it's precisely Gibbs mapped through the Tchebyshev thingie back into the
>polynomial domain.
>
>-- 
>
>r b-j                  rbj@audioimagination.com
>
>"Imagination is more important than knowledge."
Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
On Tuesday, August 6, 2013 2:29:53 PM UTC-4, robert bristow-johnson wrote:
> boy, i dunno what it is, Clay, but yours'es and Bob Adams'es posts come
> out as one big long line of text.  some mail or usenet client ain't
> putting in the LF characters.
> 
> On 8/6/13 7:47 AM, clay@claysturner.com wrote:
> > The chebyshevs polys are good in that they let you apply the
> > Weierstrauss theorem for approximation by polynomials and that their
> > maxima are well known, so if one uses a finite sum of chebyshevs, then
> > it is relatively easy to bound the approximation error. But some easy
> > to define functions don't lend themselves to nice polynomial
> > approximation. For example the Runge function y=1/(1+x^2) is a classic
> > pain in the butt to get a nice approximation to. You will get a lot of
> > ringing in a finite approximation. This is the polynomial
> > approximation analog of Gibb's phenomenon.
> 
> i would say that the Tchebyshev polynomials are exactly the mapping of a
> finite sum of sinusoids to a finite sum of powers (a.k.a. a polynomial).
> i would say that this *is* Gibbs, not just an approximation to it.
> it's precisely Gibbs mapped through the Tchebyshev thingie back into the
> polynomial domain.
> 
> -- 
> 
> r b-j                  rbj@audioimagination.com
> 
> "Imagination is more important than knowledge."
Gibbs is different from Runge in that Gibbs is defined in terms of
Fourier transforms; Runge comes from polynomial approximation.  Another
difference is that Gibbs results from trying to approximate a
non-uniformly continuous function as a finite sum of uniformly
continuous functions.  In the Runge case both the function and the
polynomials used to approximate it are uniformly continuous.  Yes, in
both cases we are operating on finite intervals, but there is a
difference.

A finite sum of sinusoids can't map to a finite sum of Chebyshevs, since
a sinusoid contains all odd powers (Maclaurin series) of theta, whereas
a Chebyshev is a polynomial with a finite highest power.  You will need
an infinity of Chebyshevs to sum to even one sinusoid.

Clay
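The identity underneath both posts is T_n(cos θ) = cos(nθ): Chebyshev polynomials turn harmonics of cos θ into powers of cos θ, not powers of θ itself, which is why a single sinusoid in θ needs infinitely many powers. A quick numerical check of the identity (illustrative sketch, not from the thread):

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 200)
n = 7

# T_n is the degree-n Chebyshev polynomial of the first kind.
Tn = np.polynomial.chebyshev.Chebyshev.basis(n)

# T_n(cos(theta)) should equal cos(n*theta) exactly (up to roundoff).
max_dev = np.max(np.abs(Tn(np.cos(theta)) - np.cos(n * theta)))
print(max_dev)  # tiny (floating-point roundoff only)
```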
Sorry to all for not reading each and every post here, as there are too
many. However, regarding the assumption of boundary conditions for DFT,
from whatever I have read above, isn't it more evident in the convolution
in time versus product in frequency property? 
A product in the frequency domain corresponds to linear convolution in
the continuous-time domain, but to circular convolution in the
discrete-time (DFT) domain.  That essentially tells us about the
inherent *assumed* periodicity of the underlying discrete-time signals
and why we must care about the boundary conditions.
With my limited reading of this discussion, I think that r-b-j is rightly
stressing the mathematical implications underlying this transformation
process.	 

_____________________________		
Posted through www.DSPRelated.com
On Tue, 06 Aug 2013 16:58:32 -0500, "commsignal" <58672@dsprelated>
wrote:

>Sorry to all for not reading each and every post here, as there are too
>many. However, regarding the assumption of boundary conditions for DFT,
>from whatever I have read above, isn't it more evident in the convolution
>in time versus product in frequency property?
>Linear convolution is required in continuous time domain, while circular
>convolution is required in discrete-time domain, although both are just
>products in their respective frequency domains. That essentially tells us
>about the inherent *assumed* periodicity in the underlying discrete-time
>signals and how we must care about the boundary conditions.
>With my limited reading of this discussion, I think that r-b-j is rightly
>stressing the mathematical implications underlying this transformation
>process.
The repetition of the spectrum in the frequency domain of the time
domain signal is a consequence of being sampled in the time domain and
makes no assumptions about the nature of the signal that was sampled.
The signal does not have to be periodic at all for the spectrum to be
replicated in the frequency domain.

That spectral replication is what makes multiplication in the time
domain circular convolution in the frequency domain.  The time-domain
signal does not need to be periodic over N or any other period.

There's over a decade of history in this argument, so you're coming in
a little late.  ;)
Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
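The product-of-DFTs point can be checked directly. An illustrative NumPy sketch (the signals are made up for demonstration) comparing the inverse DFT of a product of DFTs against an explicit modulo-N convolution sum:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# Multiplying the two DFTs and inverse-transforming...
y_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# ...matches the circular (modulo-N) convolution sum, not linear convolution.
y_circ = np.array([sum(x[m] * h[(n - m) % N] for m in range(N))
                   for n in range(N)])

print(np.max(np.abs(y_fft - y_circ)))  # agreement to roundoff
```

The same product does *not* reproduce the first N samples of the linear convolution `np.convolve(x, h)`; the difference is exactly the wrapped-around tail.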
alright, here is the Fourier Transform and its inverse


                  +inf
    X(f)  =   integral{ x(t)  e^(-j 2 pi f t) dt}
                  -inf


                  +inf
    x(t)  =   integral{ X(f)  e^(+j 2 pi f t) df}
                  -inf


and, using an uncommon scaling convention (that preserves energy), here 
is the DFT and iDFT


                      N-1
    X[k]  =  N^(-1/2) SUM{ x[n] e^(-j 2 pi kn/N) }
                      n=0


                      N-1
    x[n]  =  N^(-1/2) SUM{ X[k] e^(+j 2 pi kn/N) }
                      k=0



note the (undeniable) symmetry, from which the whole duality property 
derives.  -j and +j have equal claim to being sqrt(-1), so the 
symmetry is undeniable.

On 8/6/13 3:11 PM, Eric Jacobsen wrote:
> On Tue, 06 Aug 2013 16:58:32 -0500, "commsignal"<58672@dsprelated>
> wrote:
>
>> Sorry to all for not reading each and every post here, as there are too
>> many. However, regarding the assumption of boundary conditions for DFT,
>> from whatever I have read above, isn't it more evident in the convolution
>> in time versus product in frequency property?
Eric, please note the above question...
>> Linear convolution is required in continuous time domain, while circular
>> convolution is required in discrete-time domain, although both are just
>> products in their respective frequency domains. That essentially tells us
>> about the inherent *assumed* periodicity in the underlying discrete-time
>> signals and how we must care about the boundary conditions.
>> With my limited reading of this discussion, I think that r b-j is rightly
>> stressing the mathematical implications underlying this transformation
>> process.
i'm just the messenger.
>
> The repetition of the spectrum in the frequency domain of the time
> domain signal is a consequence of being sampled in the time domain and
> makes no assumptions about the nature of the signal that was sampled.
so, reversing the roles of time and frequency, is there no sampling in the frequency domain? what happens as a consequence to the time domain?
> The signal does not have to be periodic at all for the spectrum to be
> replicated in the frequency domain.
even if you multiply in the frequency domain (presumably by a 
non-constant), which is what comm was asking about?
>
> That spectral replication is what makes multiplication in the time
> domain circular convolution in the frequency domain.  The time-domain
> signal does not need to be periodic over N or any other period.
he or she (comm, do you have a real name? i ain't gonna presume gender.) asked about multiplication in the frequency domain. what does that mean in the time domain?
> There's over a decade of history in this argument, so you're coming in
> a little late.  ;)
just a little.  i just love burning things down.  (but they might not 
like it in the Malibu hills.)

-- 

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
commsignal <58672@dsprelated> wrote:

> Sorry to all for not reading each and every post here, as there are too
> many. However, regarding the assumption of boundary conditions for DFT,
> from whatever I have read above, isn't it more evident in the convolution
> in time versus product in frequency property?
One way is to see it as a specific case of the general series solution
method for differential equations.  The original equation has a complete
set of solutions; the boundary conditions restrict the solution set.

As another example, the Bessel functions are the solutions to Bessel's
equation, which comes up in many problems in cylindrical coordinates.
For example, the vibrating modes of circular drum heads are Bessel
functions (radial) and sinusoids (in the other coordinate, usually
theta).  So I see the boundary conditions as pretty fundamental to
the DFT.

Note also that for the DST and DCT one has choices over where the
boundaries are, either on or between sample points, and with the
boundary conditions of zero on the boundary (DST) or derivative zero
(DCT).  Well, some years ago I was trying to understand why some use
the DCT, that is, how one chooses.  The explanation in "Numerical
Recipes" explained the difference in boundary conditions.  They have a
pretty good explanation of the different transforms.
> Linear convolution is required in continuous time domain, while circular
> convolution is required in discrete-time domain, although both are just
> products in their respective frequency domains. That essentially tells us
> about the inherent *assumed* periodicity in the underlying discrete-time
> signals and how we must care about the boundary conditions.
> With my limited reading of this discussion, I think that r-b-j > is rightly stressing the mathematical implications underlying > this transformation process.
-- glen
Both Eric's and r-b-j's responses above are very very interesting. Now I
can see why you guys are caught in an infinite loop regarding this
discussion. I can think of this concept as an object in an unstable
equilibrium, like a ball on the top of the hill. With a hint of wind on
either side, it just rolls down either side and never comes back. I guess
the two positions taken over this concept must be the same. 

So, are there any 'conversions' over this topic in the past decade or so?
If yes, what argument convinced him/her?

r-b-j, I'm a he, and the reason I don't use my real name is just to avoid
labelling the kind of stupidity I ask in questions to my real self.

Finally, for the 'other' group, even practically speaking, we admit this
inherent periodicity in doing the DFT in, say, OFDM or multicarrier
systems, when we repeat the last part of the sequence in the
*time domain* (the cyclic prefix) before taking the DFT, to make it
*look* periodic. What does that teach us? What is that DFT assuming
about that time-domain signal?

	 

>r-b-j, I'm a he, and the reason I don't use my real name is just to avoid
>labelling the kind of stupidity I ask in questions to my real self.
Another reason is that I've seen some humiliating comments (e.g., by
Vladimir) on questions posted here, and many people I know visit this
website. Using my real name then becomes embarrassing in front of them.