I understand that providing real inputs to a DFT yields symmetry about the N/2 bin, where N = number of samples. I am now working with a system that will be providing complex samples to a DFT. Some of the documentation that I have read indicates that this symmetry no longer exists, but it does not go into any further explanation. Can anyone provide any insight into why this is the case? Does the lack of symmetry mean that the highest frequency component contained in the DFT output is the sample rate itself? Also, does the same bin resolution that applies to real input apply to complex input (e.g. sample rate = 2000 Hz, number of samples = 512, so bin 1 = 1*2000/512 Hz, bin 2 = 2*2000/512 Hz, bin 3 = 3*2000/512 Hz, etc.)?

Any help understanding this would be greatly appreciated.

Thanks,
Doug Allen

This message was sent using the Comp.DSP web interface on www.DSPRelated.com
DFT Symmetry with Complex Input
Started by ●February 16, 2005
Reply by ●February 16, 2005
allendm2001 wrote:
> I understand that providing real inputs to a DFT yields symmetry about
> the N/2 bin where N = number of samples. I am now working with a system
> that will be providing complex samples to a DFT. Some of the
> documentation that I have read indicates that this symmetry no longer
> exists, but it does not go into any further explanation. Can anyone
> provide any insight into why this is the case?

If you understand the Hartley transform, you might understand the answer to this question. For the DFT (or any exponential FT) a real even function generates cos() terms at symmetric positive and negative frequencies, or, as you say, symmetric around N/2. Real odd functions generate sin() terms, imaginary and antisymmetric around N/2. Imaginary even functions generate symmetric imaginary terms, and imaginary odd functions generate antisymmetric real terms.

The symmetries of exponential transforms:

  even real       <-->  real symmetric
  odd real        <-->  imaginary antisymmetric
  even imaginary  <-->  imaginary symmetric
  odd imaginary   <-->  real antisymmetric

(If you consider the terms above N/2 as negative frequencies by subtracting N, symmetric means even, and antisymmetric means odd.)

As Fourier transforms are linear, the transform of a sum is equal to the sum of transforms, and any function can be considered a sum of real and imaginary, or of even and odd, functions, so you can then figure out the symmetry of any function.

-- glen
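[Editorial note: glen's symmetry rules are easy to check numerically. A minimal sketch, assuming NumPy; this code is an illustration added here, not part of the original post. A random real input gives a conjugate-symmetric spectrum (X[k] = conj(X[N-k])), while a random complex input does not.]

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)

# Real input: conjugate symmetry about N/2, i.e. X[k] == conj(X[N-k])
# for k = 1 .. N-1.
x_real = rng.standard_normal(N)
X = np.fft.fft(x_real)
assert np.allclose(X[1:], np.conj(X[:0:-1]))   # X[:0:-1] is X[N-1] .. X[1]

# Complex input: no such constraint in general.
x_cplx = rng.standard_normal(N) + 1j * rng.standard_normal(N)
Y = np.fft.fft(x_cplx)
assert not np.allclose(Y[1:], np.conj(Y[:0:-1]))
```

The asserts pass: a generic complex input has N independent complex spectral values, so all N bins carry distinct information.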
Reply by ●February 17, 2005
allendm2001 wrote:
> I understand that providing real inputs to a DFT yields symmetry about
> the N/2 bin where N = number of samples. I am now working with a system
> that will be providing complex samples to a DFT. Some of the
> documentation that I have read indicates that this symmetry no longer
> exists, but it does not go into any further explanation. Can anyone
> provide any insight into why this is the case? Does the lack of
> symmetry mean that the highest frequency component contained in the
> DFT output is the sample rate itself? Also, does the same bin
> resolution that applies to real input apply to complex input (e.g.
> sample rate = 2000 Hz, number of samples = 512, so bin 1 = 1*2000/512,
> bin 2 = 2*2000/512, bin 3 = 3*2000/512, etc.)?
>
> Any help understanding this would be greatly appreciated.

Start from Euler's equation,

  2 cos(wn) = exp(jwn) + exp(-jwn)        [1]

and the spectrum repetition property,

  X(w + n*2pi) = X(w).                    [2]

Equation [1] shows that any real-valued sinusoid is composed of two complex-valued sinusoids. These have the very convenient property that their imaginary components cancel each other. No such property holds for complex-valued signals. The key to answering your question is to understand that the Nyquist limit ensures that we do not confuse the two components.

While the basic form [1] of Euler's equation is easy enough, there are problems when you sample the signal, since a mirror of the negative-frequency sinusoid, centered at w = w_s, is introduced:

[Use a fixed-width font to view the ASCII art]

     :        |    |    ^    |    |        :
     :        |    |    |    |    |        :
 - - + - -----+----+----+----+----+----- - + - ->
   -w_s       B'   A'   0    A    B       w_s    w

Assume we have sampled a real-valued sinusoid with sampling frequency w_s. We find two spectral lines, A and B above, at frequencies related by

  w_B = w_s - w_A                         [3]

The problem here is that we do not know from the above information what the frequency of the sampled analog signal was. We usually assume that the signal is base-band, i.e. that w = w_A, that the line A' at -w_A is the "true" negative-frequency image, that this image was mirrored to B due to the sampling theorem, and that the negative-frequency component B' at -w_s + w_A is a purely "artificial" component.

But all that is solely due to convention, since base-band sampling is what most of us use. We are so "indoctrinated" with this way of thinking that we forget most of the time that there are other possibilities. It could equally well be that the analog frequency was w_s - w_A, so that the "true" negative-frequency component is the one at B', which in turn is repeated at A due to sampling, and that the component at A' is the "artificial" one. One has to require the Nyquist limit to be respected for the problem to have a unique solution.

Remove the negative-frequency component from the signal, and you will find that the only problem you will have to worry about is the repeated spectrum due to sampling. Which means you have the whole frequency band [0, w_s] to play with.

Rune
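[Editorial note: Rune's point that complex input gives you the whole band [0, w_s) to play with can be checked with the numbers from the original question (sample rate 2000 Hz, N = 512). A sketch assuming NumPy, added here for illustration: a complex tone at 1500 Hz, well above half the sample rate, is resolved unambiguously, and bin k still corresponds to k*fs/N Hz.]

```python
import numpy as np

fs, N = 2000.0, 512          # the numbers from the original question
n = np.arange(N)

# A complex tone at 1500 Hz: above fs/2, below fs. It has no
# negative-frequency twin, so there is nothing to alias against it.
f = 1500.0
x = np.exp(2j * np.pi * f * n / fs)

k = int(np.argmax(np.abs(np.fft.fft(x))))
print(k, k * fs / N)         # -> 384 1500.0
```

So the bin spacing fs/N is the same as for real input; what changes is that all N bins are independent, instead of the top half mirroring the bottom half.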
Reply by ●February 17, 2005
Rune Allnor wrote:

...

> Start from Euler's equation,
>
>   2 cos(wn) = exp(jwn) + exp(-jwn)        [1]

...

> Equation [1] shows that any real-valued sinusoid is composed of two
> complex-valued sinusoids. These have the very convenient property
> that their imaginary components cancel each other. No such property
> holds for complex-valued signals.

...

I snipped a cogent explanation in order to ride my hobby horse over a bit of semantics.

Real-valued sinusoids can certainly be decomposed into two complex-valued sinusoids, but they can be decomposed in other ways as well. It is a mistake to imply that any one decomposition reveals their nature. They are what they are, and we manipulate their representation to suit our convenience.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by ●February 19, 2005
Jerry Avins wrote:
> Rune Allnor wrote:
>
> ...
>
>> Start from Euler's equation,
>>
>>   2 cos(wn) = exp(jwn) + exp(-jwn)        [1]
>
> ...
>
>> Equation [1] shows that any real-valued sinusoid is composed of two
>> complex-valued sinusoids. These have the very convenient property
>> that their imaginary components cancel each other. No such property
>> holds for complex-valued signals.
>
> ...
>
> I snipped a cogent explanation in order to ride my hobby horse over a
> bit of semantics.
>
> Real-valued sinusoids can certainly be decomposed into two
> complex-valued sinusoids, but they can be decomposed in other ways as
> well. It is a mistake to imply that any one decomposition reveals
> their nature. They are what they are, and we manipulate their
> representation to suit our convenience.

Humm... I agree with you in your general comment about signal representations, but I think my first statement can be defended. The complex exponentials _are_ the basis functions of the Fourier transform; indeed, the whole point of applying the FT is to express the signal in terms of complex exponentials. So I stand my ground and maintain that there is no problem in writing as pointedly as I did. Having said that, it would perhaps have been better to write "any real-valued sinusoid _can_be_represented_...".

One of the very convenient uses of Euler's equations is that they very nicely explain why Nyquist's theorem doesn't apply to complex-valued signals.

Rune
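[Editorial note: the identity 2 cos(wn) = exp(jwn) + exp(-jwn) under discussion is easy to sanity-check numerically. A small NumPy sketch, added here for illustration; it is not part of the original exchange.]

```python
import numpy as np

w = 0.3
n = np.arange(16)

# Sum of the two complex exponentials from Euler's identity.
s = np.exp(1j * w * n) + np.exp(-1j * w * n)

# The imaginary parts cancel, leaving twice the real cosine.
assert np.allclose(s.imag, 0.0)
assert np.allclose(s.real, 2 * np.cos(w * n))
```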
Reply by ●February 19, 2005
Rune Allnor wrote:

...

> Humm... I agree with you in your general comment about signal
> representations, but I think my first statement can be defended.
> The complex exponentials _are_ the basis functions of the Fourier
> transform; indeed, the whole point of applying the FT is to express
> the signal in terms of complex exponentials. So I stand my ground
> and maintain that there is no problem in writing as pointedly as I did.

_The_ basis functions, or _one_possible_set_ of basis functions? Fourier himself used sines and cosines. So did Maxwell, Kelvin, and Michelson.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by ●February 20, 2005
Jerry Avins wrote:
> Rune Allnor wrote:
>
> ...
>
>> Humm... I agree with you in your general comment about signal
>> representations, but I think my first statement can be defended.
>> The complex exponentials _are_ the basis functions of the Fourier
>> transform; indeed, the whole point of applying the FT is to express
>> the signal in terms of complex exponentials. So I stand my ground
>> and maintain that there is no problem in writing as pointedly as I
>> did.
>
> _The_ basis functions, or _one_possible_set_ of basis functions?
> Fourier himself used sines and cosines. So did Maxwell, Kelvin, and
> Michelson.

The sine/cosine basis, or equivalently the complex exponential function (thanks to our friend Euler), are _the_ basis functions for the Fourier transform.

The Fourier transform is not the only possible transform, though; there are plenty of transforms that work the same way but use other basis functions. The Hankel transform uses some combination of Bessel functions. I know the Karhunen-Loeve transform (KLT) works that way. The main difference between the KLT and other transforms is that the KLT represents the signal autocorrelation matrix in terms of its own eigenvectors, and not predetermined vectors. Wavelet transforms work this way, and I *believe* (I may be wrong) the Hadamard transform is similar, too.

The general framework of the Fourier transform does not require the sine/cosine or complex exponential basis to be used. Any linearly independent basis will do, although orthogonal bases are preferred. One does sometimes speak of "generalized Fourier analysis" when other bases are used. However, the term "Fourier transform" is used only with the sine/cosine or the complex exponential bases. Which are equivalent, by Euler's equations.

Rune
Reply by ●February 20, 2005
Rune Allnor wrote:
> Jerry Avins wrote:
>
>> _The_ basis functions, or _one_possible_set_ of basis functions?
>> Fourier himself used sines and cosines. So did Maxwell, Kelvin, and
>> Michelson.
>
> The sine/cosine basis, or equivalently the complex exponential
> function (thanks to our friend Euler), are _the_ basis functions for
> the Fourier transform.
>
> The Fourier transform is not the only possible transform, though;
> there are plenty of transforms that work the same way but use other
> basis functions. The Hankel transform uses some combination of Bessel
> functions. I know the Karhunen-Loeve transform (KLT) works that way.
> The main difference between the KLT and other transforms is that the
> KLT represents the signal autocorrelation matrix in terms of its own
> eigenvectors, and not predetermined vectors. Wavelet transforms work
> this way, and I *believe* (I may be wrong) the Hadamard transform
> is similar, too.

The Fourier transform diagonalizes circulants, in the case of finite discrete time. Otherwise known as time invariance. None of the other Fourier-Bessel (or "orthonormal", to use some other common mathematical terminology) transforms do. Superposition is a common property of many transforms. The particular invariance known as time invariance leads to Fourier analysis. Hadamard has dyadic addition (exclusive-oring in time of extent 2^k), for example. The common orthonormal transforms are motivated by some invariance property, so that they achieve a diagonal representation for that invariance.

> The general framework of the Fourier transform does not require
> the sine/cosine or complex exponential basis to be used. Any linearly
> independent basis will do, although orthogonal bases are preferred.
> One does sometimes speak of "generalized Fourier analysis"
> when other bases are used.
>
> However, the term "Fourier transform" is used only with the sine/cosine
> or the complex exponential bases. Which are equivalent, by Euler's
> equations.
>
> Rune
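[Editorial note: the claim that the Fourier transform diagonalizes circulants can be verified directly. A NumPy sketch, added here for illustration: a circulant matrix built from a random first column is diagonalized by a similarity transform with the DFT matrix, and the resulting diagonal is exactly the DFT of that column.]

```python
import numpy as np

N = 6
rng = np.random.default_rng(1)
c = rng.standard_normal(N)

# Circulant matrix: C[i, j] = c[(i - j) mod N], i.e. circular
# convolution with c as the impulse response.
C = np.array([[c[(i - j) % N] for j in range(N)] for i in range(N)])

# DFT matrix: F[k, n] = exp(-2j*pi*k*n/N).
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)

# Similarity transform: F C F^-1 should be diagonal.
D = F @ C @ np.linalg.inv(F)
assert np.allclose(D - np.diag(np.diag(D)), 0.0)   # off-diagonal vanishes
assert np.allclose(np.diag(D), np.fft.fft(c))      # diagonal = DFT of c
```

The eigenvectors of any circulant are the complex exponentials, regardless of c, which is the linear-algebra face of "time invariance leads to Fourier analysis".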
Reply by ●February 22, 2005
Rune Allnor wrote:
(snip)
> The sine/cosine basis, or equivalently the complex exponential
> function (thanks to our friend Euler), are _the_ basis functions for
> the Fourier transform.

and are the appropriate basis functions for many popular problems. In rectangular coordinates in uniform media they are the appropriate basis functions, the solutions to the applicable partial differential equation. In cylindrical or spherical coordinates, or with non-uniform media (differential equations without constant coefficients), other basis functions are needed.

> The Fourier transform is not the only possible transform, though;
> there are plenty of transforms that work the same way but use other
> basis functions. The Hankel transform uses some combination of Bessel
> functions. I know the Karhunen-Loeve transform (KLT) works that way.
> The main difference between the KLT and other transforms is that the
> KLT represents the signal autocorrelation matrix in terms of its own
> eigenvectors, and not predetermined vectors. Wavelet transforms work
> this way, and I *believe* (I may be wrong) the Hadamard transform
> is similar, too.
>
> The general framework of the Fourier transform does not require
> the sine/cosine or complex exponential basis to be used. Any linearly
> independent basis will do, although orthogonal bases are preferred.
> One does sometimes speak of "generalized Fourier analysis"
> when other bases are used.

Sine, cos (and exp) are the "normal modes" for many linear systems. For systems where the restoring force is proportional to displacement (springs, air columns, stretched strings), and for RLC circuits, the response to a sin (or cos) is a sin with a possibly different amplitude and phase. The vibrational modes of a drum head require Bessel functions to describe. That is, the appropriate basis functions for the radial differential equation are Bessel functions. The modes for air in a cone (such as an oboe) are described in spherical coordinates, and so spherical Bessel functions are used.

> However, the term "Fourier transform" is used only with the sine/cosine
> or the complex exponential bases. Which are equivalent, by Euler's
> equations.

There are mixed systems, such as the Fourier-Bessel transform.

-- glen
Reply by ●February 22, 2005
Jerry Avins wrote:
> Rune Allnor wrote:
>
>> Humm... I agree with you in your general comment about signal
>> representations, but I think my first statement can be defended.
>> The complex exponentials _are_ the basis functions of the Fourier
>> transform; indeed, the whole point of applying the FT is to express
>> the signal in terms of complex exponentials. So I stand my ground
>> and maintain that there is no problem in writing as pointedly as I
>> did.
>
> _The_ basis functions, or _one_possible_set_ of basis functions?
> Fourier himself used sines and cosines. So did Maxwell, Kelvin, and
> Michelson.

For any differential equation with more than one solution, the solutions, or any linear combination of them, are appropriate basis functions. Orthogonal or orthonormal basis functions are often easier to use, but are not required by the math.

In many cases the basis functions of the physical universe are not the obvious ones. The neutrinos from many nuclear reactions turn out not to be the basis states for the neutrino, which are instead a mixture of electron neutrino, mu-neutrino, and tau-neutrino. The angle between the obvious and true basis functions, much like a phase shift, is called the "mixing angle".

-- glen






