
Difference between real data and complex FFT

Started by brent April 29, 2011
On 4/29/2011 7:24 AM, brent wrote:
> I am working on a tutorial about IQ modulation and demodulation. I
> have been thinking about this topic for a long time and have begun
> putting together some stuff. Here is an interactive page (exclusively
> for comp.dsp to look at :-)
>
> This shows the difference between an FFT that is stuffed with real
> data and an FFT that is stuffed with complex data. This has been real
> eye opening for me to see that the Nyquist sampling rate is blown away
> with complex data :-)
>
> http://www.fourier-series.com/fun/IQdemod.html
I think it's very important to get the context down.

If I start with a real signal then all I can get are real samples. End
of story. Nyquist rests.

If I start with a real signal and stuff half of it into the complex
sample side of the input of an FFT, then the "array" is half as long and
looks to be complex. The number of samples remains the same. (There are
methods for doing this sensibly - i.e., to unwind the results.) Nyquist
rests.

If I start with a complex signal then I have to sample it according to
... who? ... oh yeah, Nyquist! That means that each part has to be
sampled that way - so one gets twice the samples because there are, if
you will, two "channels". Nyquist rests.

Try this little thought experiment: start with a real signal - one that
does not happen to be purely even or purely odd, just a general signal.
Sample it according to our hero !Nyquist! Compute the DFT. Note that
there are just as many DFT samples as before, but they are complex ...
so really there are now twice as many numbers as before. Hmmmm......
How can this be?

One way to look at it: the DFT computes amplitude (which spans the
original bandwidth) and phase (which also spans the original bandwidth).
So now we have twice as much "information" as we seemed to start with.
But that's only because we started with an "artificial" [don't scream,
Jerry!] "purely real" function. The imaginary part is still there, but
it's zero.

So, if you want, you can do a complex DFT with zeros plugged into the
imaginary part just to keep the sample count clear and let Nyquist rest,
eh?

Fred
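
To make Fred's bookkeeping concrete, here is a small numpy sketch (the
test signal and its length are arbitrary choices, not from the thread).
The N complex outputs of a DFT of N real samples are conjugate-symmetric,
so roughly half of them are redundant: no information is created, and
zero-stuffing the imaginary part changes nothing.

import numpy as np

N = 16
x = np.random.randn(N)           # a purely real test signal
X = np.fft.fft(x)                # N complex bins

# Conjugate symmetry: X[k] == conj(X[N-k]) for k = 1 .. N-1,
# so the bins above N/2 repeat what the lower bins already say.
k = np.arange(1, N)
print(np.allclose(X[k], np.conj(X[N - k])))    # True

# Same spectrum if we feed a complex FFT with zeros plugged into
# the imaginary part, as Fred suggests:
print(np.allclose(X, np.fft.fft(x + 0j)))      # True
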
On 04/29/2011 07:24 AM, brent wrote:
> I am working on a tutorial about IQ modulation and demodulation. I
> have been thinking about this topic for a long time and have begun
> putting together some stuff. Here is an interactive page (exclusively
> for comp.dsp to look at :-)
>
> This shows the difference between an FFT that is stuffed with real
> data and an FFT that is stuffed with complex data. This has been real
> eye opening for me to see that the Nyquist sampling rate is blown away
> with complex data :-)
>
> http://www.fourier-series.com/fun/IQdemod.html
In amongst all of the criticism of your misinformed statement about the
Nyquist rate, no one's mentioned that it's a nice looking app.

It's a nice looking app.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" was written for you.
See details at http://www.wescottdesign.com/actfes/actfes.html
steve <bungalow_steve@yahoo.com> wrote:

(snip, I wrote)
>> You have to be pretty careful when defining the sampling theorem
>> over a finite interval. That is usually ignored, and for a
>> sufficiently long interval, usually close enough.
>>
>> Otherwise, you need N >= 2BT in the limit as N and T go to infinity.

> don't understand what you are getting at, it is > 2BT for any size T,
> large or small
You need at least some other constraint. Periodic boundary conditions
will do it, though that seems to start long discussions here. Consider
the case of N = 1 and any B (except 0): you will find that you cannot
determine the original signal's value at any other time.
> practically speaking you're correct, because for real analog signals B
> is always infinite
Or with some other conditions. If, for example, you ask for a signal
which goes exactly through a given set of N sample points and is band
limited (with don't-care behavior outside the sample points), you find
that the solution is not unique. The Wikipedia page
Nyquist-Shannon_sampling_theorem explains it better than I can.

However, if you have an infinite number of sample points, non-uniformly
spaced, you can reconstruct the original, unless you are very unlucky in
your selection of points. Otherwise, non-uniform sampling of quantized
values increases the effective quantization noise as the sampling
becomes less uniform.

-- glen
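
A minimal sketch of glen's N = 1 case, with an arbitrary band limit B:
two different signals, both band-limited to B, that agree at the single
sample instant and nowhere else, so one sample pins down nothing.

import numpy as np

B = 10.0                           # band limit in Hz (arbitrary)
t = np.arange(-200, 201) * 1e-3    # time grid; t[200] == 0 exactly

x1 = np.ones_like(t)               # constant: bandwidth 0 <= B
x2 = np.cos(2 * np.pi * B * t)     # a tone exactly at B

i0 = 200                           # index of the lone sample at t = 0
print(x1[i0], x2[i0])              # both 1.0 at the sample point
print(np.max(np.abs(x1 - x2)))     # ~2.0: very different elsewhere
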
On Apr 30, 2:24 am, brent <buleg...@columbus.rr.com> wrote:
> I am working on a tutorial about IQ modulation and demodulation. I
> have been thinking about this topic for a long time and have begun
> putting together some stuff. Here is an interactive page (exclusively
> for comp.dsp to look at :-)
>
> This shows the difference between an FFT that is stuffed with real
> data and an FFT that is stuffed with complex data. This has been real
> eye opening for me to see that the Nyquist sampling rate is blown away
> with complex data :-)
>
> http://www.fourier-series.com/fun/IQdemod.html
Nice programming, bro. How is it done? JavaScript?

Hardy
On Apr 29, 4:22 pm, Tim Wescott <t...@seemywebsite.com> wrote:
> On 04/29/2011 07:24 AM, brent wrote:
>
> > I am working on a tutorial about IQ modulation and demodulation. I
> > have been thinking about this topic for a long time and have begun
> > putting together some stuff. Here is an interactive page (exclusively
> > for comp.dsp to look at :-)
> >
> > This shows the difference between an FFT that is stuffed with real
> > data and an FFT that is stuffed with complex data. This has been real
> > eye opening for me to see that the Nyquist sampling rate is blown away
> > with complex data :-)
> >
> > http://www.fourier-series.com/fun/IQdemod.html
>
> In amongst all of the criticism of your misinformed statement about the
> Nyquist rate, no one's mentioned that it's a nice looking app.
>
> It's a nice looking app.
>
> --
> Tim Wescott
> Wescott Design Services
> http://www.wescottdesign.com
>
> Do you need to implement control loops in software?
> "Applied Control Theory for Embedded Systems" was written for you.
> See details at http://www.wescottdesign.com/actfes/actfes.html
Thanks
On Apr 29, 6:59 pm, HardySpicer <gyansor...@gmail.com> wrote:
> On Apr 30, 2:24 am, brent <buleg...@columbus.rr.com> wrote:
>
> > I am working on a tutorial about IQ modulation and demodulation. I
> > have been thinking about this topic for a long time and have begun
> > putting together some stuff. Here is an interactive page (exclusively
> > for comp.dsp to look at :-)
> >
> > This shows the difference between an FFT that is stuffed with real
> > data and an FFT that is stuffed with complex data. This has been real
> > eye opening for me to see that the Nyquist sampling rate is blown away
> > with complex data :-)
> >
> > http://www.fourier-series.com/fun/IQdemod.html
>
> Nice programming, bro. How is it done? JavaScript?
>
> Hardy
It is done in Flash. I am getting a little worried that Flash might have
a short life from this point forward, with Apple not supporting it, but
I don't think there is a better platform for doing this stuff than
Flash.
If we change our wording of the Nyquist requirement from minimum samples
per cycle to its original form - I believe - of minimum sampling
frequency, then I wouldn't worry about the number of samples per cycle.
The sampling frequency stays the same whether you have one channel (I)
or a pair (I/Q) representing the same signal. The pair case gives info
on both sidebands.
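
A short numpy sketch of this point, with made-up values for the sampling
frequency and tone (8 ksamples/second, 3 kHz): at the same fs, an I/Q
pair resolves the sign of the frequency - both sidebands - while a
single real channel folds +f and -f on top of each other.

import numpy as np

fs, N = 8000.0, 256
n = np.arange(N)
f = 3000.0

iq_pos = np.exp(+2j * np.pi * f * n / fs)   # complex tone at +3 kHz
iq_neg = np.exp(-2j * np.pi * f * n / fs)   # complex tone at -3 kHz

bins = np.fft.fftfreq(N, 1 / fs)
print(bins[np.argmax(np.abs(np.fft.fft(iq_pos)))])   #  3000.0
print(bins[np.argmax(np.abs(np.fft.fft(iq_neg)))])   # -3000.0

# A single real channel cannot tell them apart: cos() puts equal
# peaks at +3000 and -3000 - two mirrored sidebands, same info.
real = np.cos(2 * np.pi * f * n / fs)
R = np.abs(np.fft.fft(real))
k = int(f * N / fs)                          # bin for +3 kHz
print(bins[k], R[k], bins[N - k], R[N - k])  # equal magnitudes
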

A post here mentioned a more general form of Nyquist (=> Shannon). This
is intuitive and interesting, and in fact it is a well-known design
methodology in my workplace, referred to as undersampling.

http://www.national.com/vcm/national3/en_US/products/data_conversion/files/Undersampling.pdf

kadhiem Ayob
 


On Apr 30, 6:47 am, "kaz" <kadhiem_ayob@n_o_s_p_a_m.yahoo.co.uk> wrote:

  ...

> http://www.national.com/vcm/national3/en_US/products/data_conversion/...
Sub-band sampling is well known and widely practiced. The claim (which I
also recently made) that two samples are needed during the time of the
highest frequency present implicitly assumes that the information
extends all the way down to DC. The more accurate statement is that for
information in a band B cycles/second wide, 2B samples/second are needed
to avoid aliasing and permit good reconstruction.

Widely practiced sampling procedures violate even that criterion. Not
every sampling application needs to provide information between the
actual sampling instants. Daily stock closings and the level of Lake
Champlain are examples.

Jerry
--
Engineering is the art of making what you want from things you can get.
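
A sketch of the sub-band idea with made-up numbers: a tone inside a
narrow band near 10 kHz, sampled at only 4.5 ksamples/second. That rate
violates 2 x (highest frequency) but respects 2 x (bandwidth) for this
band placement, so the tone lands at a predictable alias frequency with
nothing folding on top of it.

import numpy as np

fc = 10200.0     # a tone inside a band at roughly 10.0 - 10.4 kHz
fs = 4500.0      # far below 2 * fc, comfortably above 2 * bandwidth
N = 4500
n = np.arange(N)

x = np.cos(2 * np.pi * fc * n / fs)

X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, 1 / fs)
print(freqs[np.argmax(X)])   # 1200.0: the tone aliases cleanly to
                             # fc - 2*fs = 10200 - 9000 = 1200 Hz
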
On 4/30/2011 6:31 AM, Jerry Avins wrote:
> On Apr 30, 6:47 am, "kaz" <kadhiem_ayob@n_o_s_p_a_m.yahoo.co.uk> wrote:
>
> ...
>
>> http://www.national.com/vcm/national3/en_US/products/data_conversion/...
>
> Sub-band sampling is well known and widely practiced. The claim (which
> I also recently made) that two samples are needed during the time of
> the highest frequency present implicitly assumes that the information
> extends all the way down to DC. The more accurate statement is that for
> information in a band B cycles/second wide, 2B samples/second are
> needed to avoid aliasing and permit good reconstruction.
>
> Widely practiced sampling procedures violate even that criterion. Not
> every sampling application needs to provide information between the
> actual sampling instants. Daily stock closings and the level of Lake
> Champlain are examples.
>
> Jerry
> --
> Engineering is the art of making what you want from things you can get.
Jerry,

I think I understand what you mean in the 2nd paragraph - that while the
actual situation changes between samples, the sparser sampling is "good
enough" for your purposes? We don't have the lingo, mindset, etc. to
deal with that *here*.

A story I know you'll appreciate and one that I think we've discussed
before:

We have a wastewater sampling station getting 24-hour flow (the integral
of flow rate) and a multi-sample 24-hour aggregate for concentration
analysis. In the end, measured flow times measured concentration gives a
measure of pounds of BOD or TSS, etc. So, one might say that the 24-hour
measure is a reasonable average for that one day. But, this is only done
once a week and, as you know, there are diurnal variations as well as
day-to-day variations (that may have certain weekly periodicities). So,
one is motivated to say that the data is "undersampled" and might fret
over that just a bit, as I did.

I was looking at the data and was interested in the loading statistics.
That's when a little light went on: if we look at the distribution of
measured loading, we can see the character of the loading just like
looking at random noise or some signal. In this case, the temporal
alignment of the samples isn't even used. But we can say things like:
- "10% of the time the load is above capacity" .. and actually believe it.
- "the mean or modal loading is xxxx" .. and actually believe it.

I think all that's necessary is that there be enough samples to get a
reasonable distribution. Once that's done, adding samples doesn't help
much unless some time frame like 3 months or 6 months is compared to the
next; or you use a sliding window to generate a family of distributions,
etc.

Of course, if one measures on the lowest or the highest day of the week
then the distribution will be "skewed" and the conclusions reached from
it perhaps as well.

There must be a name for this, tying it back to sampling theory in
either signal processing or in statistics, but I don't know what it's
called. Obviously the objectives are a little or a lot different.

Fred
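
A sketch of Fred's distribution idea with fabricated weekly loads (none
of these numbers come from his plant; the capacity figure is invented
for illustration). The exceedance fraction is read straight off the
empirical distribution - the sample times are never used.

import numpy as np

rng = np.random.default_rng(0)
loads = rng.lognormal(mean=5.0, sigma=0.4, size=52)   # one year, weekly

capacity = 250.0
exceedance = np.mean(loads > capacity)   # fraction of samples over

print(f"above capacity {100 * exceedance:.0f}% of sampled weeks")
print(f"median load: {np.median(loads):.0f}")
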
Fred Marshall <fmarshallxremove_the_x@acm.org> wrote:
> On 4/30/2011 6:31 AM, Jerry Avins wrote:
(snip on sampling rate requirements)
>> Widely practiced sampling procedures violate even that criterion. Not
>> every sampling application needs to provide information between the
>> actual sampling instants. Daily stock closings and the level of Lake
>> Champlain are examples.
For stocks, one can hope that the changes have enough randomness that no high-amplitude components appear. If people systematically bought or sold just before the hour, I could see an unusual peak at f=1/hour. It seems that much of the economic meltdown came from assuming more randomness in the market than actually existed. (Though, as far as I know, not related to sampling.)
> I think I understand what you mean in the 2nd paragraph - that while
> the actual situation changes between samples, the sparser sampling is
> "good enough" for your purposes? We don't have the lingo, mindset,
> etc. to deal with that *here*.
> A story I know you'll appreciate and one that I think we've discussed
> before:
> We have a wastewater sampling station getting 24-hour flow (the
> integral of flow rate) and a multi-sample 24-hour aggregate for
> concentration analysis. In the end, measured flow times measured
> concentration gives a measure of pounds of BOD or TSS, etc. So, one
> might say that the 24-hour measure is a reasonable average for that one
> day. But, this is only done once a week and, as you know, there are
> diurnal variations as well as day-to-day variations (that may have
> certain weekly periodicities). So, one is motivated to say that the
> data is "undersampled" and might fret over that just a bit, as I did.
Yes, in this case it does seem that you could miss some important
frequency components. A recent discussion here brought up dithering the
sample positions, which might help in this case.

Reminds me of the discussion about New York and Super Bowl halftime
flush rates.
> I was looking at the data and was interested in the loading
> statistics. That's when a little light went on: if we look at the
> distribution of measured loading, we can see the character of the
> loading just like looking at random noise or some signal. In this
> case, the temporal alignment of the samples isn't even used. But we
> can say things like:
> - "10% of the time the load is above capacity" .. and actually believe it.
> - "the mean or modal loading is xxxx" .. and actually believe it.
It would seem that there would be some natural low-pass filtering in the
pipes, but maybe not enough. If you sample only at midnight, you miss
some daily peaks.
> I think all that's necessary is that there be enough samples to get a
> reasonable distribution. Once that's done, adding samples doesn't help
> much unless some time frame like 3 months or 6 months is compared to
> the next; or you use a sliding window to generate a family of
> distributions, etc.
> Of course, if one measures on the lowest or the highest day of the
> week then the distribution will be "skewed" and the conclusions
> reached from it perhaps as well.
> There must be a name for this, tying it back to sampling theory in
> either signal processing or in statistics, but I don't know what it's
> called. Obviously the objectives are a little or a lot different.
-- glen