Reply by brent May 3, 2011
On May 2, 11:18 pm, Rune Allnor <all...@tele.ntnu.no> wrote:
> On Apr 29, 4:24 pm, brent <buleg...@columbus.rr.com> wrote:
>> I am working on a tutorial about IQ modulation and demodulation. I
>> have been thinking about this topic for a long time and have begun
>> putting together some stuff. Here is an interactive page (exclusively
>> for comp.dsp to look at :-)
>>
>> This shows the difference between an FFT that is stuffed with real
>> data and an FFT that is stuffed with complex data. This has been real
>> eye opening for me to see that the Nyquist sampling rate is blown away
>> with complex data :-)
>
> Try looking at sampling of 2D real-valued data.
> That's where things get really complicated.
>
> Rune
You have given me something new to look at.
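A minimal NumPy sketch of the effect brent describes (my own construction,
not taken from his tutorial page): a real tone above fs/2 folds back onto a
lower bin, while a complex exponential occupies a distinct bin anywhere
within a full fs-wide band.

import numpy as np

fs = 100.0               # sample rate, Hz
t = np.arange(100) / fs  # exactly one second of samples

# A real 70 Hz cosine sampled at 100 Hz is indistinguishable from a
# 30 Hz cosine: cos(2*pi*70*n/100) == cos(2*pi*30*n/100) for integer n.
real_tone = np.cos(2 * np.pi * 70.0 * t)
freqs = np.fft.fftfreq(len(t), 1 / fs)   # bins from -50 to +49 Hz
print(abs(freqs[np.argmax(np.abs(np.fft.fft(real_tone)))]))  # 30.0, aliased

# A complex exponential at 70 Hz folds to -30 Hz, a *different* bin from
# +30 Hz: complex (I/Q) data spans a full fs-wide (100 Hz) band, twice
# the fs/2 usable with real samples.
complex_tone = np.exp(2j * np.pi * 70.0 * t)
print(freqs[np.argmax(np.abs(np.fft.fft(complex_tone)))])     # -30.0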
Reply by Rune Allnor May 3, 2011
On Apr 29, 4:24 pm, brent <buleg...@columbus.rr.com> wrote:
> I am working on a tutorial about IQ modulation and demodulation. I
> have been thinking about this topic for a long time and have begun
> putting together some stuff. Here is an interactive page (exclusively
> for comp.dsp to look at :-)
>
> This shows the difference between an FFT that is stuffed with real
> data and an FFT that is stuffed with complex data. This has been real
> eye opening for me to see that the Nyquist sampling rate is blown away
> with complex data :-)

Try looking at sampling of 2D real-valued data.
That's where things get really complicated.

Rune
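One aspect of the 2D case, sketched below in NumPy (my own illustration,
not Rune's): the spectrum of real-valued 2D data is Hermitian-symmetric,
F(-kx, -ky) = conj(F(kx, ky)), so only half of the 2D frequency plane
carries independent information, and the sampling lattice interacts with
that symmetry in ways the 1D case never shows.

import numpy as np

# A real-valued 2D "image": two plane waves plus a little noise.
rng = np.random.default_rng(0)
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
img = (np.cos(2 * np.pi * (5 * x / nx + 3 * y / ny))
       + np.cos(2 * np.pi * (10 * x / nx - 7 * y / ny))
       + 0.1 * rng.standard_normal((ny, nx)))

F = np.fft.fft2(img)

# Check Hermitian symmetry: index (-k) mod N along both axes equals
# the conjugate of index (k, l).
F_neg = np.roll(np.flip(F), shift=(1, 1), axis=(0, 1))  # F[(-k)%N, (-l)%M]
print(np.allclose(F_neg, np.conj(F)))                   # True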
Reply by glen herrmannsfeldt April 30, 2011
Jerry Avins <jya@ieee.org> wrote:

(really big snip)
>> There must be a name for this tying it back to sampling theory in either
>> signal processing or in statistics but I don't know what it's called.
>> Obviously the objectives are a little or a lot different.

> There are many examples, as close to classical signal processing as
> control systems. Many servos work better without an anti-alias filter,
> even when it is clear that some aliasing affects them. After all, the
> purpose of a servo is keeping a process within limits, not providing a
> record of its limited wanderings. A thesis subject, perhaps?
As with the others, it seems like that might require some randomness
to the input; there might be some inputs, maybe only intentionally
selected, that would cause undesirable results. It does seem that
there might be an interesting thesis there.

-- glen
Reply by Jerry Avins April 30, 2011
On Apr 30, 2:01 pm, Fred Marshall <fmarshallxremove_th...@acm.org>
wrote:
> On 4/30/2011 6:31 AM, Jerry Avins wrote:
>> On Apr 30, 6:47 am, "kaz" <kadhiem_ayob@n_o_s_p_a_m.yahoo.co.uk>
>> wrote:
>>
>>    ...
>>
>>> http://www.national.com/vcm/national3/en_US/products/data_conversion/...
>>
>> Sub-band sampling is well known and widely practiced. The claim (which
>> I also recently made) that two samples are needed during the time of
>> the highest frequency present implicitly assumes that there is
>> information all the way down to DC. The more accurate statement is that
>> for information in a band B cycles/second wide, 2B samples/second are
>> needed to avoid aliasing and permit good reconstruction.
>>
>> Widely practiced sampling procedures violate even that criterion. Not
>> every sampling application needs to provide information between the
>> actual sampling instants. Daily stock closings and the level of Lake
>> Champlain are examples.
>>
>> Jerry
>> --
>> Engineering is the art of making what you want from things you can get.
>
> Jerry,
>
> I think I understand what you mean in the 2nd paragraph - that while the
> actual situation changes between samples, the sparser sampling is "good
> enough" for your purposes? We don't have the lingo, mindset, etc. to
> deal with that *here*.
>
> A story I know you'll appreciate and one that I think we've discussed
> before:
>
> We have a wastewater sampling station getting 24-hour flow (the integral
> of flow rate) and a multi-sample 24-hour aggregate for concentration
> analysis. In the end, measured flow times measured concentration gives
> a measure of pounds of BOD or TSS, etc. So, one might say that the
> 24-hour measure is a reasonable average for that one day. But, this is
> only done once a week and, as you know, there are diurnal variations as
> well as day-to-day variations (that may have certain semanal
> periodicities). So, one is motivated to say that the data is
> "undersampled" and might fret over that just a bit as I did.
>
> I was looking at the data and was interested in the loading statistics.
> That's when a little light went on:
> If we look at the distribution of measured loading we can see the
> character of the loading just like looking at random noise or some
> signal. In this case, the temporal alignment of the samples isn't even
> used. But we can say things like:
> - "10% of the time the load is above capacity" .. and actually believe it.
> - "the mean or modal loading is xxxx" .. and actually believe it.
>
> I think all that's necessary is that there be enough samples to get a
> reasonable distribution. Once that's done, adding samples doesn't help
> much unless some time frame like 3 months or 6 months is compared to the
> next; or you use a sliding window to generate a family of distributions,
> etc.
>
> Of course, if one measures on the lowest or the highest day of the week
> then the distribution will be "skewed" and the conclusions reached from
> it perhaps as well.
>
> There must be a name for this tying it back to sampling theory in either
> signal processing or in statistics but I don't know what it's called.
> Obviously the objectives are a little or a lot different.
There are many examples, as close to classical signal processing as
control systems. Many servos work better without an anti-alias filter,
even when it is clear that some aliasing affects them. After all, the
purpose of a servo is keeping a process within limits, not providing a
record of its limited wanderings. A thesis subject, perhaps?

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by glen herrmannsfeldt April 30, 2011
Fred Marshall <fmarshallxremove_the_x@acm.org> wrote:

(snip, I wrote)
>> Yes, in this case it does seem that you could miss some important
>> frequency components. A recent discussion brought up the idea of
>> dither in sample position, which might help here.

>> Reminds me of the discussion about New York and superbowl half
>> time flush rates.
(big snip)
> Well, the "daily" measurements aren't snapshots but are averaged over 24
> hours. So the "normal" diurnal stuff should be accounted for as they're
> lowpassed. And, I don't care about "frequency components" so much.

> I'm coming to realize that "loading" is more important averaged over 3
> months time so there should be a fair amount of averaging done to the data.

> The samples aren't dithered but I had been thinking that the data
> dithers all by itself. But, I could be wrong in thinking that.
> Probably am... To the extent that there are real semanal periodicities
> (weekend variations) it would likely be better to sample at regular
> 6-day or 8-day intervals (instead of 7-day intervals) to average those
> out .. a variation on dithering that fits a workforce better.
I almost wrote 2pi days...
> Kinda funny.. a "beat" frequency between human residency habits in the
> general population and human work habits of the utility workers.
When I wrote the one about stocks, I was almost considering a resonance in stock prices. It would seem possible.
> I still don't know the words that relate the two ideas.
> There must be something.
-- glen
Reply by Fred Marshall April 30, 2011
On 4/30/2011 12:16 PM, glen herrmannsfeldt wrote:
> Fred Marshall<fmarshallxremove_the_x@acm.org> wrote:
>> On 4/30/2011 6:31 AM, Jerry Avins wrote:
>
> (snip on sampling rate requirements)
>
>>> Widely practiced sampling procedures violate even that criterion. Not
>>> every sampling application needs to provide information between the
>>> actual sampling instants. Daily stock closings and the level of Lake
>>> Champlain are examples.
>
> For stocks, one can hope that the changes have enough randomness
> that no high-amplitude components appear. If people systematically
> bought or sold just before the hour, I could see an unusual peak
> at f=1/hour.
>
> It seems that much of the economic meltdown came from assuming more
> randomness in the market than actually existed. (Though, as far
> as I know, not related to sampling.)
>
>> I think I understand what you mean in the 2nd paragraph - that while the
>> actual situation changes between samples, the sparser sampling is "good
>> enough" for your purposes? We don't have the lingo, mindset, etc. to
>> deal with that *here*.
>
>> A story I know you'll appreciate and one that I think we've discussed
>> before:
>
>> We have a wastewater sampling station getting 24-hour flow (the integral
>> of flow rate) and a multi-sample 24-hour aggregate for concentration
>> analysis. In the end, measured flow times measured concentration gives
>> a measure of pounds of BOD or TSS, etc. So, one might say that the
>> 24-hour measure is a reasonable average for that one day. But, this is
>> only done once a week and, as you know, there are diurnal variations as
>> well as day-to-day variations (that may have certain semanal
>> periodicities). So, one is motivated to say that the data is
>> "undersampled" and might fret over that just a bit as I did.
>
> Yes, in this case it does seem that you could miss some important
> frequency components. A recent discussion brought up the idea of
> dither in sample position, which might help here.
>
> Reminds me of the discussion about New York and superbowl half
> time flush rates.
>
>> I was looking at the data and was interested in the loading statistics.
>> That's when a little light went on:
>> If we look at the distribution of measured loading we can see the
>> character of the loading just like looking at random noise or some
>> signal. In this case, the temporal alignment of the samples isn't even
>> used. But we can say things like:
>> - "10% of the time the load is above capacity" .. and actually believe it.
>> - "the mean or modal loading is xxxx" .. and actually believe it.
>
> It would seem that there would be some natural low-pass filtering
> in the pipes, but maybe not enough. If you sample only at midnight,
> you miss some daily peaks.
>
>> I think all that's necessary is that there be enough samples to get a
>> reasonable distribution. Once that's done, adding samples doesn't help
>> much unless some time frame like 3 months or 6 months is compared to the
>> next; or you use a sliding window to generate a family of distributions,
>> etc.
>
>> Of course, if one measures on the lowest or the highest day of the week
>> then the distribution will be "skewed" and the conclusions reached from
>> it perhaps as well.
>
>> There must be a name for this tying it back to sampling theory in either
>> signal processing or in statistics but I don't know what it's called.
>> Obviously the objectives are a little or a lot different.
>
> -- glen
Well, the "daily" measurements aren't snapshots but are averaged over 24
hours. So the "normal" diurnal stuff should be accounted for as they're
lowpassed. And, I don't care about "frequency components" so much.

I'm coming to realize that "loading" is more important averaged over 3
months time so there should be a fair amount of averaging done to the data.

The samples aren't dithered but I had been thinking that the data
dithers all by itself. But, I could be wrong in thinking that.
Probably am... To the extent that there are real semanal periodicities
(weekend variations) it would likely be better to sample at regular
6-day or 8-day intervals (instead of 7-day intervals) to average those
out .. a variation on dithering that fits a workforce better.

Kinda funny.. a "beat" frequency between human residency habits in the
general population and human work habits of the utility workers.

I still don't know the words that relate the two ideas.
There must be something.

Fred
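A quick simulation of the 6-day-versus-7-day idea (a sketch with invented
numbers, not Fred's data): a daily load with a weekly component, sampled
every 7 days, always lands on the same weekday and biases the long-run
mean, while a 6-day schedule walks through the weekdays and averages the
pattern out.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily loading: long-run mean 100, a weekly (semanal)
# pattern of +/-20, plus day-to-day noise.
days = np.arange(3 * 364)                          # about three years
weekly = 20.0 * np.sin(2 * np.pi * days / 7 + 1.0)
load = 100.0 + weekly + 5.0 * rng.standard_normal(days.size)

every7 = load[::7]   # always the same weekday
every6 = load[::6]   # cycles through all weekdays every six weeks

print(f"true mean      : {load.mean():6.2f}")     # ~100
print(f"7-day sampling : {every7.mean():6.2f}")   # ~117, weekday bias
print(f"6-day sampling : {every6.mean():6.2f}")   # ~100, pattern averaged out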
Reply by glen herrmannsfeldt April 30, 2011
Fred Marshall <fmarshallxremove_the_x@acm.org> wrote:
> On 4/30/2011 6:31 AM, Jerry Avins wrote:
(snip on sampling rate requirements)
>> Widely practiced sampling procedures violate even that criterion. Not
>> every sampling application needs to provide information between the
>> actual sampling instants. Daily stock closings and the level of Lake
>> Champlain are examples.
For stocks, one can hope that the changes have enough randomness
that no high-amplitude components appear. If people systematically
bought or sold just before the hour, I could see an unusual peak
at f=1/hour.

It seems that much of the economic meltdown came from assuming more
randomness in the market than actually existed. (Though, as far
as I know, not related to sampling.)
> I think I understand what you mean in the 2nd paragraph - that while the
> actual situation changes between samples, the sparser sampling is "good
> enough" for your purposes? We don't have the lingo, mindset, etc. to
> deal with that *here*.

> A story I know you'll appreciate and one that I think we've discussed
> before:

> We have a wastewater sampling station getting 24-hour flow (the integral
> of flow rate) and a multi-sample 24-hour aggregate for concentration
> analysis. In the end, measured flow times measured concentration gives
> a measure of pounds of BOD or TSS, etc. So, one might say that the
> 24-hour measure is a reasonable average for that one day. But, this is
> only done once a week and, as you know, there are diurnal variations as
> well as day-to-day variations (that may have certain semanal
> periodicities). So, one is motivated to say that the data is
> "undersampled" and might fret over that just a bit as I did.
Yes, in this case it does seem that you could miss some important
frequency components. A recent discussion brought up the idea of
dither in sample position, which might help here.

Reminds me of the discussion about New York and superbowl half
time flush rates.
> I was looking at the data and was interested in the loading statistics.
> That's when a little light went on:
> If we look at the distribution of measured loading we can see the
> character of the loading just like looking at random noise or some
> signal. In this case, the temporal alignment of the samples isn't even
> used. But we can say things like:
> - "10% of the time the load is above capacity" .. and actually believe it.
> - "the mean or modal loading is xxxx" .. and actually believe it.
It would seem that there would be some natural low-pass filtering
in the pipes, but maybe not enough. If you sample only at midnight,
you miss some daily peaks.
> I think all that's necessary is that there be enough samples to get a
> reasonable distribution. Once that's done, adding samples doesn't help
> much unless some time frame like 3 months or 6 months is compared to the
> next; or you use a sliding window to generate a family of distributions,
> etc.

> Of course, if one measures on the lowest or the highest day of the week
> then the distribution will be "skewed" and the conclusions reached from
> it perhaps as well.

> There must be a name for this tying it back to sampling theory in either
> signal processing or in statistics but I don't know what it's called.
> Obviously the objectives are a little or a lot different.
-- glen
Reply by Fred Marshall April 30, 2011
On 4/30/2011 6:31 AM, Jerry Avins wrote:
> On Apr 30, 6:47 am, "kaz" <kadhiem_ayob@n_o_s_p_a_m.yahoo.co.uk>
> wrote:
>
>    ...
>
>> http://www.national.com/vcm/national3/en_US/products/data_conversion/...
>
> Sub-band sampling is well known and widely practiced. The claim (which
> I also recently made) that two samples are needed during the time of
> the highest frequency present implicitly assumes that there is
> information all the way down to DC. The more accurate statement is that
> for information in a band B cycles/second wide, 2B samples/second are
> needed to avoid aliasing and permit good reconstruction.
>
> Widely practiced sampling procedures violate even that criterion. Not
> every sampling application needs to provide information between the
> actual sampling instants. Daily stock closings and the level of Lake
> Champlain are examples.
>
> Jerry
> --
> Engineering is the art of making what you want from things you can get.
Jerry,

I think I understand what you mean in the 2nd paragraph - that while the
actual situation changes between samples, the sparser sampling is "good
enough" for your purposes? We don't have the lingo, mindset, etc. to
deal with that *here*.

A story I know you'll appreciate and one that I think we've discussed
before:

We have a wastewater sampling station getting 24-hour flow (the integral
of flow rate) and a multi-sample 24-hour aggregate for concentration
analysis. In the end, measured flow times measured concentration gives
a measure of pounds of BOD or TSS, etc. So, one might say that the
24-hour measure is a reasonable average for that one day. But, this is
only done once a week and, as you know, there are diurnal variations as
well as day-to-day variations (that may have certain semanal
periodicities). So, one is motivated to say that the data is
"undersampled" and might fret over that just a bit as I did.

I was looking at the data and was interested in the loading statistics.
That's when a little light went on:
If we look at the distribution of measured loading we can see the
character of the loading just like looking at random noise or some
signal. In this case, the temporal alignment of the samples isn't even
used. But we can say things like:
- "10% of the time the load is above capacity" .. and actually believe it.
- "the mean or modal loading is xxxx" .. and actually believe it.

I think all that's necessary is that there be enough samples to get a
reasonable distribution. Once that's done, adding samples doesn't help
much unless some time frame like 3 months or 6 months is compared to the
next; or you use a sliding window to generate a family of distributions,
etc.

Of course, if one measures on the lowest or the highest day of the week
then the distribution will be "skewed" and the conclusions reached from
it perhaps as well.

There must be a name for this tying it back to sampling theory in either
signal processing or in statistics but I don't know what it's called.
Obviously the objectives are a little or a lot different.

Fred
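A minimal sketch of the distribution-only view Fred describes, with
invented lognormal numbers standing in for the wastewater data: the
samples are treated purely as draws from a distribution, and their
temporal alignment is never used.

import numpy as np

rng = np.random.default_rng(2)

# Three years of weekly 24-hour composite loadings (lb/day, say).
loads = rng.lognormal(mean=np.log(800.0), sigma=0.35, size=156)
capacity = 1500.0

print(f"fraction of sampled days above capacity: {np.mean(loads > capacity):.1%}")
print(f"mean loading   : {loads.mean():7.1f}")
print(f"median loading : {np.median(loads):7.1f}")
print(f"90th percentile: {np.percentile(loads, 90):7.1f}")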
Reply by Jerry Avins April 30, 2011
On Apr 30, 6:47 am, "kaz" <kadhiem_ayob@n_o_s_p_a_m.yahoo.co.uk>
wrote:

  ...

> http://www.national.com/vcm/national3/en_US/products/data_conversion/...
Sub-band sampling is well known and widely practiced. The claim (which
I also recently made) that two samples are needed during the time of
the highest frequency present implicitly assumes that there is
information all the way down to DC. The more accurate statement is that
for information in a band B cycles/second wide, 2B samples/second are
needed to avoid aliasing and permit good reconstruction.

Widely practiced sampling procedures violate even that criterion. Not
every sampling application needs to provide information between the
actual sampling instants. Daily stock closings and the level of Lake
Champlain are examples.

Jerry
--
Engineering is the art of making what you want from things you can get.
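A small NumPy demonstration of the sub-band idea, with made-up band
edges: a tone in a 4 kHz-wide band at 20-24 kHz, sampled at 16 kHz
(well below twice the highest frequency, but at least 2B), lands
intact at baseband.

import numpy as np

# Band of interest: 20-24 kHz, so B = 4 kHz.  fs = 16 kHz satisfies
# fs >= 2B and puts the whole band in one alias zone: with n = 3,
# 2*fH/n = 16 kHz <= fs <= 2*fL/(n-1) = 20 kHz.
fs = 16e3
t = np.arange(2048) / fs

f_in = 22e3                        # a tone inside the 20-24 kHz band
x = np.cos(2 * np.pi * f_in * t)   # deliberately sampled below 2*f_in

X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
print(freqs[np.argmax(X)])         # 6000.0: 22 kHz folds to 22 - 16 = 6 kHz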
Reply by kaz April 30, 2011
If we change our wording of the Nyquist requirement from a minimum
number of samples per cycle to (its original form, I believe) a minimum
sampling frequency, then I wouldn't worry about the number of samples
per cycle. The sampling frequency stays the same whether you have one
channel (I) or a pair (I/Q) representing the same signal; the pair case
gives info on both sidebands.

A post here mentioned a more general form of Nyquist (=> Shannon). This
is intuitive and interesting, and in fact it is a well-known design
methodology in my workplace, referred to as undersampling.

http://www.national.com/vcm/national3/en_US/products/data_conversion/files/Undersampling.pdf

kadhiem Ayob