Down Sampling Questions: Theoretical vs Practical

Started by gmcauley August 14, 2007
I am new to the wonderful world of digital signal processing and have
questions about down sampling.

We have a biological signal originally sampled at 200 samples/sec, which is
producing huge amounts of data. We are only interested in frequencies 20Hz
and below.  So, I know *theoretically* I can low-pass filter the data with
a cut-off frequency of 20Hz and then downsample to 40 samples/sec, without
signal loss or aliasing artifacts.  However, as I understand it, the
Nyquist criterion depends on some idealized assumptions, so this is
not true in practice.

So, my questions are:

1) Is there a rule of thumb or rationale for choosing a *practical* or
*real world* down sample rate based on your target bandwidth (20Hz in my
case)?

2) Since I do not want significant attenuation at 20Hz, for real world
filters am I safer choosing a higher cut-off frequency to avoid this (say
30Hz)?

If left on my own, in an attempt to cut data size, preserve 20Hz-and-below
fidelity, and reduce aliasing, I might try a cut-off frequency of 30Hz and
resample at 80 samples/sec.  Is 80 samples/sec high enough?
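
For concreteness, here is roughly what I have in mind, sketched in
Python/SciPy (the test signal, the 4th-order Butterworth, and the
zero-phase filtering are placeholder assumptions on my part, not a
settled design):

import numpy as np
from scipy import signal

fs_in = 200.0                       # original sample rate, Hz
t = np.arange(0, 10, 1 / fs_in)     # 10 s of fabricated test data
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)

# Low-pass with a 30 Hz cut-off (4th-order Butterworth, zero-phase).
b, a = signal.butter(4, 30.0, btype="low", fs=fs_in)
x_filt = signal.filtfilt(b, a, x)

# Downsample 200 -> 80 samples/sec (rational factor 2/5). resample_poly
# applies its own anti-aliasing FIR as an extra safety net.
y = signal.resample_poly(x_filt, up=2, down=5)
fs_out = fs_in * 2 / 5              # 80 samples/sec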

Any corrections to my thinking and knowledge here would be much
appreciated.



"gmcauley" <gmcauley@llu.edu> wrote in message
news:5YidndC5T7VePVzbnZ2dnUVZ_gidnZ2d@giganews.com...
(snip)
It is all about how much attenuation of the aliases you need, what
flatness is required in the passband, and how much processing power you
have. Tim Wescott (www.wescottdesigns.com) has an article explaining those
basic requirements.

For DSP downsampling, a passband extending up to 0.8...0.9 of Nyquist is
usually a good compromise. So for a 20Hz passband you will need a sample
rate of 50Hz, which makes nice even values.

Vladimir Vassilevsky
DSP and Mixed Signal Consultant
www.abvolt.com
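
Spelled out, the arithmetic behind that 50Hz figure appears to be the
following (a sketch using the 0.8 end of Vladimir's range):

f_pass = 20.0            # highest frequency of interest, Hz
ratio = 0.8              # passband edge as a fraction of the new Nyquist
f_nyq = f_pass / ratio   # required Nyquist frequency: 25 Hz
f_sample = 2 * f_nyq     # required sample rate: 50 Hz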
On Tue, 14 Aug 2007 07:32:03 -0500, gmcauley wrote:

(snip)
You can develop a rule of thumb for each and every downsampling problem --
unfortunately, it'll be different from the rule of thumb for the next one.
For setting sampling rates I recommend that you leave the rules of thumb
on your thumb. You can read my take on this here:
http://www.wescottdesign.com/articles/Sampling/sampling.html

If you truly can get by with frequencies no higher than 20Hz you can
digitally filter your signal and decimate to good effect. Even if you can
get by with your 40 samples/second, though, "huge"/5 would still equal
"pretty darn big" -- perhaps you need to find some other method of
compressing the information?

--
Tim Wescott
Control systems and communications consulting
http://www.wescottdesign.com

Need to learn how to apply control theory in your embedded system?
"Applied Control Theory for Embedded Systems" by Tim Wescott
Elsevier/Newnes, http://www.wescottdesign.com/actfes/actfes.html
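
To make the "filter and decimate" step concrete, here is one way to
specify an anti-alias FIR for a 200 -> 50 samples/sec decimation (a sketch
in Python/SciPy; the 60dB alias attenuation and the 20-25Hz transition
band are illustrative assumptions, not numbers taken from either article):

import numpy as np
from scipy import signal

fs = 200.0           # input rate, Hz
f_pass = 20.0        # band to preserve
f_stop = 25.0        # new Nyquist after decimating by 4 (50 samples/sec out)
atten_db = 60.0      # how far down the aliases are pushed

# Size a Kaiser-window FIR from the transition width and the attenuation.
width = (f_stop - f_pass) / (fs / 2)   # transition width, normalized to Nyquist
numtaps, beta = signal.kaiserord(atten_db, width)
taps = signal.firwin(numtaps, (f_pass + f_stop) / 2,
                     window=("kaiser", beta), fs=fs)

x = np.random.randn(2000)                  # placeholder for the real signal
y = signal.upfirdn(taps, x, up=1, down=4)  # filter and decimate by 4 in one pass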
Tim Wescott <tim@seemywebsite.com> writes:
> [...]
> Even if you can get by with your 40 samples/second, though, "huge"/5
> would still equal "pretty darn big" -- perhaps you need to find some
> other method of compressing the information?
Sampling at 200 16-bit samples/second every hour of every day for a year
produces 12.6 GBytes. That's not even 5 percent of an average hard disk
these days. I'm not sure what's prompting the OP's dilemma.

--
%  Randy Yates                  % "Midnight, on the water...
%% Fuquay-Varina, NC            %  I saw... the ocean's daughter."
%%% 919-577-9882                %  'Can't Get It Out Of My Head'
%%%% <yates@ieee.org>           %  *El Dorado*, Electric Light Orchestra
http://home.earthlink.net/~yatescr
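
The arithmetic, for anyone who wants to check it (2 bytes per 16-bit
sample):

seconds_per_year = 60 * 60 * 24 * 365
bytes_per_year = 200 * 2 * seconds_per_year   # 200 samples/sec, 2 bytes each
print(bytes_per_year / 1e9)                   # -> 12.6144, i.e. ~12.6 GBytes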
Randy Yates wrote:

(snip)

> Sampling at 200 16-bit samples/second every hour of every day for
> a year produces 12.6 GBytes. That's not even 5 percent of an average
> hard disk these days.
Arithmetic mean, median, geometric mean, harmonic mean, or even
arithmetic-geometric mean?

--
glen
Thank you Vladimir, Tim and Randy for your replies.

I will take a look at the article referenced.

The issue with the data size is that waveforms of several different types
of data must be inspected and categorized manually.  So, a factor-of-five
(or greater) reduction would be gratefully received.

gmcauley wrote:
(snip)
gmcauley LATER wrote:
> The issue with the data size is that waveforms of several different
> types of data must be inspected and categorized manually.  So, a
> factor-of-five (or greater) reduction would be gratefully received.

Are you sure that 20 Hz is actually the highest frequency of interest?

You state that you are categorizing traces visually, which suggests that
shape is important. That in turn suggests that at least the 5th harmonic
of your fundamental is significant.

Another question to ask is whether your sampling system was specifically
designed/chosen to measure the process of current interest. Why did the
original designer/specifier choose 200 samples/sec?
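
A quick way to see the shape-vs-harmonics point (a sketch; the square wave
and the 25-harmonic reference are illustrative assumptions, not the
poster's data):

import numpy as np

fs = 200.0                          # sample rate, Hz
t = np.arange(0, 1, 1 / fs)
f0 = 4.0                            # fundamental, Hz

def partial_sum(n_harmonics):
    """Square-ish wave built from odd harmonics up to n_harmonics."""
    y = np.zeros_like(t)
    for k in range(1, n_harmonics + 1, 2):
        y += np.sin(2 * np.pi * k * f0 * t) / k
    return y

full = partial_sum(25)   # close to a true square wave
trunc = partial_sum(5)   # harmonics up to the 5th (20 Hz here): shape kept
sine = partial_sum(1)    # fundamental only: just a sine, edges gone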
glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:

> Randy Yates wrote:
>
> (snip)
>
>> Sampling at 200 16-bit samples/second every hour of every day for
>> a year produces 12.6 GBytes. That's not even 5 percent of an average
>> hard disk these days.
>
> Arithmetic mean, median, geometric mean, harmonic mean,
> or even arithmetic-geometric mean?
You mean of the hard disk size?!?

--
%  Randy Yates                  % "So now it's getting late,
%% Fuquay-Varina, NC            %  and those who hesitate
%%% 919-577-9882                %  got no one..."
%%%% <yates@ieee.org>           %  'Waterfall', *Face The Music*, ELO
http://home.earthlink.net/~yatescr
Thank you for your reply.


>You state that you are categorizing traces visually, which suggests that
>shape is important. That in turn suggests that at least the 5th harmonic
>of your fundamental is significant.
I am not exactly sure about the relationship of the harmonics to the shape
- please enlighten me.

There are perhaps two points related to the visual analysis that help
clarify our case: much EEG data is historically evaluated visually, and
there are other data traces (e.g., tissue oxygenation) that need to be
visually (at least initially) related to the EEG.
>Another question to ask is whether your sampling system was specifically
>designed/chosen to measure the process of current interest.
>
>Why did the original designer/specifier choose 200 samples/sec?
One of the data types (my involvement) is EEG. The so-called gamma band
ranges from 26-100Hz, so I guess this is why 200 samples/sec seems to be
used so often.
>Are you sure that 20 Hz is actually the highest frequency of interest?
In our case, we are not really interested in the gamma band. I will
probably choose 25Hz-30Hz as the maximum frequency, to be sure to get good
fidelity of the beta band, whose upper edge is put anywhere from 20Hz to
30Hz (it seems that people define it differently). So what I really want
to know is: for a low-pass cut-off of 30Hz, what is a reasonable resample
rate?
Vladimir,

Could you please explain how you arrive at 50Hz?

>For the DSP downsampling, a passband extending up to 0.8...0.9 of Nyquist
>is usually a good compromise. So for a 20Hz passband you will need a
>sample rate of 50Hz, which makes nice even values.
Pardon me if I am missing something obvious, but I don't see the '20' in the calculation. Is it that 2*30*0.8 ~= 50Hz (in round numbers, and where the '30' is chosen as described in my original post)?
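
Put differently, these are the two readings I can construct, and both land
near 50Hz, which is why I am unsure (my arithmetic, not Vladimir's):

guess_a = 2 * 30 * 0.8   # my cut-off times 0.8, doubled: 48 Hz, "rounds" to 50
guess_b = 2 * 20 / 0.8   # passband at 0.8 of the new Nyquist: exactly 50 Hz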