> Jerry,
>
> Agreed. All good points. Decimating in order to interpolate makes no sense.
> 3-point interpolation to get a peak might... depends on how much it's worth
> and if there are too many "bad" cases to deal with... e.g. if the peaks are
> in the noise.
>
> Fred
About noise: if it is limited to the signal band, it necessarily affects
adjacent oversampled points nearly equally. If it is limited only by the
anti-alias filter at the higher rate, then low-pass filtering will
smooth the points. That filter would anyway have to be applied before
decimating, even if it were known that the signal itself is bandlimited.
Jerry
--
Engineering is the art of making what you want from things you can get.
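Jerry's filter-then-decimate step can be sketched in Python. This is an illustrative assumption, not code from the thread: the windowed-sinc design, 101-tap length, and 6:1 factor are all choices made here so a short filter works.

```python
import numpy as np

def decimate(x, fs, factor, numtaps=101):
    """Low-pass filter below the new Nyquist frequency, then keep every
    `factor`-th sample. Windowed-sinc FIR with a Hamming window."""
    cutoff = 0.5 / factor                      # new Nyquist, as a fraction of fs
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = np.sinc(2 * cutoff * n) * np.hamming(numtaps)
    h /= h.sum()                               # unity gain at DC
    filtered = np.convolve(x, h, mode="same")  # symmetric taps: ~zero phase
    return filtered[::factor], fs / factor

# 100 Hz sine sampled at 2400 Hz, decimated 6:1 to 400 Hz
fs = 2400.0
t = np.arange(2400) / fs
x = np.sin(2 * np.pi * 100 * t)
y, fs_new = decimate(x, fs, 6)
```

A 6:1 factor leaves the 100 Hz component comfortably inside the passband of a 101-tap filter; an 11:1 factor, as discussed downthread, leaves only about 9 Hz between the signal and the new Nyquist frequency and would need many more taps.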
Reply by Fred Marshall ● February 13, 2004
"Jerry Avins" <jya@ieee.org> wrote in message
news:402bf6d7$0$3136$61fed72c@news.rcn.com...
> Fred Marshall wrote:
>
> ...
>
>
> > The questions Rune and Andor have asked about "why decimate at all?" are
> > very pertinent. It would seem only useful if you must reduce the compute
> > load to get average, peak-peak, etc. Also, to expand on peak-peak measures:
> > how accurate are you trying to be and over what span of cycles at 100Hz?
> > The longer the measure, the more accurate as an average and the less
> > susceptible to noise. The shorter the measure, the opposite. In either case
> > you will need to detect the apparent peaks (which could include
> > interpolation). What you do with the detected peaks is an algorithmic
> > question that you've not asked.....
> >
> > Fred
>
> I timidly point out that decimating might not save computation at all.
> Aside from the time spent doing it, subsequent interpolation to find the
> true peak amounts to recreating some of the discarded samples, work that
> is unnecessary if they are retained. Linear interpolation serves to find
> zero crossings but not peaks; with 12x oversampling, though, selecting the
> largest sample may be adequate. After all, the slope near the peak is
> very small.
>
> Jerry
Jerry,
Agreed. All good points. Decimating in order to interpolate makes no sense.
3-point interpolation to get a peak might... depends on how much it's worth
and if there are too many "bad" cases to deal with... e.g. if the peaks are
in the noise.
Fred
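The 3-point interpolation Fred mentions is usually a parabolic fit through the largest sample and its two neighbors. A minimal sketch (the signal, phase, and helper name are assumptions for illustration):

```python
import numpy as np

def parabolic_peak(y, i):
    """Refine a peak at integer index i by fitting a parabola through
    y[i-1], y[i], y[i+1]. Returns (fractional index, peak estimate)."""
    a, b, c = y[i - 1], y[i], y[i + 1]
    p = 0.5 * (a - c) / (a - 2 * b + c)   # vertex offset in samples, |p| <= 0.5
    return i + p, b - 0.25 * (a - c) * p

# One cycle of a 100 Hz sine at 2400 Hz; the phase keeps samples off the true peak
t = np.arange(24) / 2400.0
y = np.sin(2 * np.pi * 100 * t + 0.3)
i = int(np.argmax(y))
idx, peak = parabolic_peak(y, i)   # peak estimate lands closer to the true
                                   # value 1.0 than the largest raw sample
```

Whether this is worth doing is exactly Fred's caveat: if the peaks are in the noise, the parabola fits the noise, not the signal.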
Reply by Jerry Avins ● February 12, 2004
Fred Marshall wrote:
...
> The questions Rune and Andor have asked about "why decimate at all?" are
> very pertinent. It would seem only useful if you must reduce the compute
> load to get average, peak-peak, etc. Also, to expand on peak-peak measures:
> how accurate are you trying to be and over what span of cycles at 100Hz?
> The longer the measure, the more accurate as an average and the less
> susceptible to noise. The shorter the measure, the opposite. In either case
> you will need to detect the apparent peaks (which could include
> interpolation). What you do with the detected peaks is an algorithmic
> question that you've not asked.....
>
> Fred
I timidly point out that decimating might not save computation at all.
Aside from the time spent doing it, subsequent interpolation to find the
true peak amounts to recreating some of the discarded samples, work that
is unnecessary if they are retained. Linear interpolation serves to find
zero crossings but not peaks; with 12x oversampling, though, selecting the
largest sample may be adequate. After all, the slope near the peak is
very small.
Jerry
--
Engineering is the art of making what you want from things you can get.
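Jerry's last point is easy to quantify. With a 100 Hz sine sampled at 2400 Hz (24 samples per cycle), the worst case for "pick the largest sample" is the true peak falling midway between two samples:

```python
import math

fs, f = 2400.0, 100.0            # sample rate and signal frequency
step = 2 * math.pi * f / fs      # phase advance per sample: pi/12 rad
worst = 1.0 - math.cos(step / 2) # true peak midway between samples
```

The worst-case under-read of a unit-amplitude peak is about 0.86% (roughly 0.07 dB), consistent with the observation that the slope near the peak is very small.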
Reply by Fred Marshall ● February 12, 2004
"seb" <germain1_fr@yahoo.fr> wrote in message
news:23925133.0402112001.23fc1b83@posting.google.com...
> Hello,
>
> I feel confused about the sampling theorem.
> I know that if I respect the sampling theorem -- using a sampling
> frequency above twice the highest frequency in the incoming signal --
> then the signal can be reconstructed from the sequence of values
> (excuse my poor English). In other words, no information is lost.
> But I have to handle another case. Some computation must be done on a
> signal at 100 Hz, like average, V peak-peak (just in the time domain),
> and so on... but the incoming signal is sampled at 2400 Hz, so I need
> to decimate, and my question is:
> As only time-domain operations must be done on the signal, is
> prefiltering necessary before decimation (some components above 50 Hz
> are present)?
> The feeling I had is that I just have more points of the same signal!
If you are sure that the signal in its entirety (not just the part you're
interested in) is limited to 100Hz bandwidth, then you can decimate without
prefiltering. If there's noise added to the signal then you might still do
the same but, I believe, with degradation in the signal to noise ratio.
Similarly, if the signal level below 100Hz is very high indeed, with respect
to signal levels above 100Hz, then you might also decimate without
prefiltering and not suffer much error.
The questions Rune and Andor have asked about "why decimate at all?" are
very pertinent. It would seem only useful if you must reduce the compute
load to get average, peak-peak, etc. Also, to expand on peak-peak measures:
how accurate are you trying to be and over what span of cycles at 100Hz?
The longer the measure, the more accurate as an average and the less
susceptible to noise. The shorter the measure, the opposite. In either case
you will need to detect the apparent peaks (which could include
interpolation). What you do with the detected peaks is an algorithmic
question that you've not asked.....
Fred
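Fred's accuracy-versus-span trade-off can be sketched with a small simulation. The noise level, seed, and per-cycle sample-max peak detection below are assumptions for illustration, not anything specified in the thread:

```python
import numpy as np

rng = np.random.default_rng(0)            # fixed seed for repeatability
fs, f, amp = 2400.0, 100.0, 1.0
t = np.arange(int(fs)) / fs               # one second = 100 cycles at 100 Hz
x = amp * np.sin(2 * np.pi * f * t) + 0.05 * rng.standard_normal(t.size)

spc = int(fs / f)                         # samples per cycle (24)
cycles = x[: x.size - x.size % spc].reshape(-1, spc)
pp = cycles.max(axis=1) - cycles.min(axis=1)   # peak-to-peak, cycle by cycle

one_cycle = pp[0]     # short measure: a single noisy estimate
averaged = pp.mean()  # long measure: steadier, though sample-max peak
                      # picking is biased slightly high by the noise
```

Averaging over many cycles shrinks the spread of the estimate, exactly as Fred says; note that taking the raw max/min of noisy samples also biases the peak-to-peak reading a little above the true 2*amp.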
Reply by Rune Allnor ● February 12, 2004
germain1_fr@yahoo.fr (seb) wrote in message news:<23925133.0402112001.23fc1b83@posting.google.com>...
> Hello,
>
> I feel confused about the sampling theorem.
> I know that if I respect the sampling theorem -- using a sampling
> frequency above twice the highest frequency in the incoming signal --
> then the signal can be reconstructed from the sequence of values
> (excuse my poor English). In other words, no information is lost.
> But I have to handle another case. Some computation must be done on a
> signal at 100 Hz, like average, V peak-peak (just in the time domain),
> and so on... but the incoming signal is sampled at 2400 Hz, so I need
> to decimate, and my question is:
> As only time-domain operations must be done on the signal, is
> prefiltering necessary before decimation (some components above 50 Hz
> are present)?
> The feeling I had is that I just have more points of the same signal!
I am not sure why you want to do a decimation. How many points are
there in your time sequence? If you want to process a signal at 100 Hz
you can sample as low as just above 200 Hz; at 2400 Hz you have 12x
oversampling, which doesn't seem to be a lot.
Remember, you don't *have* to sample at the minimum rate. You can
sample at any frequency you want, as long as it is higher than twice
the highest frequency in the signal.
Rune
Reply by Andor ● February 12, 2004
seb wrote:
> Hello,
>
> I feel confused about the sampling theorem.
> I know that if I respect the sampling theorem -- using a sampling
> frequency above twice the highest frequency in the incoming signal --
> then the signal can be reconstructed from the sequence of values
> (excuse my poor English). In other words, no information is lost.
> But I have to handle another case. Some computation must be done on a
> signal at 100 Hz, like average, V peak-peak (just in the time domain),
> and so on... but the incoming signal is sampled at 2400 Hz, so I need
> to decimate, and my question is:
> As only time-domain operations must be done on the signal, is
> prefiltering necessary before decimation (some components above 50 Hz
> are present)?
You confused me with that last paragraph, but I'll give it a shot:
1. Are you sure you _know_ that the input signal is a sinusoid at 100
Hz with no harmonics and noise? Then, if you want to decimate, you can
just subsample to about 218 Hz (by taking every 11th sample and
dropping the others). No filtering required.
2. Are you sure you want to decimate? To find the peak-peak amplitude
of the sinusoid is simpler if it is oversampled. Also, 2400 Hz does
not seem like an awfully high sampling rate - usually one decimates
when the sampling rate is too high for processing.
Regards,
Andor
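Andor's option 1 is just index arithmetic; a sketch of what it looks like (the sample data here is a stand-in, and this is only safe if nothing lives above the new Nyquist frequency of about 109 Hz):

```python
fs = 2400.0
x = list(range(2400))    # stand-in for one second of 2400 Hz samples
y = x[::11]              # keep every 11th sample, drop the rest
fs_new = fs / 11         # ~218.2 Hz; new Nyquist frequency ~109.1 Hz
```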
Reply by seb ● February 12, 2004
Hello,
I feel confused about the sampling theorem.
I know that if I respect the sampling theorem -- using a sampling
frequency above twice the highest frequency in the incoming signal --
then the signal can be reconstructed from the sequence of values
(excuse my poor English). In other words, no information is lost.
But I have to handle another case. Some computation must be done on a
signal at 100 Hz, like average, V peak-peak (just in the time domain),
and so on... but the incoming signal is sampled at 2400 Hz, so I need
to decimate, and my question is:
As only time-domain operations must be done on the signal, is
prefiltering necessary before decimation (some components above 50 Hz
are present)?
The feeling I had is that I just have more points of the same signal!