Hi all,
I've got a data rate of about 20 kHz and a desired LPF cutoff of about
2.5 Hz. For starters, I was unable to design an appropriate FIR to
manage this. Even if I was successful, I expect the # of
taps/computation required would have made the filter impractical. I am
using a 16-bit fixed point processor.
Fortunately, a simple moving average is adequate for my purposes. The
intent is to do the following in a circular buffer:
y = y + x[n] - x[n-m]
Where y is the accumulated result, x[n] is the current input sample,
and x[n-m] is the oldest input sample. I require an output at 200ms
intervals (every 4000 samples), and end up with:
z = y/m (z is the result I'm looking for)
From a computational standpoint, I can't imagine anything easier,
unless m = 2^k, in which case the divide can be replaced by a simple
right shift. Great.
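Sketched in plain Python (not the 16-bit fixed point of the actual target), the running-sum scheme above might look like this; the buffer size of 4000 and the 32-bit-accumulator remark are assumptions drawn from the figures in this post:

```python
M = 4000  # 200 ms at 20 kHz, per the figures above

def make_running_avg(m=M):
    """Running average y = y + x[n] - x[n-m] over a circular buffer.
    One add and one subtract per sample; on a 16-bit target the
    accumulator y would need 32 bits to hold up to m * 2**15."""
    buf = [0] * m            # circular buffer holding the last m samples
    state = {"y": 0, "i": 0}
    def step(x):
        state["y"] += x - buf[state["i"]]  # add newest, drop oldest
        buf[state["i"]] = x
        state["i"] = (state["i"] + 1) % m
        return state["y"]    # divide by m (z = y/m) when an output is due
    return step
```

Every 4000th call, z = y/m gives the 200 ms output described above.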
BUT, the application has a number of channels (and therefore this
filter uses significant memory). Also, there is no guarantee that m
will be a power of 2, as my input data is only valid under certain
conditions.
To reduce memory, I have been considering a block filter that simply
accumulates the last 400ms worth of data and computes the result. Two
overlapping filters of this sort would be required to achieve the 200ms
output interval....
ALTERNATIVELY, I could reduce the block size (to 200 ms, or even less).
Each block would be averaged and the result stored. After enough data
was collected, the block averages could be averaged to generate an
overall average. If I really wanted to be perverse I could set my block
size to a power of 2 to reduce the block average/division to a simple
shift.
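The averaged-averages idea can be sketched as below; the block size of 256 (2**8) is a hypothetical choice, not anything from the post. One caveat: the mean of block means equals the true overall mean only when every block is the same length, and the right shifts truncate rather than round.

```python
B = 256     # hypothetical block size, a power of two (2**8)
SHIFT = 8   # so each block mean is a right shift, not a divide

def block_then_overall(samples):
    """Average each full B-sample block with a shift, then average
    the stored block means to get the overall result."""
    blocks = [samples[i:i + B] for i in range(0, len(samples), B)]
    block_means = [sum(b) >> SHIFT for b in blocks if len(b) == B]
    return sum(block_means) / len(block_means)
```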
Questions:
1) I'm not sure about the terminology of these things, any help?
(specifically, with the taking an average of the already-averaged
blocks).
2) As I see it, I am getting a vast reduction in memory usage with no
penalty in terms of computational requirements. Something seems fishy
to me...have I missed anything?
Cheers,
Dave
Large moving average
Started by ●August 30, 2006
Reply by ●August 30, 2006
dave_bonnell@hotmail.com wrote:
> Questions:
> 1) I'm not sure about the terminology of these things, any help?
> (specifically, with the taking an average of the already-averaged
> blocks).
>
> 2) As I see it, I am getting a vast reduction in memory usage with no
> penalty in terms of computational requirements. Something seems fishy
> to me...have I missed anything?

1. When you average over N samples you are filtering (you can be fancy
and call it a comb filter if you want). When you only keep every N'th
sample you are decimating by N. Now you're doing multi-rate signal
processing!

2. When you decimate by N you are losing some information. You are, in
fact, going to experience some aliasing in your data. But if you already
know that large running averages are OK for your application, then you
aren't losing anything that you weren't going to ignore anyway, so no,
nothing is particularly fishy.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Reply by ●August 30, 2006
Tim Wescott <tim@seemywebsite.com> wrote:
> 2. When you decimate by N you are losing some information. You are,
> in fact, going to experience some aliasing in your data.

This would depend on how good the moving average's filtering is. To
avoid aliasing, the decimation might have to be narrower than the
filter: average over 20N samples, say, but decimate by N. It might be
better to build a low-order pre-decimation filter, decimate, and then
apply a 2.5 Hz low-pass filter.

> But if you already know that large running averages are OK for your
> application, then you aren't losing anything that you weren't going
> to ignore anyway, so no, nothing is particularly fishy.

True enough. Like so many things, the technique is dictated by the
needs, so without knowing much more about the system, it's hard to say
what would be "good enough" and what's the best approach.

One approach not even mentioned would be to sample much slower than
20 kHz (with appropriate analog filters on the inputs, of course), but
again, this depends on whether you need the 20 kHz sample rate for
another reason. It might even be best to sample at both rates; hardware
seems to be the cheapest part of the design these days.

--
Scott
Reverse name to reply
Reply by ●August 30, 2006
Dave wrote:
> I've got a data rate of about 20 kHz and a desired LPF cutoff of about
> 2.5 Hz. For starters, I was unable to design an appropriate FIR to
> manage this. Even if I was successful, I expect the # of
> taps/computation required would have made the filter impractical.

Not necessarily. You have a tremendous data overhead. The most efficient
way (computationally and with regards to memory) to get rid of all that
data is to use a cascade of lowpass filter / downsampling / lowpass /
downsampling ... (iterate until happy). This is called multi-rate
filtering (Google). That way, you get a signal with a low sampling
frequency (just as with your approach of using averaged averages), but
without aliasing.

Regards,
Andor
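As an illustration only, such a cascade could be staged like this; the three decimate-by-10 stages (20 kHz to 2 kHz to 200 Hz to 20 Hz) are assumed numbers, and a real design would put a proper lowpass, not just this plain boxcar, ahead of each downsampler:

```python
def avg_decimate(x, r):
    """One cascade stage: boxcar-lowpass each non-overlapping block
    of r samples and keep one output per block (decimate by r)."""
    return [sum(x[i * r:(i + 1) * r]) / r for i in range(len(x) // r)]

# 200 ms of a constant signal at 20 kHz, run through three stages:
y = [1.0] * 4000
for r in (10, 10, 10):   # 20 kHz -> 2 kHz -> 200 Hz -> 20 Hz
    y = avg_decimate(y, r)
```

Each stage is cheap because it runs at a progressively lower rate; the total work is dominated by the first stage.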
Reply by ●August 30, 2006
"dave_bonnell@hotmail.com" wrote:> > Hi all, > > I've got a data rate of about 20 kHz and a desired LPF cutoff of about > 2.5 Hz. For starters, I was unable to design an appropriate FIR to > manage this. Even if I was successful, I expect the # of > taps/computation required would have made the filter impractical. I am > using a 16-bit fixed point processor. > > Fortunately, a simple moving average is adequate for my purposes. The > intent is to do the following in a circular buffer: > > y = y + x[n] - x[n-m] > > Where y is the accumulated result, x[n] is the current input sample, > and x[n-m] is the oldest input sample. I require an output at 200ms > intervals (every 4000 samples), and end up with: > > z = y/m (z is the result I'm looking for) >I must be misunderstanding what you say you are doing. If I understand you are computing the mean of each successive 4000 sample block. Your method of doing that requires a buffer of length 4000 samples. But all you really need is a buffer of one sample length. y = y +x[n] for 4000 samples and then output y. Then set y to zero and do it again for another 4000 samples. Why are you computing all the 3999 intermediate values that you never use? -Jim> >From a computational standpoint, I can't imagine anything easier, > unless m = 2^k, in which case the divide can be replaced by a simple > right shift. Great. > > BUT, the application has a number of channels (and therefore this > filter uses significant memory). Also, there is no guarantee that m > will be a power of 2, as my input data is only valid under certain > conditions. > > To reduce memory, I have been considering a block filter that simply > accumulates the last 400ms worth of data and computes the result. Two > overlapping filters of this sort would be required to achieve the 200ms > output interval.... > > ALTERNATIVELY, I could reduce the block size (to 200 ms, or even less). > Each block would be averaged and the result stored. 
After enough data > was collected, the block averages could be averaged to generate an > overal average. If I really wanted to be perverse I could set my block > size to a power of 2 to reduce the block average/division to a simple > shift. > > Questions: > 1) I'm not sure about the terminology of these things, any help? > (specifically, with the taking an average of the already-averaged > blocks). > > 2) As I see it, I am getting a vast reduction in memory usage with no > penalty in terms of computational requirements. Something seems fishy > to me...have I missed anything? > > Cheers, > Dave----== Posted via Newsfeeds.Com - Unlimited-Unrestricted-Secure Usenet News==---- http://www.newsfeeds.com The #1 Newsgroup Service in the World! 120,000+ Newsgroups ----= East and West-Coast Server Farms - Total Privacy via Encryption =----
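That one-accumulator scheme, written as a stream; here the block is forced to a hypothetical power of two (2**12 = 4096 rather than the 4000 of the original post) so the divide collapses to a shift:

```python
BLOCK = 4096  # hypothetical block size, 2**12 (the post uses 4000)
K = 12        # BLOCK == 2**K, so the divide is a right shift

def stream_block_avg(samples):
    """Accumulate BLOCK samples, emit the block mean, reset.
    Per-channel memory is one accumulator and a counter; no sample
    buffer at all."""
    acc = 0
    n = 0
    for s in samples:
        acc += s
        n += 1
        if n == BLOCK:
            yield acc >> K   # block mean via shift
            acc = 0
            n = 0
```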
Reply by ●August 30, 2006
dave_bonnell@hotmail.com wrote:
> Fortunately, a simple moving average is adequate for my purposes. The
> intent is to do the following in a circular buffer:
>
> y = y + x[n] - x[n-m]

That is an FIR filter. It's not suitable for floating-point
implementation.

> 2) As I see it, I am getting a vast reduction in memory usage with no
> penalty in terms of computational requirements. Something seems fishy
> to me...have I missed anything?

Beware of averaging averages. Have you considered an exponential
averager? A more sophisticated IIR filter would still need much less
memory and less computation than a correspondingly effective FIR.

Jerry
--
Engineering is the art of making what you want from things you can get.
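A minimal integer-arithmetic sketch of such an exponential averager; the shift of 13 is an assumed choice (time constant 2**13 = 8192 samples, about 0.4 s at 20 kHz), and the state is kept scaled by 2**13 so the low bits are not lost to truncation on a fixed-point machine:

```python
SMOOTH = 13  # assumed shift: time constant ~ 2**13 samples (~0.4 s at 20 kHz)

def exp_avg(samples, k=SMOOTH):
    """Single-pole IIR y += (x - y) * 2**-k, done in integers with the
    state held scaled by 2**k so truncation doesn't stall the filter.
    One 32-bit word per channel instead of a 4000-sample buffer."""
    acc = 0                    # y scaled by 2**k
    out = []
    for x in samples:
        acc += x - (acc >> k)  # acc/2**k tracks the running average
        out.append(acc >> k)   # unscaled output
    return out
```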
Reply by ●August 30, 2006
Jerry Avins wrote:
> dave_bonnell@hotmail.com wrote:
>> Fortunately, a simple moving average is adequate for my purposes. The
>> intent is to do the following in a circular buffer:
>>
>> y = y + x[n] - x[n-m]
>
> That is an FIR filter. It's not suitable for floating-point implementation.

I did a double-take when I saw this. I'm sure Jerry didn't mean to
indicate that all FIR filters are unsuitable for floating-point
calculations. In case I'm not the only one who misunderstood, I will
clarify.

Using feedback in the manner described above leads to a potentially
unstable situation: a pole on the unit circle.

E.g., consider a floating-point moving average of three samples across
the sequence. For simplicity's sake, ignore the division by 3.

input: [ 1e20 2e20 3e20 4e20 5 0 0 0 0 0 ]

step 1: 1e20 + 2e20 + 3e20 = 6e20. Correct
step 2: 6e20 + 4e20 - 1e20 = 9e20. Correct
step 3: 9e20 + 5 - 2e20 = 7e20. Sort of correct
step 4: 7e20 + 0 - 3e20 = 4e20. Sort of correct
step 5: 4e20 + 0 - 4e20 = 0 . Wrongo!! Should be 5
step 6: 0 + 0 - 5 = -5 . Wrongo!! Should be 0

The problem is that the quantization error depends on the amount
currently in the sum. A constant quantization unit, such as with
fixed-point processing, prevents drift.

--
Mark Borgerding
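That trace is easy to reproduce; Python floats are IEEE doubles, so the following shows exactly the drift described above:

```python
def recursive_sums(x, m):
    """Running sum y[n] = y[n-1] + x[n] - x[n-m] (x[k] = 0 for k < 0),
    computed recursively in floating point, as in the trace above."""
    y = 0.0
    buf = [0.0] * m        # circular buffer of the last m inputs
    out = []
    for i, v in enumerate(x):
        y += v - buf[i % m]
        buf[i % m] = v
        out.append(y)
    return out

sums = recursive_sums([1e20, 2e20, 3e20, 4e20, 5.0] + [0.0] * 5, 3)
# sums[6] is 0.0 (true window sum is 5) and sums[7] is -5.0 (true sum
# is 0): the 5 was swallowed when added to 9e20, yet it is still
# subtracted on the way out, and the error never decays.
```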
Reply by ●August 30, 2006
Mark Borgerding wrote:
> The problem is that the quantization error depends on the amount
> currently in the sum.
>
> A constant quantization unit, such as with fixed-point processing,
> prevents drift.

Nice demonstration. I certainly didn't mean to imply that no FIRs are
suitable for floating point. By "It is not suitable ..." I meant "This
particular FIR is not ...." It actually works with floats if the
exponent never changes, but that can be a difficult condition to assure.

Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by ●August 31, 2006
Andor wrote:
> Not necessarily. You have a tremendous data overhead. The most
> efficient way (computationally and with regards to memory) to get rid
> of all that data is to use a cascade of lowpass filter / downsampling /
> lowpass / downsampling ... (iterate until happy).

Or multiple levels of moving average filters.

-- glen
Reply by ●August 31, 2006
glen herrmannsfeldt wrote:
> Or multiple levels of moving average filters.

But moving averages are awful lowpass filters.






