DSPRelated.com
Forums

simple perspective in using floats for recursive moving-average

Started by Randy Yates December 26, 2016
Tim Wescott <tim@seemywebsite.com> writes:

> On Sun, 01 Jan 2017 22:09:42 -0500, Randy Yates wrote:
>
>> Tim Wescott <tim@seemywebsite.com> writes:
>>
>>> On Fri, 30 Dec 2016 05:56:16 -0800, radams2000 wrote:
>>>
>>>> I use these filters all the time, but there's this nagging worry that
>>>> a single-cycle math error (caused by a glitch or an alpha particle or
>>>> whatever) will produce a permanent DC offset in the output. If you
>>>> want to get fancy you can detect this and reset your filter.
>>>>
>>>> Bob
>>>
>>> The technique used in CIC filters makes that a non-problem -- instead
>>> of saving the vector of past data and subtracting out the n-th oldest
>>> one from the sum (and thus preserving any errors in the sum forever),
>>> save a vector of past sums and subtract out the n-th oldest one. If
>>> you use wrap-on-overflow arithmetic, then your alpha-particle
>>> data-smash will only affect the filter output for at most N samples --
>>> the same as if it happened to the input data.
>>>
>>> In general it means you're saving data with a wider word length, but
>>> you're picking up some dependability.
>>
>> Tim,
>>
>> I don't understand. Are you saying form a filter with this difference
>> equation:
>>
>> y[n] = y[n - 1] - y[n - N] + x[n] ?
>>
>> If my math is correct, this works out to a z-transform of
>>
>> Y(z)/X(z) = 1/(1 - z^(-1) + z^(-N)),
>>
>> and this is not the same as a boxcar sum, is it?
>
> Given u[n] in and y[n] out:
>
> y[n] = y[n - 1] + u[n] - u[n - N]
>
> vs.
>
> y[n] = sum_{k=n-N+1}^{n} u[k]
>
> vs.
>
> x[n] = x[n - 1] + u[n]
> y[n] = x[n] - x[n - N]
>
> Unless I've done something tremendously stupid with my math, they all
> have the transfer function H(z) = (1 - z^-N)/(1 - z^-1) = 1 + z^-1 + ...
> + z^-(N-1).
>
> The first is "traditional", the second is the canonical FIR way of doing
> it. The third looks like it'd have all sorts of overflow problems, and
> would if you were using floats, but if you're using fixed-point overflow
> arithmetic ("typical C implementation" fixed point), then it'll be just
> fine.
>
> Looked at another way, the first and third cascade a difference that
> spans N taps and an integrator -- it's just that the third method
> switches the order of operations.
Hey Tim,

I see - that is really neat! I actually went through a simple example of
the recursion with pencil and paper to show myself it works.

It looks like the magic of this implementation comes from the following
facts:

1. The integrator portion of the filter is the only one that requires
   maintaining state. By computing it separately and storing a vector of
   the integrator outputs, the subsequent differentiator cancels out
   everything in the current state that is N states back. This gets us
   alpha-particle robustness.

2. When implemented in two's complement fixed-point, the likely overflows
   in x[n] don't matter because the final y[n] will always bring us back
   into the original range, and nothing is lost due to the wraparound
   characteristic of two's complement.

Very neat!
--
Randy Yates, DSP/Embedded Firmware Developer
Digital Signal Labs
http://www.digitalsignallabs.com
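For readers who want to try the third method, here is a minimal C sketch
of the integrate-first moving sum discussed above. The names and the
window length N are hypothetical, and the running sum is kept unsigned
because signed overflow is undefined behavior in C; unsigned wraparound
gives the same bit-level result as two's-complement overflow.

    /* Sketch of the integrate-first moving sum (hypothetical names;
     * N is an example window length). */
    #include <stdint.h>

    #define N 8                       /* window length */

    static uint16_t hist[N];          /* past integrator outputs x[n-1..n-N] */
    static uint16_t acc;              /* integrator; wraps freely mod 2^16 */
    static unsigned idx;

    int16_t moving_sum(int16_t u)
    {
        acc += (uint16_t)u;           /* x[n] = x[n-1] + u[n] */
        uint16_t oldest = hist[idx];  /* x[n-N] */
        hist[idx] = acc;
        idx = (idx + 1) % N;
        /* y[n] = x[n] - x[n-N]; any wraparound in acc cancels here,
         * so a corrupted acc pollutes at most N outputs. */
        return (int16_t)(acc - oldest);
    }

Note how only acc and the hist vector carry state, matching point 1 above:
an upset in acc is written into hist and subtracted back out N samples
later.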
On 1/1/2017 23:18, Tim Wescott wrote:
> On Sun, 01 Jan 2017 22:09:42 -0500, Randy Yates wrote:
> [snip]
>
> Given u[n] in and y[n] out:
>
> y[n] = y[n - 1] + u[n] - u[n - N]
>
> vs.
>
> y[n] = sum_{k=n-N+1}^{n} u[k]
>
> vs.
>
> x[n] = x[n - 1] + u[n]
> y[n] = x[n] - x[n - N]
>
> [snip]
>
> The first is "traditional", the second is the canonical FIR way of doing
> it. The third looks like it'd have all sorts of overflow problems, and
> would if you were using floats, but if you're using fixed-point overflow
> arithmetic ("typical C implementation" fixed point), then it'll be just
> fine.
>
> Looked at another way, the first and third cascade a difference that
> spans N taps and an integrator -- it's just that the third method
> switches the order of operations.
The only issue I see with the third method is the possibility that your
x[n] can overflow any fixed-length word. Consider a constant input.

--
Best wishes,
--Phil
pomartel At Comcast(ignore_this) dot net
On Mon, 02 Jan 2017 10:42:13 -0500, Phil Martel wrote:

> On 1/1/2017 23:18, Tim Wescott wrote:
>> [snip]
>>
>> x[n] = x[n - 1] + u[n]
>> y[n] = x[n] - x[n - N]
>>
>> [snip] The third looks like it'd have all sorts of overflow problems,
>> and would if you were using floats, but if you're using fixed-point
>> overflow arithmetic ("typical C implementation" fixed point), then
>> it'll be just fine.
>
> The only issue I see with the third method is the possibility that your
> x[n] can overflow any fixed-length word. Consider a constant input.
The _inevitability_ that x[n] will overflow any fixed-length word is not
an issue, as long as the sum of u from n-N+1 to n does not exceed certain
limits. Because, if you're doing the usual "C-style" 2's complement
arithmetic, such an overflow gets washed out when you do the
differentiation.

Consider the case where we're using plain old C integers to hold x[n], on
a machine that uses 2's complement math. For the sake of simplicity in
exposition, assume 16-bit integers. Let sum_{k=n-N+1}^{n} u[k] = 42, and
let x[n-N] = INT_MAX - 12 (0x7ff3). Then, after all the addition,
x[n] = INT_MIN + 29 (0x801d). Now, x[n] - x[n-N] = 0x801d - 0x7ff3 =
0x2a, or 42 decimal.

As long as the sum is inherently limited to being between -INT_MAX and
INT_MAX, there will only ever be one rollover during the summation phase,
and if there is, it'll be matched by exactly one rollover in the
difference phase, and the correct number will always be coughed up.

--
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
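Tim's numbers can be checked directly. A tiny sketch, done in unsigned
arithmetic since signed overflow is undefined in C (unsigned arithmetic
wraps mod 2^16, which matches the two's-complement bit pattern):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t x_old = 0x7ff3;         /* INT_MAX - 12, reinterpreted */
        uint16_t x_new = x_old + 42;     /* wraps to 0x801d */
        uint16_t diff  = x_new - x_old;  /* 0x002a == 42, recovered exactly */
        printf("x_new = 0x%04x, diff = %u\n",
               (unsigned)x_new, (unsigned)diff);
        return 0;
    }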
On Monday, January 2, 2017 at 10:08:38 AM UTC-5, Randy Yates wrote:
> Tim Wescott <tim@seemywebsite.com> writes:
> [snip]
>
> I actually went through a simple example of the recursion with pencil
> and paper to show myself it works.
>
> It looks like the magic of this implementation comes from the following
> facts:
>
> 1. The integrator portion of the filter is the only one that requires
> maintaining state.
well, the delay line also contains states. versions 1 and 3 (the ones
with the integrator) actually have one more state (and it's in feedback,
so it results in a pole at z=1) than the "FIR canonical" version, which
has N states and no feedback.
> By computing it separately and storing a vector of the integrator
> outputs, the subsequent differentiator cancels out everything in the
> current state that is N states back. This gets us alpha particle
> robustness.
>
> 2. When implemented in two's complement fixed-point, the likely
> overflows in x[n] don't matter because the final y[n] will always
> bring us back in the original range and nothing is lost due to the
> wraparound characteristic of two's complement.
remember, all of this is a moving-sum, not yet a moving-average (until
you divide by N). in both cases 1. and 3. your result better be
ceil(log2(N)) bits wider, so that division by N brings you back into the
same ballpark as the original u[n].

r b-j
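As a concrete instance of the headroom rule above: summing N samples of
b bits needs b + ceil(log2(N)) bits. A small helper, hypothetical and
written here only for illustration:

    #include <stdint.h>

    /* ceil(log2(n)) -- the extra accumulator bits needed for an
     * N-sample moving sum, per the post above. */
    static unsigned extra_bits(uint32_t n)
    {
        unsigned b = 0;
        while ((1ul << b) < n)
            b++;
        return b;
    }
    /* e.g. 16-bit samples with N = 100: extra_bits(100) == 7, so the
     * sum needs 16 + 7 = 23 bits; a 32-bit accumulator is ample. */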
robert bristow-johnson <rbj@audioimagination.com> writes:

> On Monday, January 2, 2017 at 10:08:38 AM UTC-5, Randy Yates wrote:
> [snip]
>> It looks like the magic of this implementation comes from the following
>> facts:
>>
>> 1. The integrator portion of the filter is the only one that requires
>> maintaining state.
>
> well, the delay line also contains states. versions 1 and 3 (the ones
> with the integrator) actually have one more state (and it's in
> feedback, so it results in a pole at z=1) than the "FIR canonical"
> version, which has N states and no feedback.
I think we're saying the same thing. I meant "the only OUTPUT that
requires state," going back to the concept that started this thread.
>> By computing it separately and storing a vector of the integrator
>> outputs, the subsequent differentiator cancels out everything in the
>> current state that is N states back. This gets us alpha particle
>> robustness.
>>
>> 2. When implemented in two's complement fixed-point, the likely
>> overflows in x[n] don't matter because the final y[n] will always
>> bring us back in the original range and nothing is lost due to the
>> wraparound characteristic of two's complement.
>
> remember, all of this is a moving-sum, not yet a moving-average (until
> you divide by N). in both cases 1. and 3. your result better be
> ceil(log2(N)) bits wider, so that division by N brings you back into
> the same ballpark as the original u[n].
True.

--
Randy Yates, DSP/Embedded Firmware Developer
Digital Signal Labs
http://www.digitalsignallabs.com
On 1/2/2017 9:21 AM, Randy Yates wrote:
> rickman <gnuarm@gmail.com> writes:
>
>> On 12/25/2016 11:27 PM, Randy Yates wrote:
>>> A colleague was explaining that when he tried using floats in a
>>> recursive moving-average filter, there was some residual error which
>>> wasn't present in the standard (non-recursive) moving-average filter.
>>>
>>> Would it be correct and reasonable to frame the problem like this:
>>>
>>> In the recursive moving-average implementation, the output y[n - 1]
>>> needs to store perfect knowledge of the past N states, so that when
>>> you subtract the input N - 1 samples back from y[n - 1], you
>>> perfectly maintain the sum of inputs x[n - N + 2] to x[n - 1].
>>> However, it does not.
>>>
>>> In general, every addition/subtraction to y[n - m], m < N, produces a
>>> quantization error; this quantization error is propagated into the
>>> new output y[n].
>>>
>>> So it is the fact that the output "state" of the filter (i.e., the
>>> output y[n - 1]) is not perfect, i.e., the state of the filter is
>>> tainted, that creates this residual error.
>>>
>>> This also illustrates why a fixed-point implementation (using the
>>> appropriate intermediate accumulator width) of the same filter
>>> works: the output state is known perfectly.
>>>
>>> Correct? Good way to view it? (Trivial???)
>>
>> Am I correct in thinking the recursive version of the filter is one
>> where the current value of the output is calculated from the previous
>> value of the output, the current value of the input and an older value
>> of the input? The idea being that the recursive calculation is fewer
>> steps than the "standard" approach where each output is calculated
>> from a series of values of the input.
>
> Hi rickman,
>
> Yes.
>
> y[n] = y[n - 1] - x[n - N] + x[n]
With coefficients of course, and the non-recursive filter is (again with
coefficients)

y[n] = x[n] + x[n-1] + ... + x[n - N]

Isn't this just the same as a FIR filter vs. an IIR filter? I don't get
why the math issues of this are hard to understand. IIR filters have
significant issues with quantization errors. FIR filters, in contrast,
can be easily designed to never even have quantization errors if you have
the flexibility to control the word size.

If your coefficients are all 1, then it is a boxcar filter, and the IIR
version can be designed with no quantization errors if the mantissa is
large enough.

--
Rick C
rickman <gnuarm@gmail.com> writes:

> On 1/2/2017 9:21 AM, Randy Yates wrote:
> [snip]
>>
>> y[n] = y[n - 1] - x[n - N] + x[n]
>
> With coefficients of course, and the non-recursive filter is (again
> with coefficients)
>
> y[n] = x[n] + x[n-1] + ... + x[n - N]
>
> Isn't this just the same as a FIR filter vs. an IIR filter?
Yes it is, and there is absolutely no difference if the computations are performed analytically, i.e., under the real or complex number systems. It's only when you start performing the arithmetic with various types of limited-precision number systems that bad things can start to happen and/or you have to be careful about your implementation.
> I don't get why the math issues of this are hard to understand. IIR
> filters have significant issues with quantization errors.
Your statement is pretty general. I was trying to be more specific in determining where the issues come from, and once you start looking at specifics and the various potential issues, it is no longer an "easy" problem, at least not for me.
> FIR filters, in contrast, can be easily designed to never even have
> quantization errors if you have the flexibility to control the word
> size.
"Easily designed" is pretty subjective. What's easy for you may be difficult for someone else. It's like saying a piece of string is "short" or "long" - it's all relative. At my company a decision was made (erroneously, in my opinion) prior to my arrival to use single-precision floats for most all DSP. That decision has had several impacts. For example, even if you use the non-recursive implementation, Tim (and others) have shown that floating point can requantize on EVERY addition or subtraction. When you use the recursive implementation, you get even worse problems.
> If your coefficients are all 1, then it is a boxcar filter, and the IIR
> version can be designed with no quantization errors if the mantissa is
> large enough.
"Can be designed", yes, but that single phrase hides a multitude of politics and design issues that, when unhidden, are not so easy to deal with, at least not for me. -- Randy Yates, DSP/Embedded Firmware Developer Digital Signal Labs http://www.digitalsignallabs.com
On 1/2/2017 2:40 PM, Randy Yates wrote:
> rickman <gnuarm@gmail.com> writes:
> [snip]
>
> At my company a decision was made (erroneously, in my opinion) prior to
> my arrival to use single-precision floats for almost all DSP. That
> decision has had several impacts.
>
> For example, even if you use the non-recursive implementation, Tim (and
> others) have shown that floating point can requantize on EVERY addition
> or subtraction. When you use the recursive implementation, you get even
> worse problems.
Why would someone make a decision like that? Sounds like someone has
their head up their ass.

Floating point can have issues in domains where integers won't, but that
really is because the integer uses *all* the bits for the significand. If
you are doing things like fractional multiplies, both integer and
floating point start to have quantization errors. But as you said, the
IIR-type filter will recirculate that error multifold. Combine that with
floating point and you have the worst of all worlds if you need
precision.

However, often applications are not sensitive to this sort of noise in
either type of filter, depending on the requirements vs. word size.
>> If your coefficients are all 1, then it is a boxcar filter, and the
>> IIR version can be designed with no quantization errors if the
>> mantissa is large enough.
>
> "Can be designed", yes, but that single phrase hides a multitude of
> politics and design issues that, when unhidden, are not so easy to deal
> with, at least not for me.
I don't know much about DSP politics. A boxcar filter has no
coefficients, so the accumulator can have no quantization error. Then
there is nothing to propagate in an IIR filter.

--
Rick C
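One way to see rickman's point: as long as the inputs are integer-valued
and the running sum stays below 2^24, every intermediate single-precision
value is exactly representable, so even the recursive form accumulates no
error in that regime. A trivial check:

    #include <stdio.h>

    int main(void)
    {
        /* 10,000 adds of an integer value; every partial sum stays
         * below 2^24 = 16777216, so each is exact in float's 24-bit
         * significand. */
        float sum = 0.0f;
        for (int n = 0; n < 10000; n++)
            sum += 1000.0f;
        printf("%.1f\n", sum);   /* prints 10000000.0 exactly */
        return 0;
    }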
Tim

Could you explain in detail how you would propose to interpolate by, say, 8, while only using 2 registers per order?

Bob
On Tue, 03 Jan 2017 13:57:12 -0800, radams2000 wrote:

> Tim
>
> Could you explain in detail how you would propose to interpolate by,
> say, 8, while only using 2 registers per order?
>
> Bob
I'm not sure what you're asking (where did interpolation come into
this?). But if you wanted to do a moving-average filter over 8 samples
using my "integrate first" method, you'd need to have at least 8 words of
"integrator" storage, and 9 would be easier.

--
Tim Wescott
Control systems, embedded software and circuit design
I'm looking for work! See my website if you're interested
http://www.wescottdesign.com