Reply by January 4, 2017
The Wikipedia entry is pretty much the standard way I've always done it, based on the original Hogenauer paper. It's also interesting to read about the Noble identities.

https://en.m.wikipedia.org/wiki/Cascaded_integrator%E2%80%93comb_filter
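
For reference, a minimal Python sketch of the Hogenauer arrangement the article describes: integrator stages at the input rate, decimation by R, then comb stages at the output rate. The function and variable names are illustrative, not from the paper, and Python's unbounded integers stand in for the fixed-width two's-complement registers a hardware design would typically use.

def cic_decimate(x, R, order):
    # Integrator cascade running at the input rate.
    integ = [0] * order
    integrated = []
    for sample in x:
        acc = sample
        for i in range(order):
            integ[i] += acc
            acc = integ[i]
        integrated.append(acc)

    # Keep every R-th sample, then run the comb cascade
    # (1 - z^-1 per stage) at the decimated rate.
    comb_state = [0] * order
    y = []
    for v in integrated[R - 1::R]:
        for i in range(order):
            v, comb_state[i] = v - comb_state[i], v
        y.append(v)
    return y

A DC input of 1 comes out as R**order in steady state, the expected CIC gain for unit differential delay.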

Bob
Reply by Tim Wescott January 4, 2017
On Tue, 03 Jan 2017 19:59:38 -0800, radams2000 wrote:

> Tim
>
> Thanks, I was trying to understand why my experience with CIC filters, when used for interpolation, was that they will crash with a single math error, whereas your method did not. Now I see that the "integrate first" approach will not crash in that situation, but requires extra registers compared to the "differentiate first" structure, where you can use the Noble Identities to shift all the (1 - Z^-n) sections to the lower rate where they become (1 - Z^-1). This is the normal way it is done in VLSI design to save registers and power but it does have this susceptibility to crashing.
> If you are interpolating with an Nth-order filter using your method, the first integrator can be replaced by a zero-order hold, due to the initial zero-stuffing, so you can save one integrator.
>
> Bob
I always assumed that the way that a CIC filter did it was to save the integrator state at every N'th sample when decimating by N, and then (for integrator state = x) subtract x[n] - x[n-N] at the decimated rate.

This is not done? It seems like it would still be susceptible to a math error, but only for one decimated sample, not for all time afterward.

I'm not a VLSI guy -- the closest I've gotten is threatening to implement algorithms in Verilog, to scare the real logic guys into doing it themselves. (Because, nothing's more scary than an ace software guy claiming competence at writing Verilog...)

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
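
As a quick illustration of the structure described above (a free-running integrator whose value is differenced at the decimated rate), here is a toy first-order example with a deliberately injected math error; only the one decimated output whose window straddles the glitch comes out wrong. The names and the glitch are made up for the example.

N = 8                      # decimation factor
x = [1] * 64               # DC input, so every correct output should be 8

def decimate_boxcar(x, N, glitch_at=None):
    acc, prev_acc, y = 0, 0, []
    for n, sample in enumerate(x):
        acc += sample                      # free-running integrator
        if glitch_at is not None and n == glitch_at:
            acc += 1000                    # simulated one-off math error
        if (n + 1) % N == 0:
            y.append(acc - prev_acc)       # comb at the decimated rate
            prev_acc = acc                 # integrator state saved every N samples
    return y

print(decimate_boxcar(x, N))                 # [8, 8, 8, 8, 8, 8, 8, 8]
print(decimate_boxcar(x, N, glitch_at=20))   # only the third output is wrong

Once both saved integrator values lie after the glitch, the error cancels in the subtraction, so it does not persist the way an error in the "differentiate first" interpolator does.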
Reply by January 4, 2017
Depends. If you are designing for a battery-powered consumer device or a wearable medical device, every mW counts, but in other applications it may not matter.

Bob
Reply by January 4, 2017
On Tue, 3 Jan 2017 19:59:38 -0800 (PST), radams2000@gmail.com wrote:

>Tim
>
>Thanks, I was trying to understand why my experience with CIC filters, when used for interpolation, was that they will crash with a single math error, whereas your method did not. Now I see that the "integrate first" approach will not crash in that situation, but requires extra registers compared to the "differentiate first" structure, where you can use the Noble Identities to shift all the (1 - Z^-n) sections to the lower rate where they become (1 - Z^-1). This is the normal way it is done in VLSI design to save registers and power but it does have this susceptibility to crashing.
>If you are interpolating with an Nth-order filter using your method, the first integrator can be replaced by a zero-order hold, due to the initial zero-stuffing, so you can save one integrator.
>
>Bob
Yup. But I'm wondering how critical it is to be stingy about registers in silicon these days, since the geometries are so small that the real estate is less expensive?
Reply by January 3, 2017
Tim

Thanks, I was trying to understand why my experience with CIC filters, when used for interpolation, was that they will crash with a single math error, whereas your method did not. Now I see that the "integrate first" approach will not crash in that situation, but requires extra registers compared to the "differentiate first" structure, where you can use the Noble Identities to shift all the (1 - Z^-n) sections to the lower rate where they become (1 - Z^-1). This is the normal way it is done in VLSI design to save registers and power but it does have this susceptibility to crashing.
If you are interpolating with an Nth-order filter using your method, the first integrator can be replaced by a zero-order hold, due to the initial zero-stuffing, so you can save one integrator.
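
For illustration, a minimal Python sketch of the "differentiate first" interpolator structure described above: (1 - Z^-1) combs at the low rate, zero-stuffing by R, then the integrator cascade at the high rate. The names are made up for the example, and Python's unbounded integers stand in for hardware registers. Note that a single error in one of the integ[] registers would persist in every subsequent output, which is the crash mode being discussed.

def cic_interpolate(x, R, order):
    # Comb section (1 - z^-1 per stage) at the input (low) rate.
    comb_state = [0] * order
    diffed = []
    for v in x:
        for i in range(order):
            v, comb_state[i] = v - comb_state[i], v
        diffed.append(v)

    # Zero-stuff by R, then run the integrator cascade at the output rate.
    integ = [0] * order
    y = []
    for v in diffed:
        for k in range(R):
            acc = v if k == 0 else 0       # zero-stuffed input sample
            for i in range(order):
                integ[i] += acc
                acc = integ[i]
            y.append(acc)
    return y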

Bob
Reply by Tim Wescott January 3, 2017
On Tue, 03 Jan 2017 13:57:12 -0800, radams2000 wrote:

> Tim
>
> Could you explain in detail how you would propose to interpolate by, say, 8, while only using 2 registers per order?
>
> Bob
I'm not sure what you're asking (where did interpolation come into this?) But if you wanted to do a moving-average filter over 8 samples using my "integrate first" method you'd need to have at least 8 words of "integrator" storage, and 9 would be easier.

--
Tim Wescott
Control systems, embedded software and circuit design

I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
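
For what it's worth, a small sketch of that "integrate first" moving average as described: a running sum plus the last N integrator values, so roughly N + 1 stored words for N = 8. The names are illustrative only.

from collections import deque

def integrate_first_boxcar(x, N=8):
    acc = 0                               # running integrator
    history = deque([0] * N, maxlen=N)    # last N integrator values
    y = []
    for sample in x:
        acc += sample
        y.append(acc - history[0])        # sum of the most recent N inputs
        history.append(acc)
    return y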
Reply by January 3, 2017
Tim

Could you explain in detail how you would propose to interpolate by, say, 8, while only using 2 registers per order?

Bob
Reply by rickman January 2, 2017
On 1/2/2017 2:40 PM, Randy Yates wrote:
> rickman <gnuarm@gmail.com> writes:
>
>> On 1/2/2017 9:21 AM, Randy Yates wrote:
>>> rickman <gnuarm@gmail.com> writes:
>>>
>>>> On 12/25/2016 11:27 PM, Randy Yates wrote:
>>>>> A colleague was explaining that when he tried using floats in a recursive moving-average filter, there was some residual error which wasn't present in the standard (non-recursive) moving-average filter.
>>>>>
>>>>> Would it be correct and reasonable to frame the problem like this:
>>>>>
>>>>> In the recursive moving-average implementation, the output y[n - 1] needs to store perfect knowledge of the past N states, so that when you subtract the input N - 1 samples back from y[n - 1], you perfectly maintain the sum of inputs x[n - N + 2] to x[n - 1]. However, it does not.
>>>>>
>>>>> In general, every addition/subtraction to y[n - m], m < N, produces a quantization error; this quantization error is propagated into the new output y[n].
>>>>>
>>>>> So it is the fact that the output "state" of the filter (i.e., the output y[n - 1]) is not perfect, i.e., the state of the filter is tainted, that creates this residual error.
>>>>>
>>>>> This also illustrates why a fixed-point implementation (using the appropriate intermediate accumulator width) of the same filter works: the output state is known perfectly.
>>>>>
>>>>> Correct? Good way to view it? (Trivial???)
>>>>
>>>> Am I correct in thinking the recursive version of the filter is one where the current value of the output is calculated from the previous value of the output, the current value of the input and an older value of the input? The idea being that the recursive calculation is fewer steps than the "standard" approach where each output is calculated from a series of values of the input.
>>>
>>> Hi rickman,
>>>
>>> Yes.
>>>
>>> y[n] = y[n - 1] - x[n - N] + x[n]
>>
>> With coefficients of course, and the non-recursive filter is (again with coefficients)
>>
>> y[n] = x[n] + x[n-1] + ... + x[n - N]
>>
>> Isn't this just the same as a FIR filter vs. an IIR filter?
>
> Yes it is, and there is absolutely no difference if the computations are performed analytically, i.e., under the real or complex number systems. It's only when you start performing the arithmetic with various types of limited-precision number systems that bad things can start to happen and/or you have to be careful about your implementation.
>
>> I don't get why the math issues of this are hard to understand. IIR filters have significant issues with quantization errors.
>
> Your statement is pretty general. I was trying to be more specific in determining where the issues come from, and once you start looking at specifics and the various potential issues, it is no longer an "easy" problem, at least not for me.
>
>> FIR filters, in contrast, can be easily designed to never even have quantization errors if you have the flexibility to control the word size.
>
> "Easily designed" is pretty subjective. What's easy for you may be difficult for someone else. It's like saying a piece of string is "short" or "long" - it's all relative.
>
> At my company a decision was made (erroneously, in my opinion) prior to my arrival to use single-precision floats for most all DSP. That decision has had several impacts.
>
> For example, even if you use the non-recursive implementation, Tim (and others) have shown that floating point can requantize on EVERY addition or subtraction. When you use the recursive implementation, you get even worse problems.
Why would someone make a decision like that? Sounds like someone has their head up their ass. Floating point can have issues in domains where integers won't, but that is really because the integer uses *all* the bits for the significand. If you are doing things like fractional multiplies, both integer and floating point start to have quantization errors. But as you said, the IIR-type filter will recirculate that error many times over. Combine that with floating point and you have the worst of all worlds if you need precision. However, applications are often not sensitive to this sort of noise in either type of filter, depending on the requirements vs. the word size.
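
As a quick numerical illustration of that point (not from the thread, just a sketch): in single precision the recursive moving sum y[n] = y[n-1] + x[n] - x[n-N] drifts away from a directly recomputed sum, because each update can round.

import numpy as np

rng = np.random.default_rng(0)
N = 16
x = rng.standard_normal(100_000).astype(np.float32)

# Direct (non-recursive) moving sum, recomputed from scratch for each output.
direct = np.array([x[n - N + 1:n + 1].sum(dtype=np.float32)
                   for n in range(N - 1, len(x))])

# Recursive moving sum, carried along in float32.
acc = x[:N].sum(dtype=np.float32)
recursive = [acc]
for n in range(N, len(x)):
    acc = np.float32(acc + x[n] - x[n - N])   # one possible rounding per update
    recursive.append(acc)
recursive = np.array(recursive, dtype=np.float32)

print("max |direct - recursive| =", np.abs(direct - recursive).max())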
>> If you coefficients are all 1, then it is a boxcar filter and the IIR version can be designed with no quantization errors if the mantissa is large enough.
>
> "Can be designed", yes, but that single phrase hides a multitude of politics and design issues that, when unhidden, are not so easy to deal with, at least not for me.
I don't know much about DSP politics. A boxcar filter has no coefficients so the accumulator can have no quantization error. Then there is nothing to propagate in an IIR filter.

--
Rick C
Reply by Randy Yates January 2, 2017
rickman <gnuarm@gmail.com> writes:

> On 1/2/2017 9:21 AM, Randy Yates wrote:
>> rickman <gnuarm@gmail.com> writes:
>>
>>> On 12/25/2016 11:27 PM, Randy Yates wrote:
>>>> A colleague was explaining that when he tried using floats in a recursive moving-average filter, there was some residual error which wasn't present in the standard (non-recursive) moving-average filter.
>>>>
>>>> Would it be correct and reasonable to frame the problem like this:
>>>>
>>>> In the recursive moving-average implementation, the output y[n - 1] needs to store perfect knowledge of the past N states, so that when you subtract the input N - 1 samples back from y[n - 1], you perfectly maintain the sum of inputs x[n - N + 2] to x[n - 1]. However, it does not.
>>>>
>>>> In general, every addition/subtraction to y[n - m], m < N, produces a quantization error; this quantization error is propagated into the new output y[n].
>>>>
>>>> So it is the fact that the output "state" of the filter (i.e., the output y[n - 1]) is not perfect, i.e., the state of the filter is tainted, that creates this residual error.
>>>>
>>>> This also illustrates why a fixed-point implementation (using the appropriate intermediate accumulator width) of the same filter works: the output state is known perfectly.
>>>>
>>>> Correct? Good way to view it? (Trivial???)
>>>
>>> Am I correct in thinking the recursive version of the filter is one where the current value of the output is calculated from the previous value of the output, the current value of the input and an older value of the input? The idea being that the recursive calculation is fewer steps than the "standard" approach where each output is calculated from a series of values of the input.
>>
>> Hi rickman,
>>
>> Yes.
>>
>> y[n] = y[n - 1] - x[n - N] + x[n]
>
> With coefficients of course, and the non-recursive filter is (again with coefficients)
>
> y[n] = x[n] + x[n-1] + ... + x[n - N]
>
> Isn't this just the same as a FIR filter vs. an IIR filter?
Yes it is, and there is absolutely no difference if the computations are performed analytically, i.e., under the real or complex number systems. It's only when you start performing the arithmetic with various types of limited-precision number systems that bad things can start to happen and/or you have to be careful about your implementation.
> I don't get why the math issues of this are hard to understand. IIR filters have significant issues with quantization errors.
Your statement is pretty general. I was trying to be more specific in determining where the issues come from, and once you start looking at specifics and the various potential issues, it is no longer an "easy" problem, at least not for me.
> FIR filters, in contrast, can be easily designed to never even have quantization errors if you have the flexibility to control the word size.
"Easily designed" is pretty subjective. What's easy for you may be difficult for someone else. It's like saying a piece of string is "short" or "long" - it's all relative. At my company a decision was made (erroneously, in my opinion) prior to my arrival to use single-precision floats for most all DSP. That decision has had several impacts. For example, even if you use the non-recursive implementation, Tim (and others) have shown that floating point can requantize on EVERY addition or subtraction. When you use the recursive implementation, you get even worse problems.
> If you coefficients are all 1, then it is a boxcar filter and the IIR version can be designed with no quantization errors if the mantissa is large enough.
"Can be designed", yes, but that single phrase hides a multitude of politics and design issues that, when unhidden, are not so easy to deal with, at least not for me. -- Randy Yates, DSP/Embedded Firmware Developer Digital Signal Labs http://www.digitalsignallabs.com
Reply by rickman January 2, 2017
On 1/2/2017 9:21 AM, Randy Yates wrote:
> rickman <gnuarm@gmail.com> writes:
>
>> On 12/25/2016 11:27 PM, Randy Yates wrote:
>>> A colleague was explaining that when he tried using floats in a recursive moving-average filter, there was some residual error which wasn't present in the standard (non-recursive) moving-average filter.
>>>
>>> Would it be correct and reasonable to frame the problem like this:
>>>
>>> In the recursive moving-average implementation, the output y[n - 1] needs to store perfect knowledge of the past N states, so that when you subtract the input N - 1 samples back from y[n - 1], you perfectly maintain the sum of inputs x[n - N + 2] to x[n - 1]. However, it does not.
>>>
>>> In general, every addition/subtraction to y[n - m], m < N, produces a quantization error; this quantization error is propagated into the new output y[n].
>>>
>>> So it is the fact that the output "state" of the filter (i.e., the output y[n - 1]) is not perfect, i.e., the state of the filter is tainted, that creates this residual error.
>>>
>>> This also illustrates why a fixed-point implementation (using the appropriate intermediate accumulator width) of the same filter works: the output state is known perfectly.
>>>
>>> Correct? Good way to view it? (Trivial???)
>>
>> Am I correct in thinking the recursive version of the filter is one where the current value of the output is calculated from the previous value of the output, the current value of the input and an older value of the input? The idea being that the recursive calculation is fewer steps than the "standard" approach where each output is calculated from a series of values of the input.
>
> Hi rickman,
>
> Yes.
>
> y[n] = y[n - 1] - x[n - N] + x[n]
With coefficients of course, and the non-recursive filter is (again with coefficients)

y[n] = x[n] + x[n-1] + ... + x[n - N]

Isn't this just the same as a FIR filter vs. an IIR filter?

I don't get why the math issues of this are hard to understand. IIR filters have significant issues with quantization errors. FIR filters, in contrast, can be easily designed to never even have quantization errors if you have the flexibility to control the word size. If your coefficients are all 1, then it is a boxcar filter and the IIR version can be designed with no quantization errors if the mantissa is large enough.

--
Rick C
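
For completeness, a small check (illustrative only) that the recursive form and the direct sum agree exactly when the arithmetic is exact; Python integers stand in for a sufficiently wide fixed-point accumulator, and both forms use a window of N samples to match the recursion above.

N = 8
x = [(-1) ** n * (3 * n + 1) for n in range(50)]   # arbitrary integer test data
x_padded = [0] * N + x                             # zero history before time 0

recursive, y = [], 0
for n in range(N, len(x_padded)):
    y = y - x_padded[n - N] + x_padded[n]          # y[n] = y[n-1] - x[n-N] + x[n]
    recursive.append(y)

direct = [sum(x_padded[n - N + 1:n + 1]) for n in range(N, len(x_padded))]

assert recursive == direct   # exact arithmetic: no residual difference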