Reply by September 27, 2016
On Tuesday, September 27, 2016 at 1:37:54 PM UTC-7, eric.j...@ieee.org wrote:

> >> (snip on factoring polynomials)
(then I wrote)
> >> Seems to me that some polynomials you can factor exactly,
> >> and some even into two equal factors. (That is, they have an exact
> >> square root.)
>
> > ALL polynomials can be factored. "Exactly" is another story.
(snip)
> More or less, and the precision tolerance is more or less the issue.
Reminds me of some years ago when I was working with Mathematica. (That
was back when the version was 1.0.) My favorite test was:

  Factor[(x+1)^100+1]

sometimes with slightly different constants, but close. Factor has an
algorithm for factoring polynomials over the integers, even at fairly
high order. (That was on a 68020-based Sun 3.) The input is small and
easy to write, but the result is fairly big.
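Base Octave has no exact integer factoring built in, but the numeric
flip side of the same test hints at why the exact algorithm is the
interesting part. A rough sketch (my construction, not glen's):

  % Build (x+1)^100 + 1 numerically and attack it with roots().
  % Binomial coefficients reach ~1e29, so double precision is strained.
  p = poly(-ones(1, 100));        % coefficients of (x+1)^100
  p(end) = p(end) + 1;            % ... + 1
  r = roots(p);                   % numeric roots of a degree-100 polynomial
  max(abs(polyval(p, r)))         % absolute residuals are enormous here

Exact factoring over the integers sidesteps all of that roundoff, at the
cost of a much bigger symbolic result.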
> If the implementation is constrained to integers, which it may be in
> many practical cases, then many polynomials are not reducible over the
> integer field, but that essentially just looks like a precision issue
> over the real field.
(snip on approximations and FIR filtering)

-- glen
Reply by September 27, 2016
On Tue, 27 Sep 2016 16:09:06 -0500, Tim Wescott <tim@seemywebsite.com>
wrote:

>On Tue, 27 Sep 2016 13:55:46 -0700, herrmannsfeldt wrote:
>
>> On Tuesday, September 27, 2016 at 12:45:02 PM UTC-7, Tim Wescott wrote:
>>
>> (snip on factoring FIR polynomials)
>>
>>> Here's a longer pair of filters:
>>
>>> H1:
>>> 1. - 0.5014714 0.3787014 - 0.3190415 0.2833841
>>> - 0.2600124 0.2440775 - 0.2332013 0.2261056 - 0.2220854
>>> 0.2207821 - 0.2220854 0.2261056 - 0.2332013 0.2440775
>>> - 0.2600124 0.2833841 - 0.3190415 0.3787014 - 0.5014714 1.
>>
>>> H2:
>>> 1. 1.5014714 1.3742436 1.439576 1.3971258
>>> 1.4284091 1.4033822 1.4246499 1.4056821 1.4233128
>>> 1.4062998 1.4233128 1.4056821 1.4246499 1.4033822
>>> 1.4284091 1.3971258 1.439576 1.3742436 1.5014714 1.
>>>
>>> H1 * H2 = 1 1 1 1 ...
>>
>> Somehow reminds me of the composite Simpson's rule.
>>
>> There is a good explanation in Numerical Recipes on how and why
>> Simpson's rule works.
>>
>> Integrating over an even number of equally spaced intervals, you have
>>
>> (b-a)/(3*n) * ( f(0) + 4*f(h) + 2*f(2*h) + 4*f(3*h) + ...
>>                 + 2*f((n-2)*h) + 4*f((n-1)*h) + f(n*h) )
>>
>> That is, alternating coefficients of 2 and 4, with 1 for the first and
>> last.
>>
>> But why the oscillation between 4 and 2 for the coefficients, especially
>> in the case where n is getting large?
>>
>> The explanation in Numerical Recipes is that, as n gets large, you can
>> instead do the first and last intervals with the trapezoidal rule, and
>> Simpson's rule for the rest, and average this with the above composite
>> rule. The result, then, is a constant coefficient for most terms, except
>> for the first and last few, which have an end effect.
>>
>> The result, in the large-n limit, has the same error term.
>>
>>> But -- H1 oscillates around 0, which implies (to me, at least) that
>>> there will be precision issues with the FIR. Also, it's a pretty
>>> absurd way to implement a running average, at least on most devices.
>>
>> In fixed point, you just have to be sure that you don't overflow in any
>> intermediate calculation. Not so obvious in floating point.
>
>Floating point is insidious. You don't have to worry so much about
>outright overflow, but you do have to worry about some intermediate value
>getting huge, and subsequent calculations losing precision. It's much
>harder to pin down the effects of quantization when you're trying to aim
>at a moving target.
It's the price you pay for enormous dynamic range.
Reply by Tim Wescott September 27, 2016
On Tue, 27 Sep 2016 13:55:46 -0700, herrmannsfeldt wrote:

> On Tuesday, September 27, 2016 at 12:45:02 PM UTC-7, Tim Wescott wrote:
>
> (snip on factoring FIR polynomials)
>
>> Here's a longer pair of filters:
>
>> H1:
>> 1. - 0.5014714 0.3787014 - 0.3190415 0.2833841
>> - 0.2600124 0.2440775 - 0.2332013 0.2261056 - 0.2220854
>> 0.2207821 - 0.2220854 0.2261056 - 0.2332013 0.2440775
>> - 0.2600124 0.2833841 - 0.3190415 0.3787014 - 0.5014714 1.
>
>> H2:
>> 1. 1.5014714 1.3742436 1.439576 1.3971258
>> 1.4284091 1.4033822 1.4246499 1.4056821 1.4233128
>> 1.4062998 1.4233128 1.4056821 1.4246499 1.4033822
>> 1.4284091 1.3971258 1.439576 1.3742436 1.5014714 1.
>>
>> H1 * H2 = 1 1 1 1 ...
>
> Somehow reminds me of the composite Simpson's rule.
>
> There is a good explanation in Numerical Recipes on how and why
> Simpson's rule works.
>
> Integrating over an even number of equally spaced intervals, you have
>
> (b-a)/(3*n) * ( f(0) + 4*f(h) + 2*f(2*h) + 4*f(3*h) + ...
>                 + 2*f((n-2)*h) + 4*f((n-1)*h) + f(n*h) )
>
> That is, alternating coefficients of 2 and 4, with 1 for the first and
> last.
>
> But why the oscillation between 4 and 2 for the coefficients, especially
> in the case where n is getting large?
>
> The explanation in Numerical Recipes is that, as n gets large, you can
> instead do the first and last intervals with the trapezoidal rule, and
> Simpson's rule for the rest, and average this with the above composite
> rule. The result, then, is a constant coefficient for most terms, except
> for the first and last few, which have an end effect.
>
> The result, in the large-n limit, has the same error term.
>
>> But -- H1 oscillates around 0, which implies (to me, at least) that
>> there will be precision issues with the FIR. Also, it's a pretty
>> absurd way to implement a running average, at least on most devices.
>
> In fixed point, you just have to be sure that you don't overflow in any
> intermediate calculation. Not so obvious in floating point.
Floating point is insidious. You don't have to worry so much about
outright overflow, but you do have to worry about some intermediate value
getting huge, and subsequent calculations losing precision. It's much
harder to pin down the effects of quantization when you're trying to aim
at a moving target.

--
Tim Wescott
Control systems, embedded software and circuit design

I'm looking for work! See my website if you're interested
http://www.wescottdesign.com
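A two-line Octave illustration of that insidiousness: nothing overflows,
yet the answer depends on whether a large intermediate value appears
(numbers chosen purely to demo double precision):

  a = 1e16;  b = 1;
  (a + b) - a    % 0: b vanished into the large intermediate value
  (a - a) + b    % 1: same operands, reordered to avoid the big intermediate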
Reply by September 27, 2016
On Tuesday, September 27, 2016 at 12:45:02 PM UTC-7, Tim Wescott wrote:

(snip on factoring FIR polynomials)

> Here's a longer pair of filters:
> H1:
> 1. - 0.5014714 0.3787014 - 0.3190415 0.2833841
> - 0.2600124 0.2440775 - 0.2332013 0.2261056 - 0.2220854
> 0.2207821 - 0.2220854 0.2261056 - 0.2332013 0.2440775
> - 0.2600124 0.2833841 - 0.3190415 0.3787014 - 0.5014714 1.
> H2:
> 1. 1.5014714 1.3742436 1.439576 1.3971258
> 1.4284091 1.4033822 1.4246499 1.4056821 1.4233128
> 1.4062998 1.4233128 1.4056821 1.4246499 1.4033822
> 1.4284091 1.3971258 1.439576 1.3742436 1.5014714 1.
>
> H1 * H2 = 1 1 1 1 ...
Somehow reminds me of the composite Simpson's rule.

There is a good explanation in Numerical Recipes on how and why
Simpson's rule works.

Integrating over an even number n of equally spaced intervals of width h,
you have

  (b-a)/(3*n) * ( f(0) + 4*f(h) + 2*f(2*h) + 4*f(3*h) + ...
                  + 2*f((n-2)*h) + 4*f((n-1)*h) + f(n*h) )

That is, alternating coefficients of 2 and 4, with 1 for the first and
last.

But why the oscillation between 4 and 2 for the coefficients, especially
in the case where n is getting large?

The explanation in Numerical Recipes is that, as n gets large, you can
instead do the first and last intervals with the trapezoidal rule, and
Simpson's rule for the rest, and average this with the above composite
rule. The result, then, is a constant coefficient for most terms, except
for the first and last few, which have an end effect.

The result, in the large-n limit, has the same error term.
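That formula drops straight into Octave; a minimal sketch (the function
name and interface are mine, and n must be even):

  % Composite Simpson's rule: weights 1, 4, 2, 4, ..., 2, 4, 1.
  % Assumes f is vectorized and n is even.
  function I = simpson(f, a, b, n)
    h = (b - a) / n;
    x = a + (0:n) * h;
    w = 2 * ones(1, n + 1);   % default interior weight
    w(2:2:n) = 4;             % f(h), f(3*h), ..., f((n-1)*h) get weight 4
    w([1 end]) = 1;           % endpoints get weight 1
    I = (b - a) / (3 * n) * sum(w .* f(x));
  end
  % e.g. simpson(@sin, 0, pi, 100) returns 2, to within about 1e-8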
> But -- H1 oscillates around 0, which implies (to me, at least) that there
> will be precision issues with the FIR. Also, it's a pretty absurd way to
> implement a running average, at least on most devices.
In fixed point, you just have to be sure that you don't overflow in any intermediate calculation. Not so obvious in floating point.
Reply by September 27, 2016
On Tue, 27 Sep 2016 14:35:55 -0500, Tim Wescott
<seemywebsite@myfooter.really> wrote:

>On Tue, 27 Sep 2016 10:07:12 -0700, herrmannsfeldt wrote:
>
>> On Tuesday, September 27, 2016 at 8:22:44 AM UTC-7, Tim Wescott wrote:
>>> On Tue, 27 Sep 2016 07:20:35 -0400, Randy Yates wrote:
>>> > Tim Wescott <seemywebsite@myfooter.really> writes:
>>
>> (snip on factoring polynomials)
>>
>>> >>> roots([1 1 1 1 1])
>>> > ans =
>>
>>> >   0.30902 + 0.95106i   0.30902 - 0.95106i
>>> >  -0.80902 + 0.58779i  -0.80902 - 0.58779i
>>
>>> > Other than the fact that you have to be careful to group the results
>>> > into conjugate root pairs, what's the problem?
>>
>>> Now put those back into FIR form and look at the magnitude of the taps.
>>> Think about numerical stability. That sort of thing.
>>
>> Seems to me that some polynomials you can factor exactly,
>> and some even into two equal factors. (That is, they have an exact
>> square root.)
>
>ALL polynomials can be factored. "Exactly" is another story.
More or less, and the precision tolerance is more or less the issue. If
the implementation is constrained to integers, which it may be in many
practical cases, then many polynomials are not reducible over the
integer field, but that essentially just looks like a precision issue
over the real field.
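A quick Octave contrast (my examples, not eric's): the 6-tap moving
average happens to factor exactly over the integers, while the 5-tap one
splits only over the reals, where the irrational coefficients reduce the
"failure" to roundoff-level error:

  conv([1 1], [1 0 1 0 1])       % = [1 1 1 1 1 1]: exact integer factors
  h  = [1 1 1 1 1];              % no integer factorization exists here
  r  = cplxpair(roots(h));       % group the conjugate pairs together
  h1 = real(poly(r(1:2)));       % quadratic with irrational coefficients
  h2 = real(poly(r(3:4)));
  max(abs(conv(h1, h2) - h))     % ~1e-15: a precision issue over the reals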
>> But also, in most cases an FIR is not the exact solution to the desired
>> filter, when done with finite precision.
Yup.
>> Even more, there could be two polynomials that when multiplied come out
>> nice and close to the desired filter.
Yup. So it's really an issue of the system's tolerance to the imperfections.
>>> It may get better as the filter gets longer -- I don't know.
>>
>> And I suspect that it gets easier, or closer, as the filter gets longer.
>>
>> But the OP hasn't said why a 2000-term filter is needed.
Reply by Tim Wescott September 27, 2016
On Tue, 27 Sep 2016 12:28:17 -0400, Randy Yates wrote:

> Tim Wescott <tim@seemywebsite.com> writes:
>
>> On Tue, 27 Sep 2016 07:20:35 -0400, Randy Yates wrote:
>>
>>> Tim Wescott <seemywebsite@myfooter.really> writes:
>>>
>>>> On Tue, 27 Sep 2016 01:32:27 -0400, Randy Yates wrote:
>>>>
>>>>> Tim Wescott <seemywebsite@myfooter.really> writes:
>>>>>
>>>>>> On Sun, 25 Sep 2016 09:47:29 -0500, Vasilius wrote:
>>>>>>
>>>>>>> Hello
>>>>>>>
>>>>>>> I have two DSPs connected in serial mode.
>>>>>>> Have an FIR with 2000 coefficients.
>>>>>>> Need to recalc and divide the coefficients of the FIR by 1000 for 2
>>>>>>> DSPs, so that in serial application they give the result of the FIR
>>>>>>> with 2000 points.
>>>>>>> Delay of FIR here doesn't work, because the FIR of each DSP is 1000
>>>>>>> points maximum.
>>>>>>
>>>>>> I had thought I saw someone suggest it, but you could factor the
>>>>>> polynomial defined by the FIR, then rebuild two filters using half
>>>>>> of the roots in each.
>>>>>
>>>>> This is what I was thinking would be the most efficient way to do
>>>>> this.
>>>>
>>>> I suspect it only works well for some filters. Try it with a moving
>>>> average, for yuks.
>>>
>>> OK:
>>>
>>>>> roots([1 1 1 1 1])
>>> ans =
>>>
>>>    0.30902 + 0.95106i   0.30902 - 0.95106i
>>>   -0.80902 + 0.58779i  -0.80902 - 0.58779i
>>>
>>> Other than the fact that you have to be careful to group the results
>>> into conjugate root pairs, what's the problem?
>>
>> Now put those back into FIR form and look at the magnitude of the taps.
>> Think about numerical stability. That sort of thing.
>>
>> It may get better as the filter gets longer -- I don't know.
>
> Are you referring to the problem (in this specific example) of the
> coefficients being exactly representable in the original filter (i.e.,
> "1") in fixed point but not in the factored filters:
>
>   f1(z) = z^2 - 0.61803*z + 1
>   f2(z) = z^2 + 1.61800*z + 1
>
> ?
>
> That is certainly a source of error in the one that the other didn't
> have. How significant it would be I don't know. I would think it would
> get worse with longer filters.
Here's a longer pair of filters:

H1:
1. - 0.5014714 0.3787014 - 0.3190415 0.2833841
- 0.2600124 0.2440775 - 0.2332013 0.2261056 - 0.2220854
0.2207821 - 0.2220854 0.2261056 - 0.2332013 0.2440775
- 0.2600124 0.2833841 - 0.3190415 0.3787014 - 0.5014714 1.

H2:
1. 1.5014714 1.3742436 1.439576 1.3971258
1.4284091 1.4033822 1.4246499 1.4056821 1.4233128
1.4062998 1.4233128 1.4056821 1.4246499 1.4033822
1.4284091 1.3971258 1.439576 1.3742436 1.5014714 1.

H1 * H2 = 1 1 1 1 ...

But -- H1 oscillates around 0, which implies (to me, at least) that there
will be precision issues with the FIR. Also, it's a pretty absurd way to
implement a running average, at least on most devices.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
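For the curious, a pair with this flavor can be generated in a few lines
of Octave. Splitting the roots of a 41-tap moving average by the sign of
their real part is my guess at the construction, not necessarily how Tim
built H1 and H2:

  h  = ones(1, 41);                  % 41-tap moving average (unnormalized)
  r  = roots(h);                     % 41st roots of unity, except z = 1
  H1 = real(poly(r(real(r) <  0)));  % 21 taps, alternating-sign (compare H1)
  H2 = real(poly(r(real(r) >= 0)));  % 21 taps, all positive (compare H2)
  max(abs(conv(H1, H2) - h))         % close to zero -- but check tap growth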
Reply by Tim Wescott September 27, 2016
On Tue, 27 Sep 2016 10:07:12 -0700, herrmannsfeldt wrote:

> On Tuesday, September 27, 2016 at 8:22:44 AM UTC-7, Tim Wescott wrote:
>> On Tue, 27 Sep 2016 07:20:35 -0400, Randy Yates wrote:
>> > Tim Wescott <seemywebsite@myfooter.really> writes:
>
> (snip on factoring polynomials)
>
>> >>> roots([1 1 1 1 1])
>> > ans =
>
>> >   0.30902 + 0.95106i   0.30902 - 0.95106i
>> >  -0.80902 + 0.58779i  -0.80902 - 0.58779i
>
>> > Other than the fact that you have to be careful to group the results
>> > into conjugate root pairs, what's the problem?
>
>> Now put those back into FIR form and look at the magnitude of the taps.
>> Think about numerical stability. That sort of thing.
>
> Seems to me that some polynomials you can factor exactly,
> and some even into two equal factors. (That is, they have an exact
> square root.)
ALL polynomials can be factored. "Exactly" is another story.
> But also, in most cases an FIR is not the exact solution to the desired
> filter, when done with finite precision.
>
> Even more, there could be two polynomials that when multiplied come out
> nice and close to the desired filter.
>
>> It may get better as the filter gets longer -- I don't know.
>
> And I suspect that it gets easier, or closer, as the filter gets longer.
>
> But the OP hasn't said why a 2000-term filter is needed.
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

I'm looking for work -- see my website!
Reply by September 27, 2016
On Tue, 27 Sep 2016 12:28:17 -0400, Randy Yates
<yates@digitalsignallabs.com> wrote:

>Tim Wescott <tim@seemywebsite.com> writes:
>
>> On Tue, 27 Sep 2016 07:20:35 -0400, Randy Yates wrote:
>>
>>> Tim Wescott <seemywebsite@myfooter.really> writes:
>>>
>>>> On Tue, 27 Sep 2016 01:32:27 -0400, Randy Yates wrote:
>>>>
>>>>> Tim Wescott <seemywebsite@myfooter.really> writes:
>>>>>
>>>>>> On Sun, 25 Sep 2016 09:47:29 -0500, Vasilius wrote:
>>>>>>
>>>>>>> Hello
>>>>>>>
>>>>>>> I have two DSPs connected in serial mode.
>>>>>>> Have an FIR with 2000 coefficients.
>>>>>>> Need to recalc and divide the coefficients of the FIR by 1000 for 2
>>>>>>> DSPs, so that in serial application they give the result of the FIR
>>>>>>> with 2000 points.
>>>>>>> Delay of FIR here doesn't work, because the FIR of each DSP is 1000
>>>>>>> points maximum.
>>>>>>
>>>>>> I had thought I saw someone suggest it, but you could factor the
>>>>>> polynomial defined by the FIR, then rebuild two filters using half of
>>>>>> the roots in each.
>>>>>
>>>>> This is what I was thinking would be the most efficient way to do
>>>>> this.
>>>>
>>>> I suspect it only works well for some filters. Try it with a moving
>>>> average, for yuks.
>>>
>>> OK:
>>>
>>>>> roots([1 1 1 1 1])
>>> ans =
>>>
>>>    0.30902 + 0.95106i   0.30902 - 0.95106i
>>>   -0.80902 + 0.58779i  -0.80902 - 0.58779i
>>>
>>> Other than the fact that you have to be careful to group the results
>>> into conjugate root pairs, what's the problem?
>>
>> Now put those back into FIR form and look at the magnitude of the taps.
>> Think about numerical stability. That sort of thing.
>>
>> It may get better as the filter gets longer -- I don't know.
>
>Are you referring to the problem (in this specific example) of the
>coefficients being exactly representable in the original filter (i.e.,
>"1") in fixed point but not in the factored filters:
>
>  f1(z) = z^2 - 0.61803*z + 1
>  f2(z) = z^2 + 1.61800*z + 1
>
>?
>
>That is certainly a source of error in the one that the other didn't
>have. How significant it would be I don't know. I would think it would
>get worse with longer filters.
If you are constrained to integer implementations, it can get very
difficult for the polynomials that are reducible.
Reply by September 27, 2016
On Tuesday, September 27, 2016 at 8:22:44 AM UTC-7, Tim Wescott wrote:
> On Tue, 27 Sep 2016 07:20:35 -0400, Randy Yates wrote:
> > Tim Wescott <seemywebsite@myfooter.really> writes:
(snip on factoring polynomials)
> >>> roots([1 1 1 1 1])
> > ans =
> >   0.30902 + 0.95106i   0.30902 - 0.95106i
> >  -0.80902 + 0.58779i  -0.80902 - 0.58779i
> > Other than the fact that you have to be careful to group the results
> > into conjugate root pairs, what's the problem?
> Now put those back into FIR form and look at the magnitude of the taps.
> Think about numerical stability. That sort of thing.
Seems to me that some polynomials you can factor exactly, and some even
into two equal factors. (That is, they have an exact square root.)

But also, in most cases an FIR is not the exact solution to the desired
filter, when done with finite precision.

Even more, there could be two polynomials that when multiplied come out
nice and close to the desired filter.
> It may get better as the filter gets longer -- I don't know.
And I suspect that it gets easier, or closer, as the filter gets longer.

But the OP hasn't said why a 2000-term filter is needed.
Reply by Randy Yates September 27, 2016
Tim Wescott <tim@seemywebsite.com> writes:

> On Tue, 27 Sep 2016 07:20:35 -0400, Randy Yates wrote:
>
>> Tim Wescott <seemywebsite@myfooter.really> writes:
>>
>>> On Tue, 27 Sep 2016 01:32:27 -0400, Randy Yates wrote:
>>>
>>>> Tim Wescott <seemywebsite@myfooter.really> writes:
>>>>
>>>>> On Sun, 25 Sep 2016 09:47:29 -0500, Vasilius wrote:
>>>>>
>>>>>> Hello
>>>>>>
>>>>>> I have two DSPs connected in serial mode.
>>>>>> Have an FIR with 2000 coefficients.
>>>>>> Need to recalc and divide the coefficients of the FIR by 1000 for 2
>>>>>> DSPs, so that in serial application they give the result of the FIR
>>>>>> with 2000 points.
>>>>>> Delay of FIR here doesn't work, because the FIR of each DSP is 1000
>>>>>> points maximum.
>>>>>
>>>>> I had thought I saw someone suggest it, but you could factor the
>>>>> polynomial defined by the FIR, then rebuild two filters using half of
>>>>> the roots in each.
>>>>
>>>> This is what I was thinking would be the most efficient way to do
>>>> this.
>>>
>>> I suspect it only works well for some filters. Try it with a moving
>>> average, for yuks.
>>
>> OK:
>>
>>>> roots([1 1 1 1 1])
>> ans =
>>
>>    0.30902 + 0.95106i   0.30902 - 0.95106i
>>   -0.80902 + 0.58779i  -0.80902 - 0.58779i
>>
>> Other than the fact that you have to be careful to group the results
>> into conjugate root pairs, what's the problem?
>
> Now put those back into FIR form and look at the magnitude of the taps.
> Think about numerical stability. That sort of thing.
>
> It may get better as the filter gets longer -- I don't know.
Are you referring to the problem (in this specific example) of the
coefficients being exactly representable in the original filter (i.e.,
"1") in fixed point but not in the factored filters:

  f1(z) = z^2 - 0.61803*z + 1
  f2(z) = z^2 + 1.61800*z + 1

?

That is certainly a source of error in the one that the other didn't
have. How significant it would be I don't know. I would think it would
get worse with longer filters.

--
Randy Yates, DSP/Embedded Firmware Developer
Digital Signal Labs
http://www.digitalsignallabs.com
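Multiplying those two quadratics back together in Octave makes the size
of that error concrete (coefficients taken exactly as quoted above, to
five digits):

  f1 = [1 -0.61803 1];    % z^2 - 0.61803*z + 1
  f2 = [1  1.61800 1];    % z^2 + 1.61800*z + 1
  conv(f1, f2)            % ~[1 1 1 1 1], off by ~3e-5 from the rounding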