Reply by Rick Lyons October 7, 2006
On Wed, 04 Oct 2006 16:04:49 GMT, ---@--- (Robert Scott) wrote:

>I have an approximation for the Pythagorean distance formula (magnitude of
>vector [x,y]) using integer arithmetic that I would like to improve.  Currently
>I am using this:
>
(snipped)
>
>The improvement I am looking for is in accuracy.  I would like to try for a
>4-fold improvement in accuracy (.5%) without substantially increasing the
>running time of what I have now.  Does anyone know of a better approximation
>that is almost as fast?
>
>Robert Scott
>Ypsilanti, Michigan
Hi Robert,

0.5% max error is a tough goal to achieve.  (I don't speak "C", so I'm not
able to understand your code.)  Have ya' had a look at:

  "A Root of Less Evil", IEEE Signal Processing Magazine,
  DSP Tips & Tricks column, Vol. 22, No. 2, March 2005.

There's a scheme in that article that yields 1% max error in magnitude
approximation.

Good Luck,
[-Rick-]

Ypsilanti, huh?  Have they fixed the roads in Michigan yet?
Reply by Everett M. Greene October 6, 2006
"Phil Frisbie, Jr." <phil@hawksoft.com> writes:
> Everett M. Greene wrote:
> > "Phil Frisbie, Jr." <phil@hawksoft.com> writes:
> > [snip]
> >
> >>I like this algorithm. It is snipped from my OpenLPC fixed point codec.
> >>It uses only simple operations. It is likely more accurate than you
> >>need, but I think you can simply truncate the algorithm sooner for less
> >>precision. At the end PRECISION is the fractional bits you are using in
> >>your fixed point code.
> >
> > [snip]
> >
> > Isn't computing the exact square root of an integer
> > faster than the algorithm you show?
>
> That depends on your target CPU/DSP. I am now targeting ARM most
> of the time so I will make a note to profile that code and see.
It would be interesting to know the results of the comparison.
Reply by Phil Frisbie, Jr. October 5, 2006
Everett M. Greene wrote:

> "Phil Frisbie, Jr." <phil@hawksoft.com> writes: > [snip] > >>I like this algorithm. It is snipped from my OpenLPC fixed point codec. >>It uses only simple operations. It is likely more accurate than you >>need, but I think you can simply truncate the algorithm sooner for less >>precision. At the end PRECISION is the fractional bits you are using in >>your fixed point code. > > [snip] > > Isn't computing the exact square root of an integer > faster than the algorithm you show?
That depends on your target CPU/DSP.  I am now targeting ARM most of the
time so I will make a note to profile that code and see.

--
Phil Frisbie, Jr.
Hawk Software
http://www.hawksoft.com
Reply by Everett M. Greene October 5, 2006
"Phil Frisbie, Jr." <phil@hawksoft.com> writes:
[snip]
> I like this algorithm.  It is snipped from my OpenLPC fixed point codec.
> It uses only simple operations.  It is likely more accurate than you
> need, but I think you can simply truncate the algorithm sooner for less
> precision.  At the end PRECISION is the fractional bits you are using in
> your fixed point code.
[snip]

Isn't computing the exact square root of an integer
faster than the algorithm you show?
Reply by Mark Borgerding October 5, 2006
Robert Scott wrote:
> I have an approximation for the Pythagorean distance formula (magnitude of
> vector [x,y]) using integer arithmetic that I would like to improve.  Currently
Look for a paper entitled:

  Filip, A. E., "A baker's dozen magnitude approximations and their
  detection statistics," IEEE Trans. on Aerospace and Electronic Systems,
  Jan. 1976.

It is a classic work with many tricks similar to the one you are using.
With a little trickery, several of them could be implemented with little
or no multiplication.

--
Mark Borgerding
3dB Labs, Inc
Innovate. Develop. Deliver.
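For readers without the paper handy, here is a hedged sketch of one well-known
member of that family of estimators: max plus half min.  It is not lifted from
Filip's paper verbatim, and at roughly 12% worst-case error it is coarser than
the original poster's two-segment version, but it shows the multiply-free
flavor being described.  The function name is mine.

static int mag_est_shift_only(int x, int y)
{
    /* Estimate sqrt(x*x + y*y) as max(|x|,|y|) + min(|x|,|y|)/2.
       Shift-only: no multiplies; worst-case error roughly +12%.
       Assumes |x| and |y| fit comfortably in an int. */
    int ax = (x < 0) ? -x : x;
    int ay = (y < 0) ? -y : y;
    int mx = (ax > ay) ? ax : ay;    /* larger component  */
    int mn = (ax > ay) ? ay : ax;    /* smaller component */
    return mx + (mn >> 1);
}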
Reply by Steve Underwood October 5, 2006
Phil Frisbie, Jr. wrote:
> Robert Scott wrote:
>> I have an approximation for the Pythagorean distance formula
>> (magnitude of
>> vector [x,y]) using integer arithmetic that I would like to improve.
>> Currently
>> I am using this:
> <snip>
>>
>> This is for an ARM processor in an application where I cannot afford
>> the time
>> for any floating point operations. The integer values of x and y come
>> from an
>> array of signed 16-bit numbers, but x and y themselves are 32-bit
>> numbers in the
>> above formula. The final result (z) will be shifted right one bit
>> before being
>> stored back into an array of 16-bit integers, since the distance
>> formula can
>> potentially extend the range of x and y by one bit. In case you are
>> wondering,
>> this is part of calculating the power spectrum at the end of an FFT.
>>
>> The improvement I am looking for is in accuracy. I would like to try
>> for a
>> 4-fold improvement in accuracy (.5%) without substantially increasing the
>> running time of what I have now. Does anyone know of a better
>> approximation
>> that is almost as fast?
>
> I like this algorithm. It is snipped from my OpenLPC fixed point codec.
> It uses only simple operations. It is likely more accurate than you
> need, but I think you can simply truncate the algorithm sooner for less
> precision. At the end PRECISION is the fractional bits you are using in
> your fixed point code.
>
> static fixed32 fixsqrt32(fixed32 x)
> {
>     unsigned long r = 0, s, v = (unsigned long)x;
>
> #define STEP(k) s = r + (1 << k * 2); r >>= 1; \
>     if (s <= v) { v -= s; r |= (1 << k * 2); }
>
>     STEP(15);
>     STEP(14);
>     STEP(13);
>     STEP(12);
>     STEP(11);
>     STEP(10);
>     STEP(9);
>     STEP(8);
>     STEP(7);
>     STEP(6);
>     STEP(5);
>     STEP(4);
>     STEP(3);
>     STEP(2);
>     STEP(1);
>     STEP(0);
>
>     return (fixed32)(r << (PRECISION / 2));
> }
>
That only seems to calculate a part of the answer, leaving lots of zeros at
the end.  Why not use something that calculates as many bits as it can, like:

int32_t isqrt32(int32_t h)
{
    int32_t x;
    int32_t y;
    int i;

    /* The answer is calculated as a 32 bit value, where the
       last 16 bits are fractional. */
    /* Calling with negative numbers is not a good idea :-) */
    x = y = 0;
    for (i = 0;  i < 32;  i++)
    {
        x = (x << 1) | 1;
        if (y < x)
            x -= 2;
        else
            y -= x;
        x++;
        y <<= 1;
        if ((h & 0x80000000))
            y |= 1;
        h <<= 1;
        y <<= 1;
        if ((h & 0x80000000))
            y |= 1;
        h <<= 1;
    }
    return x;
}

Of course, both these routines seem like overkill for what the original
poster needs. :-)

Steve
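As a point of reference (not part of Steve's post), here is a minimal sketch
of how the exact-root routine above might be wired into the original poster's
power-spectrum loop, assuming 16-bit real and imaginary parts per FFT bin.
The function name and the pre-shift are my own additions; the pre-shift only
guards against x*x + y*y reaching 2^31 and looking negative to the signed
isqrt32().

#include <stdint.h>

/* Hypothetical glue code: magnitude of one FFT bin from 16-bit parts,
   using isqrt32() above, which returns a 16.16 fixed-point square root. */
static uint16_t bin_magnitude(int16_t re, int16_t im)
{
    uint32_t sumsq = (uint32_t)((int32_t)re * re) + (uint32_t)((int32_t)im * im);

    /* sumsq can reach 2^31 for full-scale inputs, which would be negative
       as an int32_t.  Quarter it before the call and double the root
       afterwards; this costs at most about one count in the result. */
    uint32_t root = (uint32_t)isqrt32((int32_t)(sumsq >> 2)) << 1;   /* 16.16 */

    return (uint16_t)(root >> 16);   /* integer part of sqrt(re^2 + im^2) */
}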
Reply by Jerry Wolf October 4, 2006
The comp.dsp "DSP Tricks" at
http://www.dspguru.com/comp.dsp/tricks/tricks.htm contains two entries
on magnitude estimation, which is the same computation.  In particular,
this one http://www.dspguru.com/comp.dsp/tricks/alg/mag_est.htm should
tell you all that you want to know (altho IIRC, the C program contains
a typo -- the coefficients Alpha and Beta are printed backwards in the
printed table (which one multiples the max and which the min?)) .

cheers,
  jerry

Robert Scott wrote:
> I have an approximation for the Pythagorean distance formula (magnitude of
> vector [x,y]) using integer arithmetic that I would like to improve.  Currently
> I am using this:
>
>   if(x<0) x=-x;
>   if(y<0) y=-y;
>   if(x < y)
>   {
>     int t = x;
>     x = y;
>     y = t;                        // ensures that x >= y
>   }
>   z = (y < ((13107 * x)>>15)) ?   //  * (.4)
>       (x + ((y * 6310)>>15)) :    //  * (.192582403)
>       (((x * 27926)>>15)          //  * (.852245894)
>        + ((y * 18414)>>15));      //  * (.561967668)
>   //..(linear approximation to within 2% of the Pythagorean distance formula)..
>
> This is for an ARM processor in an application where I cannot afford the time
> for any floating point operations.  The integer values of x and y come from an
> array of signed 16-bit numbers, but x and y themselves are 32-bit numbers in the
> above formula.  The final result (z) will be shifted right one bit before being
> stored back into an array of 16-bit integers, since the distance formula can
> potentially extend the range of x and y by one bit.  In case you are wondering,
> this is part of calculating the power spectrum at the end of an FFT.
>
> The improvement I am looking for is in accuracy.  I would like to try for a
> 4-fold improvement in accuracy (.5%) without substantially increasing the
> running time of what I have now.  Does anyone know of a better approximation
> that is almost as fast?
>
> Robert Scott
> Ypsilanti, Michigan
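As referenced above, here is a hedged sketch of the single-segment
"alpha*max + beta*min" estimator that the dspguru page describes.  The Q15
coefficients below are one commonly cited minimax pair (alpha of roughly
0.9604 applied to the larger component, beta of roughly 0.3978 to the
smaller, for about 4% peak error); they are illustrative and may not match
the article's table, so verify against the article itself.  The function
name is mine.

#include <stdint.h>

/* Single-segment alpha*max + beta*min magnitude estimate in Q15 fixed point.
   Coefficients are one commonly cited minimax pair, not necessarily the
   values in the dspguru article.  The 64-bit intermediate is only there for
   safety; with 16-bit inputs the products fit easily in 32 bits. */
static int32_t mag_est_alpha_beta(int32_t x, int32_t y)
{
    int32_t ax = (x < 0) ? -x : x;
    int32_t ay = (y < 0) ? -y : y;
    int32_t mx = (ax > ay) ? ax : ay;   /* alpha multiplies the larger term  */
    int32_t mn = (ax > ay) ? ay : ax;   /* beta multiplies the smaller term  */

    /* alpha ~ 0.9604 -> 31470/32768,  beta ~ 0.3978 -> 13035/32768 */
    return (int32_t)(((int64_t)31470 * mx + (int64_t)13035 * mn) >> 15);
}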
Reply by Phil Frisbie, Jr. October 4, 2006
Robert Scott wrote:
> I have an approximation for the Pythagorean distance formula (magnitude of
> vector [x,y]) using integer arithmetic that I would like to improve.  Currently
> I am using this:
<snip>
>
> This is for an ARM processor in an application where I cannot afford the time
> for any floating point operations.  The integer values of x and y come from an
> array of signed 16-bit numbers, but x and y themselves are 32-bit numbers in the
> above formula.  The final result (z) will be shifted right one bit before being
> stored back into an array of 16-bit integers, since the distance formula can
> potentially extend the range of x and y by one bit.  In case you are wondering,
> this is part of calculating the power spectrum at the end of an FFT.
>
> The improvement I am looking for is in accuracy.  I would like to try for a
> 4-fold improvement in accuracy (.5%) without substantially increasing the
> running time of what I have now.  Does anyone know of a better approximation
> that is almost as fast?
I like this algorithm.  It is snipped from my OpenLPC fixed point codec.
It uses only simple operations.  It is likely more accurate than you
need, but I think you can simply truncate the algorithm sooner for less
precision.  At the end PRECISION is the fractional bits you are using in
your fixed point code.

static fixed32 fixsqrt32(fixed32 x)
{
    unsigned long r = 0, s, v = (unsigned long)x;

#define STEP(k) s = r + (1 << k * 2); r >>= 1; \
    if (s <= v) { v -= s; r |= (1 << k * 2); }

    STEP(15);
    STEP(14);
    STEP(13);
    STEP(12);
    STEP(11);
    STEP(10);
    STEP(9);
    STEP(8);
    STEP(7);
    STEP(6);
    STEP(5);
    STEP(4);
    STEP(3);
    STEP(2);
    STEP(1);
    STEP(0);

    return (fixed32)(r << (PRECISION / 2));
}

--
Phil Frisbie, Jr.
Hawk Software
http://www.hawksoft.com
Reply by dbell October 4, 2006
Robert,

I do not have the book handy, but my recollection is that methods very
similar to the method you are using, with varying levels of accuracy,
are detailed in 'Digital Signal Processing in Communications Systems'
by Marvin E. Frerking.  You might look there and see if you can find
something useful.

Maybe someone here who has access to the book could see if my
recollection is correct and help you out.

Dirk Bell
DSP Consultant


Robert Scott wrote:
> I have an approximation for the Pythagorean distance formula (magnitude of
> vector [x,y]) using integer arithmetic that I would like to improve.  Currently
> I am using this:
>
>   if(x<0) x=-x;
>   if(y<0) y=-y;
>   if(x < y)
>   {
>     int t = x;
>     x = y;
>     y = t;                        // ensures that x >= y
>   }
>   z = (y < ((13107 * x)>>15)) ?   //  * (.4)
>       (x + ((y * 6310)>>15)) :    //  * (.192582403)
>       (((x * 27926)>>15)          //  * (.852245894)
>        + ((y * 18414)>>15));      //  * (.561967668)
>   //..(linear approximation to within 2% of the Pythagorean distance formula)..
>
> This is for an ARM processor in an application where I cannot afford the time
> for any floating point operations.  The integer values of x and y come from an
> array of signed 16-bit numbers, but x and y themselves are 32-bit numbers in the
> above formula.  The final result (z) will be shifted right one bit before being
> stored back into an array of 16-bit integers, since the distance formula can
> potentially extend the range of x and y by one bit.  In case you are wondering,
> this is part of calculating the power spectrum at the end of an FFT.
>
> The improvement I am looking for is in accuracy.  I would like to try for a
> 4-fold improvement in accuracy (.5%) without substantially increasing the
> running time of what I have now.  Does anyone know of a better approximation
> that is almost as fast?
>
> Robert Scott
> Ypsilanti, Michigan
Reply by Peter Nachtwey October 4, 2006
Robert Scott wrote:
> I have an approximation for the Pythagorean distance formula (magnitude of
> vector [x,y]) using integer arithmetic that I would like to improve.  Currently
> I am using this:
>
>   if(x<0) x=-x;
>   if(y<0) y=-y;
>   if(x < y)
>   {
>     int t = x;
>     x = y;
>     y = t;                        // ensures that x >= y
>   }
>   z = (y < ((13107 * x)>>15)) ?   //  * (.4)
>       (x + ((y * 6310)>>15)) :    //  * (.192582403)
>       (((x * 27926)>>15)          //  * (.852245894)
>        + ((y * 18414)>>15));      //  * (.561967668)
>   //..(linear approximation to within 2% of the Pythagorean distance formula)..
>
> This is for an ARM processor in an application where I cannot afford the time
> for any floating point operations.  The integer values of x and y come from an
> array of signed 16-bit numbers, but x and y themselves are 32-bit numbers in the
> above formula.  The final result (z) will be shifted right one bit before being
> stored back into an array of 16-bit integers, since the distance formula can
> potentially extend the range of x and y by one bit.  In case you are wondering,
> this is part of calculating the power spectrum at the end of an FFT.
>
> The improvement I am looking for is in accuracy.  I would like to try for a
> 4-fold improvement in accuracy (.5%) without substantially increasing the
> running time of what I have now.  Does anyone know of a better approximation
> that is almost as fast?
>
> Robert Scott
> Ypsilanti, Michigan
I still like the tried and true isqrt((x^2 + y^2) << 16).  You can see we
scale up the sum of x^2 + y^2 by shifting left 16 bits, so we have a fixed
point number with the integer part in the upper 16 bits.  When we take the
square root of a 16.16 number we get an 8.8 result.  This meets your .5%
criterion.

There are plenty of fast and simple square root routines.  We use a table
to find the most significant 8 bits and then do one Newton iteration to get
the lower 8 bits, but then I have a divide.  If you don't want a divide,
there are other fast and simple routines on the internet.  I think our
isqrt routine takes about 8 microseconds on a 40 MHz 196.  The squaring
and summing would take two extra microseconds.

Peter Nachtwey
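Below is a hedged sketch (not Peter's actual code) of the table-seeded Newton
approach he describes: a 256-entry table supplies roughly the top 8 bits of
the root, and a single Newton step, which is where his divide comes from,
refines the rest.  The table layout, normalization, and function names are
illustrative assumptions, and the result may be off by a count or two in
places.

#include <stdint.h>

/* 256-entry seed table: sqrt_seed[i] ~ floor(sqrt(i << 8)), filled once. */
static uint8_t sqrt_seed[256];

static void init_sqrt_seed(void)
{
    for (int i = 0; i < 256; i++) {
        uint32_t v = (uint32_t)i << 8;
        uint32_t r = 0;
        while ((r + 1) * (r + 1) <= v)   /* slow exact root, startup only */
            r++;
        sqrt_seed[i] = (uint8_t)r;
    }
}

/* Integer square root of a 32-bit value: table seed plus one Newton step. */
static uint16_t isqrt32_table_newton(uint32_t v)
{
    if (v == 0)
        return 0;

    /* Normalize v into [2^30, 2^32) by an even shift so its top byte can
       index the seed table; undo half of that shift on the root. */
    uint32_t n = v;
    int shift = 0;
    while (n < (1UL << 30)) {
        n <<= 2;
        shift++;
    }

    uint32_t r = ((uint32_t)sqrt_seed[n >> 24] << 8) >> shift;  /* ~sqrt(v) */
    if (r == 0)
        r = 1;

    r = (r + v / r) >> 1;        /* one Newton-Raphson step (the divide) */

    if (r > 0xFFFF)
        r = 0xFFFF;              /* guard the 16-bit return */
    if (r * r > v)               /* nudge down if we overshot slightly */
        r--;
    return (uint16_t)r;
}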