DSPRelated.com
Forums

Antilog - base 10

Started by Unknown June 1, 2005
tanmay.zargar@gmail.com wrote:
> I developed the algorithm to compute 10^x as e^(ln(10)*x). However,
> this was too computationally intensive for my liking, so I have
> resigned myself to the table-lookup approach.
>
> The original problem was to convert a dBm0 value to an absolute value -
> in other words, computing the antilog. However, that wasn't all - I had
> to then multiply this 10^x value by another fixed-point number. Now the
> DSP processor I am using does provide intrinsic functions for fixed-
> point math, but only for numbers in 1.15 format or 1.31 format. None of
> these functions could be used directly for the problem at hand. I
> therefore had to develop the algorithm myself. However, having done
> that, I found that it was too demanding in terms of execution cycles.
>
> I doubt there is an algorithm that will efficiently do what I need and,
> more so, be portable across the processor I am using. I say this
> because most optimized DSP algorithms make use of processor-specific
> intrinsic functions, which ultimately make use of the hardware units
> specific to the processor. If, however, there is such an algorithm,
> please do let me know.
Are you trying to convert the whole number? You need a conversion routine with a range of only one decade; the rest is scaling.

Jerry
--
Engineering is the art of making what you want from things you can get.

tanmay.zargar@gmail.com wrote:
> I doubt if there is an algorithm that will efficiently do what I need
> and more so be portable across the processor I am using. I say this
> because most optimized DSP algorithms make use of processor specific
> intrinsic functions which ultimately make use of the hardware units
> specific to the processor. If however, there is such an algorithm,
> please do let me know.
Try this:

    unsigned antilog10(unsigned e)
    {
        unsigned A, S;
        A = 1;
        S = 10;    /* 10 for base-10 logs */
        while (e != 0) {
            if ((e & 1) == 1)
                A = A * S;
            e = e >> 1;
            S = S * S;
        }
        return A;
    }

Computing 10**65535 will require 16 loop iterations. Of course, overflow will render the result useless, but that wasn't your question.
On Wed, 01 Jun 2005 13:43:39 -0400 rbj wrote:
...
> 10^x = 2^(log2(10)*x) and you can get a finite series that well
> approximates 2^x from
>
> http://groups-beta.google.com/group/comp.dsp/msg/8ba6bd6fe2474876 .
>
> try that.
A slightly better result (|error| < 1.85e-6 instead of 2.217e-6), 0<=x<=1, with 1 coefficient more than Robert's solution (calculated with Remez' algorithm):

log2(1+x) = -0.02569080531737*x^6
            +0.12100135260687*x^5
            -0.27653971502536*x^4
            +0.45652112998840*x^3
            -0.71779096376437*x^2
            +1.44249531064838*x
            +0.00000184543159

With 2 coefficients more, |error| < 2.78e-7:

log2(1+x) = 0.01512511049134*x^7
            -0.07806127805543*x^6
            +0.19208841036971*x^5
            -0.32425319654703*x^4
            +0.47289769142617*x^3
            -0.72045300022462*x^2
            +1.44265626253987*x
            +0.00000027725826

Siegbert
John Hadstate wrote:

> Computing 10**65535 will require 16 loop iterations. Of course,
> overflow will render the result useless, but that wasn't your
> question.
We're much alike: both smug SOBs. I like to think that I have redeeming qualities.

Jerry
--
Engineering is the art of making what you want from things you can get.
in article ks0a1ixr5s35.ajhthf2taxdu.dlg@40tude.net, Siegbert Steinlechner
at sigi.nospam@onlinehome.de wrote on 06/01/2005 16:44:

> On Wed, 01 Jun 2005 13:43:39 -0400 rbj wrote:
> ...
>> 10^x = 2^(log2(10)*x) and you can get a finite series that well approximates
>> 2^x from
>>
>> http://groups-beta.google.com/group/comp.dsp/msg/8ba6bd6fe2474876 .
>
> A slightly better result (|error| < 1.85e-6 instead of 2.217e-6), 0<=x<=1,
> with 1 coefficient more than Robert's solution (calculated with Remez'
> algorithm):
>
> log2(1+x) = -0.02569080531737*x^6
>             +0.12100135260687*x^5
>             -0.27653971502536*x^4
>             +0.45652112998840*x^3
>             -0.71779096376437*x^2
>             +1.44249531064838*x
>             +0.00000184543159
>
> With 2 coefficients more: |error|<2.78e-7:
it looks like one coef more. :-/ 6 + 1 = 7
> log2(1+x) = 0.01512511049134*x^7
>             -0.07806127805543*x^6
>             +0.19208841036971*x^5
>             -0.32425319654703*x^4
>             +0.47289769142617*x^3
>             -0.72045300022462*x^2
>             +1.44265626253987*x
>             +0.00000027725826
it is. i used Remez also (i have a crude C program and another crude MATLAB program, since i couldn't find one in MATLAB), but i modified it to constrain the error at the endpoints to be zero. since normally the endpoints are set to the max error and used in the "alternation theorem", when you constrain those endpoint errors to be zero, you have to pay for it somewhere else.

it just bothered me that 2^0 and 2^1 were not exactly 1 and 2, and that log2(1) and log2(2) were not exactly 0 and 1, respectively. no other reason motivated it. i did a similar constraint for sqrt().

i'm sure our latest troll is combing this over to find the fatal flaw in it. (don't worry, John, there will be many things i post that are flawed. you'll get your opportunity.)

--
r b-j rbj@audioimagination.com

"Imagination is more important than knowledge."
"robert bristow-johnson" <rbj@audioimagination.com> wrote in message
news:BEC3AE0A.7DB0%rbj@audioimagination.com...
> in article ks0a1ixr5s35.ajhthf2taxdu.dlg@40tude.net, Siegbert Steinlechner
> at sigi.nospam@onlinehome.de wrote on 06/01/2005 16:44:
>
> it is. i used remes also (i have a crude C program and another crude MATLAB
> program since i couldn't find one in MATLAB) but i modified it to constrain
> the error at the endpoints to be zero. since normally the endpoints are set
> to the max error and used in the "alternation theorem", when you constrain
> those endpoint errors to be zero, you have to pay for it somewhere else.
>
> it just bothered me that 2^0 and 2^1 was not exactly 1 and 2 and log2(1) and
> log2(2) were not exactly 0 and 1, respectively. no other reason motivated
> it. i did a similar constraint for sqrt().
I like to do that too, especially if the approximation is used in a piece-wise fashion, i.e. covering one octave when the input value spans multiple octaves. It eliminates any discontinuity at the "splice" points.

I have used a simple linear-algebra method in Excel where, for a polynomial of degree n, you constrain n+1 points to exactly equal the "correct" function values. Along with this, I use a crude trial-and-error method to pick good "matching" points so that I don't get a gross error somewhere else. (I usually use the end points as 2 of the matching points.) It works OK for small n (3-4) but is not optimal. I should look into the Remez method.

RB-J, if you find your MATLAB script and don't mind posting it, I wouldn't mind having a look.
"Jerry Avins" <jya@ieee.org> wrote in message 
news:deCdnUkUHcdXgwPfRVn-og@rcn.net...
> John Hadstate wrote:
>
>> Computing 10**65535 will require 16 loop iterations. Of course,
>> overflow will render the result useless, but that wasn't your
>> question.
>
> We're much alike: both smug SOBs. I like to think that I have
> redeeming qualities.
I don't have any redeeming qualities.

I really wasn't trying to be sarcastic to the OP. I didn't go back and read some of the other posts, where the OP *finally* got around to mentioning that the exponent 'x' might be a scaled fractional number, until after I had posted the algorithm (which is from U Waterloo's Handbook of Applied Cryptography, by the way).

I think the algorithm I posted can still be made to do what the OP wants, but I haven't thought about it enough to propose the right changes.
in article 8mrne.50710$CR5.6856@bignews1.bellsouth.net, John E. Hadstate at
jh113355@hotmail.com wrote on 06/01/2005 19:25:

> I don't have any redeeming qualities.
[dead silence] :-)
> I really wasn't trying to be sarcastic to the OP. I didn't
> go back and read some of the other posts where the OP
> *finally* got around to mentioning that the exponent 'x'
> might be a scaled fractional number until after I had posted
> the algorithm (which is from U Waterloo's Handbook of
> Applied Cryptography, by the way).
are you at U. Waterloo? if so, might you look up Stanley Lipshitz and/or John Vanderkooy and/or Robert Wannamaker? ask them what they think about dynamic range. Stan nearly wrote the book on it.

--
r b-j rbj@audioimagination.com

"Imagination is more important than knowledge."
in article 119sfr2serefd32@corp.supernews.com, Jon Harris at
jon_harrisTIGER@hotmail.com wrote on 06/01/2005 19:07:

> "robert bristow-johnson" <rbj@audioimagination.com> wrote in message
> news:BEC3AE0A.7DB0%rbj@audioimagination.com...
>> in article ks0a1ixr5s35.ajhthf2taxdu.dlg@40tude.net, Siegbert Steinlechner
>> at sigi.nospam@onlinehome.de wrote on 06/01/2005 16:44:
>>
>> it is. i used remes also (i have a crude C program and another crude MATLAB
>> program since i couldn't find one in MATLAB) but i modified it to constrain
>> the error at the endpoints to be zero. since normally the endpoints are set
>> to the max error and used in the "alternation theorem", when you constrain
>> those endpoint errors to be zero, you have to pay for it somewhere else.
>>
>> it just bothered me that 2^0 and 2^1 was not exactly 1 and 2 and log2(1) and
>> log2(2) were not exactly 0 and 1, respectively. no other reason motivated
>> it. i did a similar constraint for sqrt().
>
> I like to do that too, especially if the approximation is used in a piece-wise
> fashion, i.e. covering one octave where the input value spans multiple
> octaves. It eliminates any discontinuity at the "splice" points.
that was my motivation, now that i think of it. also, i restricted my polynomial order to even orders so that there was little (it wasn't zero) discontinuity of the slope of the error at splice points.
> I have used a simple linear algebra method in Excel where for a polynomial of
> degree n, you constrain n+1 points to exactly equal the "correct" function
> values. Along with this, I use a crude trial and error method to pick good
> "matching" points so that I don't get a gross error somewhere else. (I
> usually use the end points as 2 of the matching points.) It works OK for
> small n (3-4) but is not optimal. I should look into the Remez method.
> RB-J, if you find your Matlab script and don't mind posting it, I wouldn't
> mind having a look.
i'll email you. it's 3 files. i s'pose i remove that big game cat from your e-dress? or do you have another (you can send it to me)?

--
r b-j rbj@audioimagination.com

"Imagination is more important than knowledge."