Reply by Glen Herrmannsfeldt September 26, 2003
"robert bristow-johnson" <rbj@surfglobal.net> wrote in message
news:BB990CCE.3DB2%rbj@surfglobal.net...
> In article _AEcb.576748$Ho3.107165@sccrnsc03, Glen Herrmannsfeldt at
> gah@ugcs.caltech.edu wrote on 09/25/2003 12:20:
>
> > "robert bristow-johnson" <rbj@surfglobal.net> wrote in message
> > news:BB97F528.3D51%rbj@surfglobal.net...
> >
> >> i haven't seen the original post (and have been too lazy to look it up in
> >> Google Groups) but can anybody tell me what sorta accuracy they need. i
> >> have posted, repeatedly, a couple of optimized finite series for doing
> >> log2() and exp2(). and they can certainly be implemented in fixed-point as
> >> i have done it with the 56K.
> >
> > It says 32 bit words, but that is all. It doesn't say where the binary
> > point is.
>
> well, if they need 32 bit or even 24 bit accuracy in the log() or exp(), the
> finite power series i've posted in the past won't do. for me, it's an
> amplitude to dB thing or a frequency to pitch thing (or the inverse) and i
> never needed perfect accuracy in the algorithm.
Knuth's "Metafont: The Program" has log() and exp() for 32 bits with 16 bits after the binary point. (I think that is where it is. Metafont uses different binary point positions in different places.)

-- glen
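Glen's Metafont pointer corresponds to a standard technique: shift-and-add (pseudo-division) logarithms, which need only shifts, adds, and a small table of precomputed constants. A sketch in Q16.16, with Python integers standing in for 32-bit words; this is the generic algorithm, not Knuth's actual routine, and Metafont's scaling and rounding differ:

```python
import math

Q = 16                  # Q16.16: 16 bits after the binary point
ONE = 1 << Q
TWO = 2 << Q
# 16-entry table: ln(1 + 2**-k) in Q16.16, for k = 1..16
LN_TAB = [round(math.log1p(2.0 ** -k) * ONE) for k in range(1, Q + 1)]
LN2 = round(math.log(2.0) * ONE)

def q_ln(v):
    """ln(v) for v > 0, input and output in Q16.16."""
    n = v.bit_length() - (Q + 1)        # v = m * 2**n with m in [1, 2)
    m = v >> n if n >= 0 else v << -n
    y = 0
    for k in range(1, Q + 1):
        t = m + (m >> k)                # m * (1 + 2**-k): a shift and an add
        if t <= TWO:                    # greedily push m up toward 2.0
            m = t
            y += LN_TAB[k - 1]
    # m0 * (product of accepted factors) ~= 2, so ln(m0) = ln 2 - y
    return (n + 1) * LN2 - y
```

For example, `q_ln(3 << 16) / 65536.0` lands within a few parts in 10^4 of ln(3). exp is the same loop run as a pseudo-multiplication.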
Reply by robert bristow-johnson September 25, 2003
In article _AEcb.576748$Ho3.107165@sccrnsc03, Glen Herrmannsfeldt at
gah@ugcs.caltech.edu wrote on 09/25/2003 12:20:

> "robert bristow-johnson" <rbj@surfglobal.net> wrote in message
> news:BB97F528.3D51%rbj@surfglobal.net...
>
>> i haven't seen the original post (and have been too lazy to look it up in
>> Google Groups) but can anybody tell me what sorta accuracy they need. i
>> have posted, repeatedly, a couple of optimized finite series for doing
>> log2() and exp2(). and they can certainly be implemented in fixed-point as
>> i have done it with the 56K.
>
> It says 32 bit words, but that is all. It doesn't say where the binary
> point is.
well, if they need 32 bit or even 24 bit accuracy in the log() or exp(), the finite power series i've posted in the past won't do. for me, it's an amplitude to dB thing or a frequency to pitch thing (or the inverse) and i never needed perfect accuracy in the algorithm.

r b-j
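The kind of short series r b-j describes can be sketched as follows (the coefficients below are plain Taylor terms, not his optimized ones; the point is to show why a truncated series is fine for dB-level accuracy but nowhere near 24 bits):

```python
import math

LN2 = math.log(2.0)

def exp2_approx(x):
    """2**x from a short finite series: peel off the integer part (an
    exact power of two), then a cubic series in f*ln(2) for the
    fractional part f in [0, 1).  Worst-case error is around 1%."""
    n = math.floor(x)
    t = (x - n) * LN2
    p = 1.0 + t + t * t / 2.0 + t * t * t / 6.0   # truncated exp series
    return p * 2.0 ** n
```

Optimized coefficients squeeze more accuracy out of the same order, but the structure (range-reduce, short polynomial, scale by 2^n) is the same, and log2 goes the other way: extract the exponent, then a short series on the mantissa.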
Reply by Glen Herrmannsfeldt September 25, 2003
"robert bristow-johnson" <rbj@surfglobal.net> wrote in message
news:BB97F528.3D51%rbj@surfglobal.net...
> i haven't seen the original post (and have been too lazy to look it up in
> Google Groups) but can anybody tell me what sorta accuracy they need. i
> have posted, repeatedly, a couple of optimized finite series for doing
> log2() and exp2(). and they can certainly be implemented in fixed-point as
> i have done it with the 56K.
It says 32 bit words, but that is all. It doesn't say where the binary point is.

-- glen
Reply by robert bristow-johnson September 25, 2003

i haven't seen the original post (and have been too lazy to look it up in
Google Groups) but can anybody tell me what sorta accuracy they need.  i
have posted, repeatedly, a couple of optimized finite series for doing
log2() and exp2().  and they can certainly be implemented in fixed-point as
i have done it with the 56K.

r b-j

In article eOqcb.568486$Ho3.103614@sccrnsc03, Glen Herrmannsfeldt at
gah@ugcs.caltech.edu wrote on 09/24/2003 20:38:

> "Matt Boytim" <maboytim@yahoo.com> wrote in message
> news:b90ff073.0309232132.3563039c@posting.google.com...
>
>> "Glen Herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
>> news:<9DTbb.553711$o%2.243736@sccrnsc02>...
>>
>>> "Matt Boytim" <maboytim@yahoo.com> wrote in message
>>> news:b90ff073.0309222021.6a3b739@posting.google.com...
>>>
>>>> You could easily get at 'em if you had log/antilog - these usually
>>>> aren't too bad to implement.
>>>
>>> The subject asks for fixed point, hopefully with the binary point not to
>>> the right of the LSB. In fixed point arithmetic you must be more careful
>>> with the range of numbers and the operations performed on them. It gets
>>> worse when you use functions like log() and exp() on them, without being
>>> very careful with the number of bits before and after the binary (or
>>> other radix) point.
>>
>> I don't follow. If I'm working on a fixed point machine I don't work
>> with logs directly but scaled versions. Since the log of a fractional
>> fixed point 32 bit number is between -32.0 and 0.0 I work with log2/32
>> instead of log2; on an integer machine the log is between 0.0 and 32.0
>> so I work with log2*2^26 - this way, both numbers and their logs are
>> representable by the machine.
>
> I prefer the description with the binary point in a different place, but a
> scale factor is fine, too. I used to use PL/I, one of the few languages
> that knows how to do fixed point arithmetic with the radix point not at the
> right of the LSB.
>
> Metafont, a language from D.E.Knuth for doing font design, does fixed point
> arithmetic with 16 bits after the binary point, including sqrt, log, exp,
> and a variety of other functions. The only point I was making is that you
> have to be a little careful about overflow, or loss of significance when
> working with log and exp. You have to be careful in floating point, too,
> and just a little more in fixed point.
>
>> Obviously exp is the dual. If I'm
>> working with fixed point then I'm used to scale factors scattered
>> about anyway. Introducing scale is just an equivalent way to 'keep
>> track of the binary point', except more general since scale factors
>> can be arbitrary while the binary point is quantized in position.
>> Actually, in fixed point I tend to use scale plus translation (affine
>> mappings). Logs and antilogs pose no particular difficulty in a fixed
>> point setting.
>
> I suppose no more difficult, overall. Though more people are used to using
> them in floating point, and what can happen.
>
> If the OP described the real problem in more detail, it would be easier to
> come up with a good solution.
>
> -- glen
Reply by Glen Herrmannsfeldt September 24, 2003
"Matt Boytim" <maboytim@yahoo.com> wrote in message
news:b90ff073.0309232132.3563039c@posting.google.com...
> "Glen Herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message
news:<9DTbb.553711$o%2.243736@sccrnsc02>...
> > "Matt Boytim" <maboytim@yahoo.com> wrote in message
> > news:b90ff073.0309222021.6a3b739@posting.google.com...
> >
> > > You could easily get at 'em if you had log/antilog - these usually
> > > aren't too bad to implement.
> >
> > The subject asks for fixed point, hopefully with the binary point not to the
> > right of the LSB. In fixed point arithmetic you must be more careful with
> > the range of numbers and the operations performed on them. It gets worse
> > when you use functions like log() and exp() on them, without being very
> > careful with the number of bits before and after the binary (or other radix)
> > point.
> I don't follow. If I'm working on a fixed point machine I don't work
> with logs directly but scaled versions. Since the log of a fractional
> fixed point 32 bit number is between -32.0 and 0.0 I work with log2/32
> instead of log2; on an integer machine the log is between 0.0 and 32.0
> so I work with log2*2^26 - this way, both numbers and their logs are
> representable by the machine.
I prefer the description with the binary point in a different place, but a scale factor is fine, too. I used to use PL/I, one of the few languages that knows how to do fixed point arithmetic with the radix point not at the right of the LSB.

Metafont, a language from D.E.Knuth for doing font design, does fixed point arithmetic with 16 bits after the binary point, including sqrt, log, exp, and a variety of other functions. The only point I was making is that you have to be a little careful about overflow, or loss of significance when working with log and exp. You have to be careful in floating point, too, and just a little more in fixed point.
> Obviously exp is the dual. If I'm working with fixed point then I'm used
> to scale factors scattered about anyway. Introducing scale is just an
> equivalent way to 'keep track of the binary point', except more general
> since scale factors can be arbitrary while the binary point is quantized
> in position. Actually, in fixed point I tend to use scale plus translation
> (affine mappings). Logs and antilogs pose no particular difficulty in a
> fixed point setting.
I suppose no more difficult, overall. Though more people are used to using them in floating point, and what can happen.

If the OP described the real problem in more detail, it would be easier to come up with a good solution.

-- glen
Reply by Hemanth M S September 24, 2003
Anders,
If your x has a small domain, or if you can split x into a small-domain part and an integer part, where the 0.5^x or x^0.75 of the integer part can be computed using shifts/multiplies, you can have an optimum method for these two functions.

The solution is a polynomial curve fit for 0.5^x / x^0.75 using Remez or least squares. You can use polyfit (IIRC) in Matlab for a least-squares curve fit.

Check Google on this group for some posts by r b-j on x^0.75.
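Hemanth's recipe can be sketched in a few lines, with numpy.polyfit standing in for Matlab's polyfit; the domain split and the cubic degree here are illustrative choices, not anything fixed by the thread:

```python
import numpy as np

# Range reduction: 0.5**(i + f) = (0.5**f) >> i for integer i >= 0,
# so the polynomial only has to cover f in [0, 1).
f = np.linspace(0.0, 1.0, 257)
target = 0.5 ** f
coeffs = np.polyfit(f, target, 3)        # cubic least-squares fit
max_err = np.max(np.abs(np.polyval(coeffs, f) - target))
```

A cubic already fits 0.5^f on [0, 1) to well under 1e-4; convert the coefficients to your fixed-point format and evaluate with Horner's rule.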

regards
Hemanth


Anders Buvarp wrote:

> Hello,
>
> I need to implement in fixed-point pow(0.5,x) and pow(x, 0.75) and
> I was wondering if anyone has some pointers with regards to this?
>
> We are dealing with 32-bit words.
>
> Any help is greatly appreciated.
>
> --
> Best regards,
> Anders Buvarp
> anders@lsil.com
Reply by Matt Boytim September 24, 2003
"Glen Herrmannsfeldt" <gah@ugcs.caltech.edu> wrote in message news:<9DTbb.553711$o%2.243736@sccrnsc02>...
> "Matt Boytim" <maboytim@yahoo.com> wrote in message
> news:b90ff073.0309222021.6a3b739@posting.google.com...
>
> > You could easily get at 'em if you had log/antilog - these usually
> > aren't too bad to implement.
>
> The subject asks for fixed point, hopefully with the binary point not to the
> right of the LSB. In fixed point arithmetic you must be more careful with
> the range of numbers and the operations performed on them. It gets worse
> when you use functions like log() and exp() on them, without being very
> careful with the number of bits before and after the binary (or other radix)
> point.
>
> -- glen
I don't follow. If I'm working on a fixed point machine I don't work with logs directly but scaled versions. Since the log of a fractional fixed point 32 bit number is between -32.0 and 0.0 I work with log2/32 instead of log2; on an integer machine the log is between 0.0 and 32.0 so I work with log2*2^26 - this way, both numbers and their logs are representable by the machine. Obviously exp is the dual.

If I'm working with fixed point then I'm used to scale factors scattered about anyway. Introducing scale is just an equivalent way to 'keep track of the binary point', except more general since scale factors can be arbitrary while the binary point is quantized in position. Actually, in fixed point I tend to use scale plus translation (affine mappings). Logs and antilogs pose no particular difficulty in a fixed point setting.

Matt
Reply by Glen Herrmannsfeldt September 23, 2003
"Matt Boytim" <maboytim@yahoo.com> wrote in message
news:b90ff073.0309222021.6a3b739@posting.google.com...
> You could easily get at 'em if you had log/antilog - these usually
> aren't too bad to implement.
The subject asks for fixed point, hopefully with the binary point not to the right of the LSB. In fixed point arithmetic you must be more careful with the range of numbers and the operations performed on them. It gets worse when you use functions like log() and exp() on them, without being very careful with the number of bits before and after the binary (or other radix) point.

-- glen
Reply by Glen Herrmannsfeldt September 23, 2003
"Anders Buvarp" <anders@lsil.com> wrote in message
news:3F6F348E.F8214A04@lsil.com...
> Hello Jim,
>
> Thanks for your reply.
>
> It is 0.5^x I need, maybe I can do a look-up table for now.
Well, 0.5**x (Fortran notation) is just a right shift if x is an integer, and 0.5 is a fixed point number with the binary point far enough not to shift out the only 1. If you want to multiply a fixed point number by 0.5**x, shift it right by x.

(snip)
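In code, Glen's observation is one line (Python integers modeling a fixed-point word; the Q16.16 value in the example comment is just an illustration, since halving works the same wherever the binary point sits):

```python
def mul_half_pow(v, x):
    """Return v * 0.5**x, truncated, for a non-negative integer x.
    The representation (Q1.31, Q16.16, ...) does not matter: halving
    is a right shift regardless of where the binary point sits."""
    return v >> x

# Example: 1.0 in Q16.16 is 0x00010000; 1.0 * 0.5**4 = 0.0625 = 0x00001000.
```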
> > >> I need to implement in fixed-point pow(0.5,x) and pow(x, 0.75) and
> > >> I was wondering if anyone has some pointers with regards to this?
> > >>
> > >> We are dealing with 32-bit words.
It probably isn't hard to write a Newton-Raphson algorithm to do x**0.75. In floating point I might do it sqrt(x)*sqrt(sqrt(x)), or sqrt(sqrt(x*x*x)). The latter might work well in fixed point arithmetic if it can be made not to overflow.

If this is homework, please reference the newsgroup.

-- glen
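Glen's sqrt(sqrt(x*x*x)) formulation is easy to sketch in fixed point. Q16.16 is assumed here (the thread never fixes a binary point), with Python integers doing the arithmetic; note that Python's unbounded ints hide exactly the overflow Glen warns about, so on a real 32-bit machine the cube only fits for roughly x < 32, or needs a wider intermediate:

```python
from math import isqrt

Q = 16  # assumed Q16.16 format

def q_mul(a, b):
    """Multiply two Q16.16 values (needs a 64-bit product on real hardware)."""
    return (a * b) >> Q

def q_sqrt(v):
    """Square root in Q16.16: sqrt(v / 2**Q) * 2**Q == isqrt(v << Q)."""
    return isqrt(v << Q)

def q_pow_3_4(x):
    """x**0.75 as sqrt(sqrt(x*x*x)), Glen's second formulation."""
    return q_sqrt(q_sqrt(q_mul(q_mul(x, x), x)))
```

For inputs whose cube and root are exact powers of two the result is exact, e.g. q_pow_3_4(16 << 16) gives 8 << 16; elsewhere it is good to a couple of ulps from the truncating multiply and roots.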
Reply by Matt Boytim September 23, 2003
You could easily get at 'em if you had log/antilog - these usually
aren't too bad to implement.

Matt

Anders Buvarp <anders@lsil.com> wrote in message news:<3F6BB47E.F0EBF176@lsil.com>...
> Hello,
>
> I need to implement in fixed-point pow(0.5,x) and pow(x, 0.75) and
> I was wondering if anyone has some pointers with regards to this?
>
> We are dealing with 32-bit words.
>
> Any help is greatly appreciated.