DSPRelated.com
Forums

Fixed point logarithm

Started by sauwen December 2, 2010
Hello there,

I am trying to get a grasp of doing fixed point arithmetic on a DSP.  My
goal is to implement a natural logarithm.  Like this thread,
http://www.dsprelated.com/showmessage/39806/1.php, I am attempting the
route of doing a log base 2 first and then, when convenient, doing a
multiply at the end to convert to ln.

My troubles start because my data is coming from a Q1.15 fract16 format (1
sign bit and rest fractional).  I am able to find the integer portion of
the log base 2 answer by shifting bits to the left to keep multiplying by 2
to try and get the data between 1 and 2.  (Of course, I can't do this
because my format limits me so I stop when I am at the point of
saturation.) I record what's left over.

However, I'm stuck on how to get the second half of the log number.  Clay's
algorithm requires that the data be between 1 and 2.  Assuming that I do get
the data between 1 and 2, how do I square it?? 

Ex: 1.875^2=3.515625 <-- How to get this in fixed pt arithmetic

I feel like this can only be done in floating point...

Any insight is much appreciated.  Thanks!
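For what it's worth, the shift-and-count step described here can be sketched in a few lines (Python used only to make the integer arithmetic explicit; the helper name and the raw Q1.15 representation are illustrative):

```python
# Sketch of the normalization step: the Q1.15 value is held as a raw
# integer (value = raw / 2**15), and we shift left (multiply by 2)
# until the mantissa lands in [1.0, 2.0), i.e. until bit 15 is set.

def normalize_q15(raw):
    """Return (shifts, mantissa_raw) for a positive Q1.15 raw value.

    The input equals (mantissa_raw / 2**15) * 2**(-shifts), with the
    mantissa in [1, 2), so the integer part of log2 is -shifts.
    """
    assert 0 < raw < (1 << 15)
    shifts = 0
    while raw < (1 << 15):   # still below 1.0
        raw <<= 1            # multiply by 2
        shifts += 1
    return shifts, raw

# 0.11 in Q1.15 is round(0.11 * 2**15) = 3604
shifts, m = normalize_q15(3604)
print(shifts, m / 2**15)     # 4 shifts, mantissa ~1.7598
```

The "record what's left over" part is the returned mantissa; on a 16-bit DSP the shifted value would be held in a wider register (or reinterpreted with the binary point moved) rather than a Python integer.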


sauwen <sauwen.jl@n_o_s_p_a_m.gmail.com> wrote:

>My troubles start because my data is coming from a Q1.15 fract16 format (1
>sign bit and rest fractional).
(Whence this fascination with "Q" format numbers? How long does it take something like that to become obsolete?)
>Ex: 1.875^2=3.515625 <-- How to get this in fixed pt arithmetic
>
>I feel like this can only be done in floating point...
Let's see... if you square an unsigned fixed point number with N integer
bits, the result has 2*N integer bits. It also has twice the total number
of bits. It is still fixed point, however. Hope this helps.

Steve
On Dec 2, 9:08 pm, "sauwen" <sauwen.jl@n_o_s_p_a_m.gmail.com> wrote:
(snip)

> Ex: 1.875^2=3.515625 <-- How to get this in fixed pt arithmetic
>
> I feel like this can only be done in floating point...
Steve gave the basic thought. Normally with integers, we think of the
binary point as being at the right end of the number. But we can imagine
that the binary point is elsewhere in the number.

For example, the decimal value 1.875 is 1.111 in binary, so you may also
write this as 1111. * 2^-3. If we do an integer multiply we then get

(1111. * 1111.) * 2^-6  =  11100001. * 2^-6

(in decimal we found 15*15 = 225, and 225*(2^-6) = 225/64). Continuing in
binary, move the binary point over 6 places and we find that 11.100001 is
the binary representation for 1.875 squared. Converting this to decimal:
(2^1)+(2^0)+(2^-1)+(2^-6) = 3.515625.

I hope this helps.

Clay
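Clay's worked example, transcribed into code (a sketch in Python; the function name is illustrative). Steve's rule falls out directly: squaring a value with f fractional bits yields a result with 2f fractional bits and twice the total width.

```python
# Squaring a fixed-point value is an ordinary integer multiply; only
# the (implicit) position of the binary point changes.

def fixed_square(raw, frac_bits):
    """Square raw * 2**-frac_bits.  The result has 2*frac_bits
    fractional bits (and twice the total width)."""
    return raw * raw, 2 * frac_bits

raw = 0b1111                 # 1.875 = 1111. * 2**-3
sq, f = fixed_square(raw, 3)
print(bin(sq), f)            # 0b11100001 with 6 fractional bits -> 11.100001
print(sq / 2 ** f)           # 3.515625
```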
On 12/2/2010 6:20 PM, Steve Pope wrote:
(snip)

> (Whence this fascination with "Q" format numbers? How long does
> it take something like that to become obsolete?)
I don't know, I generally use Ux.y and Sx.y numbers (unsigned and signed)
and find the notation pretty useful. The information about range and
resolution is immediately visible, keeping track of what happens in a
multiply works a beaut, and I don't really know of any better answer.

--
Rob Gaddi, Highland Technology
Email address is currently out of order
Clay <clay@claysturner.com> wrote:
(snip)

> Steve gave the basic thought. Normally with integers, we think of the
> binary point as being at the right end of the number. But we can
> imagine that the binary point is elsewhere in the number.
Or even outside of the number.  PL/I scaled fixed point allows
negative values for the scale factor (digits after the radix point)
or more than the number of digits.  I believe the one I used to
use allows for + or - 63 (binary or decimal).

-- glen
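A toy illustration of that point: the scale factor is just a power-of-two exponent attached to an integer, so nothing stops it from being negative or from exceeding the number of digits in the word (the helper name here is illustrative):

```python
# value = raw * 2**(-scale); the binary point may lie "outside" the word.
def scaled_fixed(raw, scale):
    return raw * 2.0 ** (-scale)

print(scaled_fixed(0b1101, -3))  # 13 * 8 = 104.0 (point 3 places right of the word)
print(scaled_fixed(0b1101, 12))  # 13 / 4096 (12 fractional bits for a 4-digit word)
```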
On Dec 3, 2:29 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
(snip)

> Or even outside of the number.  PL/I scaled fixed point allows
> negative values for the scale factor (digits after the radix point)
> or more than the number of digits.
Certainly true with a lot of floating point implementations, where the
exponent has a range of values that far exceeds the number of bits in the
mantissa. I can see where one could use (need) this scaling in fixed point
numbers. I never did PL/I.

Common in astronomical calcs is the factoring out of a huge scaling factor
so the remaining values themselves don't need scientific notation. For
example, this is done in Chapront's lunar theory or Bretagnon's VSOP87
planetary theory.

Clay
Rob Gaddi  <rgaddi@technologyhighland.com> wrote:

>On 12/2/2010 6:20 PM, Steve Pope wrote:
>> (Whence this fascination with "Q" format numbers? How long does
>> it take something like that to become obsolete?)
>I don't know, I generally use Ux.y and Sx.y numbers (unsigned and
>signed) and find the notation pretty useful. The information about
>range and resolution is immediately visible, keeping track of what
>happens in a multiply works a beaut, and I don't really know of any
>better answer.
Sure you need the information, but it's best to present it in a standard
way.

Steve
Hello there,

Thanks to everyone's response!  I now feel like I have a bigger picture of
what to do with fixed point arithmetic.

I have a couple questions on the log algorithm that Clay suggested:
1.  Why must we normalize the number to be in the range from 1 to 2?  Why
not like 0 to 1 or 1 to 3?   
2.  When implementing the algorithm to find the log of a negative number,
why does it seem like we have to take the complement of the mantissa's
binary? 
For example:
Log base 2 of 0.11 = -3.1844245711
Multiply 0.11 by 16 (four shifts) to get 1.76.  Take 3, or 11 in binary.
Is the answer always # shifts - 1?
(1.76)^2    = 3.0976     1
(1.5488)^2  = 2.39878    1
(1.19939)^2 = 1.43854    0
(1.43854)^2 = 2.06939    1
(1.0347)^2  = 1.0706     0
(1.0706)^2  = 1.14618    0
Binary = -11.110100
Decimal = -3.8125 (This is a little off from what we expect.)
If we use the complement of the mantissa ... -11.001011 or -3.171875, it is
pretty close to the answer.
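Running the six squarings end to end in integer arithmetic (a Python sketch; `log2_q15` is an illustrative name) produces the same bits 110100 and shows where the sign goes astray: the result is (-shifts) plus a positive fraction, so for 0.11 it is -4 + 0.8125 = -3.1875, not -(3 + 0.8125) = -3.8125. It also explains why the complement looked close: -4 + f = -(3 + (1 - f)), and 1 - f in binary is, up to one LSB, the complement of the fraction bits.

```python
def log2_q15(raw, frac_iters=6):
    """Approximate log2 of a positive Q1.15 value (raw / 2**15).

    Returns -shifts + fraction, where the fraction bits come from the
    repeated-squaring scheme tabulated above.  Python's big integers
    hide the fact that a real Q1.15 implementation would need a 32-bit
    accumulator for the products.
    """
    # Normalize the mantissa into [1, 2): raw ends up in [2**15, 2**16).
    shifts = 0
    while raw < (1 << 15):
        raw <<= 1
        shifts += 1

    frac = 0
    for _ in range(frac_iters):
        raw = (raw * raw) >> 15   # square; back to 15 fractional bits
        frac <<= 1
        if raw >= (2 << 15):      # square landed in [2, 4): bit is 1
            frac |= 1
            raw >>= 1             # halve back into [1, 2)
    return -shifts + frac / 2.0 ** frac_iters

# 0.11 in Q1.15 is 3604; true log2(0.11) = -3.1844...
print(log2_q15(3604))             # -4 + 0b110100/64 = -3.1875
```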

I can see how the fixed point format of Ux.y and Sx.y numbers is an elegant
approach.  However, doesn't this imply that for every number you are either
using 2x the bits compared to a Q-format number, or you'd have to keep
track of the number of bits for each x and y?

Thanks.

sauwen <sauwen.jl@n_o_s_p_a_m.gmail.com> wrote:

>I can see how the fixed point format of Ux.y and Sx.y numbers is an elegant
>approach. However, this implies that for every number, you are using 2x
>the bits compared to a Q format number or you'd have to keep track of the
>number of bits for each x and y?
Is there a formal, agreed-upon definition of "Q format"?

IEEE 1666 defines a number of fixed point types. The standardized way of
defining these number types is in terms of the total number of bits (must
be non-negative); the number of integer bits (can be either positive or
negative); whether it is signed or unsigned; as well as the
rounding/truncation mode and the overflow/saturation mode. Once you have
defined all these, you are also able to define the result type of any
arithmetic operation. The original question in this thread could be viewed
as the OP needing to know the result type of a squaring operation.

In comp.dsp you do not see many IEEE 1666 / SystemC users wondering about
basic fixed point arithmetic, because it's already well-defined. But I see
a lot of such questions from Q-format users, so I am wondering if Q-format
is not so well-defined in the first place. At the very least it does not
seem sufficiently comprehensive.

As a secondary issue, neither of the numbers in Q-format notation (15.1)
seems to be the total number of bits. What do you do in Q-format if you
have a negative number of integer bits?

Steve
>But I see a lot of such questions from Q-format users so I
>am wondering if Q-format is not so well-defined in the first place.
>At the very least it does not seem sufficiently comprehensive.
>
>As a secondary issue, neither of the numbers in Q-format notation
>(15.1) seems to be the total number of bits. What do you do
>in Q-format if you have a negative number of integer bits?
>
>Steve
I believe I was confused in the first place because the data given in the
fract16 format goes only from -1 to +1. I shouldn't have said Q1.15
(incorrect term); the more accurate notation would have been Q0.15, with
the assumption that there is a sign bit in there for 16-bit signed numbers.

It was a bit fuzzy in my mind how to change my data from 0 to +1 into a
different range, e.g. +1 to +2. The data probably needs to come out of this
format (fractional only) and become an unsigned number or something.

I am still curious about the log algorithm from Clay. (Please see my
questions above about taking the log of numbers between 0 and 1.) Thanks!