<a1eah71@aol.com> wrote in message
news:1118120058.906145.30700@g49g2000cwa.googlegroups.com...
>
>
> John E. Hadstate wrote:
>> <a1eah71@aol.com> wrote in message
>> news:1118041775.949460.8790@z14g2000cwz.googlegroups.com...
>> >
>> >
>> >
>> > Your algorithm suggests a modification.
>> >
>> > Use successive approximation to resolve the fractional
>> > part of the argument as the sum from m = 1 to M of Am 2**-m,
>> > where the coefficients Am are either plus one or minus one.
>> >
>> > For example, if x = .34375 then x = .5 - .25 + .125 - .0625 + .03125
>> >
>> > At each step of the resolution, the mth table entry, containing
>> > 10**(2**-m), multiplies a product register when Am is one, or
>> > divides the register when Am is minus one.
>> >
>> > Hence 10**.34375 = 1 * (10**(1/2)) / (10**(1/4)) *
>> > (10**(1/8)) . . .
>> >
>>
>> I can see what you say is true, but I don't quite see what
>> it buys me (maybe half the table size?). Also, isn't
>> division universally more time consuming than
>> multiplication?
>
> Generally division is more time consuming than multiplication.
> However, these differences are negligible compared with the cost
> of ignoring the benefits of binary arithmetic. By resolving the
> argument as a sum from m = 1 to M of Am * log base two (1 + 2**-m),
> the calculation of 2**x requires only shifts and additions.
>
> Perhaps I am old-fashioned, but my position is that anyone who
> does decimal arithmetic deserves to wait for what they get.
>
I'm trying to understand where the improvement is. The
light hasn't switched on yet.
Reply by ●June 7, 2005
John E. Hadstate wrote:
> <a1eah71@aol.com> wrote in message
> news:1118041775.949460.8790@z14g2000cwz.googlegroups.com...
> >
> >
> >
> > Your algorithm suggests a modification.
> >
> > Use successive approximation to resolve the fractional
> > part of the argument as the sum from m = 1 to M of Am 2**-m,
> > where the coefficients Am are either plus one or minus one.
> >
> > For example, if x = .34375 then x = .5 - .25 + .125 - .0625 + .03125
> >
> > At each step of the resolution, the mth table entry, containing
> > 10**(2**-m), multiplies a product register when Am is one, or
> > divides the register when Am is minus one.
> >
> > Hence 10**.34375 = 1 * (10**(1/2)) / (10**(1/4)) *
> > (10**(1/8)) . . .
> >
>
> I can see what you say is true, but I don't quite see what
> it buys me (maybe half the table size?). Also, isn't
> division universally more time consuming than
> multiplication?
Generally division is more time consuming than multiplication. However,
these differences are negligible compared with the cost of ignoring the
benefits of binary arithmetic. By resolving the argument as a sum from
m = 1 to M of Am * log base two (1 + 2**-m), the calculation of 2**x
requires only shifts and additions.
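To make that concrete, here is a minimal C sketch (mine, not from the
post). It assumes a restoring variant with Am in {0, 1}, since
multiplying by (1 + 2**-m) is just a shift and an add; doubles are used
only to show the control flow, and a fixed-point version would replace
y += y * 2**-m with y += y >> m.

#include <math.h>

/* Compute 2**x for 0 <= x < 1 by resolving x as a sum of
   Am * log2(1 + 2**-m), Am in {0, 1}.  Each accepted term
   multiplies y by (1 + 2**-m): a shift and an add. */
double Exp2ShiftAdd(double x)
{
    double y = 1.;
    int m;

    for (m = 1; m <= 30; m++)
    {
        double lg = log2(1.0 + ldexp(1.0, -m));  /* would be a ROM table */

        if (x >= lg)
        {
            x -= lg;              /* accept the term: Am = 1 */
            y += ldexp(y, -m);    /* y *= (1 + 2**-m) */
        }
    }

    return y;   /* leftover exponent is under 2**-30: negligible */
}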
Perhaps I am old-fashioned, but my position is that anyone who does
decimal arithmetic deserves to wait for what they get.
Herbert
Reply by John E. Hadstate●June 6, 2005
<a1eah71@aol.com> wrote in message
news:1118041775.949460.8790@z14g2000cwz.googlegroups.com...
>
>
>
> Your algorithm suggests a modification.
>
> Use successive approximation to resolve the fractional
> part of the argument as the sum from m = 1 to M of Am 2**-m,
> where the coefficients Am are either plus one or minus one.
>
> For example, if x = .34375 then x = .5 - .25 + .125 - .0625 + .03125
>
> At each step of the resolution, the mth table entry, containing
> 10**(2**-m), multiplies a product register when Am is one, or
> divides the register when Am is minus one.
>
> Hence 10**.34375 = 1 * (10**(1/2)) / (10**(1/4)) *
> (10**(1/8)) . . .
>
I can see what you say is true, but I don't quite see what
it buys me (maybe half the table size?). Also, isn't
division universally more time consuming than
multiplication?
Reply by ●June 6, 2005
John E. Hadstate wrote:
> "John Hadstate" <jh113355@hotmail.com> wrote in message
> news:1117654632.785625.143310@f14g2000cwb.googlegroups.com...
> >
> >
> > tanmay.zargar@gmail.com wrote:
> >>
> >> I doubt there is an algorithm that will efficiently do what
> >> I need and, more so, be portable across processors. I say
> >> this because most optimized DSP algorithms make use of
> >> processor-specific intrinsic functions, which ultimately make
> >> use of the hardware units specific to the processor. If,
> >> however, there is such an algorithm, please do let me know.
> >>
> >
>
> Here is an efficient, hardware-independent algorithm that
> will exactly compute 10**x where x is a 16-bit scaled number
> in at most 16 iterations. It works because:
>
> 10**(1 + 1/2 + 1/4 + ...) =
> (10**1) * (10**(1/2)) * (10**(1/4)) * ...
>
> First, define the scaling. By 1.15, I'm assuming you mean 1
> whole-number bit and 15 fractional bits, so the maximum
> value of x is 1 + 1/2 + 1/4 + ... + 1/32768.
>
> Then, define a table of 16 numbers according to the
> following algorithm:
>
> #define XBITS 16
>
> double tenTable[XBITS];
>
> tenTable[0] = 10.;
>
> for (i = 1; i < XBITS; i++)
> {
> tenTable[i] = sqrt(tenTable[i - 1]);
> }
>
> This computation needs to be done only once, and ever after,
> tenTable can live in ROM or RAM.
>
> Now, using that table, compute 10**x as follows:
>
> double Antilog10(unsigned int x)
> {
> int i;
> double y;
>
> i = XBITS;
> y = 1.;
>
> while (x != 0)
> {
> i = i - 1;
>
> if ((x & 1) != 0) y = y * tenTable[i];
>
> x = (x >> 1);
> }
>
> return y;
> }
>
>
> By adjusting tenTable, this algorithm is easily adaptable to
> any scaling of x such as 1.31, 2.14, 8.8, etc.
Your algorithm suggests a modification.
Use successive approximation to resolve the fractional part of the
argument as the sum from m = 1 to M of Am 2**-m, where the coefficients
Am are either plus one or minus one.
For example, if x = .34375 then x = .5 - .25 + .125 - .0625 + .03125.
At each step of the resolution, the mth table entry, containing
10**(2**-m), multiplies a product register when Am is one, or divides
the register when Am is minus one.
Hence 10**.34375 = 1 * (10**(1/2)) / (10**(1/4)) * (10**(1/8)) . . .
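In C, a sketch of that resolution might look like this (my
illustration: the function name and the use of doubles are assumed, and
tenTable[m] is taken to hold 10**(2**-m) as in the quoted code):

/* Successive approximation with Am = +1 or -1: the sign of the
   residual r decides whether step m multiplies or divides the
   product register by tenTable[m] = 10**(2**-m). */
double Antilog10Signed(double frac)   /* 0 < frac < 1 */
{
    double r = frac;   /* residual of frac minus the Am * 2**-m terms */
    double y = 1.;
    int m;

    for (m = 1; m < XBITS; m++)
    {
        if (r >= 0.)
        {
            y *= tenTable[m];      /* Am = +1 */
            r -= 1.0 / (1 << m);
        }
        else
        {
            y /= tenTable[m];      /* Am = -1 */
            r += 1.0 / (1 << m);
        }
    }

    return y;   /* for frac = .34375 the first five steps are *, /, *, /, * */
}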
Herbert
Reply by John E. Hadstate●June 5, 2005
"John Hadstate" <jh113355@hotmail.com> wrote in message
news:1117654632.785625.143310@f14g2000cwb.googlegroups.com...
>
>
> tanmay.zargar@gmail.com wrote:
>>
>> I doubt there is an algorithm that will efficiently do what
>> I need and, more so, be portable across processors. I say
>> this because most optimized DSP algorithms make use of
>> processor-specific intrinsic functions, which ultimately make
>> use of the hardware units specific to the processor. If,
>> however, there is such an algorithm, please do let me know.
>>
>
Here is an efficient, hardware-independent algorithm that
will exactly compute 10**x where x is a 16-bit scaled number
in at most 16 iterations. It works because:
10**(1 + 1/2 + 1/4 + ...) =
(10**1) * (10**(1/2)) * (10**(1/4)) * ...
First, define the scaling. By 1.15, I'm assuming you mean 1
whole-number bit and 15 fractional bits, so the maximum
value of x is 1 + 1/2 + 1/4 + ... + 1/32768.
Then, define a table of 16 numbers according to the
following algorithm (sqrt() comes from <math.h>, and i is an int):

#define XBITS 16

double tenTable[XBITS];
int i;

tenTable[0] = 10.;

for (i = 1; i < XBITS; i++)
{
    tenTable[i] = sqrt(tenTable[i - 1]);   /* tenTable[i] = 10**(2**-i) */
}
This computation needs to be done only once, and ever after,
tenTable can live in ROM or RAM.
Now, using that table, compute 10**x as follows:
double Antilog10(unsigned int x)
{
    int i;
    double y;

    i = XBITS;
    y = 1.;

    while (x != 0)
    {
        i = i - 1;

        /* the low bit of x now has weight 2**-i in 1.15 format,
           so multiply in tenTable[i] = 10**(2**-i) when it is set */
        if ((x & 1) != 0) y = y * tenTable[i];

        x = (x >> 1);
    }

    return y;
}
By adjusting tenTable, this algorithm is easily adaptable to
any scaling of x such as 1.31, 2.14, 8.8, etc.
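As a quick check (my example, not part of the original post), the
fraction .34375 in 1.15 format is the integer .34375 * 32768 = 11264,
and a small test driver built on the routines above recovers
10**.34375:

#include <math.h>
#include <stdio.h>

int main(void)
{
    int i;

    tenTable[0] = 10.;
    for (i = 1; i < XBITS; i++)
        tenTable[i] = sqrt(tenTable[i - 1]);

    printf("%f\n", Antilog10(11264u));   /* prints about 2.206735 */
    return 0;
}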
Reply by Jerry Avins●June 3, 2005
Jon Harris wrote:
> "Martin Eisenberg" <martin.eisenberg@udo.edu> wrote in message
> news:1117810754.190977@ostenberg.wh.uni-dortmund.de...
>
>>Jon Harris wrote:
>>
>>
>>>"Jerry Avins" <jya@ieee.org> wrote in message
>>>news:7OqdneXcgM3PvgLfRVn-vQ@rcn.net...
>>
>>>>You don't have to put the end points of straight-line
>>>>approximations on the defining curve in order to ensure
>>>>continuity.
>>
>>>I _think_ you are saying that as long as I constrain the 2
>>>end points to be equal to _each other_, even though they don't
>>>necessarily match the "ideal" curve value, I still avoid the
>>>discontinuity plus theoretically achieve lower peak error.
>>
>>Rather, wouldn't the ratio of the end points have to equal the
>>scaling applied in range reduction?
>
>
> I see your point here--my wording was imprecise at best. The idea is that you
> want the curve sections to splice smoothly together over multiple intervals. So
> the 2 end points won't generally be identical, but will be related by some
> constant
> based on your interval.
>
> For example, consider approximating a power function over 1 octave, say y=2^x
> from [0, 1). Ideally, y(0) = 1 and y(1) = 2. If you use an approximation such
> that y(0) = 0.99 instead of 1.0, then you would want to make sure that y(1) =
> 1.98
> so the ends "line up" when spliced together.
Yes. I hadn't understood your point about repeating the same basic
section at different scales. I'm glad you brought it out explicitly.
Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by Jon Harris●June 3, 2005
"Martin Eisenberg" <martin.eisenberg@udo.edu> wrote in message
news:1117810754.190977@ostenberg.wh.uni-dortmund.de...
> Jon Harris wrote:
>
> > "Jerry Avins" <jya@ieee.org> wrote in message
> > news:7OqdneXcgM3PvgLfRVn-vQ@rcn.net...
>
> >> You don't have to put the end points of straight-line
> >> approximations on the defining curve in order to ensure
> >> continuity.
>
> > I _think_ you are saying that as long as I constrain the 2
> > end points to be equal to _each other_, even though they don't
> > necessarily match the "ideal" curve value, I still avoid the
> > discontinuity plus theoretically achieve lower peak error.
>
> Rather, wouldn't the ratio of the end points have to equal the
> scaling applied in range reduction?
I see your point here--my wording was imprecise at best. The idea is that you
want the curve sections to splice smoothly together over multiple intervals. So
the 2 end points won't generally be identical, but will be related by some
constant
based on your interval.
For example, consider approximating a power function over 1 octave, say y=2^x
from [0, 1). Ideally, y(0) = 1 and y(1) = 2. If you use an approximation such
that y(0) = 0.99 instead of 1.0, then you would want to make sure that y(1) = 1.98
so the ends "line up" when spliced together.
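A small C sketch of that constraint (illustrative only; the function
name is mine, and the 0.99/1.98 end values just echo the example
above):

#include <math.h>

/* Piecewise-linear 2**x: one straight-line segment per octave, with
   the end values forced into the ratio y1/y0 = 2 so that scaled
   copies of the segment join without jumps at octave boundaries. */
double exp2_spliced(double x)
{
    int    oct = (int)floor(x);
    double f   = x - oct;             /* fractional part in [0, 1) */
    double y0  = 0.99, y1 = 1.98;     /* any y0 near 1 with y1 = 2 * y0 */
    double y   = y0 + (y1 - y0) * f;  /* line through the two ends */

    return ldexp(y, oct);             /* scale by 2**oct */
}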
Reply by Jerry Avins●June 3, 2005
Martin Eisenberg wrote:
> Jon Harris wrote:
>
>
>>"Jerry Avins" <jya@ieee.org> wrote in message
>>news:7OqdneXcgM3PvgLfRVn-vQ@rcn.net...
>
>
>>>You don't have to put the end points of straight-line
>>>approximations on the defining curve in order to ensure
>>>continuity.
>
>
>>I _think_ you are saying that as long as I constrain the 2
>>end points to be equal to _each other_, even though they don't
>>necessarily match the "ideal" curve value, I still avoid the
>>discontinuity plus theoretically achieve lower peak error.
>
>
> Rather, wouldn't the ratio of the end points have to equal the
> scaling applied in range reduction?
I don't clearly understand what you mean. Is it that error can be
measured either as a departure from or as a fraction of the correct value?
Making the center and end-point errors equal in magnitude (for mild
curves, that doesn't differ much from passing the secant through the
1/4 and 3/4 points) minimizes the first kind of error. This technique
typically reduces the RMS absolute error by a factor of 3 to 4.
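A quick numeric illustration of the peak-error effect (mine, not
Jerry's): for 2**x on [0, 1], the chord through the end points has a
one-sided peak error of about 0.0861; lowering the line by half of that
makes the center and end-point errors equal and halves the peak error:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* peak chord error of 2**x vs. the line 1 + x occurs where
       d/dx (1 + x - 2**x) = 0; delta is about 0.0861 */
    double delta = 1. - log(log(2.)) / log(2.) - 1. / log(2.);
    double worst_chord = 0., worst_shifted = 0., x;

    for (x = 0.; x <= 1.; x += 1e-4)
    {
        double chord = 1. + x;   /* through (0, 1) and (1, 2) */
        double e1 = fabs(chord - exp2(x));
        double e2 = fabs(chord - delta / 2. - exp2(x));

        if (e1 > worst_chord)   worst_chord = e1;
        if (e2 > worst_shifted) worst_shifted = e2;
    }

    printf("%.4f %.4f\n", worst_chord, worst_shifted);  /* ~0.0861 ~0.0430 */
    return 0;
}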
Jerry
--
Engineering is the art of making what you want from things you can get.
Reply by Martin Eisenberg●June 3, 2005
Jon Harris wrote:
> "Jerry Avins" <jya@ieee.org> wrote in message
> news:7OqdneXcgM3PvgLfRVn-vQ@rcn.net...
>> You don't have to put the end points of straight-line
>> approximations on the defining curve in order to ensure
>> continuity.
> I _think_ you are saying that as long as I constrain the 2
> end points to be equal to _each other_, even though they don't
> necessarily match the "ideal" curve value, I still avoid the
> discontinuity plus theoretically achieve lower peak error.
Rather, wouldn't the ratio of the end points have to equal the
scaling applied in range reduction?
--
Quidquid latine dictum sit, altum viditur. (Whatever is said in Latin sounds profound.)
Reply by Andre●June 3, 2005
If your dB value is an int, a lookup table for one decade would only be
20 entries long.
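For instance (a sketch with assumed names, taking amplitude dB at
20 dB per decade; a power-dB table would use 10 entries per decade
instead):

#include <math.h>

static double dbTable[20];            /* dbTable[r] = 10**(r/20) */

void InitDbTable(void)
{
    int r;
    for (r = 0; r < 20; r++)
        dbTable[r] = pow(10.0, r / 20.0);
}

double DbToLinear(int db)             /* 10**(db/20), integer db >= 0 */
{
    int q = db / 20, r = db % 20;     /* whole decades + remainder */
    double y = dbTable[r];

    while (q-- > 0)
        y *= 10.0;                    /* each 20 dB is one decade */

    return y;
}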
tanmay.zargar@gmail.com wrote:
> I developed the algorithm to compute 10^x as e^(ln(10)*x). However,
> this was too computationally intensive for my liking, so I have
> reconciled myself to the table-lookup approach.
>
> The original problem was to convert a dBm0 value to an absolute
> value - in other words, computing the antilog. However, that wasn't
> all - I then had to multiply this 10^x value by another fixed-point
> number. Now, the DSP processor I am using does provide intrinsic
> functions for fixed-point math, but only for numbers in 1.15 format
> or 1.31 format. None of these functions could be used directly for
> the problem at hand. I therefore had to develop the algorithm
> myself. However, having done that, I found that it was too demanding
> in terms of execution cycles.
>
> I doubt there is an algorithm that will efficiently do what I need
> and, more so, be portable across processors. I say this because
> most optimized DSP algorithms make use of processor-specific
> intrinsic functions, which ultimately make use of the hardware units
> specific to the processor. If, however, there is such an algorithm,
> please do let me know.
>
> Thanks,
> T.
>
--
Please change no_spam to a.lodwig when replying via email!