Reply by ga.....@u.washington.edu August 18, 2020
On Tuesday, August 18, 2020 at 1:44:10 PM UTC-7, gtwrek wrote:

(snip, I wrote)

> >Many algorithms are much better done in fixed point, but teaching it
> >seems to be a lost art. Part of the reason is that most high-level languages
> >don't make it easy to do.
(snip)
> >Maybe some will have some good counterexamples. More DSP chips support
> >floating point, and it isn't all that hard to do now. But often enough, fixed
> >point is the right choice.
> "Doing DSP" in FPGAs (more and more common) is almost exclusively fixed
> point. Floating point in FPGAs makes almost no sense at all. Even
> though FPGA vendors keep offering more and more "high-level" tools that
> make floating point more easily accessible within FPGAs, it's mostly
> (drumroll...) pointless.
It does seem that there are some people doing floating point in FPGAs, and for some scientific (non-DSP) problems it might be useful. It isn't so bad until you get to pre- and post-normalization, which takes a huge amount of logic for add/subtract, though not so much for multiply and divide. Newer FPGA families with 6-input LUTs, instead of the 4-input LUTs of previous generations, should be better.
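To make the normalization cost concrete, here is a toy Python sketch (positive, already-normalized operands only; the function name and bit widths are illustrative, not from the thread) of the two shifts inside a floating-point add. In hardware each data-dependent shift is a full-width barrel shifter, which is where the logic goes:

```python
def fp_add(m_a, e_a, m_b, e_b, frac_bits=24):
    """Toy base-2 floating-point add on positive, normalized mantissas.

    The two data-dependent shifts below are the expensive parts in an
    FPGA: each one is a wide barrel shifter.
    """
    # Pre-normalization: align the smaller-exponent operand.
    if e_a < e_b:
        m_a, e_a, m_b, e_b = m_b, e_b, m_a, e_a
    m_b >>= (e_a - e_b)
    m, e = m_a + m_b, e_a
    # Post-normalization: bring the sum back into [2^(frac_bits-1), 2^frac_bits).
    # (Subtraction is worse: cancellation can demand a long left shift,
    # which needs a leading-zero count plus another full-width shifter.)
    while m >= (1 << frac_bits):
        m >>= 1
        e += 1
    while m and m < (1 << (frac_bits - 1)):
        m <<= 1
        e -= 1
    return m, e

# 2.0 + 1.0 = 3.0, with mantissas scaled by 2**23:
# result is (3 << 22, 1), i.e. mantissa 1.5, exponent 1.
print(fp_add(1 << 23, 1, 1 << 23, 0))
```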
> Other than padding the FPGA Vendor pockets by doing more and more
> useless things on a FPGA and driving up the resources consumed. This
> causes bigger FPGAs to be (needlessly) selected.
Over 50 years ago, IBM designed System/360 with a hexadecimal floating point format. That is, the exponent base is 16 instead of 2. This is especially convenient for fast hardware (that is, not microprogrammed) implementations, as it simplifies the barrel shifter needed: normalization shifts happen in 4-bit steps. Numerically, though, it was not so nice, and some new numerical analysis methods had to be found to work with it. It might not be so bad in an FPGA, though.
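For reference, the S/360 single-precision layout is 1 sign bit, a 7-bit excess-64 base-16 exponent, and a 24-bit fraction (6 hex digits). A small Python sketch of the decoding (the function name is mine):

```python
def decode_ibm_hex_float(word: int) -> float:
    """Decode a 32-bit IBM System/360 hexadecimal float.

    Layout: 1 sign bit, 7-bit excess-64 exponent with base 16,
    24-bit fraction interpreted as 6 hex digits in [0, 1).
    Value = sign * fraction * 16**(exponent - 64).
    """
    sign = -1.0 if (word >> 31) & 1 else 1.0
    exponent = ((word >> 24) & 0x7F) - 64      # excess-64, base 16
    fraction = (word & 0x00FFFFFF) / 16.0**6   # 6 hex digits after the point
    return sign * fraction * 16.0**exponent

# 0x41100000 encodes +1.0: fraction 0x100000/16**6 = 1/16, exponent 1.
print(decode_ibm_hex_float(0x41100000))  # 1.0
```

Because normalization only requires the top hex digit (not the top bit) to be nonzero, the shifter works in 4-bit steps; the cost is up to 3 bits of lost precision, which is what gave the numerical analysts trouble.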
Reply by gtwrek August 18, 2020
In article <12ce8e9c-1c39-4871-a46b-093da51e84bbn@googlegroups.com>,
ga...@u.washington.edu <gah4@u.washington.edu> wrote:
>On Saturday, August 15, 2020 at 5:00:47 AM UTC-7, blo...@columbus.rr.com wrote:
>> Do many people still do DSP in fixed point or are the processors mostly
>> floating point and the DSP designer no longer needs to work in fixed point?
>
>Many algorithms are much better done in fixed point, but teaching it
>seems to be a lost art. Part of the reason is that most high-level languages
>don't make it easy to do.
>
>D. E. Knuth says that finance and typesetting should be done in fixed point,
>but it might be that not so many people even know that.
>
>In numerical terms, values that have an absolute uncertainty should be done in
>fixed point, and relative uncertainty in floating point.
>
>For most DSP algorithms, using extra bits for more precision is more useful
>than using them for exponents.
>
>Maybe some will have some good counterexamples. More DSP chips support
>floating point, and it isn't all that hard to do now. But often enough, fixed
>point is the right choice.
"Doing DSP" in FPGAs (more and more common) is almost exclusively fixed point. Floating point in FPGAs makes almost no sense at all. Even though FPGA vendors keep offering more and more "high-level" tools that make floating point more easily accessible within FPGAs, it's mostly (drumroll...) pointless, other than padding the FPGA vendors' pockets by doing more and more useless things on an FPGA and driving up the resources consumed. This causes bigger FPGAs to be (needlessly) selected.
Reply by ga.....@u.washington.edu August 17, 2020
On Saturday, August 15, 2020 at 5:00:47 AM UTC-7, blo...@columbus.rr.com wrote:
> Do many people still do DSP in fixed point or are the processors mostly
> floating point and the DSP designer no longer needs to work in fixed point?
Many algorithms are much better done in fixed point, but teaching it seems to be a lost art. Part of the reason is that most high-level languages don't make it easy to do.

D. E. Knuth says that finance and typesetting should be done in fixed point, but it might be that not so many people even know that.

In numerical terms, values that have an absolute uncertainty should be done in fixed point, and relative uncertainty in floating point. For most DSP algorithms, using extra bits for more precision is more useful than using them for exponents.

Maybe some will have some good counterexamples. More DSP chips support floating point, and it isn't all that hard to do now. But often enough, fixed point is the right choice.
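As a concrete example of what high-level languages don't give you directly: a Q15 multiply, the bread-and-butter operation on 16-bit fixed-point DSPs, sketched in Python. (Q15 is one common format choice, not something specified in the thread; the helper names are mine.)

```python
# Q15 fixed point: 16-bit signed integers representing values in [-1, 1)
# with an implicit scale factor of 2**15.
Q = 15
SCALE = 1 << Q  # 32768

def to_q15(x: float) -> int:
    """Quantize a float in [-1, 1) to Q15, rounding and saturating."""
    return max(-SCALE, min(SCALE - 1, int(round(x * SCALE))))

def q15_mul(a: int, b: int) -> int:
    """Multiply two Q15 values: form the 32-bit product, shift back down."""
    return (a * b) >> Q

def from_q15(x: int) -> float:
    return x / SCALE

a = to_q15(0.5)   # 16384
b = to_q15(0.25)  # 8192
print(from_q15(q15_mul(a, b)))  # 0.125
```

The programmer, not the language, has to track the binary point, the shift after every multiply, and the saturation on overflow; that bookkeeping is exactly the part that has become a lost art.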
Reply by boB August 16, 2020
On Sat, 15 Aug 2020 07:59:33 -0500, Richard Owlett
<rowlett@cloud85.net> wrote:

>On 08/15/2020 07:00 AM, blocher@columbus.rr.com wrote:
>> Do many people still do DSP in fixed point or are the processors mostly
>> floating point and the DSP designer no longer needs to work in fixed point?
>
>ROFL - You are *not* asking the right question ;}
>
>Consider;
> are you recognizing a two tone sequence?
> are you doing real-time speech recognition?
>
>P.S. Haven't needed to do any Signal Processing since being a BSEE
>student in 60's ;/
I make pretty good use of the internal 32-bit FPU in my STM32F4 processor where I would have done things in fixed point before, then convert back to fixed point to use that output. I had to write some ASM code so that the IAR compiler would use the correct rounding instruction, though. Not doing much DSP really in there.
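The STM32/IAR specifics aren't shown in the thread, but the rounding issue can be illustrated: a plain C cast truncates toward zero when converting a float back to fixed point, which biases results, while a hardware conversion instruction can round to nearest instead. A Python sketch of the difference (function names are mine, chosen for illustration):

```python
import math

def to_fixed_trunc(x: float, q: int) -> int:
    """Convert float to Qq fixed point the way a plain C cast does."""
    return int(x * (1 << q))  # int() truncates toward zero

def to_fixed_round(x: float, q: int) -> int:
    """Convert float to Qq fixed point with round-half-up."""
    return math.floor(x * (1 << q) + 0.5)

# Truncation loses up to a full LSB and biases magnitudes toward zero:
print(to_fixed_trunc(0.7, 3), to_fixed_round(0.7, 3))    # 5 6
print(to_fixed_trunc(-0.7, 3), to_fixed_round(-0.7, 3))  # -5 -6
```

In a feedback loop or an accumulator, that systematic truncation bias adds up, which is why getting the compiler to emit the rounding conversion matters.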
Reply by Richard Owlett August 15, 2020
On 08/15/2020 07:00 AM, blocher@columbus.rr.com wrote:
> Do many people still do DSP in fixed point or are the processors mostly
> floating point and the DSP designer no longer needs to work in fixed
> point?
>
ROFL - You are *not* asking the right question ;}

Consider;
are you recognizing a two tone sequence?
are you doing real-time speech recognition?

P.S. Haven't needed to do any Signal Processing since being a BSEE student in the 60's ;/
Reply by August 15, 2020
Do many people still do DSP in fixed point, or are the processors mostly floating point and the DSP designer no longer needs to work in fixed point?