> The 32 bit data types of the Blackfin processors should preserve more bits
> of precision than a 16 bit fixed point processor, if the FFT routines make
> use of the 32 bit data types. Can anyone comment on whether this is the
> case, and how well the precision is maintained in the calculations?
Dunno about the details of the Blackfin.
> How
> much precision would result from a library FFT routine of 2048 points of
> 10-bit data after an FFT is performed? Thanks.
Take a look at the recent "scaling fixed point fft" thread where
this was discussed.
The gist is that for a 2^P point transform you need to allow for P/2 bits
of growth in the amplitude of sinusoids relative to white noise.
So with 16 bits and a 2^11 point transform you really only have
16 - 5.5 = 10.5 bits of useful dynamic range with a fixed-point
FFT, which seems OK if your input is only 10 bits anyway.
Using bits 15..6 for the input and scaling by 1/2 on each FFT pass is
the way to go in this case, I think.
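For what it's worth, that per-pass scaling looks like this in a toy radix-2 integer FFT (plain Python/NumPy as a sketch, not the Blackfin library routine; the Q15 twiddles and the `>> 1` on every butterfly are my assumptions):

```python
import numpy as np

def fixed_fft(x, twid_bits=15):
    """Radix-2 DIT FFT on integer data, scaling by 1/2 on every pass,
    so a 2^P-point transform divides the result by 2^P overall.
    A sketch only -- a real DSP routine would saturate and round."""
    n = len(x)
    p = int(np.log2(n))
    # bit-reversal reorder of the (integer-valued) input
    rev = [int(format(i, f'0{p}b')[::-1], 2) for i in range(n)]
    re = np.array([int(v) for v in np.real(x)], dtype=np.int64)[rev]
    im = np.array([int(v) for v in np.imag(x)], dtype=np.int64)[rev]
    # Q15-style twiddle factors (cos(0) rounds to 1<<15, which a true
    # Q15 register would saturate to 32767; int64 hides that here)
    k = np.arange(n // 2)
    wr = np.round(np.cos(2 * np.pi * k / n) * (1 << twid_bits)).astype(np.int64)
    wi = np.round(-np.sin(2 * np.pi * k / n) * (1 << twid_bits)).astype(np.int64)
    size = 2
    while size <= n:
        half, step = size // 2, n // size
        for start in range(0, n, size):
            for j in range(half):
                a, b, tw = start + j, start + j + half, j * step
                tr = (wr[tw] * re[b] - wi[tw] * im[b]) >> twid_bits
                ti = (wr[tw] * im[b] + wi[tw] * re[b]) >> twid_bits
                # butterfly with the 1/2 scaling folded in
                re[b] = (re[a] - tr) >> 1
                im[b] = (im[a] - ti) >> 1
                re[a] = (re[a] + tr) >> 1
                im[a] = (im[a] + ti) >> 1
        size *= 2
    return re, im
```

With the 1/2 per pass the amplitudes can never grow past the input word length, at the cost of shifting small signals down toward the quantisation floor, which is exactly the 10.5-bit trade-off above.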
Regards
--
Adrian Hey
Reply by Ed●July 1, 2003
The 32 bit data types of the Blackfin processors should preserve more bits
of precision than a 16 bit fixed point processor, if the FFT routines make
use of the 32 bit data types. Can anyone comment on whether this is the
case, and how well the precision is maintained in the calculations? How
much precision would result from a library FFT routine of 2048 points of
10-bit data after an FFT is performed? Thanks.