
Scale data before or after IFFT?

Started by Guenter Dannoritzer April 28, 2004
Hello,

I am working on a self-study project, trying to implement a DMT
transmitter in VHDL. This is the first time I am dealing with a
fixed-point implementation of a DSP algorithm. One of the blocks is an
IFFT which outputs the time-domain signal. At the end this signal is
passed to a DAC, which has a precision of at most 16 bits.

My question is whether I should scale my data at the input to the
IFFT and use an IFFT that outputs only 16 bits, or whether it is better
to calculate the IFFT with full precision and then scale the output
down to 16 bits.

I am also somewhat lost as to how much effort I should put into
determining how significant the error difference is between the two
ways of implementing this and a floating-point implementation.

What is common practice for doing this?


Thanks for the help.

Guenter
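
One way to judge how significant the difference is, before committing
anything to VHDL, is to model both orderings in floating point with
explicit quantization and compare each against an unquantized reference.
The sketch below is only a rough model of that measurement, not the
hardware: numpy's ifft stands in for the full-precision IFFT, the
quantizer simply maps each signal's own peak to full scale, and the
block length and test data are made up for illustration.

import numpy as np

def quantize(x, bits):
    # Round x to 'bits' signed bits, mapping its own peak to full scale.
    # A crude stand-in for choosing the binary point in hardware.
    peak = np.max(np.abs(np.concatenate([x.real, x.imag])))
    step = peak / (2 ** (bits - 1) - 1)
    return (np.round(x.real / step) + 1j * np.round(x.imag / step)) * step

def sqnr_db(ref, test):
    # Signal-to-quantization-noise ratio of 'test' against 'ref', in dB.
    err = test - ref
    return 10 * np.log10(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(err) ** 2))

N = 512                                            # made-up block length
rng = np.random.default_rng(0)
X = rng.normal(size=N) + 1j * rng.normal(size=N)   # made-up frequency-domain data

x_ref = np.fft.ifft(X)                             # floating-point reference

x_before = np.fft.ifft(quantize(X, 16))            # quantize input, then IFFT
x_after = quantize(np.fft.ifft(X), 16)             # full-precision IFFT, quantize output

print("scale before IFFT: %5.1f dB" % sqnr_db(x_ref, x_before))
print("scale after IFFT:  %5.1f dB" % sqnr_db(x_ref, x_after))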


Guenter Dannoritzer <dannoritzer@web.de> wrote in message news:<c6pb6o$pof$07$1@news.t-online.com>...
> My question is whether I should scale my data at the input to the
> IFFT and use an IFFT that outputs only 16 bits, or whether it is better
> to calculate the IFFT with full precision and then scale the output
> down to 16 bits.

The answer depends on the internal behavior of the fft/ifft.

The fft "block" may take care of overflow protection for you. This is a
common option for fixed point ffts. If that is the case, then you need
not perform any scaling if the input is 16 bit. The fft routine will
scale down by one bit for each power of two.

If the fft routine does not scale at each stage, then ...

If the internals of the fft are sufficiently large, scale afterwards.

If the internals of the fft cannot handle more than 16 bits, you must
scale before.

-- Mark Borgerding
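
To make the per-stage scaling described above concrete, here is a small
floating-point model of a radix-2 decimation-in-time IFFT that halves
the data after every stage, the equivalent of a one-bit right shift per
stage in hardware. It is only a sketch of the idea, not a fixed-point
implementation: the log2(N) halvings add up to exactly the 1/N
normalization of the IFFT, and because each butterfly output is halved,
the complex magnitude cannot grow from stage to stage (real cores
usually still keep a guard bit, since the real and imaginary parts
individually can grow a little more).

import numpy as np

def ifft_scaled(X):
    # Radix-2 decimation-in-time IFFT that halves the data after every
    # combining stage, i.e. the ">> 1 per stage" of a scaling core.
    # The log2(N) halvings together give exactly the 1/N normalization.
    N = len(X)
    if N == 1:
        return X.astype(complex)
    E = ifft_scaled(X[0::2])                            # even-indexed bins
    O = ifft_scaled(X[1::2])                            # odd-indexed bins
    tw = np.exp(2j * np.pi * np.arange(N // 2) / N)     # IFFT twiddles (+j)
    t = tw * O
    return np.concatenate(((E + t) / 2, (E - t) / 2))   # halving bounds the growth

# Quick check against the library routine (N must be a power of two).
X = np.random.default_rng(1).normal(size=8) + 0j
print(np.allclose(ifft_scaled(X), np.fft.ifft(X)))      # True
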
Hello Mark,

Mark Borgerding wrote:
> If the internals of the fft are sufficiently large, scale afterwards.
>
> If the internals of the fft cannot handle more than 16 bits, you must
> scale before.

I am considering using a scaling FFT/IFFT. The input width is equal to
the output width, which is a generic parameter.

Maybe I am chasing a ghost here. What I am trying to understand is
whether I should keep the width at which the data arrive at the IFFT,
calculate the IFFT with that precision, and then scale the result down
to 16 bits, or scale the data down to 16 bits first and then calculate
the IFFT.

With this implementation I am following the ADSL standard. For the DMT
transmitter the input to the IFFT comes from a constellation encoder.
The biggest constellation output value is +/-181, so I am using 9 bits
to represent the I and Q values.

Before this value is passed to the IFFT there is a gain scaling unit.
The receiver can specify a gain value which the transmitter should
apply to each frequency value before the IFFT is performed. The gain is
represented as a 12-bit unsigned fixed-point fraction, with a 3-bit
integer part and a 9-bit fractional part. Applying this multiplication
to the constellation output, I end up with a 21-bit wide signal that
would go into the IFFT.

So I can either scale the data down to 16 bits right after the gain
multiplication and then calculate the IFFT, or keep the 21 bits and
scale down to 16 bits after the IFFT.

I assume that the implementation with the higher-precision IFFT gives
results that are closer to my floating-point simulation, because the
calculation is done with full precision up to the end. What I am trying
to understand is whether this difference is really significant, or
whether I can live with the down-scaling before the IFFT.

What also makes me uncertain is why the standard uses such a high
precision for the gain scaling, which results in a large bit width,
when after the IFFT I have to go down to 16 bits anyway.

Thanks for your thoughts.

Guenter
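
Spelling out the bit widths described above (the number formats are
from the post; the truncation choice in the sketch is only one possible
choice): a signed 9-bit constellation value times an unsigned 12-bit
gain in 3.9 format gives a signed 21-bit product with 9 fractional
bits, and reducing that to 16 bits before the IFFT then means dropping
5 of those fractional bits.

# Worked example of the bit widths (illustration only; the exact
# rounding/truncation choices are assumptions, not taken from the standard).
max_const = 181                      # largest constellation magnitude, signed 9-bit
max_gain = 2 ** 12 - 1               # largest unsigned Q3.9 gain code (about 7.998)

worst = max_const * max_gain         # Q9.0 * Q3.9 product, i.e. Q12.9
print(worst.bit_length() + 1)        # 21 -> a signed 21-bit signal is needed

# A concrete sample: constellation value -181 with a gain of 1.753.
gain_q = round(1.753 * 2 ** 9)       # 1.753 in unsigned Q3.9 -> 898
prod = -181 * gain_q                 # 21-bit Q12.9 value

# Option 1: drop 5 fractional bits before the IFFT -> 16-bit Q12.4 input.
x16 = prod >> 5                      # arithmetic shift, truncates toward -inf
print(prod / 2 ** 9, x16 / 2 ** 4)   # exact value vs. value the IFFT sees

# Option 2: feed the full 21-bit Q12.9 value into a wider IFFT and reduce
# to 16 bits only at the time-domain output.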