FFT or DFT with large N (for OFDM/DMT)

Started by Unknown February 3, 2006
Hello,

I have a question about the DFT that is used
in OFDM and DMT. Emerging standards such as
VDSL, DVB and Powerline allow transform sizes
as large as N = 1024, 2048 or 4096.

Associated with this is the problem of a large
peak-to-average ratio, clipping and non-linearities
at the power amplifier. But there are methods to
reduce these effects (see Tellado and Cioffi).

My question is more about the FFT itself. An
N-point FFT usually has log2(N) stages of computation.
At each stage an extra bit of resolution is needed
to fully capture the signal dynamic range at the output
of that stage. Does this not make it prohibitively
expensive to have a large FFT?
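
To put a number on that, here is a rough Python sketch of the
worst-case word-length growth I have in mind (just assuming one
extra bit per radix-2 stage, i.e. log2(N) extra bits in total):

import math

def fft_output_bits(input_bits, N):
    # Worst-case growth: one extra bit per radix-2 stage,
    # i.e. log2(N) extra bits on top of the input word length.
    stages = int(math.log2(N))
    return input_bits + stages

for N in (1024, 2048, 4096):
    print(N, fft_output_bits(10, N))
# -> 1024 20, 2048 21, 4096 22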

For example, suppose we have a DMT receiver with a ten-bit
analog-to-digital converter. The ADC output is fed to an
FFT with N = 4096. Since log2(N) = 12, the FFT output will
have 10 + 12 = 22 bits, which is very large. Is truncation acceptable
here? I imagine clipping would be no problem at the receiver,
because the expected values are the known transmit constellation
points.
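
As a rough sanity check on that intuition, here is a small numpy
sketch (a hypothetical noiseless loopback with 4-QAM on every
carrier; it only emulates truncating the final FFT output to
16-bit words, not fixed-point arithmetic inside the FFT itself):

import numpy as np

rng = np.random.default_rng(0)
N = 4096

# 4-QAM on every sub-carrier, IFFT at the "transmitter",
# FFT at the "receiver" (noiseless loopback).
tx = rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)
rx_full = np.fft.fft(np.fft.ifft(tx))

# Emulate truncating the FFT output to a 16-bit word
# (scale so the largest component just fits).
peak = max(np.abs(rx_full.real).max(), np.abs(rx_full.imag).max())
scale = (2**15 - 1) / peak
rx_trunc = (np.trunc(rx_full.real * scale)
            + 1j * np.trunc(rx_full.imag * scale)) / scale

# Slice to the nearest constellation point and compare decisions.
decide = lambda z: np.sign(z.real) + 1j * np.sign(z.imag)
print("decision errors:", np.count_nonzero(decide(rx_trunc) != decide(tx)))

In this idealized loopback the truncation changes none of the slicer
decisions, though it says nothing about quantization noise accumulated
inside the FFT stages themselves.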

Now consider the transmitter of the same system.
Modern bit-loading schemes allow up to 10 bits per
sub-carrier, so a ten-bit input to a 4096-point FFT again
gives a 22-bit output. This time clipping will
definitely cause problems, but as mentioned above there are ways to
correct for it. But do we really need the full 22 bits of resolution?
Is truncation acceptable at this stage?
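
To get a feel for how much of that range random data actually
exercises, here is a rough numpy sketch (hypothetical 1024-QAM,
i.e. 10 bits, on every carrier; the inverse transform is left
unnormalized so its growth mirrors a fixed-point IFFT without
per-stage scaling):

import numpy as np

rng = np.random.default_rng(0)
N = 4096

# Hypothetical 1024-QAM on every carrier: levels -31, -29, ..., +31 per dimension.
levels = np.arange(-31, 32, 2, dtype=float)
sym = rng.choice(levels, N) + 1j * rng.choice(levels, N)

# Unnormalized inverse DFT (no 1/N), to mirror the growth inside a fixed-point IFFT.
x = np.fft.ifft(sym) * N

worst_case_bits = np.log2(N)  # 12 extra bits if every carrier lined up in phase
observed_bits = np.log2(np.abs(x).max() / np.abs(sym).max())
print(f"worst-case growth: {worst_case_bits:.0f} bits, "
      f"observed on random data: {observed_bits:.1f} bits")

At least in this toy case the observed peak growth comes out well
below the 12-bit worst case, which is really the same peak-to-average
question as above.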

Cheers
Porterboy