DSPRelated.com
Forums

simple-minded OFDM question

Started by Randy Yates September 25, 2006
From what little I've read on OFDM, it seems that its claim to fame is
its resistance to frequency-selective fading (at least I think that's
the proper term - correct me if I'm wrong).

That resistance comes via the multiple carriers. If one or two
carriers fade, no problem - you've got a horde of others which are
more than the coherence bandwidth away, so you still get a lot of
usable data through.

My question is this: How are OFDM signals typically designed 
to handle such frequency-selective carrier dropouts? Is the
data "banded" across multiple carriers and protected by coding?

Just curious.
-- 
%  Randy Yates                  % "So now it's getting late,
%% Fuquay-Varina, NC            %    and those who hesitate
%%% 919-577-9882                %    got no one..."
%%%% <yates@ieee.org>           % 'Waterfall', *Face The Music*, ELO
http://home.earthlink.net/~yatescr
Interesting, I will try to carry on the discussion.

Randy Yates wrote:
> From what little I've read on OFDM, it seems that its claim to fame is
> its resistance to frequency-selective fading (at least I think that's
> the proper term - correct me if I'm wrong).
I also think the same.
> That resistance comes via the multiple carriers. If one or two
> carriers fade, no problem - you've got a horde of others which are
> more than the coherence bandwidth away, so you still get a lot of
> usable data through.
Maybe this is what they call frequency diversity.
> My question is this: How are OFDM signals typically designed
> to handle such frequency-selective carrier dropouts? Is the
> data "banded" across multiple carriers and protected by coding?
I think the receiver tells the transmitter about the forward channel through feedback; based on that, the transmitter estimates the 'good' carriers and sends the data on those carriers. This works when the SNR is good enough to allow a 'good' channel estimate; for bad SNR this idea will surely fail. I hope my attempt at an answer was good enough, though I feel the opposite :o\
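For illustration only, here is a minimal Python/NumPy sketch of that feedback idea: the receiver reports per-subcarrier SNR estimates and the transmitter keeps only the subcarriers that clear some threshold. The function name, the 10 dB cutoff, and the SNR numbers are all made up.

import numpy as np

def select_good_subcarriers(snr_db, threshold_db=10.0):
    """Return indices of subcarriers whose fed-back SNR estimate clears the threshold."""
    snr_db = np.asarray(snr_db, dtype=float)
    return np.flatnonzero(snr_db >= threshold_db)

# Example: 8 subcarriers, two of them in a deep fade.
snr_feedback = [18.0, 17.5, 3.0, 2.0, 16.0, 15.0, 19.0, 18.5]
print(select_good_subcarriers(snr_feedback))   # -> [0 1 4 5 6 7]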
Randy Yates said the following on 26/09/2006 03:58:
> From what little I've read on OFDM, it seems that its claim to fame is
> its resistance to frequency-selective fading (at least I think that's
> the proper term - correct me if I'm wrong).
>
> That resistance comes via the multiple carriers. If one or two
> carriers fade, no problem - you've got a horde of others which are
> more than the coherence bandwidth away, so you still get a lot of
> usable data through.
Another advantage is that when a Cyclic Prefix* is used, the ISI and ICI that would otherwise be caused by a time-dispersive channel are eliminated. However, this of course does not mitigate the problem that the SNR will be lower on some sub-carriers than on others.
> My question is this: How are OFDM signals typically designed
> to handle such frequency-selective carrier dropouts? Is the
> data "banded" across multiple carriers and protected by coding?
Typically, convolutional coding is done "across" the sub-carriers. To protect against burst errors caused by correlated nulls in adjacent sub-carriers, the codewords are bit-interleaved in frequency. If time-domain fading is also a concern, then interleaving is done in time as well. If this is done well, the aggregate performance is determined by the average SNR. Done badly (or with no coding/interleaving), performance is limited by the SNR of the worst sub-carrier.

* A Cyclic Prefix is a time-domain guard interval formed by prefixing a segment of the end of the OFDM symbol at the beginning, e.g.

  original symbol:     A B C D E F G H
  with Cyclic Prefix:  E F G H A B C D E F G H

--
Oli
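As a small aside, here is a Python/NumPy sketch of the Cyclic Prefix construction in that footnote: copy the tail of the time-domain OFDM symbol, prepend it, and have the receiver discard it again before the FFT. The letters and the CP length of 4 just mirror the A..H example above.

import numpy as np

def add_cyclic_prefix(symbol, cp_len):
    # Copy the last cp_len samples and prepend them to the symbol.
    return np.concatenate([symbol[-cp_len:], symbol])

def remove_cyclic_prefix(rx_symbol, cp_len):
    # The receiver simply discards the first cp_len samples before the FFT.
    return rx_symbol[cp_len:]

symbol = np.array(list("ABCDEFGH"))
with_cp = add_cyclic_prefix(symbol, 4)
print("".join(with_cp))                           # EFGHABCDEFGH
print("".join(remove_cyclic_prefix(with_cp, 4)))  # ABCDEFGH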
On Tue, 26 Sep 2006 02:58:39 GMT, Randy Yates <yates@ieee.org> wrote:

> From what little I've read on OFDM, it seems that its claim to fame is
> its resistance to frequency-selective fading (at least I think that's
> the proper term - correct me if I'm wrong).
Yes, or just frequency-selectivity or frequency-selective channels or whatever. In a single-carrier system this equally corrupts all symbols, resulting in a general SNR reduction. Most EQs that help fix the signal do so at the expense of some noise amplification (e.g., zero-forcing equalizers), so single-carrier systems take a bit of a double-whammy in frequency-selective channels. OFDM, by slowing down the symbol rate substantially, makes all the subcarriers equalizable with single-tap equalizers. This is done at the expense of some signal bandwidth being consumed by the cyclic prefix, but it's often a very good tradeoff.
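To make the single-tap point concrete, here is a hedged NumPy sketch under made-up assumptions (a 3-tap channel, random QPSK data, no noise): with a cyclic prefix at least as long as the channel memory, the channel acts circularly over one OFDM symbol, so after the FFT each subcarrier sees a single complex gain H[k], and a one-tap zero-forcing divide recovers the data.

import numpy as np

N, CP = 64, 16
rng = np.random.default_rng(0)

# Made-up 3-tap frequency-selective channel and random QPSK data.
h = np.array([1.0, 0.5, 0.25])
data = (2 * rng.integers(0, 2, N) - 1 + 1j * (2 * rng.integers(0, 2, N) - 1)) / np.sqrt(2)

# Transmit: IFFT, then prepend the cyclic prefix.
tx = np.fft.ifft(data)
tx_cp = np.concatenate([tx[-CP:], tx])

# Channel: linear convolution (keep the first N + CP samples).
rx_cp = np.convolve(tx_cp, h)[: N + CP]

# Receive: drop the CP, FFT, then one complex tap per subcarrier (zero-forcing).
Y = np.fft.fft(rx_cp[CP:], N)
H = np.fft.fft(h, N)        # channel frequency response on the FFT grid
data_hat = Y / H            # single-tap equalizer per subcarrier

print(np.allclose(data_hat, data))   # True (noise-free illustration)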
> That resistance comes via the multiple carriers. If one or two
> carriers fade, no problem - you've got a horde of others which are
> more than the coherence bandwidth away, so you still get a lot of
> usable data through.
>
> My question is this: How are OFDM signals typically designed
> to handle such frequency-selective carrier dropouts? Is the
> data "banded" across multiple carriers and protected by coding?
Usually only one coding stream is applied, and it's applied across all of the subcarriers for maximum frequency diversity. OFDM is pretty useless without coding, so really what people use is COFDM, but that's so obvious that nobody bothers with the initial C any more because all practical systems are coded. You probably wouldn't want to use OFDM if the channel isn't frequency selective, and if it is, it's not going to work worth a damn unless it's coded, so they're really all COFDM.

And your intuition is right that the nulled subcarriers drive the error performance and that coding across them restores the information. The "good" subcarriers have higher than average SNR, and smearing the information between the good and bad subcarriers allows the problem subcarriers to be corrected in practical systems. Faded subcarriers experience noise amplification after equalization (just like the symbols in an equalized single-carrier system) compared to the rest of the subcarriers.

It is possible to adapt the modulation of each subcarrier, or of groups of subcarriers, based on their SNR, and still apply the coding across the whole group. This is usually called "adaptive bit loading" and is used in some standards like DMT for DSL and the HomePlug standard. This provides substantial gain in the channel, but the majority of that gain is obtained just by ignoring or "nulling" the worst subcarriers (which usually correspond to the nulls in the channel response). So you get substantial gain just by turning off subcarriers in the channel nulls. This requires channel feedback to control the adaptation, but other than that it is pretty efficient.

Another thing that is sometimes done in multi-user systems is assigning certain subcarriers to certain users, and doing the assignment based on each user's channel response. In other words, user A is assigned subcarriers that are good in his channel, and some other user is given the subcarriers that were bad for user A but aren't bad for them. This is usually called OFDMA, where the OFDM and FDMA terms are combined and contracted. This gives pretty good network gain but presents some gnarly timing and registration problems. Both 802.16 (WiMAX) and 3GPP/LTE include OFDMA.

So OFDM also provides some significant adaptation and flexibility opportunities that aren't available with single-carrier modulation.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
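A rough Python/NumPy sketch of the "adaptive bit loading" idea described above, with made-up numbers: subcarriers in deep fades are turned off, and the rest carry more or fewer bits according to the channel feedback. The roughly-log2(1 + SNR) rule, the 6 dB null threshold, and the SNR profile are illustrative only, not taken from DMT or HomePlug.

import numpy as np

def bit_load(snr_db, null_below_db=6.0, max_bits=8):
    """Crude per-subcarrier bit loading from channel feedback: null subcarriers
    in deep fades, then give the rest roughly log2(1 + SNR) bits (capped)."""
    snr_db = np.asarray(snr_db, dtype=float)
    snr_lin = 10.0 ** (snr_db / 10.0)
    bits = np.clip(np.floor(np.log2(1.0 + snr_lin)).astype(int), 0, max_bits)
    bits[snr_db < null_below_db] = 0   # turn off subcarriers sitting in channel nulls
    return bits

# Made-up SNR profile with a notch around subcarriers 3-5.
snr_db = [22, 20, 15, 4, -2, 5, 14, 19, 23, 24]
print(bit_load(snr_db))   # -> [7 6 5 0 0 0 4 6 7 7]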
Eric Jacobsen wrote:
> [...]
> Another thing that is sometimes done in multi-user systems is
> assigning certain subcarriers to certain users, and doing the
> assignment based on each user's channel response. [...] This is
> usually called OFDMA, where the OFDM and FDMA terms are combined and
> contracted. This gives pretty good network gain but presents some
> gnarly timing and registration problems. Both 802.16 (WiMAX) and
> 3GPP/LTE include OFDMA.
> [...]
How about DMB-TH, the new Chinese standard? Intel is a big player in DMB-TH, both in the Chinese standard (as an investor in Legend Silicon) and in using TDS-OFDM in WiMAX, right? Isn't Intel working to incorporate more of this into the WiMAX standard? Tell us more.

Bob Miller