Consider a 64-subcarrier OFDM signal with QPSK modulation and data on all 64 subcarriers. Each of the 64 subcarriers will have the same amplitude (assume 1.0) -- but the subcarriers' phases will be 45 deg + n*90 deg, n = 0, 1, 2, 3. If the four possible phase values have equal probability of occurrence, what is the RMS signal amplitude? Given this signal and its S/N ratio, what is the Eb/N0?
These are things that are easily sorted out and understood with a simulation. Parseval's theorem says the power level doesn't change across the transform, so the RMS level in the time domain doesn't depend on the modulation phases in the frequency domain. The PAPR can change dramatically, though, which often matters when trying to manage output power through an amplifier.
Since the time domain power level doesn't change with the modulation phases, SNR and Eb/No are unchanged as well.
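The Parseval argument is easy to check numerically. Below is a quick pure-Python sketch (a naive inverse DFT stands in for MATLAB's ifft, so no toolboxes are assumed): the time-domain RMS comes out the same for every random draw of QPSK phases.

```python
import cmath, math, random

def idft_scaled(X):
    """Naive inverse DFT scaled by N, so sum(|y|^2) = N * sum(|X|^2)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N))
            for n in range(N)]

random.seed(0)
N = 64
rms_values = []
for trial in range(5):
    # 64 unit-amplitude subcarriers, phases 45 + n*90 degrees, n in {0,1,2,3}
    X = [cmath.exp(1j * (math.pi / 4 + random.randrange(4) * math.pi / 2))
         for _ in range(N)]
    y = idft_scaled(X)
    rms_values.append(math.sqrt(sum(abs(v) ** 2 for v in y) / N))

print(rms_values)   # ~8.0 for every draw of phases
```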
If we assume no pulse shaping is used (a good assumption for OFDM), then the rms amplitude of each subcarrier will be 1. The total power is the sum of the squares of the subcarrier amplitudes, so in this case it will be 64, and the rms amplitude is the square root of that, so the answer is 8.
Further, assuming each of the channels is independent and identically distributed, the total distribution is reasonably approximated as Gaussian by the central limit theorem, and from that we can easily estimate the PAPR, which I demonstrate below for 32 subcarriers (the approximation only gets closer to Gaussian as the number of subcarriers increases).
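As a sketch of that claim, here is a pure-Python version of the experiment (naive inverse DFT instead of MATLAB's ifft; 32 subcarriers and 100 symbols are my illustrative choices, not from the thread's code): the average power per sample is exactly N, and the measured PAPR of the Gaussian-like waveform typically lands around 9 dB for this many samples.

```python
import cmath, math, random

def idft_scaled(X):
    """Inverse DFT scaled by N, so sum(|y|^2) = N * sum(|X|^2)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N))
            for n in range(N)]

random.seed(1)
N, nsym = 32, 100      # 32 subcarriers, 100 OFDM symbols (illustrative)
samples = []
for _ in range(nsym):
    # unit-amplitude QPSK phases on every subcarrier
    X = [cmath.exp(1j * (math.pi / 4 + random.randrange(4) * math.pi / 2))
         for _ in range(N)]
    samples.extend(idft_scaled(X))

avg_pwr = sum(abs(v) ** 2 for v in samples) / len(samples)   # exactly N = 32
peak_pwr = max(abs(v) ** 2 for v in samples)
papr_db = 10 * math.log10(peak_pwr / avg_pwr)   # typically 8-11 dB here
```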
We can't determine the Eb/N0 because you haven't specified the noise. The subcarriers are orthogonal, so they don't interfere with each other given perfect synchronization, and you know the signal power is 1 squared for each subcarrier; once you know the noise, you can then determine the Eb/N0.
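For completeness, one common convention (an assumption on my part, since the answer depends on how the SNR is defined): if the SNR is measured over the occupied bandwidth and each QPSK subcarrier carries 2 bits per symbol, the bandwidth terms cancel and Eb/N0 is simply the SNR less 10*log10(2), i.e. about 3 dB lower, ignoring cyclic-prefix and guard-carrier overhead.

```python
import math

def ebn0_db(snr_db, bits_per_symbol=2):
    """Eb/N0 = SNR * B / Rb; with B = N*df and Rb = bits_per_symbol * N * df
    (no cyclic prefix or guard carriers assumed), the N*df cancels."""
    return snr_db - 10 * math.log10(bits_per_symbol)

print(ebn0_db(10.0))   # 10 dB SNR -> about 7 dB Eb/N0 for QPSK
```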
This code tells me a lot.
Nice to compare it with all the useful replies here.
I don't see you asking about PAPR, but you get it as a bonus.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
yy = [];
for i = 1:100                % number of OFDM symbols
    N = 64;
    % random QPSK symbols
    x = randsrc(1,N,[+1,-1]) + j*randsrc(1,N,[+1,-1]);
    % zero-pad to 128 bins to create a stop band
    xz = [zeros(1,32), x, zeros(1,32)];
    y = ifft(xz)*sqrt(128);  % sqrt(N) scaling for unity power through Matlab's ifft
    yy = [yy y];
end
RMS = sqrt(mean(abs(yy).^2));   % RMS = sqrt of mean power
plot(20*log10(abs(fft(yy))));
axis([0 length(yy) 0 60])
title(['RMS = ' num2str(RMS)]);
Nice Kaz-
See my modification below to make it match my answer:
yy = [];
for i = 1:100                % number of OFDM symbols
    N = 64;
    % random QPSK symbols, scaled to unit amplitude
    x = 1/sqrt(2) * (randsrc(1,N,[+1,-1]) + j*randsrc(1,N,[+1,-1]));
    y = 64 * ifft(x);        % scale by N so time power matches frequency power
    yy = [yy y];
end
RMS = sqrt(mean(abs(yy).^2));
plot(20*log10(abs(fft(yy))));
axis([0 length(yy) 0 60])
title(['RMS = ' num2str(RMS)]);
First I scaled x by 1/sqrt(2) to get an amplitude of 1 for each symbol.
Next I made the IFFT 64 points (one bin for each subcarrier) and scaled the IFFT output by 64. This scaling is what matches the power from frequency to time: consider for example a DC level in frequency given as [1, 0, 0, 0], with N = 4. The inverse FFT scaled by N is [1, 1, 1, 1]. So I scale by N so that the power in the time domain (sum of the squared time samples divided by the total time) matches the power in the frequency domain (sum of the squared frequency bins). This isn't the only way to scale, but it's the one I prefer since it matches what I'd expect from the Fourier transform, and mostly it explains how I got to RMS = 8.
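The DC example above can be checked directly. A pure-Python sketch (naive inverse DFT scaled by N, as described):

```python
import cmath, math

def idft_scaled_by_n(X):
    """Inverse DFT scaled by N (i.e., N times MATLAB's ifft)."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N))
            for n in range(N)]

y = idft_scaled_by_n([1, 0, 0, 0])
# a DC bin of amplitude 1 maps to a constant time signal of amplitude 1:
# time power = sum(|y|^2)/N = 4/4 = 1, matching the frequency-domain
# power sum(|X|^2) = 1
```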
I understand your scaling but wouldn't go that way at all on FPGAs/ASICs
In hardware it is way too expensive to divide symbols by sqrt(2).
After all we need to have freedom to get best dynamic range on available bits.
Matlab's ifft needs scaling by sqrt(N) for unity power; this is a tool convention.
I chose to insert zeros in the iFFT to create a stop band for a noise-floor check.
RMS is not the sqrt of total power, as you stated in your first reply (if I understood you); it is the sqrt of mean power.
In short, software understandably has different functions, platforms and mindsets. At least we agree on principles.
Thanks Dan (and others) ...
Below/attached is my math-only attempt to answer from a time-domain perspective. The derivation assumes the per-carrier QPSK symbols are uncorrelated, yielding a value of 8. But a series of equal symbols would result in a much higher signal value, which would seem more of a concern for a high PAPR.
So the actual OFDM transmitted signal power (the statistical variance of the sum of the subcarriers) would seem to be a bit difficult to define. But assume the subcarrier phases are uncorrelated, so the variance is 64 (sigma is 8). If I then simulate a 64-carrier QPSK OFDM signal with 0 dB S/N, do I add a noise sigma of 8? Or do I assume the S/N is specific to an individual subcarrier and add a noise sigma of 1.0?
Add a noise sigma of 8 assuming the noise is completely in the bandwidth of the signal. If you oversample by M (which would make sense to do for a simulation) then increase the standard deviation by sqrt(M) assuming the noise you add is white. This will keep the noise in band to have a sigma of 8.
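A sketch of that bookkeeping in pure Python (naive DFT; M = 4 oversampling is an illustrative choice): add white noise with standard deviation 8*sqrt(M) at the oversampled rate, and the power that lands in the 64 signal bins comes out to sigma^2 = 64 as intended.

```python
import cmath, math, random

random.seed(2)
N, M = 64, 4              # subcarriers, oversampling factor (M is illustrative)
L = N * M                 # samples per OFDM symbol at the oversampled rate
sigma_inband = 8.0        # target in-band noise sigma (0 dB SNR vs. power 64)
sigma = sigma_inband * math.sqrt(M)   # std of the white noise actually added

twiddle = [cmath.exp(-2j * math.pi * m / L) for m in range(L)]
band = list(range(N // 2)) + list(range(L - N // 2, L))  # the 64 signal bins

inband_pwr = total_pwr = 0.0
nsym = 40
for _ in range(nsym):
    # complex white Gaussian noise, per-sample variance sigma^2
    w = [complex(random.gauss(0, sigma / math.sqrt(2)),
                 random.gauss(0, sigma / math.sqrt(2))) for _ in range(L)]
    total_pwr += sum(abs(v) ** 2 for v in w) / L
    for k in band:
        Wk = sum(w[n] * twiddle[(k * n) % L] for n in range(L))
        inband_pwr += abs(Wk) ** 2 / L ** 2   # Parseval share of bin k
inband_pwr /= nsym
total_pwr /= nsym
# total_pwr ~ sigma^2 = 256; inband_pwr ~ sigma_inband^2 = 64
```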
It is reasonable to assume equiprobable data. Any realistic transmitter will whiten the data source to avoid the issue you describe; the unlikely event of every symbol being exactly the same would be very bad for PAPR.
You must define the non-occupied subcarriers. In 802.11 OFDM they target N = 64 at a spacing of 312.5 kHz, with 52+1 bins for the signal. This needs a sampling rate of 312.5 kHz x 64 = 20 MHz.
I have never seen OFDM generated with all bins occupied. Can we do that? If all 64 bins are occupied and you sample at 20 MHz, you get a signal with 20 MHz of occupied bandwidth, while the target is 16.6 MHz ((52+1) x 312.5 kHz).
Doing that could violate the sampling rule.
It has nothing to do with powers of 2.
You can find the details of bin allocation in the standards.
The iFFT is used to construct the target band-limited signal. It acts as a band-pass filter (and, in effect, shaping) by itself.
No further generation is needed apart from upsampling, mixing, etc.
If you fill up all 64 bins you use up the entire digital domain and get suffocated for any further processing.
I'm not disagreeing with Kaz with regard to a more accurate simulation of a particular standard, but I want to point out that I believe the reason for the non-occupied carriers is more to meet OOBE limits (to not interfere with an adjacent channel), not necessarily because of Nyquist. The IFFT waveform is always upsampled prior to the DAC, since otherwise the reconstruction filter after the DAC is extremely challenging regardless of the non-occupied carriers. There's nothing wrong with doing some of that upsampling by doubling the number of subcarriers, which then simplifies any subsequent resampling filter (if further resampling is even needed). If you want to simulate an actual OFDM waveform specifically to a particular standard, then yes, I agree it would be more accurate to have the zero carriers, as well as the zeroed DC bin, the pilot tones, etc., all per the particular standard being emulated. You'll also see that it makes little difference with regard to SNR and PAPR, given the small ratio of bins zeroed; the waveform will still have a Gaussian distribution.
Without zero bins for a stop band it is not an OFDM simulation model. I don't know of any realistic system doing OFDM without creating a stop band at the iFFT.
Imagine the above case targeting 312.5 kHz spacing: without zeroing we generate a 10 MHz baseband signal sampled at 20 Msps, i.e. the signal occupies 10 MHz of baseband plus its mirror = 20 MHz, so the entire digital domain is full. What are we going to do with it?
I agree that no realistic system lacks null carriers at the edges, and including them would be the most realistic simulation; I was just pointing out that I believe it's primarily due to adjacent-channel occupancy, not an inability to meet Nyquist, since we can create that waveform by zero-padding the IFFT (and that is an approach to efficiently upsample 2x, if we zero-pad to twice the number of carriers). It's not a bad approach to simulate, even with the null carriers included, when you want an oversampled waveform.
I agree we can fill all 64 bins by using a 128-point iFFT, and hence upsample by 2 at the iFFT level with zeros while maintaining the spacing. But that imposes constraints on standards.
I can do that in a private design if the doubled FFT size is affordable.
Dan --
I assume the Out-of-Band Emissions (OOBE) are from neighbor transmitters operating at 802.11 (e.g., independent neighbor 20 MHz channels sensed by the antenna). I think we want to filter the neighbor 20 MHz channel to reject its noise contribution to the preferred receive channel. The data-free (no intentional signal) band would be 12 x 0.3125 MHz = 3.75 MHz between the two channels. Is this enough room for the bandpass filter to capture the desired channel bandwidth? Would the filter be done digitally? Does the FFT act in some way as a bandpass filter, rejecting any OOBE signals?
Finally, why not just use a 52-point FFT with 312.5 kHz subcarrier spacing in order to reduce the symbol time by 52/64 (increase the data rate by 64/52)?
Jim (I took your course!)
Hey Jim- Cool about the course. I hope it was helpful. Note that the symbol rate sets the bandwidth of each subcarrier and so must be the reciprocal of the spacing (which also ensures orthogonality). However many subcarriers you choose to use sets your overall bandwidth (and data rate, depending on what constellation you use for each subcarrier). You can't increase the symbol rate without increasing the spacing. Make sense?
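The orthogonality condition is easy to verify numerically: with the subcarrier spacing equal to the reciprocal of the symbol period, distinct subcarriers have exactly zero inner product over one symbol, while an off-grid spacing does not. A pure-Python sketch:

```python
import cmath, math

n_samp = 64   # samples across one symbol period T

def subcarrier(k):
    """Subcarrier k with spacing 1/T, sampled over one symbol period."""
    return [cmath.exp(2j * math.pi * k * n / n_samp) for n in range(n_samp)]

def inner(a, b):
    """Inner product over one symbol period."""
    return sum(x * y.conjugate() for x, y in zip(a, b))

cross = abs(inner(subcarrier(3), subcarrier(7)))   # ~0: orthogonal
self_ = abs(inner(subcarrier(3), subcarrier(3)))   # 64: same carrier
off = abs(inner(subcarrier(3), subcarrier(3.5)))   # large: off-grid spacing
                                                   # is not orthogonal
```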
OOBE is set at the transmitter and there will be a mask you have to meet and ultimately that will dictate the linearity of your PA and what schemes you do there for improving that such as crest factor reduction and pre-distortion (close in will be too tight to meet with output filtering). Your receiver filtering and overall dynamic range will set the ability to receive a weak signal in the presence of a stronger adjacent band signal.
As an exercise, do the IFFT for the filled transmit symbol (with the zero bins at the edges) and with the FFT zero-padded out to twice the length (128 bins), and then look at that spectrum; that will represent your achievable spectrum with a linear PA and should meet the mask for the standard you are simulating. This is an easy way to create the interpolated waveform for adding the cyclic prefix and D/A conversion (possibly with further upsampling to simplify the DAC reconstruction filter).
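Here is a sketch of the zero-padding part of that exercise in pure Python (naive inverse DFT; random QPSK on all bins, just to show the interpolation property): padding the middle of the frequency vector out to 128 bins yields a 2x-interpolated waveform whose every other sample reproduces the original.

```python
import cmath, math, random

def idft(X):
    """Standard inverse DFT with 1/L scaling (like MATLAB's ifft)."""
    L = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / L) for k in range(L)) / L
            for n in range(L)]

random.seed(3)
N = 64
# random QPSK on all 64 bins (a standards-accurate symbol would also null
# the edge and DC bins, but the interpolation property is the same)
X = [random.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / math.sqrt(2)
     for _ in range(N)]

# zero-pad between the positive- and negative-frequency halves
Xz = X[:N // 2] + [0] * N + X[N // 2:]

y = idft(X)        # 64-point waveform
y2 = idft(Xz)      # 128-point, 2x-interpolated waveform

# every other sample of y2 reproduces y, up to the factor of 2
# from the length-128 inverse DFT's 1/L scaling
err = max(abs(2 * y2[2 * n] - y[n]) for n in range(N))
```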
Here are the steps for how OFDM is approached by the standards:
A) construct your frequency vector:
1) decide your bandwidth (BW)
2) decide the subcarrier spacing (SCS), so the number of subcarriers becomes BW/SCS; populate them with your QPSK etc.
3) add more bins for the guard band and to top up to the nearest power of 2 (N) for a practical iFFT.
B) convert the frequency vector to the time domain (iFFT), sampling at SCS x N
add the cyclic prefix
a channel filter is then applied as required, since the iFFT band edges and symbol-to-symbol phase discontinuities lead to low SNR; the iFFT guard band helps the channel filtering.
If I am doing my own project I am free to choose my own figures and will still get orthogonality. But not adding a guard band is like hitting a brick wall.
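The steps above can be sketched as follows in pure Python (naive inverse DFT; the specific numbers such as cp_len = 16 and the bin layout are illustrative choices of mine, not from any standard):

```python
import cmath, math, random

random.seed(4)
# Illustrative numbers only (not from any particular standard):
scs_hz = 312.5e3            # subcarrier spacing (SCS)
n_used = 52                 # occupied bins (data/pilots)
n_fft = 64                  # nearest power of 2; the rest become guard band
fs_hz = scs_hz * n_fft      # sample rate = SCS x N (20 MHz here)
cp_len = 16                 # cyclic-prefix length in samples (assumed)

# A) frequency vector: QPSK on the used bins, zeros for DC and guard band
qpsk = [random.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / math.sqrt(2)
        for _ in range(n_used)]
X = [0j] * n_fft
for i in range(n_used // 2):
    X[i + 1] = qpsk[i]                        # positive frequencies (DC stays 0)
    X[n_fft - 1 - i] = qpsk[i + n_used // 2]  # negative frequencies

# B) frequency vector to time domain (inverse DFT), then add the cyclic prefix
y = [sum(X[k] * cmath.exp(2j * math.pi * k * n / n_fft)
         for k in range(n_fft)) / n_fft for n in range(n_fft)]
symbol = y[-cp_len:] + y    # CP = copy of the tail, prepended
```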
Over how long of a window are you measuring power? In other words, what is the integration time for your power measurement?
As I mentioned early on, Parseval's theorem tells us that the power level, over the length of a transform, doesn't change across the transform. So if you aren't changing any of the subcarrier amplitudes in the frequency domain, then the output power will remain the same. Changing the relative phases of the equal-amplitude QPSK subcarriers can affect PAPR, but not the overall power level across the transform length. In other words, the average power level stays the same; when the peak increases, the level must go down somewhere else. The highest PAPR occurs when all subcarrier phases are equal.
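The all-equal-phase worst case is easy to demonstrate: a constant spectrum transforms to an impulse, so the average power is unchanged while the peak hits its maximum of N (about 18 dB PAPR for N = 64). A pure-Python sketch:

```python
import cmath, math

N = 64
X = [cmath.exp(1j * math.pi / 4)] * N   # every subcarrier at the same phase

# N-scaled inverse DFT of a constant spectrum is an impulse: all the
# energy lands in one time sample
y = [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N))
     for n in range(N)]

avg_pwr = sum(abs(v) ** 2 for v in y) / N       # 64, same as random phases
peak_pwr = max(abs(v) ** 2 for v in y)          # N^2 = 4096
papr_db = 10 * math.log10(peak_pwr / avg_pwr)   # 10*log10(64), about 18 dB
```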
If what you're really worried about is PAPR, then a whitening scrambler or similar technique applied to the data keeps the entropy elevated to a sufficient level that the incidents of high PAPR are minimized.
And, yes, 802.11 and essentially all practical OFDM systems zero the edge subcarriers to accommodate rolloff in both the transmit and receive filters. Even interpolating/decimating filters in the digital processing chain don't have brick wall responses, so it's hard to get around it.
Edit: As mentioned, filtering is needed in the modulator for out-of-band suppression, particularly of aliases. In the receiver filtering is needed to suppress out-of-band energy in the form of adjacent channels or noise or interference. The edge subcarriers are nulled to accommodate rolloff in these filters.