I would like to understand the importance of the sampling frequency at baseband. I am using LTE as an example.
If we take a 20 MHz system, the sampling frequency is specified as 30.72 MHz.
Does this mean that baseband digital data has to be transferred to the RF stage at 30.72 megasamples per second?
Other than this, is there any specific requirement that other baseband stages, like the OFDM processing, run only at this rate, or can they work at a higher rate?
In our system we upsample LTE-20 from 30.72 MHz to 360.68 MHz. You can upsample as required by the DAC and RF requirements, but the LTE signal bandwidth will not change.
Just a follow-up question: one symbol of baseband data has to be transmitted in ~66 us, irrespective of the sampling rate. This means that this block has to operate on a precise clock, assuming we put new data out on every clock cycle. Am I correct?
Yes, one OFDM symbol takes 2048 data points at 30.72 MHz: 2048 / 30.72 MHz = 66.67 us.
This never changes. Upsampling (when chosen) increases the number of data points by interpolation: it inserts new samples that fit nicely in between the existing ones, not over extra time but at a higher sample rate. So the signal bandwidth stays as it is.
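As a quick check of the arithmetic above (a sketch using the numbers from this thread; the cyclic prefix is ignored), the symbol duration is fixed by the FFT size and sample rate, and upsampling changes the sample count but not the duration:

```python
# LTE-20 numbers from the discussion above.
fs = 30.72e6          # baseband sample rate, Hz
n_fft = 2048          # FFT size = samples per OFDM symbol (no cyclic prefix)

symbol_time = n_fft / fs
print(f"symbol duration: {symbol_time * 1e6:.2f} us")   # ~66.67 us

# Upsample by 4: four times the samples in the same 66.67 us.
up = 4
print(f"upsampled: {up * n_fft} samples at {fs * up / 1e6:.2f} Msps, "
      f"duration {up * n_fft / (fs * up) * 1e6:.2f} us")
```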
What? I'm not sure exactly what you're trying to ask.
For just "normal" modulation, you need to sample quickly enough at baseband to capture all of the desired signal in the real world. Remember that the Nyquist-Shannon sampling limit is not your friend; rather than cozying up to it, you should treat it as a rabid guard dog.
You're asking a lot of questions that can be answered just by sitting down with pencil and paper and working out some trig identities, or (better) doing some basic math in the Fourier domain. I'm not trying to chase you off here, but rather I'm suggesting that if you work through these things on your own you're more likely to gain some long-lasting insight so that you really understand the results, rather than just getting unconnected factoids.
The linked document goes on a bit about sampling. Look at Figure 3 on page 5 (page 6 by the browser's count): it shows what happens when you sample a baseband signal with "perfect" sampling, and should give you an idea of the issues with overlap at the band edges.
Thanks for the response. Actually, Nyquist in the case above confuses me a lot. They say the occupied bandwidth of a 20 MHz system is 18 MHz. In that case the Nyquist rate should be > 2 × 18 = 36 MHz. But in LTE, the sampling rate for a 20 MHz system is given as 30.72 MHz. Am I missing something?
If the 30.72MHz sampling is in quadrature, then there are 61.44Msps, so you'd be golden.
I don't think I have it in that document (and perhaps I should update it), but Nyquist didn't say you have to sample with evenly spaced samples of the same thing. You just need 2 × F independent samples per second. Theoretically, this means that if you could find a way to make them independent, you could capture 61.44 million different samples once per second and be OK.
The only thing that I know of that's done in practice is I/Q sampling, although I wouldn't be surprised at other things -- I've long since stopped being astonished by the strange things you find at the cutting edge of technology.
Tim is correct.
When your signal is real, you have to sample at at least twice the maximum frequency present in your signal. When the signal is complex (which is in general what you have after the RF stage in an LTE system) and the signal spectrum is centered around 0 (otherwise you would get frequency rotations due to aliasing), then you have to sample both the real and imaginary parts at at least the bandwidth of your signal, i.e. 18 MHz for an 18 MHz bandwidth signal.
In LTE systems you perform a 2048 point Fourier Transform in order to get all the symbols on the various carriers. In LTE (20MHz) only the center 1200 carriers are used to transmit signals. If you do the math: 30.72*1200/2048 = 18, hence the 30.72MHz sampling rate for the 18MHz bandwidth signal (LTE 20MHz).
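The same computation in a couple of lines (numbers taken from the post above; a sketch, not production code):

```python
# LTE-20 numerology from the discussion: 2048-point FFT at 30.72 Msps,
# 1200 occupied subcarriers.
fs = 30.72e6
n_fft = 2048
used_carriers = 1200

spacing = fs / n_fft                    # width of one FFT bin: 15 kHz
occupied_bw = spacing * used_carriers   # 18 MHz occupied bandwidth
print(spacing, occupied_bw)             # 15000.0 18000000.0
```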
Now, in a real system, the RF stage will contain a mixer that creates the real and imaginary parts, plus a bunch of low-pass filters to remove high-frequency components. Due to implementation cost these filters won't be very sharp, and it is commonly accepted that you have to sample at least at twice the bandwidth and finish the filtering using digital filters and downsampling (DDC, digital down-conversion).
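A minimal numpy sketch of the DDC idea (all parameters here are hypothetical, chosen only for illustration; this is not an LTE design): mix a real IF tone down to 0 Hz with a complex exponential, low-pass filter it to remove the mixing image, then decimate.

```python
import numpy as np

fs = 61.44e6            # input sample rate (illustrative)
f_if = 10e6             # intermediate frequency (illustrative)
decim = 4               # decimation factor

t = np.arange(4096) / fs
x = np.cos(2 * np.pi * f_if * t)            # real IF test tone

# Mix down: multiply by a complex exponential at -f_if.
# The tone lands at 0 Hz; a mixing image appears at -2*f_if.
bb = x * np.exp(-2j * np.pi * f_if * t)

# Simple windowed-sinc low-pass to remove the image before decimating.
n = np.arange(-64, 65)
fc = fs / (2 * decim)                       # cutoff at the post-decimation Nyquist
h = np.sinc(2 * fc / fs * n) * np.hamming(len(n))
h /= h.sum()                                # unity DC gain

bb_filt = np.convolve(bb, h, mode="same")
bb_dec = bb_filt[::decim]                   # output rate fs/decim = 15.36 Msps
```

After decimation the surviving component is the DC term of amplitude 0.5 (from cos = half a positive- plus half a negative-frequency exponential); the image at -20 MHz is suppressed by the filter.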
Thanks, Tim and Oliviert
Sorry, I am reopening an old topic. Referring to the equation:
30.72*1200/2048 = 18
Though the value is 18 MHz, I am still not sure what the N-point FFT has to do with the bandwidth. I could very well use a higher-order FFT and still get the same results. Am I right?
However, if I change the equation as:
BW = sub-carrier spacing × occupied sub-carriers = 15 kHz × 1200 = 18 MHz
Can you comment on whether the system bandwidth is computed from the 1st or the 2nd equation above?
The second equation is the right one in terms of system bandwidth.
You need to adapt the FFT length to the sampling rate in order to have:
sampling_rate / number_of_bins = sub-carrier spacing
using your notation (30.72 MHz / 2048 = 15 kHz).
In terms of implementation you need to have a fast/small enough design to get it into the smallest device.
1200 = 16*3*25
So the minimum-size FFT would be a 1200-point FFT on a signal sampled at 18 MHz. The problem here is that your signal must be filtered on the Rx and Tx sides, and for this purpose you need extra bandwidth to keep the border carriers untouched.
I can see 2 different cases within my customer base:
512*3 = 1536 point FFT on a signal sampled at 23.04 MHz
2048 point FFT on a signal sampled at 30.72 MHz
The former must include a radix-3 butterfly in the FFT; the latter is a standard power-of-2 FFT, leading to a smaller implementation while keeping enough space around the signal for the filtering stage to fit within the spectrum mask without using too many filter coefficients.
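A quick check of the FFT-size / sample-rate pairings mentioned above (a sketch; numbers taken from the post, assuming the 15 kHz LTE subcarrier spacing) shows that all three preserve the bin width:

```python
# (FFT size, sample rate in Hz) options discussed above.
options = [
    (1200, 18.00e6),   # minimum-size FFT
    (1536, 23.04e6),   # 512*3, needs a radix-3 stage
    (2048, 30.72e6),   # standard power-of-2 FFT
]
for n_fft, fs in options:
    print(n_fft, fs / n_fft)   # bin width is 15000.0 Hz in every case
```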
Thanks, Oliviert. I think it is clear now.
Occupied BW = 15000 × 1200 = 18 MHz
Total BW = 15000 × 2048 = 30.72 MHz
Sample rate = total BW (i.e., sample period = 1 / total BW)
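Writing that arithmetic out (a minimal sketch with the LTE-20 numbers from this thread):

```python
spacing = 15_000                 # subcarrier spacing, Hz
occupied_bw = spacing * 1200     # 18 MHz of occupied subcarriers
total_bw = spacing * 2048        # 30.72 MHz spanned by all FFT bins
sample_rate = total_bw           # 30.72 Msps; sample period = 1 / sample_rate
print(occupied_bw, total_bw, 1 / sample_rate)
```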
What sampling rate you use for transmission, i.e., at the final DAC before the RF system, is an implementation decision. Some may choose to convert to analog at baseband, perhaps at the 30.72 MHz rate, then mix up to the RF frequency with analog circuits. Others may choose to upsample to a much higher rate, mix up to an IF or to the final RF digitally, and convert at that frequency as a real-valued signal.
So the short answer to your question is: no, it doesn't have to be transmitted at any particular sample rate or frequency.
So, to take an SDR example: suppose the complex sampling rate at baseband is 9.142 Msps (DVB-T2, BW = 8 MHz). Then, using for instance the AD9362 found in many SDR boards, what is the sampling rate at the output of the RF stage? In other words, does this DAC upsample again using interpolation filters inside it, or does it just run at the rate (9.14 Msps) we configure?
If the DAC upsamples our data to a much higher sample rate, then as long as we meet the Nyquist rate at baseband, we should not need to upsample any further at baseband, right?
I know this is a bit confusing, but (as Tim Wescott mentioned) you are likely getting a stream of complex samples; there are typically two ADCs running in parallel in this type of system. In that case each complex sample counts as two samples per Nyquist-Shannon. If you were sampling a real-only signal with a single ADC, you would need to sample at twice the width of the band of interest (not the same thing as the bandwidth from 0 (DC) to the highest frequency). However, once it is demodulated to baseband it is encoded as a complex signal. It appears you are getting these points at a rate of 30.72 MHz, which is nicely above the theoretical minimum of 20 million complex samples per second (giving you the guard band/guard dog margin that Tim mentions).
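To illustrate the real-vs-complex point, a small numpy sketch (the tone frequency is illustrative, not LTE-specific): a complex (I/Q) tone sampled at 30.72 Msps lands at one unambiguous frequency anywhere in (-fs/2, +fs/2), while a real tone puts energy at both +f and -f, so only fs/2 of unique bandwidth remains.

```python
import numpy as np

fs = 30.72e6              # complex sample rate
f = 10e6                  # test tone frequency (illustrative)
n = np.arange(1024)

# Complex (I/Q) sampling: a single spectral line near +10 MHz.
z = np.exp(2j * np.pi * f / fs * n)
spec = np.fft.fftshift(np.fft.fft(z))
freqs = np.fft.fftshift(np.fft.fftfreq(len(n), 1 / fs))
peak_freq = freqs[np.argmax(np.abs(spec))]
print(peak_freq / 1e6)    # near +10 MHz only

# Real sampling of the same tone: energy at both +10 and -10 MHz.
x = np.cos(2 * np.pi * f / fs * n)
spec_r = np.fft.fftshift(np.fft.fft(x))
```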
True. The 30.72 MHz rate is at the baseband and data is in terms of I & Q.