I hope you're all doing great. I'm planning to implement an FSK demodulator on an FPGA for my thesis. The system requirements are given below.
* Modulation on the transmitter is standard FSK (not binary differentially encoded FSK, BDEFSK) with modulation index h ranging from 1 to 5 (f_deviation = 0.5*h*data_rate).
* Data rate is 1400 bps.
* Doppler shift at the receiver shall be +/-15 kHz (maximum Doppler rate shall be +/-500 Hz/s).
* Data will be transmitted as a burst with a preamble at the beginning.
I have DSP exposure in EW systems and analog modulation & demodulation (AM, FM & SSB), and detailed knowledge of OFDM systems, but I'm new to digital demodulation. I have read a few research papers on this implementation, but I haven't yet formed a clear idea of how to proceed.
Problem statement:
Consider a modulation index (h) of 1 with a data rate of 1400 bps. The f_deviation is then 0.5*1*1400 = 700 Hz, so +700 Hz is the mark frequency and -700 Hz is the space frequency in the complex signal. The maximum BW for this is 2800 Hz ((1+h)*data_rate). But we need to open a 30 kHz BW (roughly 10 times the signal BW) in the receiver, since the Doppler shift can be +/-15 kHz. However, I don't want to lose demodulation sensitivity with this BW increase.
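The arithmetic above generalizes across the whole required range of h. A quick Python check (values taken from this post; nothing here beyond the stated formulas):

```python
# Sanity check of the deviation/bandwidth arithmetic in this post.
data_rate = 1400.0          # bps, from the requirements

for h in (1, 2, 3, 4, 5):   # modulation index range from the requirements
    f_dev = 0.5 * h * data_rate   # mark at +f_dev, space at -f_dev
    bw = (1 + h) * data_rate      # occupied bandwidth, (1+h)*data_rate
    print(f"h={h}: f_dev={f_dev:.0f} Hz, BW={bw:.0f} Hz")

# The receiver still has to open ~2*15000 = 30000 Hz to cover worst-case Doppler,
# which exceeds the signal BW even at h = 5.
```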
I read about a Short-Time Discrete Fourier Transform (ST-DFT) implementation for FSK demodulation ("A Novel FSK Demodulation Method Using Short-Time DFT Analysis for LEO Satellite Communication Systems") which does not require any complex carrier-recovery circuit, and which covers this exact problem statement. The paper describes opening a 30 kHz BW, computing an overlapped ST-DFT at each digital sample, finding the maximum-energy peak index in each DFT output, and then applying some mechanism to extract the bits. I'm clear about the idea, but I'm not sure about the demodulation sensitivity they claim.
Points to clarify:
* The paper explains that they compute a DFT of length N for every new sample together with the previous N-1 samples, and that they zero-pad the time-domain data to improve the frequency resolution. For example: Fs = 120 kHz, data rate 1400 bps, modulation index 1. So we have roughly 85 samples/symbol (120000/1400 ≈ 85.7) and the f_deviation is 700 Hz, so the FFT resolution should be much better than 700 Hz. We can choose a 1024-point DFT so that the frequency resolution is 117 Hz. But we only have 85 samples per symbol, hence we can zero-pad with 939 samples (1024-85) to compute the DFT with 117 Hz resolution. The narrow BPF of the DFT is now 117 Hz. The paper explains that the receiver sensitivity does not depend on the 30 kHz front-end BPF; it depends on the narrow BPF of the DFT.
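As I understand the scheme, a software model of the zero-padded ST-DFT peak search could look like the NumPy sketch below. It uses the numbers from this post (Fs = 120 kHz, 1400 bps, h = 1, 85-sample window, 1024-point DFT), but all variable names are mine, and to keep the demo short it evaluates one DFT per symbol rather than one per sample as the paper does:

```python
import numpy as np

fs, rb = 120_000, 1400            # sample rate and data rate from the post
f_dev = 700.0                     # h = 1 -> deviation = 0.5*h*rb
win, nfft = 85, 1024              # window length, zero-padded DFT length

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 32)

# Continuous-phase FSK at complex baseband: +700 Hz = mark, -700 Hz = space.
n_total = int(round(len(bits) * fs / rb))
t_idx = (np.arange(n_total) * rb // fs).clip(max=len(bits) - 1)
f_inst = np.where(bits[t_idx] == 1, f_dev, -f_dev)
x = np.exp(1j * 2 * np.pi * np.cumsum(f_inst) / fs)

freqs = np.fft.fftfreq(nfft, d=1 / fs)    # bin spacing fs/nfft ~= 117 Hz
rx_bits = []
for k in range(len(bits)):
    start = int(round((k + 0.5) * fs / rb)) - win // 2   # mid-symbol window
    seg = x[start:start + win]
    spec = np.abs(np.fft.fft(seg, nfft))  # fft() zero-pads 85 samples to 1024
    rx_bits.append(1 if freqs[np.argmax(spec)] > 0 else 0)

print("bit errors:", np.sum(np.array(rx_bits) != bits))   # -> bit errors: 0
```

This is noiseless, so it only demonstrates the mechanism (peak bin lands on the ±700 Hz tone, sign gives the bit); it says nothing yet about the sensitivity question below.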
Is that so in this case? Will zero-padding the DFT improve the SNR of the DFT output?
Also, please suggest any ideas or algorithms to proceed further with my thesis. I'm really thankful to you for reading my complete post.
If you need to see the effect of zero-filling, here's an FFT of a 512 point 1MHz sine wave:
Here's the same thing, zero-filled to 8192 points:
As you can see, all it has done is calculate more points over the same spectral line.
Hope this helps.
Hmm... looks like you can't post pictures. Sorry about that!
Thanks for your reply. I think your hidden pictures were meant to show that the SNR (or noise floor) is the same for both the 512-point DFT and the 8192-point (512 samples + 7680 zero-padded samples) DFT, i.e., that SNR will not improve with zero padding.
Yes, that's correct. (Pity about the pictures - they were quite good...)
Here is a pic of a 512-pt 1 MHz sine wave sampled at 8 MHz: trace (a) is a 512-pt FFT, trace (b) is zero-filled to a 4096-pt FFT. Trace (b) is smoother, but the underlying information (including SNR) doesn't change.
Note that I applied a Hamming window; otherwise, when zero-filling, the end of the sine wave (even if it ends perfectly on a zero) would look to the FFT like a sharp change (i.e., a sudden transition to a flat line).
The standard approach to FSK demodulation: square the samples, so you get a carrier at the symbol frequency. Track this carrier with a PLL and downmix the signal to baseband, then demodulate the data in baseband. The PLL will also track Doppler shifts, so Doppler will not affect the bit error rate.
Have a look at https://www.mikrocontroller.net/topic/500486#new . This is working C code for an FSK (in this case minimum-shift keying, MSK) modem.
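A quick numerical check of the squaring idea for the MSK case mentioned in that link. This is my own sketch (all parameters are mine): squaring h = 0.5 CPFSK doubles every instantaneous frequency, turning it into h = 1 FSK, whose spectrum contains discrete lines at exactly +/- Rb/2 that a PLL can lock to:

```python
import numpy as np

rb, sps = 1400, 16                # symbol rate; samples/symbol is my pick
fs = rb * sps
rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 256)

# MSK = CPFSK with h = 0.5: instantaneous frequency +/- rb/4.
f_inst = np.where(np.repeat(bits, sps) == 1, rb / 4, -rb / 4)
x = np.exp(1j * 2 * np.pi * np.cumsum(f_inst) / fs)

# Square the samples: deviation doubles to +/- rb/2 (h = 1 FSK), which has
# deterministic spectral lines at +/- rb/2 despite the random data.
spec = np.abs(np.fft.fft(x ** 2))
freqs = np.fft.fftfreq(len(spec), d=1 / fs)
peak = abs(freqs[np.argmax(spec)])
print(peak, rb / 2)               # strongest component sits at rb/2
```

The PLL in the post above would lock to one of these lines; dividing its frequency back down recovers the carrier and tracks any Doppler offset along with it.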
The preamble will give you enough information to estimate the Doppler and the clock frequency as well as the phase. At the end of the preamble you should be fully synced.
Read the attached paper and the PDF version of my PPT presentation...
It teaches you how to assemble a preamble with alternating conjugate tones. The delayed product of the adjacent tone intervals is a single complex tone at twice the Doppler offset frequency. You can easily measure the frequency of this tone with a delayed (one-sample) conjugate product, and then dial in the correcting frequency shift to remove the Doppler.
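A sketch of that measurement as I read it (sample rate, tone frequency, interval length, and the true Doppler value below are all my own illustrative numbers): with a preamble alternating between +f0 and -f0 intervals, the product of each interval with the previous one cancels the +/-f0 part and leaves a tone at twice the Doppler offset, whose frequency a one-sample conjugate product measures directly:

```python
import numpy as np

fs, f0, dopp = 120_000, 700.0, 3000.0   # sample rate, tone, true Doppler
T, n_int = 85, 8                        # samples per tone interval, intervals

# Preamble: intervals alternate +f0, -f0, ...; everything shifted by Doppler.
f_seq = np.repeat([f0 if k % 2 == 0 else -f0 for k in range(n_int)], T) + dopp
x = np.exp(1j * 2 * np.pi * np.cumsum(f_seq) / fs)

# Delayed product of adjacent intervals: (+f0 + d) + (-f0 + d) = 2d.
y = x[T:] * x[:-T]

# One-sample conjugate product gives the phase step of the 2d tone.
step = np.angle(np.mean(y[1:] * np.conj(y[:-1])))
dopp_est = step * fs / (2 * np.pi) / 2  # divide by 2: tone sits at 2*dopp
print(dopp_est)                         # close to the true 3000 Hz
```

The estimate can then be dialed back out by multiplying the received samples with `exp(-j*2*pi*dopp_est*n/fs)` before demodulation.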
You don't need an FFT or spectrum analysis to solve this problem... you might want to look up MIL-STD-188-181B.
You can always contact me for more information.
The way I've seen FSK done is with an I/Q-to-magnitude/phase conversion. Then take the derivative of the phase to get frequency. Measure the frequency during the preamble to find the channel center; the frequency swings about the center are your data.
There are a lot of details. Of course, your band has to be down-sampled and band-limited first. I would shoot for 6 to 8 samples per symbol if you are not building a tracking receiver. Correlate against a pattern in the preamble to find the center of the data eye. In the payload section, use an average of a few samples (one less than the symbol period in samples) as a cheap integrate-sample-and-dump filter.
A good way to do the I/Q-to-mag/phase conversion is a CORDIC. The derivative can be x(n) - x(n-1) or a multi-tap filter. Phase unwrapping can be a problem.
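A software model of that chain might look like the sketch below (my own parameters; `np.angle` stands in for the CORDIC you'd use in the FPGA). The one-sample conjugate product used here is one common way to get the phase derivative while sidestepping explicit phase unwrapping:

```python
import numpy as np

rb, sps = 1400, 8                 # data rate; 6-8 samples/symbol as suggested
fs = rb * sps
rng = np.random.default_rng(3)
bits = rng.integers(0, 2, 64)

# CPFSK at complex baseband, h = 1: instantaneous frequency +/- rb/2.
f_inst = np.where(np.repeat(bits, sps) == 1, rb / 2, -rb / 2)
x = np.exp(1j * 2 * np.pi * np.cumsum(f_inst) / fs)

# Frequency discriminator: angle of the one-sample conjugate product
# (CORDIC in hardware); no unwrapping needed since steps stay within +/-pi.
freq = np.angle(x[1:] * np.conj(x[:-1])) * fs / (2 * np.pi)
freq = np.concatenate([[freq[0]], freq])      # pad back to len(x)

# Cheap integrate-sample-and-dump: average sps-1 samples per symbol.
rx = []
for k in range(len(bits)):
    seg = freq[k * sps + 1:(k + 1) * sps]     # sps-1 samples of symbol k
    rx.append(1 if seg.mean() > 0 else 0)
print("bit errors:", sum(r != b for r, b in zip(rx, bits)))
```

In a real receiver the channel center measured over the preamble would be subtracted from `freq` before slicing, which is what removes the Doppler offset; it is omitted here since the test signal is already centered.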
A good reference is the book by Frerking.
The best nuts-and-bolts book I've seen is the one by Michael Rice.