On Thu, 6 Mar 2014 01:59:21 -0800 (PST), phatak27@gmail.com wrote:
>I am currently trying to simulate the s-curve of the Gardner timing algorithm. My simulation set-up is as follows:
>
>I am generating a QPSK signal oversampled by a factor of 32. I pass the received signal through a matched filter and then down-sample it by 16 to get the 2 samples/symbol required by the Gardner TED.
>
>While down-sampling, each time I select 2 samples out of the 32 to simulate a delay.
>For example, the first time I select the 1st and 16th samples of each symbol and calculate the timing error, the second time the 1st and 17th samples, and so on.
>
>For each iteration of down-sampling I average the timing error, and then I average those results.
>
>Is this approach correct for obtaining the s-curve?
>
Gardner's detector uses three samples per error computation (the previous
symbol strobe, the mid-symbol sample, and the current strobe), so I'm
guessing you're recycling the previous symbol boundary sample for the
next symbol. Also, what you've proposed works (with that third sample)
assuming you meant the 1st and 16th samples, then the 2nd and 17th, then
the 3rd and 18th, etc., for the delays.
If so, and with the appropriate third sample, then what you've
described will work fine. As others have mentioned, with many TEDs
the curve is data-dependent, and the final curve is the average across
all possible transitions. It is important to use a lot of data, as
the transitions throughout the length of the matched filter all
influence the eye pattern and, therefore, the final shape of the
s-curve.
If you plan to implement the detector before the matched filter in the
receiver, then the pulse shape should be RRC (or equivalent), and if
the matched filter will be before the detector then the pulse shape
should be RC.
Typically, for TEDs and other detectors, the curve flattens with
decreasing SNR, so you may want to try this with no noise and then
perhaps at some lower SNR of interest to see the difference in slope.
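A minimal numpy sketch of this averaging procedure (all parameter values here are illustrative assumptions, not from the thread): generate random QPSK, shape it with a raised-cosine pulse as the combined transmit-plus-matched-filter response, then average the Gardner error over many symbols at each timing offset to trace out the s-curve.

```python
import numpy as np

rng = np.random.default_rng(0)

sps = 32               # samples per symbol (matches the 32x oversampling above)
nsym = 4000            # number of random QPSK symbols to average over
beta, span = 0.35, 8   # assumed raised-cosine roll-off and span in symbols

# Raised-cosine pulse: the combined transmit + matched-filter response.
t = np.arange(-span * sps, span * sps + 1) / sps
with np.errstate(divide="ignore", invalid="ignore"):
    h = np.sinc(t) * np.cos(np.pi * beta * t) / (1.0 - (2.0 * beta * t) ** 2)
h[np.isclose(np.abs(2.0 * beta * t), 1.0)] = np.pi / 4 * np.sinc(1.0 / (2.0 * beta))

# Random QPSK symbols, upsampled by sps and pulse-shaped.
syms = (rng.choice([-1.0, 1.0], nsym) + 1j * rng.choice([-1.0, 1.0], nsym)) / np.sqrt(2)
x = np.zeros(nsym * sps, dtype=complex)
x[::sps] = syms
x = np.convolve(x, h, mode="same")   # symbol strobes land on multiples of sps

def gardner_mean_error(x, sps, offset):
    """Average Gardner error when strobing 'offset' samples from the start."""
    strobe = x[offset::sps]              # on-time samples, one per symbol
    mid = x[offset + sps // 2::sps]      # mid-symbol samples
    n = min(len(strobe) - 1, len(mid))
    e = np.real(mid[:n] * np.conj(strobe[1:n + 1] - strobe[:n]))
    return e.mean()

offsets = np.arange(-sps // 2, sps // 2 + 1)
scurve = np.array([gardner_mean_error(x, sps, sps + off) for off in offsets])
# At the correct timing (offset 0) the averaged error should be near zero.
```

Each error uses three samples, with the previous strobe recycled as described above; with no added noise the curve's zero crossing at offset 0 and its slope there are easy to read off.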
Eric Jacobsen
Anchor Hill Communications
http://www.anchorhill.com
Reply by DougB ● March 6, 2014
>I am currently trying to simulate the s-curve of the Gardner timing algorithm. My simulation set-up is as follows:
>
>I am generating a QPSK signal oversampled by a factor of 32. I pass the received signal through a matched filter and then down-sample it by 16 to get the 2 samples/symbol required by the Gardner TED.
>
>While down-sampling, each time I select 2 samples out of the 32 to simulate a delay.
>For example, the first time I select the 1st and 16th samples of each symbol and calculate the timing error, the second time the 1st and 17th samples, and so on.
>
>For each iteration of down-sampling I average the timing error, and then I average those results.
>
>Is this approach correct for obtaining the s-curve?
>
Assuming you meant the 2nd and 17th samples, and so on, your technique
will work fine. Make sure that you randomize the data bits you are
modulating and that you add an appropriate amount of WGN. You can then
determine the phase-detector gain from this data.
A more efficient way is to use a polyphase resampling filter to synthesize
as many delays as you want without having to highly oversample the signal.
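A sketch of that idea (the function name and parameter values here are my own, not from Doug's post): each branch of a polyphase interpolating filter, applied directly to the low-rate signal, produces one fractional delay, so a bank of branches gives all the test offsets without generating a highly oversampled waveform. Building each branch as a windowed-sinc fractional-delay FIR is equivalent to taking the polyphase decomposition of one long interpolating prototype:

```python
import numpy as np

def fractional_delay_bank(num_phases=16, half_len=8):
    """Bank of FIR fractional-delay filters. Row p delays its input by
    half_len + p/num_phases samples; the rows correspond to the polyphase
    branches of one long windowed-sinc interpolating prototype."""
    m = np.arange(2 * half_len + 1)
    win = np.hamming(len(m))
    bank = []
    for p in range(num_phases):
        d = half_len + p / num_phases        # total delay of this branch
        h = np.sinc(m - d) * win             # shifted, windowed sinc taps
        bank.append(h / h.sum())             # normalize to unity DC gain
    return np.array(bank)

# Delay a low-frequency test tone by 8.5 samples using branch p = 8
# (fractional part 8/16 = 0.5 plus the fixed bulk delay of 8 samples).
bank = fractional_delay_bank()
n = np.arange(200)
x = np.cos(2 * np.pi * 0.05 * n)
y = np.convolve(x, bank[8], mode="full")
# In steady state, y[k] is approximately cos(2*pi*0.05*(k - 8.5)).
```

Running the 2-samples/symbol signal through each branch and averaging the Gardner error per branch then traces the s-curve with num_phases points per sample of delay.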
-Doug
_____________________________
Posted through www.DSPRelated.com
Reply by Alexander Petrov ● March 6, 2014
>I am currently trying to simulate the s-curve of the Gardner timing algorithm. My simulation set-up is as follows:
>
>I am generating a QPSK signal oversampled by a factor of 32. I pass the received signal through a matched filter and then down-sample it by 16 to get the 2 samples/symbol required by the Gardner TED.
>
>While down-sampling, each time I select 2 samples out of the 32 to simulate a delay.
>For example, the first time I select the 1st and 16th samples of each symbol and calculate the timing error, the second time the 1st and 17th samples, and so on.
>
>For each iteration of down-sampling I average the timing error, and then I average those results.
>
>Is this approach correct for obtaining the s-curve?
>
One nuance: the s-curve of the Gardner timing algorithm depends on
symbol transitions; for example, no transitions -> averaged timing error = 0.
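A short numpy illustration of that nuance (the sample values are arbitrary): when consecutive strobes carry the same symbol, the strobe difference in the Gardner error term is exactly zero, so every error sample, and hence the average, is zero regardless of the timing offset.

```python
import numpy as np

# Ten strobes of the same QPSK symbol (no transitions) and arbitrary
# mid-symbol samples, as if taken at some unknown timing offset.
strobes = np.full(10, (1 + 1j) / np.sqrt(2))
mids = 0.7 * np.full(9, (1 + 1j) / np.sqrt(2))   # the mid values are irrelevant

# Gardner error: e[k] = Re{ mid[k] * conj(strobe[k+1] - strobe[k]) }
err = np.real(mids * np.conj(strobes[1:] - strobes[:-1]))
print(err.mean())   # 0.0 -- without transitions the TED cannot see the offset
```

This is why long runs of identical symbols contribute nothing to the averaged s-curve, and why the data should be randomized.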
_____________________________
Posted through www.DSPRelated.com
Reply by ● March 6, 2014
I am currently trying to simulate the s-curve of the Gardner timing algorithm. My simulation set-up is as follows:
I am generating a QPSK signal oversampled by a factor of 32. I pass the received signal through a matched filter and then down-sample it by 16 to get the 2 samples/symbol required by the Gardner TED.
While down-sampling, each time I select 2 samples out of the 32 to simulate a delay.
For example, the first time I select the 1st and 16th samples of each symbol and calculate the timing error, the second time the 1st and 17th samples, and so on.
For each iteration of down-sampling I average the timing error, and then I average those results.
Is this approach correct for obtaining the s-curve?