Reply by October 19, 2005
Thank you all for your valuable suggestions, and thanks to this group
for providing this nice platform. I will come back if any
clarification is required.

Regards,
Sreenivas

Reply by Eric Jacobsen October 18, 2005
On 18 Oct 2005 03:39:08 -0700, asnivas223@gmail.com wrote:

>Hi,
>
>First of all, thank you all for your explanations. One thing I want
>to ask: in Gardner's paper he takes only 2 samples/symbol, but I
>have 16 samples/symbol, and it may vary depending on the incoming
>signal frequency, i.e., it may become 17 or 100 samples/symbol. Can
>I use the same approach independent of samples/symbol, or do I have
>to check and downsample so that there are always 2 samples/symbol,
>which is very inefficient?
>
>Is there another approach that is independent of samples/symbol? If
>so, please give me the relevant information.
>
>Thank you in advance.
>
>Best Regards,
>Sreenivas
If you start out with a high number of samples/symbol you can do what
we used to call "dumb sampling", which is just to locate the sample
closest to the symbol center and use that. You can do this with a
phase detector/timing error detector. As the number of available
samples/symbol decreases, the sample jitter gets larger and
eventually starts to degrade performance more than is acceptable.
Alternatively, you can decimate prior to the polyphase filter if it
helps reduce the overall processing overhead.

There are a lot of ways to architect systems using the sort of
techniques that Gardner described. Some systems use a small polyphase
resampling filter at the beginning of the processing chain and then
decimate and apply Nyquist filtering. Other systems decimate
synchronously with the sampling system and then interpolate with a
polyphase Nyquist filter as the last stage.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
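A rough Python sketch of the "dumb sampling" idea (the function name
is mine, and it assumes a coarse timing offset tau_hat, in samples,
is already available from the timing error detector):

import numpy as np

def dumb_sample(x, sps, tau_hat=0.0):
    """Pick, for each symbol, the raw sample nearest the estimated
    symbol center.  x: oversampled baseband (NumPy array), sps:
    samples per symbol (need not be an integer), tau_hat: coarse
    timing offset in samples."""
    n_syms = int((len(x) - 1 - tau_hat) // sps) + 1
    # Fractional symbol-center positions on the raw sample grid...
    centers = tau_hat + np.arange(n_syms) * sps
    # ...snapped to the nearest available raw sample.
    idx = np.round(centers).astype(int)
    return x[idx]

The residual jitter is at most half a raw sample, i.e. 1/(2*sps) of a
symbol period, which is why this holds up at 16 samples/symbol and
degrades as the oversampling ratio shrinks toward 2.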
Reply by October 18, 2005
john wrote:
> You need to steer the interpolator toward the optimal sampling
> instants. Because the references in the transmitter and receiver are
> usually asynchronous, the optimal sampling instants drift over time.
> For example, if you transmit 2400 baud data to me and I receive it
> at 2399.9, then the optimal sampling instant drifts one full bit
> every ten seconds. The idea is to adjust the NCO frequency and phase
> to match the rate of the incoming data, in this case it has to speed
> up by 0.1 Hz, but that is a number you never actually see, because
> the NCO adjustment happens automatically via feedback.
>
> The feedback loop is a PLL and like all PLLs it starts with an error
> detector, in this case the TED. The TED is just a simple calculation
> using two or more samples per bit. On every 0-1 transition, it gives
> an estimate of how far away your samples are from lining up on the
> ideal sampling instant. This error signal is filtered with a simple
> lead-lag filter, and the output of that is subtracted from the NCO
> phase to steer it.
>
> I hope that is helpful. When I was a teaching assistant I really
> sucked at it so I'm not the best one to explain.
>
> John
Hi,

First of all, thank you all for your explanations. One thing I want
to ask: in Gardner's paper he takes only 2 samples/symbol, but I have
16 samples/symbol, and it may vary depending on the incoming signal
frequency, i.e., it may become 17 or 100 samples/symbol. Can I use
the same approach independent of samples/symbol, or do I have to
check and downsample so that there are always 2 samples/symbol, which
is very inefficient?

Is there another approach that is independent of samples/symbol? If
so, please give me the relevant information.

Thank you in advance.

Best Regards,
Sreenivas
Reply by Eric Jacobsen October 8, 2005
On 7 Oct 2005 21:43:02 -0700, asnivas223@gmail.com wrote:

>Thank you for your valuable time. Actually, I am completely new to
>this area. What I have understood from Gardner's paper is that we
>can estimate symbol timing using a free-running sampling clock, and
>the decimation is kept constant so that we get a constant sample
>rate at the output independent of the input signal frequency, so
>there is no need for any PLL. Am I correct up to here?
>
>Then I haven't understood how this actually finds the symbol timing
>(interpolation phase). He says that the interpolation phase is
>determined by the NCO value after a wrap occurs; how is this
>determined? The concept is not clear to me here. I have thought
>about it a lot but I am not getting the basic idea.
>
>I don't understand how the occurrence of a transition is detected
>here (in the timing error detector (TED)).
>
>Kindly give me the basic concept of how this NCO is adjusted and how
>this TED works.
>
>Thank you in advance for your valuable time.
As John mentioned here and later, the system does include a PLL. The
architecture is essentially the same as an analog PLL, with a phase
detector, loop filter, and controllable oscillator. In this case the
VCO is replaced by an NCO, and the detector and filter are digital.
You do need an effective Timing Error Detector to generate an error
signal that is fed through the filter and then steers the NCO.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
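In code form, the digital loop has the same three pieces; a minimal
Python sketch (the gains kp and ki are placeholders, f0 is the
nominal NCO increment, 1 divided by the samples per symbol, and
john's posts below cover the TED that produces e):

def timing_loop_tick(e, integ, phase, f0, kp=0.01, ki=1e-4):
    """One sample of a second-order timing PLL.  The TED output e is
    filtered by a proportional-plus-integral (lead-lag) filter, and
    the result steers a phase accumulator that stands in for the VCO.
    Returns the updated filter/NCO state and a symbol strobe flag."""
    integ += ki * e                  # integral path: tracks frequency offset
    phase += f0 + kp * e + integ     # NCO: nominal step plus correction
    strobe = phase >= 1.0            # wrap = one symbol period has elapsed
    if strobe:
        phase -= 1.0                 # mod-1 wrap
    return integ, phase, strobe

The correspondence to the analog loop is direct: e plays the
phase-detector output, (kp, ki) set the loop bandwidth and damping,
and the mod-1 accumulator replaces the VCO.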
Reply by Eric Jacobsen October 8, 2005
On 7 Oct 2005 00:03:02 -0700, allanherriman@hotmail.com wrote:

>A *lot* of QAM modems have been made that sample at the symbol rate.
>This is about half the sampling rate needed to avoid aliasing. The
>aliasing doesn't hurt here; models based on signal reconstruction do
>not apply as (if we aren't performing adaptive eq) we only care
>about the signal value at the symbol sampling instants.
Allan,

Sampling at the symbol rate is just adequately sampled for QAM,
considering that the sampling is done complex, on both I and Q.
There'll be a little bit of potential aliasing on the skirts, where
the energy past the 3 dB point folds back in, but if the
anti-aliasing filter keeps everything else out, you're correct that
this won't really have any degrading effect.

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
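For concreteness, with made-up numbers: complex sampling at the
symbol rate Rs gives an unambiguous bandwidth of Rs (from -Rs/2 to
+Rs/2), while root-raised-cosine QAM with rolloff a occupies (1+a)*Rs
in total, and the RRC response is 3 dB down exactly at +/-Rs/2. With
Rs = 2400 and a = 0.25, the signal spans +/-1500 Hz but the sampling
covers only +/-1200 Hz, so the 1200-1500 Hz skirts fold back in,
which is the energy past the 3 dB point described above.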
Reply by john October 8, 2005
You need to steer the interpolator toward the optimal sampling
instants. Because the references in the transmitter and receiver are
usually asynchronous, the optimal sampling instants drift over time.
For example, if you transmit 2400 baud data to me and I receive it at
2399.9, then the optimal sampling instant drifts one full bit every ten
seconds. The idea is to adjust the NCO frequency and phase to match the
rate of the incoming data, in this case it has to speed up by 0.1 Hz,
but that is a number you never actually see, because the NCO adjustment
happens automatically via feedback.

The feedback loop is a PLL and like all PLLs it starts with an error
detector, in this case the TED. The TED is just a simple calculation
using two or more samples per bit. On every 0-1 transition, it gives an
estimate of how far away your samples are from lining up on the ideal
sampling instant. This error signal is filtered with a simple lead-lag
filter, and the output of that is subtracted from the NCO phase to
steer it.

I hope that is helpful. When I was a teaching assistant I really sucked
at it so I'm not the best one to explain.

John
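John doesn't name a specific detector here, but the classic choice
for this structure is Gardner's own TED at 2 samples/symbol. A
minimal Python sketch for a real-valued binary baseband signal (the
function name is mine):

def gardner_ted(y_prev, y_mid, y_curr):
    """Gardner timing-error detector at 2 samples/symbol.
    y_prev, y_curr: samples taken at successive symbol strobes;
    y_mid: the sample halfway between them.  On a transition
    (y_prev and y_curr differ in sign) the midpoint sample should
    sit on the zero crossing, so its deviation, signed by the
    transition direction, estimates the timing error.  With no
    transition, y_curr - y_prev is near zero and so is the output."""
    return (y_curr - y_prev) * y_mid

Filtered through the lead-lag filter above, this error steers the
NCO; in the 2400-vs-2399.9 baud example, the integrator in the loop
filter is what ends up supplying the 0.1 Hz correction, without that
number ever being computed explicitly.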

Reply by October 8, 2005
john wrote:
> As you will see from the Gardner paper, the interpolation phase is
> determined by the NCO value after a wrap occurs. Typically the NCO
> frequency and phase are adjusted by filtering the output of a timing
> error detector (TED) through a lead-lag loop filter, creating a
> second order PLL. The TED is normally chosen to give an error value
> proportional to the timing offset when a transition occurs and zero
> when no transition occurs (Gardner has a paper on this too). The
> only thing required to lock onto the timing is a reasonable density
> of transitions, not a particular bit pattern. So I would say this
> qualifies as NDA.
>
> John
Thank you for your valuable time. Actually, I am completely new to
this area. What I have understood from Gardner's paper is that we can
estimate symbol timing using a free-running sampling clock, and the
decimation is kept constant so that we get a constant sample rate at
the output independent of the input signal frequency, so there is no
need for any PLL. Am I correct up to here?

Then I haven't understood how this actually finds the symbol timing
(interpolation phase). He says that the interpolation phase is
determined by the NCO value after a wrap occurs; how is this
determined? The concept is not clear to me here. I have thought about
it a lot but I am not getting the basic idea.

I don't understand how the occurrence of a transition is detected
here (in the timing error detector (TED)).

Kindly give me the basic concept of how this NCO is adjusted and how
this TED works.

Thank you in advance for your valuable time.

Regards,
Sreenivas
Reply by October 7, 2005
Jerry Avins wrote:
> rhnlogic@yahoo.com wrote:
>
> > Jerry Avins wrote:
> >
> >>rhnlogic@yahoo.com wrote:
> >>
> >>>Sinc reconstruction methods of interpolation are usually only
> >>>useful with signals that are band-limited before sampling.
> >>
> >>Sampled signals are usually useful only if they were band limited
> >>before the sampling
> >
> > This is a stronger statement than I'd be willing to make with
> > respect to artificial modulation schemes. Consider cases with very
> > redundant data encodings and using statistical data recovery. One
> > might be able to get useful information out of a severely
> > undersampled signal.
>
> But you wouldn't be able to reconstruct that signal.
>
> > IMHO. YMMV.
>
> To save face, I'd classify that as "unusual".
A *lot* of QAM modems have been made that sample at the symbol rate.
This is about half the sampling rate needed to avoid aliasing. The
aliasing doesn't hurt here; models based on signal reconstruction do
not apply as (if we aren't performing adaptive eq) we only care about
the signal value at the symbol sampling instants.

Regards,
Allan
Reply by Eric Jacobsen October 5, 2005
On 4 Oct 2005 21:50:39 -0700, asnivas223@gmail.com wrote:

>john wrote:
>> asnivas223@gmail.com wrote:
>> > Hi,
>> > I am Sreenivas. I want to clarify with you: "Can symbol timing
>> > recovery be achieved using the interpolation filter method or
>> > not?" Actually, I am working at a 32 MHz sampling rate; can I
>> > apply this method at this rate for symbol timing recovery or not?
>> > It is mentioned in "Interpolation in Digital Modems-Part 1:
>> > Fundamentals" by Floyd M. Gardner, Fellow, IEEE, 1993, that
>> > "interpolation is not an appropriate technique to be applied to
>> > wideband signals".
>> > If it is not suitable, why is it not suitable? And please suggest
>> > what other methods will give better results.
>> > If it is suitable, kindly suggest some book or links where I can
>> > get a good description of how to implement it in FPGAs.
>> >
>> > Thank you for your time and valuable suggestions in advance.
>> >
>> > Regards,
>> > Sreenivas
>>
>> Wideband is a relative term. What matters is how many samples per
>> baud are available to estimate the value of a signal between
>> sampling instants using a weighted average or curve fit based on
>> the surrounding samples. Intuitively, the signal should not change
>> much between sampling instants in order for the estimate to be
>> good. As a practical matter, that will be the case if at least two
>> samples per baud are available.
>>
>> John
>
>Hi, thank you for the reply. Actually I'm looking at a
>non-data-aided (NDA) method of timing recovery in which I have only
>the incoming data and nothing else. My data rate is fixed at 2 Mbps.
>I'm basically looking at a method which does not use an NCO or DPLL
>or PLL. I'd like to do it using interpolators.
>---srinivas
How do you plan to steer the interpolators?

Eric Jacobsen
Minister of Algorithms, Intel Corp.
My opinions may not be Intel's opinions.
http://www.ericjacobsen.org
Reply by john October 5, 2005
As you will see from the Gardner paper, the interpolation phase is
determined by the NCO value after a wrap occurs. Typically the NCO
frequency and phase are adjusted by filtering the output of a timing
error detector (TED) through a lead-lag loop filter, creating a second
order PLL. The TED is normally chosen to give an error value
proportional to the timing offset when a transition occurs and zero
when no transition occurs (Gardner has a paper on this too). The only
thing required to lock onto the timing is a reasonable density of
transitions, not a particular bit pattern. So I would say this
qualifies as NDA.

John
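To make the first sentence concrete, here is a minimal Python sketch
of the mod-1 decrementing NCO from Gardner's Part 1 paper (variable
names are mine; w is the control word that the loop filter steers,
nominally 1/(samples per symbol)):

def nco_tick(eta, w):
    """One sample period of the NCO.  eta: register contents in
    [0, 1); w: control word.  The register counts down by w each
    tick; an underflow means a symbol boundary fell between this raw
    sample and the next.  Gardner's result is that the interpolation
    phase is mu = eta / w, taken just before the wrap."""
    if eta < w:                   # underflow this tick: symbol strobe
        mu = eta / w              # fractional interval, in [0, 1)
        eta = eta - w + 1.0       # mod-1 wrap
        return eta, mu            # interpolate at mu between the two
                                  # raw samples that straddle the wrap
    return eta - w, None          # no strobe this tick

The returned mu selects the interpolator phase (a polyphase branch or
a Farrow structure), and the TED error, filtered through the lead-lag
loop filter, nudges w up or down; that is the entire feedback path
described above.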