
Choice of Oversampling Ratio in Comm Simulations

Started by Randy Yates April 1, 2006
Randy Yates wrote:
> Jerry Avins <jya@ieee.org> writes:
>
>> Randy Yates wrote:
>>> "Anonymous" <someone@microsoft.com> writes:
>>>
>>>> I usually oversample by 4 just so I don't have to do any baud tracking.
>>>> In my simulations I just take the sum of the magnitude of the four
>>>> phases and then use the largest as the resampling phase for everything
>>>> else, i.e.
>>>>
>>>>     for i=1:4
>>>>         baud_phases(i) = sum(abs(xin(i:4:end)));
>>>>     end
>>>>     [y, best_phase] = max(baud_phases);
>>>>
>>>>     symbols = xin(best_phase:4:end);
>>>>
>>>> The alternative is to run the signal through a non-linearity, pick out
>>>> the baud line, and track the sample phase with a PLL tied to the baud
>>>> line. But since the loop takes a while to converge, you either have to
>>>> run it through the data twice or throw away the first portion of the
>>>> simulation result.
>>>>
>>>> -Clark
>>>
>>> What if the ideal baud timing lies between samples?
>>
>> Just a guess, but I think that's the whole point. If you're oversampled
>> by enough, the ideal time can't be too far from a sample.
>
> Maybe. I'm just thinking through this for the first time (obviously).
>
> What I don't understand is: presuming the timing doesn't change throughout
> the sequence, why don't we just find the timing once at the beginning and
> then shift the signal to the proper sampling points at the baseband rate
> using an all-pass filter? Then instead of running the entire simulation at
> N times the symbol rate, you just run a small timing recovery piece at the
> beginning. Seems like it'd save oodles of simulation time.
> --
> %  Randy Yates                  % "Watching all the days go by...
> %% Fuquay-Varina, NC            %  Who are you and who am I?"
> %%% 919-577-9882                % 'Mission (A World Record)',
> %%%% <yates@ieee.org>           % *A New World Record*, ELO
> http://home.earthlink.net/~yatescr
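[Editor's note: Clark's pick-the-strongest-phase trick translates directly to other languages. A minimal Python/NumPy rendering (not the original MATLAB; the function name and toy signal are illustrative only):

    import numpy as np

    def pick_baud_phase(xin, osr=4):
        """Decimate a signal oversampled by `osr` at the phase whose
        decimated stream has the largest summed magnitude, as in
        Clark's MATLAB snippet."""
        # Sum |x| over each of the osr candidate sampling phases.
        phase_energy = [np.sum(np.abs(xin[p::osr])) for p in range(osr)]
        best_phase = int(np.argmax(phase_energy))
        return xin[best_phase::osr], best_phase

    # Toy check: put all the energy on phase 2 of a 4x-oversampled stream.
    x = np.zeros(40)
    x[2::4] = 1.0
    symbols, phase = pick_baud_phase(x)

Note this only selects among the osr discrete phases; as discussed below, the true optimum may fall between samples.]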
This is frequently done in burst-mode receivers, even if the timing does change. From the PPM error, one can compute the burst length before the timing shifts too far: one PPM of error is one full bit of shift in one million bits, and so on.

John
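[Editor's note: John's rule of thumb is easy to turn into a burst-length budget. A hedged sketch (function name and the 0.1-symbol drift budget are illustrative assumptions, not from the thread):

    def max_burst_symbols(ppm_error, max_drift_fraction=0.1):
        """Number of symbols that can elapse before a clock error of
        `ppm_error` parts per million accumulates to `max_drift_fraction`
        of a symbol period.  1 ppm drifts one full symbol per 1e6 symbols."""
        return int(max_drift_fraction * 1e6 / abs(ppm_error))

    # Example: a 20 ppm clock, allowing a tenth of a symbol of drift.
    n = max_burst_symbols(20, 0.1)

So at 20 ppm a burst of a few thousand symbols gets by with a single timing estimate.]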
"Randy Yates" <yates@ieee.org> wrote in message 
news:m3zmj5s268.fsf@ieee.org...
> Jerry Avins <jya@ieee.org> writes:
>> [earlier quoting snipped]
>
> What I don't understand is, presuming the timing doesn't change throughout
> the sequence, why don't we just find the timing once at the beginning and
> then shift it to the proper sampling points at the baseband rate using
> an all-pass filter? Then instead of running the entire simulation at N
> times the symbol rate, you just run a small timing recovery piece at the
> beginning. Seems like it'd save oodles of simulation time.
I usually do do that, just to make sure it agrees with the calculated delay and gain, check out any phase shift with amplitude dependencies, then run everything with a cheat wire and noise-free as much as possible.

Even so, I like using at least 16 complex samples/symbol 'cos I'm usually using table look-up non-linearities and filter responses with some ACI in the system, and at this rate linear interpolation isn't bad. Plus you get pretty-looking constellation plots / eye diagrams etc. without any extra processing.

For me, the idea of simulation is to get some information about some small part of your system's behaviour without having to model all the aggravating reality that gets in the way in test. You can always put more and more reality in later to slow your sims down more and more 'til finally it's so real you can't tell what on earth is happening without massive amounts of post-processing to remove the effects of all the clutter you have wasted most of your processor power introducing.

Best of luck - Mike
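[Editor's note: the one-shot timing fix Randy proposes and Mike endorses — estimate the offset once, then shift the whole baseband sequence by a fraction of a sample — can be sketched with the linear interpolation Mike mentions. A hedged Python/NumPy illustration (function name and test signal are assumptions, not from the thread):

    import numpy as np

    def fractional_shift(x, mu):
        """Shift a sampled waveform by a fraction mu (0 <= mu < 1) of a
        sample via linear interpolation -- a one-shot timing correction
        applied once, instead of a running PLL."""
        n = np.arange(len(x))
        return np.interp(n + mu, n, x)

    # At Mike's 16 samples/symbol, linear interpolation error on a
    # sinusoid is small (worst case roughly 2% of full scale here).
    t = np.arange(64)
    sig = np.sin(2 * np.pi * t / 16.0)
    shifted = fractional_shift(sig, 0.5)

A production fractional delay would use a windowed-sinc or Farrow structure, but at 16x oversampling the linear version is already close.]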
Mike Yarwood wrote:
> "Randy Yates" <yates@ieee.org> wrote in message
> news:m3zmj5s268.fsf@ieee.org...
>
> [earlier thread quoting snipped]
>
> I usually do do that, just to make sure it agrees with the calculated
> delay and gain, check out any phase shift with amplitude dependencies,
> then run everything with a cheat wire and noise-free as much as possible.
>
> [rest of Mike's reply snipped]
Here's a minor trick that can be used to apply a sampling rate error in increments of one PPM using Matlab's interpolation routines, which don't directly support such small rate changes:

    P1 = 999; Q1 = 1000; P2 = 1001; Q2 = 1000;
    if (PPM > 0)
        for k = 1:abs(PPM)
            z = resample(z, P1, Q1);
            z = resample(z, P2, Q2);
        end
    elseif (PPM < 0)
        for k = 1:abs(PPM)
            z = resample(z, Q1, P1);
            z = resample(z, Q2, P2);
        end
    end

John
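[Editor's note: the same cascade works with SciPy's polyphase resampler. A hedged Python rendering of John's MATLAB trick (function name is illustrative; each 999/1000 followed by 1001/1000 stage nets a 0.999999 rate change, i.e. one PPM per loop iteration):

    import numpy as np
    from scipy.signal import resample_poly

    def apply_ppm_error(z, ppm):
        """Apply a sampling-rate error in 1 PPM steps by cascading
        resample-by-999/1000 and 1001/1000 stages, mirroring John's
        MATLAB resample() trick."""
        p1, q1, p2, q2 = 999, 1000, 1001, 1000
        for _ in range(abs(ppm)):
            if ppm > 0:
                z = resample_poly(z, p1, q1)  # x 999/1000
                z = resample_poly(z, p2, q2)  # x 1001/1000 -> net 0.999999
            else:
                z = resample_poly(z, q1, p1)  # x 1000/999
                z = resample_poly(z, q2, p2)  # x 1000/1001 -> net 1.000001
        return z

    # Example: a 3 PPM error applied to a low-frequency test tone.
    z = np.sin(2 * np.pi * 0.01 * np.arange(20000))
    z2 = apply_ppm_error(z, 3)

The sequence length barely changes (a few PPM of 20,000 samples rounds to nearly nothing); the effect shows up as a slowly accumulating phase ramp against the original.]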