>
>
> Dear all,
>
> Thanks for the prompt messages. This is exactly what I am doing:
>
> t(k) = k/Fs + k*err
> n = 1:256
> y(k) = sum( x(n) .* sinc((t(k) - n*Ts)/Ts) );
>
> where k runs from 1 to 256, Fs is the sampling frequency, and Ts is the
> sampling period. x(n) is the input complex data, t(k) is the sampling
> instant, and y(k) is the resultant complex data after sampling.
>
> When err is 0, which means no sampling error, I get x(n) = y(n).
> When err is non-zero, the sampling time changes with the factor k, as
> t(1) = 1/Fs + 1*err,
> t(2) = 2/Fs + 2*err, and so on.
>
> The factor k*err increases at each step, which I have assumed models the
> jitter that changes the sampling instant.
>
> For k*err < Ts/2 the interpolation has no error: when the interpolated
> data y(k) is plotted as a line and the actual data x(n) as circles, the
> circles lie exactly on the line.
>
> The problem I am running into is that when k*err > Ts/2, the circles
> (x(n)) no longer lie on the line (y(k)), which means the interpolated
> data is not accurate and needs to be corrected. Why is this, and how
> could I solve this problem?
>
> Thanks in advance
> Aamer
>
>
Aamer,
The problem is caused by aliasing. If you look at the power spectrum of
your y(k) and x(n) signals, you should see the problem fairly clearly.
Also, I don't understand why you are using k*err. Clock phase noise
generally does not behave in this way. Perhaps I am misunderstanding
what you are saying, but I think
T(n) = n/Fs + err
where err is a random variable with a PDF and spectrum matching those of
the phase noise you are simulating.
Cheers
Marc
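Marc's per-sample model can be sketched as follows (a Python sketch rather than MATLAB; the function name `jittered_times`, the Gaussian choice for err, and all the numbers are illustrative assumptions, not anything from the thread):

```python
import random

def jittered_times(n_samples, fs, sigma_err):
    """Sampling instants T(n) = n/Fs + err(n), where err(n) is drawn
    independently for every sample (random clock phase noise) instead of
    the accumulating k*err of the original post.  sigma_err is an assumed
    jitter standard deviation, in seconds."""
    return [n / fs + random.gauss(0.0, sigma_err) for n in range(n_samples)]

random.seed(0)                      # reproducible sketch
fs = 1000.0                         # assumed sample rate, Hz
ts = 1.0 / fs
times = jittered_times(8, fs, sigma_err=0.01 * ts)   # jitter << Ts
```

The key difference from t(k) = k/Fs + k*err is that the perturbation here does not grow with the sample index, so the sampling instants never drift arbitrarily far from the nominal grid.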
Reply by Fred Marshall●December 13, 2006
"aamer" <raqeebhyd@yahoo.com> wrote in message
news:962dnRmm9qIOeOLYnZ2dnUVZ_vamnZ2d@giganews.com...
> Dear all,
>
> Thanks for the prompt messages. This is exactly what I am doing:
>
> t(k) = k/Fs + k*err
> n = 1:256
> y(k) = sum( x(n) .* sinc((t(k) - n*Ts)/Ts) );
>
> where k runs from 1 to 256, Fs is the sampling frequency, and Ts is the
> sampling period. x(n) is the input complex data, t(k) is the sampling
> instant, and y(k) is the resultant complex data after sampling.
>
> When err is 0, which means no sampling error, I get x(n) = y(n).
> When err is non-zero, the sampling time changes with the factor k, as
> t(1) = 1/Fs + 1*err,
> t(2) = 2/Fs + 2*err, and so on.
>
> The factor k*err increases at each step, which I have assumed models the
> jitter that changes the sampling instant.
>
> For k*err < Ts/2 the interpolation has no error: when the interpolated
> data y(k) is plotted as a line and the actual data x(n) as circles, the
> circles lie exactly on the line.
>
> The problem I am running into is that when k*err > Ts/2, the circles
> (x(n)) no longer lie on the line (y(k)), which means the interpolated
> data is not accurate and needs to be corrected. Why is this, and how
> could I solve this problem?
>
> Thanks in advance
> Aamer
Aamer,
I still don't quite understand what you're trying to accomplish.
Reading the expressions, here is what I see being done:
I guess I will have to assume that Ts = 1/Fs, since you haven't said so.
So, if that's correct:
t(k) = k/Fs + k*err = k*Ts + k*err
Oops! I don't think you want k*err here; I think you want err(k) instead,
for jitter.
Then:
y(k) = sum( x(n) * sinc((k*Ts + k*err - n*Ts)/Ts) )
The parentheses were messed up in your post; with err(k) this becomes:
y(k) = sum{ x(n) * sinc((k*Ts + err(k) - n*Ts)/Ts) }
1) You get a set of equispaced samples x(n).
2) You interpolate those samples with a sinc if err(all k)=0.
So far so good. This is equivalent to running the samples through a perfect
lowpass filter (to the extent that the sincs are of infinite extent at
least).
However, when you jitter the time you are also jittering the position of the
sincs. I really don't know if that's your objective or not.
I wonder if your objective isn't to do this:
First, I'm going to assume that k=G*n where G is some large integer so that
there are G times the output samples compared to the number of input
samples - giving some discrete times on which the jittered output samples
can occur.
y(k) = sum{ x(n) * sinc((t - n*Ts)/Ts) } over all "n", evaluated at time t = k*Ts/G + err(k)
That would simulate jitter of a sequence of samples, lowpassed back to
continuous time and then resampled with jitter.
If there are no great errors or just poor notation on my part here perhaps
this will give you an idea.
It would be good to do a couple of things if I may suggest:
1) Get k and n well defined so that one can tell better their relative
values.
2) Draw a block diagram so that one can tell better what your objective
really is. Here is the block diagram for what I was trying to do above:
+-----------+     +-------------+     +---------------+
|           |     |             |     |               |
|  Regular  |     |    Sinc     |     |  Sample on    |
|  samples  |---->| interpolate |---->|  n*Ts+err(k)  |---> re-interpolate
|  on Ts    |     |  on Ts/G    |     |   (jitter)    |      with sincs?
|           |     |  fine grid  |     |               |
+-----------+     +-------------+     +---------------+
err(k) is an integer multiple of Ts/G and
is much less than Ts.
Fred
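Fred's pipeline can be sketched in code (Python rather than MATLAB; the function name, the choice G = 16, the test tone, and restricting the jitter to integer fine-grid steps are all illustrative assumptions, and the sinc sum is truncated to the available samples):

```python
import math

def sinc(x):
    # normalized sinc, sin(pi*x)/(pi*x)
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def resample_with_jitter(x, G, err_idx):
    """Sinc-interpolate the regular samples x (spacing Ts = 1) onto a fine
    grid of spacing Ts/G, then read off one jittered sample per original
    sample.  err_idx[k] is the jitter of output k in fine-grid steps: an
    integer with |err_idx[k]| << G, as in the note under the diagram."""
    N = len(x)
    y = []
    for k in range(N):
        t = k + err_idx[k] / G            # jittered time, in units of Ts
        y.append(sum(x[n] * sinc(t - n) for n in range(N)))  # finite sum
    return y

x = [math.cos(2 * math.pi * 0.05 * n) for n in range(32)]
y0 = resample_with_jitter(x, G=16, err_idx=[0] * 32)   # no jitter
y1 = resample_with_jitter(x, G=16, err_idx=[1] * 32)   # one fine step
```

With err_idx all zero the sinc taps collapse to a delta at each integer time, so y0 reproduces x; even a constant one-step jitter already perturbs every output sample slightly.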
Reply by aamer●December 13, 2006
>
>
>aamer wrote:
>> Hi all,
>>
>> I am modelling sampling jitter in MATLAB using sinc interpolation. The
>> input is a vector of 256 complex data points (16QAM symbols). I have
>> introduced jitter in the sampling time as
>>
>> t = k/Fs + k*err
>>
>> where Fs is the sampling frequency, k is the symbol index, and t is the
>> new sampling instant. I have assumed ideal sampling when err is 0. But
>> when err is non-zero and the product k*err exceeds Ts/2 (i.e. k*err >
>> Ts/2), I found that the interpolated data is not quite accurate: when
>> plotted, the input symbols and the interpolated symbols do not align.
>> What might be the reason? Is it due to inaccuracy of the sinc
>> interpolation, and how could I overcome this problem?
>>
>> Thanks in advance.
>>
>>
>
>Aamer,
>
>What I guess you mean is that you are using bandlimited interpolation of
>this type:
>
>x(t) = sum_-inf^inf x[n] h_s(t-n/F_0)
>
>Where h_s is a sinc function and you are randomly perturbing t to
>simulate the effects of jitter in the ADC? The problem you are running
>into, when the jitter (clock phase noise) gets too big, is that
>aliasing occurs when the sampling interval becomes too long to properly
>sample all the frequencies in the signal.
>
>The solution is to oversample your signal and adjust your simulation
>parameters to ensure that this does not have an effect on the
>performance estimate that your simulation is giving you.
>
>Are you using the entire sinc(), or are you windowing it or (hopefully
>not) just truncating it? Do you assume x[n] to be zero for n outside
>your samples? Is that a valid assumption?
>
>Regards
>
>Marc Brooker
>
Dear all,
Thanks for the prompt messages. This is exactly what I am doing:
t(k) = k/Fs + k*err
n = 1:256
y(k) = sum( x(n) .* sinc((t(k) - n*Ts)/Ts) );
where k runs from 1 to 256, Fs is the sampling frequency, and Ts is the
sampling period. x(n) is the input complex data, t(k) is the sampling
instant, and y(k) is the resultant complex data after sampling.
When err is 0, which means no sampling error, I get x(n) = y(n).
When err is non-zero, the sampling time changes with the factor k, as
t(1) = 1/Fs + 1*err,
t(2) = 2/Fs + 2*err, and so on.
The factor k*err increases at each step, which I have assumed models the
jitter that changes the sampling instant.
For k*err < Ts/2 the interpolation has no error: when the interpolated
data y(k) is plotted as a line and the actual data x(n) as circles, the
circles lie exactly on the line.
The problem I am running into is that when k*err > Ts/2, the circles
(x(n)) no longer lie on the line (y(k)), which means the interpolated
data is not accurate and needs to be corrected. Why is this, and how
could I solve this problem?
Thanks in advance
Aamer
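For reference, the posted loop can be made runnable as follows (a Python sketch rather than MATLAB, using 0-based k and n and truncating the sinc sum to the available samples; N is reduced to 64 and fs = 1 is assumed just to keep the example small):

```python
import math

def sinc(x):
    # normalized sinc, sin(pi*x)/(pi*x)
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def interpolate(x, fs, err):
    """The posted loop: t(k) = k/Fs + k*err, then
    y(k) = sum_n x(n) * sinc((t(k) - n*Ts)/Ts), with Ts = 1/Fs and the
    sum truncated to the N available samples."""
    ts = 1.0 / fs
    N = len(x)
    y = []
    for k in range(N):
        t_k = k / fs + k * err
        y.append(sum(x[n] * sinc((t_k - n * ts) / ts) for n in range(N)))
    return y

fs = 1.0                                           # assumed, so Ts = 1
x = [math.sin(2 * math.pi * 0.1 * n) for n in range(64)]
y = interpolate(x, fs, err=0.0)                    # err = 0: y equals x
```

With err != 0 the accumulating k*err eventually shifts t(k) past the neighbouring samples, which is where the circles leave the line.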
Reply by Marc Brooker●December 13, 2006
aamer wrote:
> Hi all,
>
> I am modelling sampling jitter in matlab using sinc interpolation. The
> input is a vector of 256 complex data(16QAM symbols). Have introduced
> jitter in sampling time as
>
> t=k/Fs+k*err
>
> where Fs is sampling frequency, k is the symbol index, t is the new
> sampling instant. I have assumed ideal sampling when err is 0. But, when
> err is non zero and the product k*err exceeds Ts/2 (i.e k*err>Ts/2) I
> found that the interpolated data is not quite accurate, when plotted the
> input symbols and the interpolated symbols do not align. What might be the
> reason??? Is it due to inaccuracy of sinc interpolation and how could I
> overcome this problem.
>
> Thanks in advance.
>
>
Aamer,
What I guess you mean is that you are using bandlimited interpolation of
this type:
x(t) = sum_{n=-inf}^{inf} x[n] h_s(t - n/F_0)
Where h_s is a sinc function and you are randomly perturbing t to
simulate the effects of jitter in the ADC? The problem you are running
into, when the jitter (clock phase noise) gets too big, is that
aliasing occurs when the sampling interval becomes too long to properly
sample all the frequencies in the signal.
The solution is to oversample your signal and adjust your simulation
parameters to ensure that this does not have an effect on the
performance estimate that your simulation is giving you.
Are you using the entire sinc(), or are you windowing it or (hopefully
not) just truncating it? Do you assume x[n] to be zero for n outside
your samples? Is that a valid assumption?
Regards
Marc Brooker
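The benefit of oversampling follows directly from how a timing error scales with frequency: the same offset perturbs a fast tone much more than a slow one. A small sketch (Python; the two frequencies and the offset d = 0.1 samples are arbitrary assumptions):

```python
import math

def jitter_error(f_norm, d, N=64):
    """Worst-case sample error when cos(2*pi*f_norm*t) is read at the
    shifted instants t = n + d instead of t = n (times in samples,
    f_norm in cycles per sample)."""
    return max(abs(math.cos(2 * math.pi * f_norm * (n + d))
                   - math.cos(2 * math.pi * f_norm * n))
               for n in range(N))

e_low = jitter_error(0.05, d=0.1)    # well-oversampled tone
e_high = jitter_error(0.45, d=0.1)   # tone close to Nyquist
```

e_high comes out much larger than e_low, since the error is roughly the timing offset times the signal slope: keeping the signal content well below Fs/2 makes the simulation correspondingly insensitive to timing error.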
Reply by Ron N.●December 12, 2006
aamer wrote:
> Hi all,
>
> I am modelling sampling jitter in matlab using sinc interpolation. The
> input is a vector of 256 complex data(16QAM symbols). Have introduced
> jitter in sampling time as
>
> t=k/Fs+k*err
>
> where Fs is sampling frequency, k is the symbol index, t is the new
> sampling instant. I have assumed ideal sampling when err is 0. But, when
> err is non zero and the product k*err exceeds Ts/2 (i.e k*err>Ts/2) I
> found that the interpolated data is not quite accurate, when plotted the
> input symbols and the interpolated symbols do not align. What might be the
> reason??? Is it due to inaccuracy of sinc interpolation and how could I
> overcome this problem.
How many samples are you using for each sinc interpolation?
A sinc function has infinite extent and significant energy well
away from the center lobe. Are you windowing your sinc?
IMHO. YMMV.
--
rhn A.T nicholson d.0.t C-o-M
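Ron's point about tail energy is easy to quantify: at a half-sample offset the sinc taps decay only like 1/n, so the weight outside a truncation is not small. A quick check (Python; the +/-16-sample half-width and the summation horizon are arbitrary assumptions):

```python
import math

def sinc(x):
    # normalized sinc, sin(pi*x)/(pi*x)
    return math.sin(math.pi * x) / (math.pi * x) if x else 1.0

# Absolute tap weight at a half-sample offset (where the taps are largest):
# what a truncation to +/-16 samples keeps, versus what it throws away on
# one side out to a large but finite horizon.
kept = sum(abs(sinc(0.5 - n)) for n in range(-16, 17))
discarded = sum(abs(sinc(0.5 - n)) for n in range(17, 100000))
```

The discarded weight is of the same order as the kept weight, and because the tail falls off like 1/n it keeps growing (logarithmically) as the horizon extends: this is why an unwindowed, hard-truncated sinc can visibly corrupt the interpolation.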
Reply by Fred Marshall●December 12, 2006
"aamer" <raqeebhyd@yahoo.com> wrote in message
news:ztmdnV0iqL5a2uLYnZ2dnUVZ_rqhnZ2d@giganews.com...
> Hi all,
>
> I am modelling sampling jitter in matlab using sinc interpolation. The
> input is a vector of 256 complex data(16QAM symbols). Have introduced
> jitter in sampling time as
>
> t=k/Fs+k*err
>
> where Fs is sampling frequency, k is the symbol index, t is the new
> sampling instant. I have assumed ideal sampling when err is 0. But, when
> err is non zero and the product k*err exceeds Ts/2 (i.e k*err>Ts/2) I
> found that the interpolated data is not quite accurate, when plotted the
> input symbols and the interpolated symbols do not align. What might be the
> reason??? Is it due to inaccuracy of sinc interpolation and how could I
> overcome this problem.
>
> Thanks in advance.
>
>
First, it would be good to better define what you're doing.
"Modeling sampling jitter using sinc interpolation" is pretty vague.
Are you trying to model an analog system by using nearly perfect sinc
interpolation and *then* jittering the sample times thereafter?
What do you mean that the input symbols and the interpolated symbols do not
align? What is Ts? etc.
Fred
Reply by aamer●December 12, 2006
Hi all,
I am modelling sampling jitter in MATLAB using sinc interpolation. The
input is a vector of 256 complex data points (16QAM symbols). I have
introduced jitter in the sampling time as
t = k/Fs + k*err
where Fs is the sampling frequency, k is the symbol index, and t is the
new sampling instant. I have assumed ideal sampling when err is 0. But
when err is non-zero and the product k*err exceeds Ts/2 (i.e. k*err >
Ts/2), I found that the interpolated data is not quite accurate: when
plotted, the input symbols and the interpolated symbols do not align.
What might be the reason? Is it due to inaccuracy of the sinc
interpolation, and how could I overcome this problem?
Thanks in advance.