
Oversampling and quantization...

Started by kl31n December 2, 2006
Hopefully some of you can point me to the solution of this problem.

If I oversample a signal, quantize it, and then reduce the sampling rate
with a decimator by a factor M, I can improve the QSNR by a factor of
log2(M). Nice and easy. But what if I lower the sampling rate using a
fitting technique instead? I mean, if I know the form of the signal and I
retrieve some parameters from it using a fitting technique, how can I
estimate the accuracy with which I know those parameters?

I'll give an example to explain my question as concisely as possible.

Let's take an order-0 ADC sampling at 10 MS/s. The input signal is a
perfect (THD-wise) tone at 1 MHz, phase modulated by a perfect tone at
1 kHz, and the only noise present is additive white Gaussian noise.
Let's say that I want to retrieve the 1 kHz tone, and I want to do it
using a least-squares fit.

What I do is simply to solve, in the least-squares sense, the
overdetermined linear system

[cos(2*pi*f*0)         sin(2*pi*f*0)      ]   [a1]   [input(0)      ]
[  ...                   ...              ] * [  ] = [...           ]
[cos(2*pi*f*(M-1)*T)   sin(2*pi*f*(M-1)*T)]   [a2]   [input((M-1)*T)]

where f indicates the frequency of the tone (1 MHz), T the sampling
period (100 ns), and M how many samples I use in the fitting.
From a1 and a2 I can retrieve the phase of the input signal, and by
repeatedly solving such a system with the next M samples, on and on, I
can track the phase for the whole time I have the input.
Clearly the phase is known at (1e7/M) samples per second, but with how many
actual bits do I know it? Did I gain log2(M)? What happened?
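
For concreteness, here is a minimal numpy sketch of the fit I have in
mind (the parameter values are the ones above; the code itself is just
illustrative):

import numpy as np

fs = 10e6            # sampling rate, 10 MS/s
f0 = 1e6             # carrier frequency, 1 MHz
T = 1.0 / fs
M = 1000             # samples per block (spans whole carrier cycles)

def block_phase(x, f, T):
    # Fit x[n] ~ a1*cos(2*pi*f*n*T) + a2*sin(2*pi*f*n*T) in the LS
    # sense and return the phase of the tone over this block.
    n = np.arange(len(x))
    A = np.column_stack([np.cos(2*np.pi*f*n*T), np.sin(2*np.pi*f*n*T)])
    (a1, a2), *_ = np.linalg.lstsq(A, x, rcond=None)
    # a1*cos(w t) + a2*sin(w t) = R*cos(w t - phi), phi = atan2(a2, a1)
    return np.arctan2(a2, a1)

# Toy input: carrier phase-modulated at 1 kHz plus white Gaussian noise.
rng = np.random.default_rng(0)
n = np.arange(10 * M)
x = np.cos(2*np.pi*f0*n*T + 0.1*np.sin(2*np.pi*1e3*n*T)) \
    + 0.01 * rng.standard_normal(n.size)
phases = [block_phase(x[k:k+M], f0, T) for k in range(0, n.size, M)]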

Thanks in advance to whoever can help me solve this problem, as it's
driving me crazy.

Best regards,

kl31n

kl31n wrote:

> If I oversample a signal, quantize it, and then reduce the sampling
> rate with a decimator by a factor M, I can improve the QSNR by a
> factor of log2(M). Nice and easy. But what if I lower the sampling
> rate using a fitting technique instead?
> [...]
> Clearly the phase is known at (1e7/M) samples per second, but with how
> many actual bits do I know it? Did I gain log2(M)? What happened?
If you calculate the maximum likelihood estimate by finding the best fit
using M samples, then it is equivalent to the reduction of the noise
bandwidth by M times. Consequently, your accuracy is sqrt(M) times
better. If you need the accuracy of phases/amplitudes separately, then
take the partial derivatives.

Note: this assumes the SNR in the narrow bandwidth is well above 1. If
it is not, the maximum likelihood solution is unstable and it gets
really nasty there. It would be difficult to find the solution in
closed form.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
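
A quick way to check the sqrt(M) behaviour numerically is to repeat the
least-squares fit on many noisy blocks of different lengths and watch
the spread of the phase estimates. A minimal numpy sketch (block
lengths and noise level are arbitrary):

import numpy as np

rng = np.random.default_rng(1)

def phase_std(M, trials=2000, noise_std=0.1):
    # Std of the LS phase estimate over many noisy blocks of length M.
    n = np.arange(M)
    w = 2 * np.pi * 0.1      # 1 MHz at 10 MS/s -> 0.1 cycles per sample
    A = np.column_stack([np.cos(w * n), np.sin(w * n)])
    clean = np.cos(w * n)    # true phase is 0
    est = []
    for _ in range(trials):
        x = clean + noise_std * rng.standard_normal(M)
        (a1, a2), *_ = np.linalg.lstsq(A, x, rcond=None)
        est.append(np.arctan2(a2, a1))
    return np.std(est)

# Quadrupling M should roughly halve the spread (sqrt(M) improvement).
for M in (100, 400, 1600):
    print(M, phase_std(M))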
On Sun, 03 Dec 2006 03:00:01 GMT, Vladimir Vassilevsky wrote:

> If you calculate the maximum likelihood estimate by finding the best
> fit using M samples, then it is equivalent to the reduction of the
> noise bandwidth by M times. Consequently, your accuracy is sqrt(M)
> times better. If you need the accuracy of phases/amplitudes
> separately, then take the partial derivatives.
Thank you very much for your answer, it's exactly what I needed. Could
you point me to any reference in which the MLE is shown to have such a
property?

Thank you again,

kl31n

kl31n wrote:

> On Sun, 03 Dec 2006 03:00:01 GMT, Vladimir Vassilevsky wrote:
>
>> If you calculate the maximum likelihood estimate by finding the best
>> fit using M samples, then it is equivalent to the reduction of the
>> noise bandwidth by M times. Consequently, your accuracy is sqrt(M)
>> times better.
>
> Thank you very much for your answer, it's exactly what I needed. Could
> you point me to any reference in which the MLE is shown to have such a
> property?
Any classic book like Viterbi & Omura or Van Trees should have that.

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com
Vladimir's answer is very good, but note that you are comparing a
parametric method (with a signal model) against a non-parametric method
(knowing only that the signal may be bandlimited). Since the wanted
parameters in your signal appear linearly, you are in luck and can
easily apply least squares. If a parameter appears non-linearly, for
example if you do not know the frequency, then it's a bit more
complicated.

Like Vladimir said, the solution to the linear problem is a basic
estimation theory problem, so any textbook would cover it.
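
For a sense of what the nonlinear case involves: a common workaround
when the frequency is unknown is a coarse grid search, solving the
linear least-squares problem at each candidate frequency and keeping
the one with the smallest residual. A minimal numpy sketch (the grid
and function names are illustrative):

import numpy as np

def fit_unknown_freq(x, T, f_grid):
    # For each candidate frequency solve the *linear* LS problem, then
    # keep the frequency whose fit leaves the smallest residual power.
    n = np.arange(len(x))
    best_f, best_res = None, np.inf
    for f in f_grid:
        A = np.column_stack([np.cos(2*np.pi*f*n*T),
                             np.sin(2*np.pi*f*n*T)])
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        res = np.sum((x - A @ coef) ** 2)
        if res < best_res:
            best_f, best_res = f, res
    return best_f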

"julius" <juliusk@gmail.com> writes:

> Like Vladimir said, the solution to the linear problem is a basic
> estimation theory problem, so any textbook would cover it.
Even if the OP could guess a few texts that would cover it, why not
provide a reference? If anything, it shows what others have been
reading lately and implies some favor.

--
%  Randy Yates                  % "How's life on earth?
%% Fuquay-Varina, NC            %  ... What is it worth?"
%%% 919-577-9882                % 'Mission (A World Record)',
%%%% <yates@ieee.org>           % *A New World Record*, ELO
http://home.earthlink.net/~yatescr
Randy Yates wrote:

> Even if the OP could guess a few texts that would cover it, why not
> provide a reference? If anything, it shows what others have been
> reading lately and implies some favor.
For basic linear estimation the Van Trees book is good, and I like the
Steven Kay books.

For the nonlinear, I like the Stoica book.
"julius" <juliusk@gmail.com> writes:

> Randy Yates wrote:
>
>> Even if the OP could guess a few texts that would cover it, why not
>> provide a reference?
>
> For basic linear estimation the Van Trees book is good, and I like
> the Steven Kay books.
>
> For the nonlinear, I like the Stoica book.
Julius,

Did you see this which I posted a few days/weeks back? If so, why are
you being so "anonymous"?

Is this *THE* Julius Kusuma? Hey man, how's it going?! GREAT to see you
back in comp.dsp. If you wanted to fill us in on your wanderings of
late, we'd love to hear about it. Last I knew you were at MIT, but I'm
thinking you've finished your PhD now.

--Randy
--
%  Randy Yates                  % "Remember the good old 1980's, when
%% Fuquay-Varina, NC            %  things were so uncomplicated?"
%%% 919-577-9882                % 'Ticket To The Moon'
%%%% <yates@ieee.org>           % *Time*, Electric Light Orchestra
http://home.earthlink.net/~yatescr
On Sun, 03 Dec 2006 23:39:07 GMT, Vladimir Vassilevsky wrote:

> kl31n wrote:
>
>> Could you point me to any reference in which the MLE is shown to
>> have such a property?
>
> Any classic book like Viterbi & Omura or Van Trees should have that.
I couldn't find a discussion of the behaviour of the noise bandwidth,
in terms that can be exploited to deduce what you said, in the Van
Trees books (the Viterbi & Omura I don't have).

Let's make it even simpler: assume that I solved the system I reported
in my first message by a total least squares approach with the
Moore-Penrose pseudoinverse. Such estimates can be proven to be minimum
variance unbiased ones, so I cannot ask for better. But I still cannot
see why the result is equivalent to reducing the noise bandwidth by M
times. I guess it's me who cannot see the obvious, but if you'll be
kind enough to help me see it, that would be much appreciated.

Thanks in advance for your patience,

kl31n
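
A sketch of the variance argument, in the notation of the first post
(assuming ordinary least squares and white noise; this is a
reconstruction, not a quote from the books above): for
input = A*a + noise, with noise of variance sigma^2, the coefficient
covariance is

    cov(a_hat) = sigma^2 * inv(A'*A)

and with the cos/sin columns spanning whole carrier cycles

    A'*A ~= (M/2)*I   =>   var(a1), var(a2) ~= 2*sigma^2/M

The error power in the coefficients falls as 1/M, exactly as if the
noise bandwidth had been cut by a factor of M, so the standard
deviation (and hence the phase error) improves as sqrt(M).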
On Sun, 3 Dec 2006 00:07:12 +0100, kl31n wrote:

> If I oversample a signal, quantize it, and then reduce the sampling
> rate with a decimator by a factor M, I can improve the QSNR by a
> factor of log2(M).
Writing this I forgot to put (1/2) in front of the log2 function, so the
QSNR is actually improved by (1/2)*log2(M) = log2(sqrt(M)) bits. I guess
the mistake was evident enough and that nobody fell for it, but to avoid
useless confusion I'd rather make the thread a bit more chaotic and
answer myself with this correction.

kl31n
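
(As a quick sanity check of the corrected formula: decimating by M = 16
gains (1/2)*log2(16) = 2 bits of QSNR, matching the usual rule of thumb
of half a bit, i.e. about 3 dB, per octave of oversampling.)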