# Quantized least squares estimator and errors...

Started by kl31n on January 24, 2007
I'll present my problem with an example.

Let's assume I have an ADC sampling a signal and providing an m x 1 column
of samples that can be represented as

b + w

where b is the vector of the exact values of the signal and w is the
zero-mean error due to thermal noise and quantization.

The acquired signal can be modelled as

A * x = b + w

and I want to estimate the parameters in the vector x. Let's assume for a
moment that there is no error in A. This means that I can obtain the MVU
estimator as

\hat{x} = ((A' * A)^-1 * A') * (b + w)

and it also means that if the covariance matrix of the vector w is \sigma^2
* I, I being the identity matrix, then the covariance matrix of the error
affecting the output is

\sigma^2 * (A' * A)^-1.
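To make this concrete, here's a quick numpy sketch (all the sizes, seeds and
values are made up for illustration) that applies the least-squares estimator
to many noise realizations and checks the covariance formula by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 64, 3                          # m samples, n parameters (made-up sizes)
A = rng.standard_normal((m, n))       # hypothetical full-rank model matrix
x_true = np.array([1.0, -0.5, 2.0])   # illustrative parameter vector
b = A @ x_true                        # exact signal
sigma2 = 0.01                         # noise variance: Cov(w) = sigma2 * I

# Least-squares estimator \hat{x} = inv(A'A) A' (b + w), applied to many
# noise realizations to check Cov(\hat{x}) = sigma2 * inv(A'A).
pinv = np.linalg.inv(A.T @ A) @ A.T
trials = 20000
W = np.sqrt(sigma2) * rng.standard_normal((trials, m))  # rows are draws of w
err = W @ pinv.T                      # each row: \hat{x} - x_true = pinv @ w

cov_empirical = err.T @ err / trials
cov_theory = sigma2 * np.linalg.inv(A.T @ A)
print(np.abs(cov_empirical - cov_theory).max())   # small
```

The empirical covariance of the estimation error matches \sigma^2 * (A'A)^-1
up to Monte Carlo error.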

So far so good. If, on the other hand, A is affected by errors and the
least squares approach is still applied, the error bound is known and has
been studied. Again, so far so good.

Let's assume now instead that A is exact and that its Moore-Penrose
pseudoinverse A# is calculated in double precision and then quantized,
so that it can be represented as (A# + An), An being the quantization
error. This means that

(A# + An) * A * x = (A# + An) * (b + w);
(I + An * A) * x = A# * (b + w) + An * (b + w);

which implies that the estimates are now polluted by the additive term

An * (b + w).

What I'm interested in is the statistics of this term, to see whether it
pollutes the measurements in a "colored" way. The term b represents a
sinusoidal signal acquired from a laser, and m is chosen so that the
estimation is done over an integer multiple of the sinusoid's period. I
know that An * b (because of the chosen m) and An * w are generally
negligible, but since time averaging is performed on the final estimates to
search for colored noise, I'd like to understand whether I'm introducing
some myself with my estimator.
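One way to look at the statistics of this term empirically is to build a
toy version of the setup. The sketch below (the record length, number of
cycles, quantization step and parameter values are all invented for
illustration) quantizes A# to a fixed-point grid and collects An * (b + w)
over many noise draws:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: m samples over an integer number of sinusoid periods,
# 3-parameter model (cosine amplitude, sine amplitude, offset).
m = 1000
cycles = 8                                  # integer number of periods in the record
t = np.arange(m)
A = np.column_stack([np.cos(2 * np.pi * cycles * t / m),
                     np.sin(2 * np.pi * cycles * t / m),
                     np.ones(m)])
Apinv = np.linalg.pinv(A)                   # A# in double precision

# Quantize A# to an assumed fixed-point grid; An is the quantization error.
q = 2.0 ** -12                              # assumed quantization step
Apinv_q = np.round(Apinv / q) * q
An = Apinv_q - Apinv

x_true = np.array([0.7, -0.3, 0.1])         # illustrative parameters
b = A @ x_true
sigma = 1e-3                                # assumed noise std

# Statistics of the additive term An @ (b + w) over many noise draws.
trials = 5000
W = sigma * rng.standard_normal((trials, m))
terms = (b + W) @ An.T                      # row k is An @ (b + w_k)

print("mean:", terms.mean(axis=0))          # dominated by the fixed bias An @ b
print("std: ", terms.std(axis=0))           # spread coming from An @ w
```

Since An is fixed once the pseudoinverse is quantized, An * b is a
deterministic bias and only An * w fluctuates from record to record; the
interesting question is then how that bias and the correlation structure of
An * w interact with the time averaging.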

Could anybody please provide some insight on the problem?

kl31n


On Jan 24, 9:14 pm, kl31n <"kl31n(get rid of this to write me
back)"@gmail.com> wrote:
[snip]
> pseudoinverse A# is calculated in double precision and then quantized,
> so that it can be represented as (A# + An), An being the quantization
> error. This means that
>
> (A# + An) * A * x = (A# + An) * (b + w);
> (I + An * A) * x = A# * (b + w) + An * (b + w);
>
> which implies that now the estimates are polluted by the additive term
>
> An * (b + w).
>
> What I'm interested in is the statistics of this term, to see whether it
> pollutes the measurements in a "colored" way. The term b represents a
> sinusoidal signal acquired from a laser, and m is chosen so that the
> estimation is done over an integer multiple of the sinusoid's period. I
> know that An * b (because of the chosen m) and An * w are generally
> negligible, but since time averaging is performed on the final estimates
> to search for colored noise, I'd like to understand whether I'm
> introducing some myself with my estimator.

I'm not sure if this is good enough for you, but my first try would be to
look at A and An, and look at the correlation between the rows of the
matrices.  This will give an idea of what correlation it will induce on the
residual estimation error (A + An)*w.
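A minimal numpy sketch of that check (the model matrix and quantization
step here are made up, not taken from your setup) might look like:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative model matrix and its pseudoinverse.
m, n = 256, 3
A = rng.standard_normal((m, n))
Apinv = np.linalg.pinv(A)

# Assumed fixed-point quantization of the pseudoinverse; An is the error.
q = 2.0 ** -10
An = np.round(Apinv / q) * q - Apinv

# Correlation between the n rows of An: off-diagonal entries near zero
# suggest the error components induced on the estimate are roughly
# uncorrelated with each other.
C = np.corrcoef(An)
print(C)
```

The same call applied to the rows of A# itself shows how much structure the
quantization error inherits from the matrix being quantized.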

Are you sure that your assumption that the quantization error is zero-mean
is good enough?  It's hard for me to imagine a case where the quantization
error is close enough to zero-mean but not close enough to be considered
iid and orthogonal, unless you are working with very small block sizes and
a large dynamic range in your numbers.

Good question, by the way!

Julius


On 24 Jan 2007 13:10:55 -0800, julius wrote:

> I'm not sure if this is good enough for you, but my first try would be to
> look at A and An, and look at the correlation between the rows of the
> matrices.  This will give an idea of what correlation it will induce on
> the residual estimation error (A + An)*w.

First of all, thanks for your reply. I don't understand what you mean: An
is a matrix of iid terms. By A I guess you meant A#, but in either case
neither A# nor A is what concerns me. The additive term is An * (b + w).

> Are you sure that your assumption that the quantization error is
> zero-mean is good enough?  It's hard for me to imagine a case where the
> quantization error is close enough to zero-mean but not close enough to
> be considered iid and orthogonal, unless you are working with very small
> block sizes and a large dynamic range in your numbers.

Well, it's zero-mean and iid, of course, but I still don't see things
getting much simpler.

> Good question, by the way!

Thanks :)

> Julius

kl31n