# non-uniform quantization problem?

Started July 21, 2005
Hi all,
I want to quantize Gaussian-distributed data using non-uniform
quantization, and I have some problems:

How do I determine the optimal quantization levels? Is there any
reference?

How do I add two non-uniformly quantized values?
Should I convert them back to uniformly quantized data first?

How do I multiply a non-uniformly quantized value by a constant?

Any suggestions will be appreciated!
Best regards,
Davy


"Richard" <zhushenli@gmail.com> wrote in message
> Hi all,
> I want to quantize Gaussian-distributed data using non-uniform
> quantization, and I have some problems:
>
> How do I determine the optimal quantization levels? Is there any
> reference?
>
> How do I add two non-uniformly quantized values?
> Should I convert them back to uniformly quantized data first?
>
> How do I multiply a non-uniformly quantized value by a constant?
>
> Any suggestions will be appreciated!

If there are numerical tricks that might help, I'm not aware of any.  That
doesn't mean much...
Did you Google for it?  There are quite a few hits like:
http://www.iccd-conference.org/proceedings/1998/90990286.pdf

You have to declare your own optimization criterion before "optimum" makes
any sense.

On adding: well, adding logarithms is multiplicative, isn't it?  And
non-uniform quantization can be like taking a log.

On multiplication by a constant:  convert the constant to a log and add?

Fred



Fred Marshall wrote:
> "Richard" <zhushenli@gmail.com> wrote in message
> > Hi all,
> > I want to quantize Gaussian-distributed data using non-uniform
> > quantization, and I have some problems:
> >
> > How do I determine the optimal quantization levels? Is there any
> > reference?
> >
> > How do I add two non-uniformly quantized values?
> > Should I convert them back to uniformly quantized data first?
> >
> > How do I multiply a non-uniformly quantized value by a constant?
> >
> > Any suggestions will be appreciated!
>
> If there are numerical tricks that might help, I'm not aware of any.  That
> doesn't mean much...
> Did you Google for it?  There are quite a few hits like:
> http://www.iccd-conference.org/proceedings/1998/90990286.pdf
>
> You have to declare your own optimization criterion before "optimum" makes
> any sense.
>
> On adding: well, adding logarithms is multiplicative, isn't it?  And
> non-uniform quantization can be like taking a log.
>
> On multiplication by a constant:  convert the constant to a log and add?
>
> Fred

Also, mu-law is a common type of non-uniform quantization.  You may be
able to search on the term "mu-law encoder".
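As an aside, mu-law companding is simple enough to sketch in a few lines of Python (a minimal sketch, assuming samples normalized to [-1, 1] and mu = 255, the value used in North American telephony):

```python
import math

MU = 255.0  # assumed compression parameter; 255 is the telephony standard

def mu_law_encode(x):
    """Compress a sample in [-1, 1] with mu-law companding."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_decode(y):
    """Expand a compressed sample back to [-1, 1]."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Round trip: small amplitudes get more resolution than large ones.
x = 0.05
assert abs(mu_law_decode(mu_law_encode(x)) - x) < 1e-12
```

In a real codec the compressed value would then be uniformly quantized to 8 bits; the companding curve is what makes the overall quantizer non-uniform.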


Hello Davy,

If you are trying to maximize entropy - i.e., maximize information per
sample - then subdivide the probability distribution into N regions of
equal area. Your quantization then simply assigns the number x to any
sample that falls within region x. As far as adding and multiplying go,
you may apply standard math rules to your distribution and see that in
general you will convert back to linear before adding or multiplying.

IHTH,

Clay
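Clay's equal-area scheme is easy to sketch for a normal distribution using Python's standard-library `NormalDist` (the function names here are my own):

```python
from statistics import NormalDist

def equal_area_boundaries(n, mu=0.0, sigma=1.0):
    """Boundaries splitting the normal density into n regions of equal
    probability mass -- Clay's maximum-entropy criterion."""
    nd = NormalDist(mu, sigma)
    return [nd.inv_cdf(k / n) for k in range(1, n)]

def quantize(x, boundaries):
    """Return the 1-based index of the region containing x."""
    return sum(1 for b in boundaries if x > b) + 1

b = equal_area_boundaries(4)
# For N(0,1) the boundaries are roughly -0.674, 0.0, +0.674, so each
# of the region indices 1..4 is hit with probability 1/4.
```

Because every region index is equally likely, the quantizer output has maximal entropy (log2 N bits per sample).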


Clay wrote:
> Hello Davy,
>
> If you are trying to maximize entropy - i.e., maximize information per
> sample - then subdivide the probability distribution into N regions of
> equal area. Your quantization then simply assigns the number x to any
> sample that falls within region x. As far as adding and multiplying go,
> you may apply standard math rules to your distribution and see that in
> general you will convert back to linear before adding or multiplying.

Great idea! This criterion gives you a very handy means to define
"optimal". I have some problems with the arithmetic, however. Let's say
you want to quantize to N=3 numbers, so you divide the real numbers
into three intervals:

I_1 = ]-infinity, -x_0]
I_2 = ]-x_0, x_0]
I_3 = ]x_0, infinity[

The interval boundaries can be assigned arbitrarily (but consistently),
and the value of x_0 depends only on the variance of the normal
distribution.

Now the quantization returns either 1, 2 or 3, depending on the
interval which contained the original sample. How exactly do you
propose to define addition and multiplication using these three
numbers? And how would you solve this for arbitrary N?

I was thinking of another approach. One chooses N points on the real
line (these are the quantized values, so the arithmetic is already
defined) and quantizes the random variable to these points. The points
are chosen such that the expected value of some function of the
quantization error becomes minimal, for example (if X is the random
variable and X_q is the quantized value) the mean square error

E[ (X-X_q)^2 ]  (this might also lead to an optimal
signal-power-to-quantization-error-power ratio, but I'm not sure).

This is a function from R^N to R+ (R denotes the real numbers). I
worked out some of the details yesterday, which I could post if this
approach is interesting to Davy.

Regards,
Andor
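The MSE criterion Andor describes is easy to estimate numerically; a small Monte Carlo sketch (the function name, sample count, and seed are arbitrary choices of mine):

```python
import random
from statistics import NormalDist

def empirical_mse(points, n_samples=100_000, seed=0):
    """Monte Carlo estimate of E[(X - X_q)^2] for a standard normal X
    quantized to the nearest of the given points."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)
        xq = min(points, key=lambda p: abs(x - p))  # nearest-point rule
        total += (x - xq) ** 2
    return total / n_samples

# A single point at 0 leaves the full unit variance as error;
# two well-placed points do noticeably better.
```

For example, `empirical_mse([0.0])` comes out near 1.0 (the variance of X), and any sensible symmetric pair of points reduces it.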


Fred wrote:

>On adding: well, adding logarithms is multiplicative, isn't it?  And
>non-uniform quantization can be like taking a log.

How can you tell if you don't know the quantization steps?


Hi Andor,

What's the difference between your method and the entropy method? I
guess they are similar. Can you post it? I am very interested in it ;-)

BTW, I think Fred wants to quantize the signal in a logarithmic way. So
whether that is suitable for a Gaussian-distributed signal is an
interesting question.

Thank you all!

Best regards,
Davy


"Andor" <an2or@mailcircuit.com> wrote in message
> Fred wrote:
>
>>On adding: well, adding logarithms is multiplicative, isn't it?  And
>>non-uniform quantization can be like taking a log.
>
> How can you tell if you don't know the quantization steps?
>

Andor,

I was trying to simply create a contextual framework.

Perhaps I should have said:
"nonuniform quantization is usually like taking a log"
or
"nonuniform quantization can be like taking a log"

I suppose it could also be like taking an antilog....

I hope the contextual framework was understandable as it was.

Fred


"Richard" <zhushenli@gmail.com> wrote in message
> Hi Andor,
>
>
> BTW, I think Fred wants to quantize the signal in a logarithmic way. So
> whether that is suitable for a Gaussian-distributed signal is an
> interesting question.

If you have used nonlinear quantization, then in order to do addition
(and subtraction) in the context of the original data you need to
consider the effect of doing it directly with the quantized numbers.
Since we are all familiar with logs, that seems a good way to present
the issue or set a context.

If we want to get fancy or precise then that's another matter.  Then the
nonlinear quantization becomes a mapping or a transform and we could talk
about how addition and other operations have duals in the transform domains.

So, once more as an example *and* in "shorthand":
number set A  <-> number set B (which are logs of A)
multiplication in the A domain <-> addition in the B domain
where <-> means a transform pair ...
(subject to certain constraints like the values in A being positive, etc.)

If one uses some form of nonlinear quantization then the same treatment
could be used.
number set A <-> quantized set B (which are nonlinear quantizations of A)
addition in the A domain <-> what the heck in the B domain????

So, I guess if you're clever you might be able to define a mapping that
allows you to do "addition" of the original numbers by some operation in the
B domain that you can define.  Perhaps someone has done that in particular
cases of nonlinear quantization....

In general, you can't add the nonlinearly quantized values to effect
addition of the original values.  That was my point, which I had hoped
the OP would figure out from the "hints".

Fred
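Fred's transform-pair shorthand is easy to check numerically; a tiny illustration using the log mapping (the sample values are arbitrary):

```python
import math

# Number set A (positive reals) and its transform, set B (the logs of A).
a1, a2 = 3.0, 4.0
b1, b2 = math.log(a1), math.log(a2)

# Multiplication in the A domain has a dual in B: plain addition.
assert math.isclose(math.exp(b1 + b2), a1 * a2)

# Addition in the A domain has no such simple dual: adding the B values
# and transforming back yields the product, not the sum.
assert math.exp(b1 + b2) != a1 + a2
```

The same obstacle appears with any nonlinear quantization: the quantized codes can't simply be added in place of the original values.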


Richard wrote:
> Hi Andor,
>
> What's the difference between your method and the entropy method?

If you have a sequence (X_k) of iid normally distributed random
variables, Clay's quantization results in a sequence (X_q_k), where X_q
is iid uniformly distributed on the set {1, 2, ..., N}. I still haven't
figured out how you are going to mix the X_q with real numbers, so I
can't comment on the quantization error using this method.

> I guess they are
> similar. Can you post it? I am very interested in it ;-)

Let's see: you are allowed to specify N points on the real line, call
them {x_1, ..., x_N}, and you are given a random variable X that is
N(0, 1) distributed (you can transform this to any general normal
distribution if required). The process of quantizing is to choose k
such that

| X - x_k |

is minimal, i.e. q := argmin_{k \in {1, ..., N}}  | X - x_k |. The
quantized value is then the random variable X_q := x_q.  An obvious
criterion for minimization is the mean squared error, i.e. MSE :=
E[ (X - X_q)^2 ]. However, you can use any error weighting function
ew(x) and then minimize E[ ew(|X - X_q|) ] (the MSE is the special case
ew(x) = x^2). Think of this when you calculate the density function
f(x) further down; there might be some ew(x) that makes this
particularly easy.

The task: set the N points so as to minimize the expectation of the
error function. For starters, one can compute the distribution function
of the error.

Notational shortcuts: y_k := (x_{k+1} - x_k) / 2 and
d_k := min( d , y_k )    for k = 1, 2, ..., N-1.

Then the distribution function F: R+ -> [0,1[ of the error is

F(d) := P[ |X-X_q| < d ]
= sum_{k=2}^{N-1} ( \Phi(x_k+d_k)-\Phi(x_k - d_{k-1}) ) +
\Phi(x_1 + d_1) - \Phi(x_1 - d) +
\Phi(x_N + d) - \Phi(x_N - d_{N-1})

where \Phi(x) is the distribution function of the standard normal
distribution. You can now compute the density function f(x) of the
quantization error by

f(x) := d/dx F(x)

which you can calculate yourself by noting that

d/dx \Phi(x_k + d_k) = 1/sqrt(2 Pi) exp( -(x_k + x)^2 / 2 ) 1_{x < y_k}

and similarly for \Phi(x_k - d_{k-1}). The term 1_{x < y_k} means

1_{x < y_k} = 1 if x < y_k, else 0. Once you have computed the
density, you can then compute the integral

E[ ew(|X - X_q|) ] = integral_{-infty}^{infty} ew(x) f(x) dx,

or, specifically, the mean square error MSE as

integral_{-infty}^{infty} x^2 f(x) dx.

Note that the MSE is a function of the points {x_1, ..., x_N}, i.e.

MSE = MSE(x_1, ..., x_N).

Perhaps this can be minimized analytically; otherwise, use some
numerical method.

Regards,
Andor
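One standard numeric method for the MSE case is the classical Lloyd-Max iteration, which alternates between midpoint cell boundaries and cell centroids. A Python sketch for the standard normal (the function names are mine; `statistics.NormalDist` supplies Phi and its inverse):

```python
from math import exp, inf, pi, sqrt
from statistics import NormalDist

nd = NormalDist()  # standard normal

def pdf(x):
    """Standard normal density, with pdf(+/-inf) == 0."""
    return 0.0 if x in (inf, -inf) else exp(-x * x / 2) / sqrt(2 * pi)

def lloyd_max(n, iters=100):
    """Points x_1..x_n minimizing the MSE E[(X - X_q)^2] for X ~ N(0,1),
    found by alternating midpoint boundaries and cell centroids."""
    points = [nd.inv_cdf((k + 0.5) / n) for k in range(n)]  # starting guess
    for _ in range(iters):
        bounds = [-inf] + [(a + b) / 2 for a, b in zip(points, points[1:])] + [inf]
        # Each point moves to the conditional mean of its cell:
        # E[X | lo < X <= hi] = (pdf(lo) - pdf(hi)) / (Phi(hi) - Phi(lo))
        points = [(pdf(lo) - pdf(hi)) / (nd.cdf(hi) - nd.cdf(lo))
                  for lo, hi in zip(bounds, bounds[1:])]
    return points

pts = lloyd_max(2)
# For n=2 this converges to +/- sqrt(2/pi), the known optimum.
```

Note that this minimizes the MSE, whereas Clay's equal-area rule maximizes output entropy; the two criteria generally give different point sets.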