
residual redundancy breakdown

Started by eric...@yahoo.com December 10, 2008
Hi everyone!

Residual redundancy extracted from the parameters of a particular source encoder's output is said to be the result of (1) the nonuniform distribution of values (quantized levels) and (2) the temporal correlation (memory) that exists between successive frames. Using the standard formula, I am able to compute the entropy rate of any particular parameter in question. The residual redundancy can then be computed by subtracting that entropy rate from the number of bits per symbol used for the parameter.

The question now is: how do you actually quantify the separate contributions to the residual redundancy from (1) and (2)? I've seen in the paper by Fazel and Fuja that they were able to separate these values.

I attempted to compute it by estimating the distribution of the possible bit-vector values for a particular parameter: I counted the number of occurrences of each value, converted the counts to relative frequencies, and used those to estimate the entropy. I then subtracted the entropy from the number of bits per symbol. I'm guessing the value I got is the residual redundancy due to (1), but what about (2)? Is there a clear-cut way of getting (1) and (2) separately, without relying on the fact that the two sum to the total residual redundancy? If anyone understands what I'm asking, please feel free to help me out. Thanks!
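To be concrete, here is a small Python sketch of the calculation I just described. The distribution below is made-up toy data (not from my encoder), and the names are only illustrative:

import numpy as np

def marginal_entropy(symbols):
    """Entropy in bits/symbol, estimated from relative frequencies."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Toy example: a 3-bit parameter (M = 3), i.e. 8 quantizer levels,
# drawn from a made-up nonuniform distribution.
M = 3
rng = np.random.default_rng(0)
symbols = rng.choice(2 ** M, size=100_000,
                     p=[0.40, 0.20, 0.10, 0.10, 0.08, 0.06, 0.04, 0.02])

H = marginal_entropy(symbols)
print(f"entropy = {H:.3f} bits/symbol, redundancy due to (1) = {M - H:.3f} bits")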
Hi.

I think the residual redundancy due to (2) can be obtained by computing the entropy rate conditioned on the same parameter's value in neighboring frames. Let x denote the particular parameter in question, and let the time index n = 0 refer to the present frame, so that n = -1, -2, ... index previous frames or subframes and n = +1, +2, ... index future frames. Then the conditional entropy rate is H(x_0 | ..., x_-2, x_-1, x_+1, x_+2, ...). If the symbol consists of M bits, the residual redundancy is M - H(x_0 | ..., x_-2, x_-1, x_+1, x_+2, ...).

However, it is very difficult to compute H(x_0 | ..., x_-2, x_-1, x_+1, x_+2, ...). For simplicity, we can model the parameter as an Nth-order Markov process, where in practice N is restricted to 0, 1, or 2. The conditional entropy rate can then be computed easily.
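For example, here is a rough Python sketch of the first-order (N = 1) case. The split into rho_1 and rho_2 below is one common decomposition (rho_1 = M - H(x_0) for the nonuniform distribution, rho_2 = H(x_0) - H(x_0 | x_-1) for the memory, which sum to M - H(x_0 | x_-1)); I am not claiming it is exactly how Fazel and Fuja define it, and all names and the toy data are illustrative:

import numpy as np

def entropies_order1(symbols, num_levels):
    """Return (H(x_n), H(x_n | x_{n-1})) in bits, from empirical counts."""
    joint = np.zeros((num_levels, num_levels))
    for prev, cur in zip(symbols[:-1], symbols[1:]):
        joint[prev, cur] += 1
    joint /= joint.sum()                       # joint pmf p(x_{n-1}, x_n)
    p_prev = joint.sum(axis=1, keepdims=True)  # marginal p(x_{n-1})
    p_cur = joint.sum(axis=0)                  # marginal p(x_n)
    H1 = -np.sum(p_cur[p_cur > 0] * np.log2(p_cur[p_cur > 0]))
    cond = np.divide(joint, p_prev, out=np.zeros_like(joint), where=p_prev > 0)
    Hc = -np.sum(joint[cond > 0] * np.log2(cond[cond > 0]))
    return H1, Hc

# Toy data: a "sticky" first-order Markov chain over 2**M levels, so the
# sequence has both a nonuniform marginal distribution and memory.
M = 3
num_levels = 2 ** M
rng = np.random.default_rng(0)
stay = 0.7                                 # probability of repeating the last level
base = rng.dirichlet(np.ones(num_levels))  # made-up nonuniform pmf
symbols = [int(rng.choice(num_levels, p=base))]
for _ in range(200_000):
    if rng.random() < stay:
        symbols.append(symbols[-1])
    else:
        symbols.append(int(rng.choice(num_levels, p=base)))
symbols = np.array(symbols)

H1, Hc = entropies_order1(symbols, num_levels)
rho_total = M - Hc   # total residual redundancy under the Markov-1 model
rho_1 = M - H1       # due to (1): nonuniform distribution of levels
rho_2 = H1 - Hc      # due to (2): memory between successive frames
print(f"rho_total = {rho_total:.3f}, rho_1 = {rho_1:.3f}, rho_2 = {rho_2:.3f}")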

Also, I don't think the sum of the redundancies due to (1) and (2) is necessarily the total residual redundancy. In fact, the redundancy due to (1) is intertwined with (2) in a more delicate way. What do you think?
