Reply by Jani Huhtanen●November 18, 2005
abariska@student.ethz.ch wrote:
> Jani Huhtanen wrote:
> ...
>> Coefficients in your case are given by:
>> c = R^-1*r, where r is 10x1 autocorrelation vector and R is the
>> autocorrelation matrix.
>>
>> R = V*D*V^T, where D is a diagonal matrix containing the eigenvalues and
>> V is the matrix containing the eigenvectors of R. V is orthogonal so V^-1
>> = V^T.
>> => c = V*D^-1*V^T*r
>> => V^T*c = D^-1*V^T*r
>> => V^T*c = D^-1*s, where s = V^T*r is a 10x1 vector.
>> => V^T*c isn't correlated
>
> That's interesting, Jani. How did you arrive at that last implication?
>
> Regards,
> Andor
It seems that about a meter of wiring is missing. Sorry about that. To arrive
at the last implication you have to know that V^T will also decorrelate r.
So s is uncorrelated, and an uncorrelated vector multiplied by a diagonal
matrix stays uncorrelated.
But let me try it another way. Consider expressing the square error
like this:
sum(e[n]^2) = sum( ( x[n]-c^T*xvec[n] )^2 ) =
sum( ( x[n]-(c_opt+c_err)^T*xvec[n] )^2 ),
where xvec[n] = [x[n-1], ..., x[n-K]]^T
c_opt is the vector containing the optimal predictor coefficients and c is a
quantized or somehow altered version; c_err = c - c_opt is the difference
between them.
Now it can be shown that if R in my previous post is positive definite, then
the change in the square error sum(e[n]^2) as a function of c_err is given by
delta_e = c_err^T*R*c_err (this follows directly from the above sum with
some algebraic manipulation).
When c_err is zero, c is optimal and the error is the minimum you can obtain
with the current prediction order, so delta_e is zero. Because R is positive
definite, delta_e is positive no matter what nonzero c_err you choose. That
is, the square error increases whenever c_err is not zero.
Now remember that R = V*D*V^T. So if we define v_err = V^T*c_err, then
delta_e = v_err^T*D*v_err. Let's pretend we don't know that the minimum of
delta_e is at c_err = 0: you can find the minimum by minimizing every
component of v_err independently, as opposed to solving them simultaneously
as a group of equations. So clearly v_err is uncorrelated.
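As a quick numerical sanity check, the identity delta_e = c_err^T*R*c_err = v_err^T*D*v_err can be verified with numpy. The autocorrelation values below come from an illustrative AR(1) model (my choice, not from the post):

```python
import numpy as np

# Toy positive definite autocorrelation matrix: lags of an AR(1) process
# with pole 0.6 (illustrative numbers, not from the post).
ac = 0.6 ** np.arange(4)
R = np.array([[ac[abs(i - j)] for j in range(4)] for i in range(4)])

eigvals, V = np.linalg.eigh(R)        # R = V*D*V^T with D = diag(eigvals)
c_err = np.array([0.1, -0.2, 0.05, 0.3])

delta_e = c_err @ R @ c_err           # delta_e = c_err^T*R*c_err
v_err = V.T @ c_err
per_component = eigvals * v_err**2    # v_err^T*D*v_err, term by term
# delta_e equals the sum of the per-component terms, so each component
# of v_err can be minimized on its own.
```

Because the quadratic form separates into per-component terms in the eigenbasis, no cross terms couple the components of v_err.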
c = c_opt + c_err, so V^T*c = V^T*c_opt + V^T*c_err = v_opt + v_err. This
doesn't rigorously prove that v_opt is also uncorrelated, but often one is
only interested in the error, for example when quantizing the coefficients.
You can then quantize V^T*c_opt without worrying about anything other than
how many bits to use for each coefficient. If the coefficients were
correlated, you would also have to take every other coefficient into account
and make sure that you quantize in the direction of the smallest increase in
square error.
Perhaps someone with better knowledge on the subject can fill in the
blanks ;)
--
Jani Huhtanen
Tampere University of Technology, Pori
Reply by Andor●November 18, 2005
Jani Huhtanen wrote:
...
> Coefficients in your case are given by:
> c = R^-1*r, where r is 10x1 autocorrelation vector and R is the
> autocorrelation matrix.
>
> R = V*D*V^T, where D is a diagonal matrix containing the eigenvalues and V
> is the matrix containing the eigenvectors of R. V is orthogonal so V^-1 =
> V^T.
> => c = V*D^-1*V^T*r
> => V^T*c = D^-1*V^T*r
> => V^T*c = D^-1*s, where s = V^T*r is a 10x1 vector.
> => V^T*c isn't correlated
That's interesting, Jani. How did you arrive at that last implication?
Regards,
Andor
Reply by Jani Huhtanen●November 17, 2005
John wrote:
> Hello
>
> I have a question regarding 10th order LPC-analysis of speech segments.
>
> The analysis returns 10 coefficients C1(k),C2(k),....C10(k) where k is the
> number of the speech segment which is being analyzed.
>
> My question is :
>
> Are the coefficients correlated? For example: Is c1(k) correlated with
> c2(k)??
>
Yes, they most probably are. The correlation depends on the signal.
Coefficients in your case are given by:
c = R^-1*r, where r is 10x1 autocorrelation vector and R is the
autocorrelation matrix.
R = V*D*V^T, where D is a diagonal matrix containing the eigenvalues and V
is the matrix containing the eigenvectors of R. V is orthogonal so V^-1 =
V^T.
=> c = V*D^-1*V^T*r
=> V^T*c = D^-1*V^T*r
=> V^T*c = D^-1*s, where s = V^T*r is a 10x1 vector.
=> V^T*c isn't correlated
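For concreteness, here is a minimal numpy sketch of solving the normal equations c = R^-1*r for a 10th-order predictor. The signal and the biased autocorrelation estimator are my illustrative choices, not from the post:

```python
import numpy as np

def lpc_coeffs(x, order=10):
    """Solve the normal equations R*c = r for the forward predictor."""
    # Biased autocorrelation estimates for lags 0..order
    ac = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    # R is the (order x order) Toeplitz autocorrelation matrix,
    # r the vector of autocorrelations at lags 1..order
    R = np.array([[ac[abs(i - j)] for j in range(order)]
                  for i in range(order)])
    r = ac[1:order + 1]
    return np.linalg.solve(R, r)      # c = R^-1 * r

rng = np.random.default_rng(0)
# Toy test signal: white noise through a one-pole filter (an AR(1) process),
# for which the ideal predictor is c = [0.9, 0, ..., 0]
e = rng.standard_normal(4000)
x = np.zeros_like(e)
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + e[n]

c = lpc_coeffs(x, order=10)
```

In practice the Toeplitz system would be solved with Levinson-Durbin recursion rather than a general solver, but the coefficients are the same.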
> If so, how do I decorrelate them in real time?
Multiply c by V^T. Matrix V depends on the signal, so this method is not
practical. But for audio signals V^T is approximately a DCT matrix, so you
can just use the DCT to decorrelate the coefficients.
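A sketch of the DCT route: the orthonormal DCT-II matrix construction below is standard; using it as a stand-in for V^T follows the suggestion above, and the coefficient vector is made up for illustration:

```python
import math
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix; row k is the k-th basis vector."""
    T = np.array([[math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                   for i in range(n)] for k in range(n)])
    T[0] *= math.sqrt(1.0 / n)        # DC row scaling
    T[1:] *= math.sqrt(2.0 / n)       # remaining rows
    return T

T = dct_matrix(10)
c = np.linspace(1.0, 0.1, 10)   # stand-in coefficient vector (made up)
d = T @ c                        # approximately decorrelated coefficients
c_back = T.T @ d                 # T is orthogonal, so T^T inverts it
```

How close this gets to the true decorrelating transform depends on how well the DCT basis matches the eigenvectors of the actual coefficient statistics.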
--
Jani Huhtanen
Tampere University of Technology, Pori
Reply by John●November 17, 2005
Hello
I have a question regarding 10th order LPC-analysis of speech segments.
The analysis returns 10 coefficients C1(k),C2(k),....C10(k) where k is the
number of the speech segment which is being analyzed.
My question is :
Are the coefficients correlated? For example: Is c1(k) correlated with
c2(k)??
If so, how do I decorrelate them in real time?
I have thought about doing the following:
1) Do LPC analysis of speech segment k
2) Store C1(k),.....C10(k) in buffer (buffer length=100)
3) Send buffer thru whitening filter
4) Obtain 10 latest decorrelated coefficients from filter output
I don't know if this makes any sense?
Thanks in advance....