Reply by Christian Langen November 17, 2005
Dear Jeremy,

now you have discovered some practical limitations of the discussed
deconvolution-in-time 'algorithm'.

Some years ago I used this 'algorithm' to determine the impulse
responses of microphones: both a reference microphone and the DUT were
excited by a step produced by a loudspeaker - even the deconvolution
of the two step responses yields the DUT's impulse response (I will
not prove this since it is trivial).
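For readers who want to try the idea, here is a minimal sketch (hypothetical numbers, not my original Maple worksheet; the reference microphone is idealized as a pure delta, and the deconvolution is done by solving the lower-triangular Toeplitz system discussed in this thread):

```python
import numpy as np

step = np.ones(16)                     # unit step excitation
h_dut = np.array([0.9, 0.4, 0.1])      # made-up DUT impulse response
h_ref = np.array([1.0])                # idealized reference: a pure delta

y_dut = np.convolve(step, h_dut)       # DUT step response (length 18)
y_ref = np.convolve(step, h_ref)       # reference step response (length 16)

# Time-domain deconvolution: solve the lower-triangular Toeplitz
# system Y g = y_dut, where Y is built from the reference response.
n = len(y_dut)
col = np.concatenate([y_ref, np.zeros(n - len(y_ref))])
Y = np.array([[col[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])
g = np.linalg.solve(Y, y_dut)          # recovers h_dut, padded with zeros
```

With a real (non-delta) reference the same solve yields the DUT response relative to the reference.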

Most of the microphone measurements worked, some did not. I figured
out that a 'usable' signal-to-noise ratio is essential to get _any_
result at all; otherwise you get undefined data, perhaps caused by the
limited precision of the number representation. I did not investigate
the phenomenon in detail at the time, but instead tried to improve the
signal-to-noise ratio of the measurement itself to get usable results.
The non-working measurements were those with dynamic microphones,
whose output signals are small; all measurements with condenser
microphones, which output higher voltages, worked well. I do not know
exactly what happens 'under the hood' of the calculus software, but I
can offer some probable explanations... In our company we did not
continue work on this measurement method, so I had no resources to
research the details more deeply.

I do not want to bore the entire comp.dsp community with Maple
worksheets, since most readers cannot make any use of them; I would
rather send them to your email address directly if you like. If you
have no direct access to the Maple software, I can convert the
worksheet to an HTML file with embedded pictures, where you can
clearly see the results or the error messages respectively.

Since I did not use Matlab for the deconvolution, I cannot help you
with the details of the error messages you get from that software. I
did all the maths using Maple, but I think the basic behaviour is
similar.

In contrast to your measurement vectors as well as your test vectors,
all vectors I used were of identical length, as I showed with my
examples at the beginning of this thread. Moreover, I converted all
sampled data to floating point (with Maple one can adjust the number
of digits for improved precision; I do not know whether this is
possible with Matlab).

The range of the number representation is essential to solving this
problem, since the numbers tend to become _very_ small during the
matrix inversion; if you use integers it may simply not work. The
Matlab warning you report points clearly in that direction...

The described method works for all convolved signals; the phase
behaviour does not affect the deconvolution itself. Since we never
leave the time domain, there are _no_ 'complex' signals in any way,
only 'real' samples. We do not evaluate frequencies, group delays or
phase responses, so it is completely irrelevant whether the signal has
minimum or non-minimum phase behaviour.

The most essential thing is that h(1) =/= 0, otherwise the matrix
cannot be inverted. But what will happen if h(1) is very small?

The determinant, which equals h(1)^n, will then be smaller than h(1)
by the _power_ of n. Since the adjugate matrix is multiplied by
1/(h(1)^n), a very small h(1) makes this factor 'explode' and perhaps
leave the number representation. Probably this is the reason for the
misbehaviour you describe.
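A quick numerical sketch of this effect (an assumed NumPy translation, not my Maple worksheet; the small h(1) = 0.046 is taken from your WAV data):

```python
import numpy as np

def triangular_toeplitz(h, n):
    """n x n lower-triangular Toeplitz matrix with first column h, zero-padded."""
    col = np.concatenate([h, np.zeros(n - len(h))])
    return np.array([[col[i - j] if i >= j else 0.0 for j in range(n)]
                     for i in range(n)])

n = 6
H = triangular_toeplitz(np.array([3.0, 2.0, 1.0]), n)
det_H = np.linalg.det(H)               # 729 = 3^6, i.e. h(1)^n

H_small = triangular_toeplitz(np.array([0.046, 2.0, 1.0]), n)
det_small = np.linalg.det(H_small)     # 0.046^6, roughly 9.5e-9, already tiny
rcond = 1.0 / np.linalg.cond(H_small)  # reciprocal condition number collapses
```

Even for n = 6 the reciprocal condition number is minute; for a room response with thousands of samples the 1/det factor leaves any practical number range.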

All the best

Christian

Reply by jere...@gmail.com November 16, 2005
I was trying this deconvolution-in-time method using the Toeplitz
matrix mentioned here. First I tested my Matlab code using two small
vectors,
x[t]=[ 6     1     3    -2    -1    -1 ] (the convolved signal) and
h[t]=[ 3     2     1 ] (the impulse response; note that h(1)=/=0).
I built the triangular matrix as described, which is the following:

    htoep=[  3     0     0     0     0     0
             2     3     0     0     0     0
             1     2     3     0     0     0
             0     1     2     3     0     0
             0     0     1     2     3     0
             0     0     0     1     2     3 ]

the multiplication of htoep^-1 and x[t] gave back the source vector
s[t]=[ 2    -1     1    -1 ] (followed by trailing zeros).
Since the code seemed to work perfectly on these test vectors I
proceeded to work with some real data.
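The test above can be sketched in Python/NumPy as follows (an equivalent of the Matlab steps described, not the actual code):

```python
import numpy as np

x = np.array([6.0, 1.0, 3.0, -2.0, -1.0, -1.0])   # convolved signal
h = np.array([3.0, 2.0, 1.0])                     # impulse response, h(1) != 0

# Build the 6x6 lower-triangular Toeplitz matrix htoep from h.
n = len(x)
col = np.concatenate([h, np.zeros(n - len(h))])
htoep = np.array([[col[i - j] if i >= j else 0.0 for j in range(n)]
                  for i in range(n)])

# Solving the triangular system is equivalent to htoep^-1 * x but
# numerically better behaved than forming the inverse explicitly.
s = np.linalg.solve(htoep, x)     # [2, -1, 1, -1, 0, 0]
```

The recovered s is the source vector padded with zeros to the length of x.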

I had a clean source signal 'S.wav', 0.3 s long, sampled at 11 kHz,
and a room impulse response 'H.wav' of 0.2 s, sampled at 11 kHz.
I convolved the two to get the vector 'X.wav' of length 0.5 s, sampled
at 11 kHz.

I built the Toeplitz matrix htoep based on 'H.wav', which is quite
large but within my Matlab memory restrictions, and calculated
htoep^-1 * X, which should have resulted in S.
Unfortunately, this did not happen.
There are three major differences which I could spot between the test
vectors I used and this real data:

1. For the sound data h(1) was very small (0.046), and hence the
determinant was in the region of 1e-20, which resulted in a large
range of values in S.

2. I had to write the S data I got to a wav file, which required all
samples to lie in the range +/- 1; in scaling them down I lost
resolution. I did not have to do this with the test vectors.

3. The test vectors were all integers while the sound data was not.

I tried to tweak the impulse response by changing the value of h(1) to
a larger number like '1', but the output was still unintelligible.
Matlab is also throwing a warning on the htoep matrix:
 'Warning: Matrix is close to singular or badly scaled.
          Results may be inaccurate. RCOND = 6.603108e-024.'

Am I correct in saying that this method should work for
non-minimum-phase impulse responses, i.e. even if h(2)>h(1) the htoep
matrix is still invertible? The only limitation that I am seeing
mathematically is that h(1)=/=0.
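A small sketch of the non-minimum-phase case (hypothetical h, Python/NumPy rather than Matlab): the triangular matrix stays invertible whenever h(1) =/= 0, but the inverse's entries grow geometrically, which hints at why the real data misbehaves.

```python
import numpy as np

# h(2) > h(1): a non-minimum-phase response. The triangular Toeplitz
# matrix is still invertible (its determinant is h(1)^n = 1), but the
# entries of the inverse grow geometrically, amplifying any round-off.
h = np.array([1.0, 5.0])
n = 8
col = np.concatenate([h, np.zeros(n - len(h))])
H = np.array([[col[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])

det_H = np.linalg.det(H)             # exactly 1: always invertible
first_col = np.linalg.inv(H)[:, 0]   # 1, -5, 25, -125, ... = (-5)^k
```

So invertibility is not the issue; the geometric growth of the inverse filter is.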

Any comments greatly appreciated...

Jeremy

Reply by Christian Langen November 8, 2005
Dear Andor,

now I see the misunderstanding: the matrix is _not_ calculated, as you
recommend, by transposing it and replacing each entry by its complex
conjugate (in the complex-number sense).

You have to take the factor (-1)^(i+j) and calculate each determinant
of the matrix with the corresponding row and column removed (there is
an (n-1)x(n-1) matrix left for each determinant to be calculated). The
factor (-1)^(i+j) changes the sign if (i+j) is odd and leaves it
unchanged if (i+j) is even. All matrix entries are real numbers:

Let's calculate the adjoint of our example matrix:

First cancel the first row and first column and calculate the lower
right determinant:
adj(P(1,1)) = (-1)^2*det(2,2) = 1*1 = 1

Second cancel the first row and second column and calculate the lower
left determinant:
adj(P(1,2)) = (-1)^3*det(2,1) = (-1)*2 = -2

Third cancel the second row and first column and calculate the upper
right determinant:
adj(P(2,1)) = (-1)^3*det(1,2) = (-1)*0 = 0

Fourth cancel the second row and second column and calculate the upper
left determinant:
adj(P(2,2)) = (-1)^2*det(1,1) = 1*1 = 1

Now fill the adjoint matrix P(k)adj with the calculated values:

|1  -2|
|0   1|

Now you see where I got my funny '-2' from :-).

Please don't forget to transpose this matrix to P(k)adj^T ('-2' is now
at the correct position):

|1  0|
|-2 1|

before dividing it by the determinant to get the inverse matrix
P(k)^-1 = 1/det(P(k)) * P(k)adj^T.

Since 1/det(P(k)) == 1, the transposed adjoint matrix equals the
inverse matrix. So proceeding with our example yields:

|1  0|  * |1  0|   =  |1  0|  ==  |1  0|
|2  1|    |-2 1|      |0  1|      |0  1|

It seems my small Maple world is o.k. ;-).
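The cofactor recipe above can be sketched in a few lines (Python/NumPy here instead of Maple; `cofactor_matrix` is my own helper name, not a library function):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 1.0]])

def cofactor_matrix(M):
    """Cofactor C[i,j] = (-1)^(i+j) * det of M with row i and column j removed."""
    n = M.shape[0]
    C = np.zeros_like(M)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

C = cofactor_matrix(A)        # [[1, -2], [0, 1]] -- the 'funny -2'
adjugate = C.T                # [[1, 0], [-2, 1]]

# A * adjugate = det(A) * I, and since det(A) = 1 here, the transposed
# cofactor matrix is exactly the inverse.
check = A @ adjugate          # the 2x2 identity
```

This is the "computation of minors" that Maple's help refers to.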

All the best

Christian

Reply by November 8, 2005
Christian Langen wrote:
> Dear Andor,
>
> now you are almost correct: the adjoint is the _conjugate_ (complex)
> transpose of the matrix, not only the transpose. This is _exactly_ what
> I did wrong with my first posting when I did not note this fine
> difference.
It seems we agree (the adjoint of a matrix is equal to the conjugate
transpose of that matrix). Carrying on with our example, the transpose of

|1  0|
|2  1|

is equal to

|1  2|
|0  1|

Now if we take the complex conjugate of that matrix (by taking the
complex conjugate of all its entries), we don't change anything, as the
matrix is real (that is why I wrote in the post that you answered "for
real matrices, the 'adjoint' and 'transpose' operations are
equivalent"). So to summarize, I hope we agree that the conjugate
transpose, or adjoint, is computed by transposing a matrix and replacing
each entry with its complex conjugate value, as in the above example.
You will find this definition in all your linear algebra books (even the
German ones).
> The definition for this taken from Maple's help (sorry all my math
> books are in German only) is:
>
> 'The function adjoint computes the matrix such that the matrix product
> of A and adjoint(A) is the product of det(A) and the matrix identity.
> This is done through the computation of minors of A.' For our example
> this yields:
>
> P(k)*P(k)adj = det(P(k))*E    //E = identity matrix
If we proceed with our example, then we have

|1  0|   |1  2|   |1  2|       |1  0|
|2  1| * |0  1| = |2  5|  =/=  |0  1|

Now this quite clearly contradicts the Maple definition of the "adjoint
matrix". I don't know why Maple chose that name, but it seems a really
bad convention. And it is the first time I have seen this definition,
contradicting _all_ my linear algebra and functional analysis texts. I'm
not quite sure what to make of it.

Regards,
Andor
Reply by Christian Langen November 8, 2005
Dear Andor,

now you are almost correct: the adjoint is the _conjugate_ (complex)
transpose of the matrix, not only the transpose. This is _exactly_ what
I did wrong with my first posting when I did not note this fine
difference.

The definition for this taken from Maple's help (sorry all my math
books are in German only) is:

'The function adjoint computes the matrix such that the matrix product
of A and adjoint(A) is the product of det(A) and the matrix identity.
This is done through the computation of minors of A.' For our example
this yields:

P(k)*P(k)adj = det(P(k))*E    //E = identity matrix

You are absolutely right in recommending that the detailed work be
left to a math toolbox; _how_ they do it is far beyond my horizon...
nevertheless, we learn not for school but for life, and finding the
correct answer in the end is fun :-).

Thanks a lot for your help!

Regards

Christian

Reply by November 8, 2005
Christian Langen wrote:

> The adjoint of
>
> |1  0|
> |2  1|
>
> gives
>
> |1  0|
> |-2 1|
Hi Christian,

for real matrices, the "adjoint" and "transpose" operations are
equivalent, so the adjoint of the matrix

|1  0|
|2  1|

is just its transpose, i.e.

|1  2|
|0  1|

Perhaps we can just drop the issue and say that when det(P(k)) =/= 0,
then P(k) has an inverse (as you correctly noted), which can be found
with a math package or math tool.

Regards,
Andor
Reply by Christian Langen November 8, 2005
Dear Andor,

thanks a lot for your hint: it is not the matrix P(k) that has to be
transposed to P(k)^T and multiplied by 1/det(P(k)) to give the inverse
matrix P(k)^-1, but the adjoint P(k)adj. Moreover, the term 'mirrored'
should be named 'transposed'. Sorry, I did not realize this when
posting my answer to this thread 8-(.

The adjoint of

|1  0|
|2  1|

gives

|1  0|
|-2 1|

The determinant det(P(k)) is 1 both for the matrix P(k) and for the
adjoint P(k)adj.

Putting it all together:

P(k)^-1 = 1/det(P(k)) * P(k)adj =

|1  0|
|-2 1|

Everything correct? No one wants to calculate the adjoint of a large
matrix by hand...

Moreover, using the built-in matrix inversion of Maple, Mathematica,
Matlab or other tools helps to avoid getting involved too deeply in
these math basics ;o).

All the best

Christian

Reply by Real_McCoy November 8, 2005
"Julian Stoev" <stoev@deadspam.com> wrote in message
news:436eff65$0$41141$14726298@news.sunsite.dk...
> Real_McCoy wrote:
>
> > The answer is a straightforward Wiener filter for deconvolution. You
> > can improve on this by doing smoothing.
>
> I guess so. This is quite easy to find. But an on-line implementation
> of Wiener filtering is not so easy. I found some links showing that
> this is done in some particular cases using Kalman filtering. That was
> the reason for my question - the online implementation; I know how to
> do this off-line. The reason why I asked here is to find out whether
> there is some accepted solution to such a case.
>
> --JS
This should do it...

T. J. Moir, "Optimal deconvolution smoother", Proc. IEE Part D, 1986,
Vol. 133, No. 1, pp. 13-18.

McC