DSPRelated.com
Forums

deconvolution in time?

Started by Julian Stoev November 4, 2005
Real_McCoy wrote:

> The answer is a straightforward Wiener filter for Deconvolution. You
> can improve on this by doing Smoothing.
I guess so. This is quite easy to find. But an on-line implementation of Wiener filtering is not so easy. I found some links showing that this is done in some particular cases using Kalman filtering. That was the reason for my question - the on-line implementation; I know how to do this off-line. The reason why I asked here is to find out if there is some accepted solution for such a case. --JS
Christian Langen wrote:
> You got me: The matrix P(k) has to be mirrored, i.e. the rows and
> columns have to be changed
>
> |P(0)    0    0   ...  0  |
> |P(1)   P(0)      ...  0  |
> | ....                    |
> |P(k-1)  ....         P(0)|
>
> has to be substituted by
>
> |P(0)  P(1)  ....  P(k-1)|
> |0     P(0)  ....  P(k-2)|
> |0     ....              |
> |0     ....         P(0) |
Christian, I'm afraid that won't work either. Consider the simple case

|1 0|
|2 1|

The inverse of that is

| 1 0|
|-2 1|

which is not equal to the transpose of the original.

Regards,
Andor
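[Editorial aside: the counterexample above is easy to verify numerically. A minimal Python/NumPy sketch, not part of the original post:]

```python
import numpy as np

# The lower-triangular example from the post.
P = np.array([[1.0, 0.0],
              [2.0, 1.0]])

P_inv = np.linalg.inv(P)   # true inverse: [[1, 0], [-2, 1]]
P_t = P.T                  # transpose:    [[1, 2], [ 0, 1]]

# The transpose is not the inverse, as Andor points out.
print(np.allclose(P_inv, P_t))   # → False
```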
Julian Stoev wrote:
> Hello!
> I have an output signal y(k) and a plant P(z). The signal y(k) contains
> some noise and I know the PSD of the noise. But let's assume for now
> that the noise is white and the system looks like this:
>
> Y(z) = U(z)P(z) + E(z)
>
>  u(k)               y(k)
> ---------->[P(z)]-->[+]--->
>                      |
>               e(k)   |
>              --------|
>
> I want to restore the signal u(k) from y(k). I can do this off-line in
> the freq domain, no problem, but I need this on-line. I think I need
> something related to Wiener filters.
>
> The deconvolution P^(-1)(z) may be unstable, which is the biggest
> problem for me. I am not worried that it may be non-causal.
You say that you can do it "off-line in freq domain, no problem" - does the stability of the filter P^-1 not pose any problems with the frequency domain algorithm? How did you arrive at P^-1?

What exactly do you mean by "off-line" and "on-line"? What constraints are you facing in the one case which you don't have in the other? If you have a frequency domain algorithm that works "no problem", then you could do block processing in the frequency domain, using overlap-add or overlap-save for stream processing (is block and stream processing what you mean by off- and on-line?).

Wiener filtering (and its adaptive extensions) requires two signals such that you can filter one to statistically match the other. It seems you have only one signal (y(k)) at your disposal, which would rule out Wiener filtering (unless you have a separate measurement of the error signal e). Or am I missing something?

Regards,
Andor
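[Editorial aside: the block-processing idea can be sketched as below. This is a hypothetical Python/NumPy illustration, not from the thread: the plant p and signal are made up, and p is chosen minimum-phase so that its inverse is stable, which sidesteps the original poster's actual difficulty. The truncated inverse q and the overlap-save routine are standard textbook constructions.]

```python
import numpy as np
from scipy.signal import lfilter

p = np.array([1.0, 0.5, 0.25])      # hypothetical minimum-phase FIR plant P(z)
K = 64                               # truncation length for the inverse
# Impulse response of 1/P(z): feed an impulse through the IIR filter 1/A.
q = lfilter([1.0], p, np.r_[1.0, np.zeros(K - 1)])

def overlap_save(x, h, nfft=256):
    """Stream-style FIR filtering of x with h via overlap-save blocks."""
    M = len(h)
    L = nfft - M + 1                 # valid output samples per block
    Hf = np.fft.rfft(h, nfft)
    xp = np.r_[np.zeros(M - 1), x, np.zeros(L)]
    out = []
    for s in range(0, len(x), L):
        blk = xp[s:s + nfft]
        blk = np.pad(blk, (0, nfft - len(blk)))
        yb = np.fft.irfft(np.fft.rfft(blk) * Hf, nfft)
        out.append(yb[M - 1:])       # discard the circularly aliased samples
    return np.concatenate(out)[:len(x)]

rng = np.random.default_rng(0)
u = rng.standard_normal(512)
y = lfilter(p, [1.0], u)             # plant output (noise-free here)
u_hat = overlap_save(y, q)           # block-wise deconvolution
print(np.max(np.abs(u_hat - u)))     # reconstruction error is tiny
```

With noise present, the plain inverse q would be replaced by a Wiener-type inverse shaped by the noise PSD, as discussed above.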
Julian Stoev <stoev@deadspam.com> wrote:
> Hello Christian,
>
> Your approach is based on the assumption that P(k) is an FIR filter? I
> did not mention it, but I used P(z). ;-)
>
> My case is related to an IIR description of the system. For example
>
>        5.641e-006 z^3 + 0.1712 z^2 + 0.8146 z - 0.08121
> P(z) = ------------------------------------------------
>              z^3 - 2.095 z^2 + 1.231 z - 0.1322
>
> It does not look like I can apply your inversion to the IIR case?
>
> Thanks a lot!
P(z) = B(z)/A(z). In your case B(z) has low influence on the filter's spectrum ;) You can just use Q(z) = P(z)^-1 ~= A(z) for deconvolution. If more precision is needed, my first trial gives a better approximation

Q(z) = conv(A(z), [1 -.1 .025]*1.2)

- less than 0.2 dB error in flatness, and freqz(conv([1 -.1 .025]*1.2, conv(a,b)), a) shows an almost linear phase (on the plot; I didn't check the values).
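[Editorial aside: the flatness claim is easy to check. A Python/SciPy sketch of the poster's MATLAB-style freqz call; the coefficients and the correction polynomial [1 -.1 .025]*1.2 are taken from the posts above, everything else is standard:]

```python
import numpy as np
from scipy.signal import freqz

b = np.array([5.641e-6, 0.1712, 0.8146, -0.08121])   # numerator of P(z)
a = np.array([1.0, -2.095, 1.231, -0.1322])          # denominator of P(z)
corr = 1.2 * np.array([1.0, -0.1, 0.025])            # proposed correction

# With Q(z) = corr(z)*A(z), the cascade P(z)*Q(z) reduces to corr(z)*B(z):
# this is freqz(conv(corr, conv(a, b)), a) in MATLAB terms.
w, h = freqz(np.convolve(corr, np.convolve(a, b)), a, worN=512)
mag_db = 20 * np.log10(np.abs(h))
ripple = mag_db.max() - mag_db.min()
print(ripple)    # magnitude ripple of the compensated cascade, in dB
```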
"Julian Stoev" <stoev@deadspam.com> wrote in message
news:436eff65$0$41141$14726298@news.sunsite.dk...
> Real_McCoy wrote:
> > The answer is a straightforward Wiener filter for Deconvolution. You
> > can improve on this by doing Smoothing.
>
> I guess so. This is quite easy to find. But an on-line implementation
> of Wiener filtering is not so easy. I found some links showing that
> this is done in some particular cases using Kalman filtering. That was
> the reason for my question - the on-line implementation; I know how to
> do this off-line. The reason why I asked here is to find out if there
> is some accepted solution for such a case.
>
> --JS
This should do it...

T. J. Moir, "Optimal deconvolution smoother", Proc. IEE Part D, 1986, Vol. 133, No. 1, pp. 13-18.

McC
Dear Andor,

thanks a lot for your hint: it is not the matrix P(k) that has to be
transposed to P(k)^T and multiplied by 1/det(P(k)) to give the inverse
matrix P(k)^-1, but the adjoint P(k)adj. Moreover, the term 'mirrored'
should have been named 'transposed'. Sorry, I did not realize this when
posting my answer to this thread 8-(.

The adjoint of

|1  0|
|2  1|

gives

|1  0|
|-2 1|

The determinant det(P(k)) is 1 for both the matrix P(k) as well for the
adjoint P(k)adj.

Putting it all together:

P(k)^-1 = 1/det(P(k)) * P(k)adj =

|1  0|
|-2 1|

Everything correct? No one wants to calculate the adjoint of a large
matrix by hand...
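[Editorial aside: nobody has to. A small Python/NumPy sketch, not part of the original post, computing the classical adjoint (adjugate) via cofactors and checking the identity P * adj(P) = det(P) * I for the thread's example:]

```python
import numpy as np

def adjugate(A):
    """Classical adjoint (adjugate): transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

P = np.array([[1.0, 0.0],
              [2.0, 1.0]])
adjP = adjugate(P)                    # gives [[1, 0], [-2, 1]]
print(np.allclose(P @ adjP, np.linalg.det(P) * np.eye(2)))   # → True
```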

Moreover, using the matrix inversion implemented in Maple, Mathematica,
Matlab or other tools helps to avoid getting too deeply involved in the
math basics ;o).

All the best

Christian

Christian Langen wrote:

> The adjoint of
>
> |1  0|
> |2  1|
>
> gives
>
> |1  0|
> |-2 1|
Hi Christian,

for real matrices, the "adjoint" and "transpose" operations are equivalent, so the adjoint of the matrix

|1 0|
|2 1|

is just its transpose, i.e.

|1 2|
|0 1| .

Perhaps we can just drop the issue and say that when det(P(k)) =/= 0, then P(k) has an inverse (as you correctly noted), which can be found with a math package or math tool.

Regards,
Andor
Dear Andor,

now you are almost correct: the adjoint is the _conjugate_ (complex)
transpose of the matrix, not only the transpose. This is _exactly_ what
I did wrong with my first posting when I did not note this fine
difference.

The definition for this, taken from Maple's help (sorry, all my math
books are in German only), is:

'The function adjoint computes the matrix such that the matrix product
of A and adjoint(A) is the product of det(A) and the matrix identity.
This is done through the computation of minors of A.' For our example
this yields:

P(k)*P(k)adj = det(P(k))*E    //E = identity matrix

You are absolutely right in recommending to leave the detailed work to
a math toolbox; _how_ they do it is far beyond my horizon ...
nevertheless, we are not learning for school but for life, and finding
the correct answer in the end is fun :-).

Thanks a lot for your help!

Regards

Christian

Christian Langen wrote:
> Dear Andor,
>
> now you are almost correct: the adjoint is the _conjugate_ (complex)
> transpose of the matrix, not only the transpose. This is _exactly_
> what I did wrong with my first posting when I did not note this fine
> difference.
It seems we agree (the adjoint of a matrix is equal to the conjugate transpose of that matrix). Carrying on with our example, the transpose of

|1 0|
|2 1|

is equal to

|1 2|
|0 1| .

Now if we take the complex conjugate of that matrix (by taking the complex conjugate of all its entries), we don't change anything, as the matrix is real (that is why I wrote in the post that you answered "for real matrices, the 'adjoint' and 'transpose' operations are equivalent").

So to summarize, I hope we agree that the conjugate transpose, or adjoint, is computed by transposing a matrix and replacing each entry with its complex conjugate value, as in the above example. You will find this definition in all your linear algebra books (even the German ones).
> The definition for this, taken from Maple's help (sorry, all my math
> books are in German only), is:
>
> 'The function adjoint computes the matrix such that the matrix product
> of A and adjoint(A) is the product of det(A) and the matrix identity.
> This is done through the computation of minors of A.' For our example
> this yields:
>
> P(k)*P(k)adj = det(P(k))*E    //E = identity matrix
If we proceed with our example, then we have

|1 0|   |1 2|   |1 2|      |1 0|
|2 1| * |0 1| = |2 5|  =/= |0 1|

Now this quite clearly contradicts the Maple definition of the "adjoint matrix". I don't know why Maple chose that name, but it seems a really bad convention. And it is the first time I have seen this definition, contradicting _all_ my linear algebra and functional analysis texts. I'm not quite sure what to make of it.

Regards,
Andor
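[Editorial aside: the two notions can be separated numerically. A Python/NumPy sketch, not from the thread: "adjugate" below is what Maple's help calls adjoint, written out for the 2x2 case with the standard swap-and-negate formula, while A^H is the conjugate transpose that Andor refers to.]

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 1.0]])

# Conjugate transpose (the functional-analysis "adjoint"):
A_H = A.conj().T                       # [[1, 2], [0, 1]]

# Classical adjoint / adjugate of a 2x2 matrix [[a, b], [c, d]]
# is [[d, -b], [-c, a]] (this is Maple's "adjoint"):
A_adj = np.array([[ A[1, 1], -A[0, 1]],
                  [-A[1, 0],  A[0, 0]]])

dI = np.linalg.det(A) * np.eye(2)
print(np.allclose(A @ A_adj, dI))      # → True:  A * adj(A) = det(A) * I
print(np.allclose(A @ A_H, dI))        # → False: A * A^H is [[1, 2], [2, 5]]
```

So both posters are internally consistent: they are simply using the word "adjoint" for two different, well-established operations.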