# Kalman filter equation

Started July 29, 2008
```
Hi

I've got a question on one equation of the Kalman filter.
First off, a quick sync of the terminology (just like on Wikipedia):

x: System state estimate
P: System state estimation error covariance

F: Process Model
H: Measurement Model

z: Measurement
y: Innovation
S: Innovation covariance

K: Kalman Gain

Consider the prediction equations of the Kalman filter first (leaving out
subscripts etc.):
x=Fx + w (leaving out control input u, w is the zero mean noise)
P=F * P * F^T + Q

The second formula can be derived from the first using knowledge about
covariance matrices.
Since P is just cov(x), we can compute P as:
P = cov(x) = cov(Fx + w) = cov(Fx) + cov(w) = F * cov(x) * F^T + Q
  = F * P * F^T + Q

The fact that cov(Fx) = F * cov(x) * F^T is just an identity that holds for
any random vector x and matrix F.
So far so good... Now I tried to do the same thing on the last equation of
the Kalman filter.

The Kalman gain is computed as K = P * H^T * S^-1

The update step looks like this:

x=x + K*y

Again we want P=cov(x)

So we write:

P = cov(x) = cov(x + K*y) = cov(x) + cov(K*y) = P + K * cov(y) * K^T
= P + K * S * K^T = P + (P * H^T * S^-1) * S * (P * H^T * S^-1)^T
= P + P * H^T * S^-1 * S * S^-T * H * P^T

Now since P is symmetric, P^T=P, S is also symmetric, thus S*S^-T = Identity

= P + P * H^T * S^-1 * H * P
= P + K * H * P
= (Identity + K*H)*P

So as you can see, my result is P = (Identity + K*H)*P. When looking up the
formula in other sources, the result is (Identity !!-!! K*H)*P.
I don't see where that - sign comes from though. Can anyone explain that?

Thanks a lot :-)
Frank

```
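A quick numerical check makes the sign question concrete. This is a sketch with made-up numbers (a 2-state system with one measurement; the values of P, H, and R are arbitrary illustrative choices, not from the thread), comparing both candidate update formulas against the Joseph form, which holds for any gain:

```python
import numpy as np

# Illustrative values: 2 states, 1 measurement.
P = np.array([[2.0, 0.3], [0.3, 1.0]])  # prior covariance
H = np.array([[1.0, 0.0]])              # measurement model
R = np.array([[0.5]])                   # measurement noise covariance

S = H @ P @ H.T + R                     # innovation covariance
K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
I = np.eye(2)

# The Joseph form is valid for any gain, so it serves as the reference.
P_joseph = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T
P_minus = (I - K @ H) @ P
P_plus = (I + K @ H) @ P

print(np.allclose(P_joseph, P_minus))   # the minus sign matches
print(np.allclose(P_joseph, P_plus))    # the plus sign does not
</imports>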
```On Tue, 29 Jul 2008 19:21:12 +0200, Frank Neuhaus wrote:

> Hi
>
> I've got a question on one equation of the Kalman filter. First off, a
> quick sync of the terminology (just like on Wikipedia)
>
> x: System state estimate
> P: System state estimation error covariance
>
> F: Process Model
> H: Measurement Model
>
> z: Measurement
> y: Innovation
> S: Innovation covariance
>
> K: Kalman Gain
>
> Consider the first equations of the Kalman filter first (leaving out
> subscripts etc):
> x=Fx + w (leaving out control input u, w is the zero mean noise) P=F * P
> * F^T + Q
>
> The second Formula can be derived from the first using knowledge about
> covariance matrices.
> since P is just cov(x) we can compute P as: P=cov(x) = cov(Fx + w) =
> cov(Fx) + cov(w) = F * cov(x) * F^T + Q = F * P * F^T + Q
>
> The fact that cov(Fx)=F * cov(x) * F^T is just some identity that is
> true for all covariance matrices.
> So far so good... Now i tried to do the same thing on the last equation
> of the Kalman Filter.
>
> The Kalman gain is computed as K = P * H^T * S^-1
>
> The update step looks like this:
>
> x=x + K*y
>
> Again we want P=cov(x)
>
> So we write:
>
> P = cov(x) = cov(x + K*y) = cov(x) + cov(K*y) = P + K * cov(y) * K^T
>   = P + K * S * K^T = P + (P * H^T * S^-1) * S * (P * H^T * S^-1)^T = P
>   + P * H^T * S^-1 * S * S^-T * H * P^T
>
> Now since P is symmetric, P^T=P, S is also symmetric, thus S*S^-T =
> Identity
>
>   = P + P * H^T * S^-1 * H * P
>   = P + K * H * P
>   = (Identity + K*H)*P
>
> So as you can see my result is P = (Identity + K*H)*P. When looking up
> the formula in other sources, the result is (Identity !!-!! K*H)*P I
> dont see where that - sign comes from though. Can anyone explain that?
>
> Thanks alot :-)
>    Frank

I think your notation may be a little confusing, since x and P in the
first half of your post are not the same as the x and P in the second
half. The Kalman filter equations are usually split into two steps: the
prior (prediction) step and the posterior (update) step. The first half of
your post is the prior step and the second half is the posterior step.
I would derive the equation you are looking for as follows:

x   true state
x^  posterior state estimate
x^^ prior state estimate
P   posterior state error covariance
PP  prior state error covariance
I   identity matrix
z   measurement
H   measurement matrix
v   measurement noise
R   measurement noise covariance

x~ = x - x^
x~~ = x - x^^

P = E{x~ * x~'} = E{(x - x^)(x - x^)'}
= E{(x~~ - K(z - H'*x^^))(x~~ - K(z - H'*x^^))'}
= E{(x~~ - K(H'*x + v - H'*x^^))(x~~ - K(H'*x + v - H'*x^^))'}
= E{((I - K*H')*x~~ + K*v)((I - K*H')*x~~ + K*v)'}
= (I - K*H')*PP*(I - K*H')' + K*R*K'
= (I - K*H')*PP - (I - K*H')*PP*H*K' + K*R*K'
= (I - K*H')*PP - PP*H*K' + K*(H'*PP*H + R)*K'
= (I - K*H')*PP - PP*H*K' + PP*H*(H'*PP*H + R)^-1*(H'*PP*H + R)*K'
= (I - K*H')*PP - PP*H*K' + PP*H*K'
= (I - K*H')*PP

Best regards,

Thomas Arildsen
--
All email to sender address is lost.
My real address is at es dot aau dot dk for user tha.
```
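The cancellation in the last steps of this derivation only works for the optimal gain. A sketch in NumPy (random illustrative matrices, and the conventional z = H*x + v rather than Thomas's transposed H' convention) shows that the Joseph form collapses to (I - K*H)*P exactly when K is the optimal gain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random illustrative system: 3 states, 2 measurements.
A = rng.standard_normal((3, 3))
P = A @ A.T + 3.0 * np.eye(3)           # prior covariance (positive definite)
H = rng.standard_normal((2, 3))         # measurement model
B = rng.standard_normal((2, 2))
R = B @ B.T + np.eye(2)                 # measurement noise covariance

S = H @ P @ H.T + R                     # innovation covariance
K_opt = P @ H.T @ np.linalg.inv(S)      # optimal Kalman gain
I = np.eye(3)

def joseph(K):
    # Joseph-form update: valid for ANY gain K.
    return (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T

# For the optimal gain the Joseph form collapses to the short form ...
print(np.allclose(joseph(K_opt), (I - K_opt @ H) @ P))
# ... but for a perturbed (suboptimal) gain it does not.
K_bad = K_opt + 0.1
print(np.allclose(joseph(K_bad), (I - K_bad @ H) @ P))
```

This is also why the Joseph form is preferred in practice: it stays valid (and symmetric positive semi-definite) even when the gain is slightly off numerically.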
```
Hi

The derivation you posted is pretty much what can also be found on Wikipedia,
and I obviously believe it's correct :). I was just curious why _my_
computation, based on cov(x) (which works well for the computation of the
first P, as I showed, and works just as well for the innovation covariance S),
only NEARLY works for this last equation. I believe there is just some small
thing that I am missing that makes it slightly wrong in this last step :(

Thanks again
Frank

"Thomas Arildsen" <tha.es-aau-dk@spamgourmet.com> wrote in message
news:48902ee7$0$90263$14726298@news.sunsite.dk...
> On Tue, 29 Jul 2008 19:21:12 +0200, Frank Neuhaus wrote:
[cut...]
> I think your notation may be a little confusing since x and P in the
> first half of your post are not the same as the x and P in the second
> half.
[cut...]
> I would derive the equation you are looking for as follows:
[cut...]

```
```
On Wed, 30 Jul 2008 11:13:34 +0200, Frank Neuhaus wrote:

> Hi
>
> The derivation you posted is pretty much what can also be found on
> Wikipedia and I obviously believe it's correct :). I was just curious why
> _my_ computation, based on cov(x), only NEARLY works for this last
> equation.
[cut...]

OK, I see what you mean. Looking at it again now, I think you may be
wrong in your assumption that cov(x + K*y) = cov(x) + cov(K*y). Since
y = z - H*x (in your notation), you are forgetting the cross-covariance
between x and y when you split the term up as you do. See if that will
correct the problem.
Best regards,

Thomas Arildsen

--
All email to sender address is lost.
My real address is at es dot aau dot dk for user tha.
```
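Thomas's objection can be checked by simulation. In a scalar sketch with made-up values (P = 2, R = 0.5, H = 1), the innovation y = z - H*x^ is built from the same estimation error that the estimate carries, so the two are correlated and cov(x + K*y) cannot be split into cov(x) + cov(K*y):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
P, R, H = 2.0, 0.5, 1.0             # illustrative scalar values

e = rng.normal(0.0, np.sqrt(P), n)  # prior estimation error, x_hat - x
v = rng.normal(0.0, np.sqrt(R), n)  # measurement noise
y = -H * e + v                      # innovation z - H*x_hat

# The innovation is correlated with the estimation error:
# cov(e, y) = -H*P = -2, not 0, so the cross terms cannot be dropped.
print(np.cov(e, y)[0, 1])
```

The sample cross-covariance comes out near -2, which is exactly the -P*H^T term that later produces the minus sign.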
```
"Thomas Arildsen" <tha.es-aau-dk@spamgourmet.com> wrote in message
news:48903eb1$0$90263$14726298@news.sunsite.dk...
> On Wed, 30 Jul 2008 11:13:34 +0200, Frank Neuhaus wrote:
> [cut...]
>
> OK, I see what you mean. Looking at it again now, I think you may be
> wrong in your assumption that cov(x + K*y) = cov(x) + cov(K*y). Since y =
> z - H^T * x (in your notation), you are forgetting cross-covariance
> between x and y when you split the term up as you do. See if that will
> correct the problem.

Hi again

I think you are right. I looked up the rules for cov computations again and
came up with this:

cov(x + K*y)
=
cov(x + K*y,x + K*y)
=
cov(x,x) + cov(x,K*y) + cov(K*y,x) + cov(K*y,K*y)
=
P + cov(x,y)*K^T + K*cov(y,x) + K*cov(y,y)*K^T
=
P + K*S*K^T + cov(x,y)*K^T + K*cov(y,x)
=
P + K*S*K^T + cov(x,z-H*x)*K^T + K*cov(z-H*x,x)
=
P + K*S*K^T + (cov(x,z)-cov(x,H*x))*K^T + K*(cov(z,x)-cov(H*x,x))
=
P + K*S*K^T + (cov(x,z)-P*H^T)*K^T + K*(cov(z,x)-H*P)
= (assuming cov(x,z)=0, cov(z,x)=0)
P + K*S*K^T - P*H^T*K^T - K*H*P
=
P + K*S*(S^-T*H*P^T) - P*H^T*K^T - K*H*P
= (S*S^-T=Identity)
P + K*H*P^T - P*H^T*K^T - K*H*P
= (P symmetric)
P + K*H*P - P*H^T*K^T - K*H*P
=
P-P*H^T*K^T
=
P*(I-H^T*K^T)

The result I am getting is the transpose of the correct formula. That
probably does not matter since the result is symmetric anyway, right?

Thanks
Frank

```
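Keeping the cross terms, as in the expansion above, indeed lands on the textbook result. A NumPy sketch with random illustrative matrices confirms that P + K*S*K^T - P*H^T*K^T - K*H*P equals (I - K*H)*P for the optimal gain:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random illustrative system: 3 states, 2 measurements.
A = rng.standard_normal((3, 3))
P = A @ A.T + 3.0 * np.eye(3)       # prior covariance (positive definite)
H = rng.standard_normal((2, 3))     # measurement model
B = rng.standard_normal((2, 2))
R = B @ B.T + np.eye(2)             # measurement noise covariance

S = H @ P @ H.T + R                 # innovation covariance
K = P @ H.T @ np.linalg.inv(S)      # optimal Kalman gain
I = np.eye(3)

# Expansion with both cross-covariance terms kept:
lhs = P + K @ S @ K.T - P @ H.T @ K.T - K @ H @ P
print(np.allclose(lhs, (I - K @ H) @ P))
```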
```
"Frank Neuhaus" <fneuhaus@uni-koblenz.de> wrote in message
news:g6pl7p$l7k$1@cache.uni-koblenz.de...
> [cut...]
>
> The result I am getting is the transpose of the correct formula. That
> probably does not matter since the result is symmetric anyway, right?

Oh cool, I just got it without the transposition by not replacing K^T but by
replacing K, and then cancelling out S^-1*S instead of S*S^-T:

P + K*S*K^T - P*H^T*K^T - K*H*P
=
P + P*H^T*S^-1*S*K^T - P*H^T*K^T - K*H*P
=
P + P*H^T*K^T - P*H^T*K^T - K*H*P
=
P - K*H*P
=
(I - K*H)*P

OK, seems to be right now :-) Thanks a lot again

```
```
On Jul 29, 7:21 pm, "Frank Neuhaus" <fneuh...@uni-koblenz.de> wrote:
> Hi
>
> I've got a question on one equation of the Kalman filter.
[cut...]
> The Kalman gain is computed as K = P * H^T * S^-1
[cut...]

I would say that the equation K = P * H^T * S^-1 uses the posterior
(analysed) covariance. Use K = P * H^T * (H * P * H^T + R)^-1 instead.

Take care
Pavel
```
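Pavel's remark is that the P in the gain formula must be the prior covariance. With illustrative numbers (the same hypothetical 2-state, 1-measurement system as above) one can check that forming the gain from the posterior covariance instead gives a different matrix:

```python
import numpy as np

# Illustrative values: 2 states, 1 measurement.
P_prior = np.array([[2.0, 0.3], [0.3, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[0.5]])

S = H @ P_prior @ H.T + R
K = P_prior @ H.T @ np.linalg.inv(S)       # correct: gain from the prior P
P_post = (np.eye(2) - K @ H) @ P_prior     # posterior covariance

K_wrong = P_post @ H.T @ np.linalg.inv(S)  # gain formed from the posterior P
print(np.allclose(K, K_wrong))             # the two gains differ
```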