> Thanks Tim,
>
> You have given me a rather informative lesson on the filters! :-)
>
> Well, I was surprised because the book, by Chui and Chen, explicitly
> states the condition that unbiasedness holds **ONLY IF** x_{0/0} =
> E[x_{0}] is satisfied. Is this being too stringent?
>
> I always think that, like you said, "the "sub-optimal" solution will
> converge on the actual solution fairly rapidly, and ..... any error
> from your starting values will get lost in the noise", thereby making
> the estimates unbiased as time goes on.
>
> Q1. So who is right? Chui and Chen seem to think that if the initial
> estimates are given wrongly, then all estimates from the Kalman filter
> will be biased. I always thought that, even if the initial estimates
> are given wrongly (or arbitrarily assigned 0 with large variance),
> after some time the estimates will be unbiased again.
>
> Please enlighten.
>
> Thank you once again.
>
I think we're all correct, except in how we're interpreting "bias" and
"always". You and I agree that for a stable filter the bias will
eventually be lost in the noise. I suspect that Chui and Chen will
point out that even if it's down in the noise the bias is still _there_,
mathematically. If you tied them down and threatened them with lit
cigarettes I suspect that they would agree that the bias may not matter
after a while, but they would still insist that it's _there_.
Note that I only see the bias diminishing for a _stable_ filter! If
your filter is metastable (i.e. if you're trying to track an integrating
process, such as you do with an inertial nav system) then the bias
_never_ goes away, and you must start with good initial conditions.
Aircraft with inertial nav systems must spend some time sitting on the
pad at a roughly correct heading and a well-surveyed location while the
filters find their brains -- if this isn't done then the nav solution
has a permanent bias that can't be taken out without repeated external
fixes.
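[Editor's note: a minimal scalar sketch of the "bias gets lost in the noise" point above. The model, numbers, and function name are mine, not from the thread. Because the Kalman gain sequence doesn't depend on the data, the initial estimation bias is simply multiplied by the running product of (1 - K_i), which shrinks toward zero for a stable filter.]

```python
def kalman_bias_decay(steps=50, q=1e-4, r=1.0, x0_err=10.0, p0=1.0):
    """Deterministic bias at each step for a scalar filter with
    model x_k = x_{k-1} + w (var q), measurement z_k = x_k + v (var r),
    when the initial estimate is off by x0_err."""
    p = p0
    bias = x0_err            # E[x_hat_0 - x_0], the initial estimation bias
    history = [bias]
    for _ in range(steps):
        p = p + q            # time update of the error covariance
        k = p / (p + r)      # Kalman gain
        p = (1.0 - k) * p    # measurement update of the covariance
        bias = (1.0 - k) * bias  # bias recursion: shrinks by (1 - K_i)
        history.append(bias)
    return history

h = kalman_bias_decay()
print(h[0], h[-1])   # initial bias vs. bias after 50 updates
```

With these (arbitrary) numbers the bias falls by roughly an order of magnitude over 50 updates and keeps shrinking; it never becomes exactly zero, which is the sense in which Chui and Chen's "only if" is still mathematically true.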
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Reply by jionglong●August 8, 2005
Thanks Tim,
You have given me a rather informative lesson on the filters! :-)
Well, I was surprised because the book, by Chui and Chen, explicitly
states the condition that unbiasedness holds **ONLY IF** x_{0/0} =
E[x_{0}] is satisfied. Is this being too stringent?
I always think that, like you said, "the "sub-optimal" solution will
converge on the actual solution fairly rapidly, and ..... any error
from your starting values will get lost in the noise", thereby making
the estimates unbiased as time goes on.
Q1. So who is right? Chui and Chen seem to think that if the initial
estimates are given wrongly, then all estimates from the Kalman filter
will be biased. I always thought that, even if the initial estimates
are given wrongly (or arbitrarily assigned 0 with large variance),
after some time the estimates will be unbiased again.
Please enlighten.
Thank you once again.
Reply by Tim Wescott●August 8, 2005
jionglong wrote:
> Hi,
>
> Thanks for your help.
>
> My problem is that in Kalman Filtering with Real-Time Applications by
> Chui and Chen,
>
> x_{k/k} is an unbiased estimate of x_{k} ONLY IF x_{0/0} = E[x_{0}]
>
> In its proof, E{ x_{k/k} - x_{k} } is a product of known matrices and
> E{ x_{0} - x_{0/0} }. Only by setting E{ x_{0/0} } = E{ x_{0} } do we
> get
>
> E{ x_{k/k} - x_{k} } = 0 i.e. unbiased
>
> ------------------------------------------------------------------------------------------------------------------------------------
>
> I am indeed surprised by the stringent requirement here!
>
I'm surprised by your surprise. It seems rather obvious to me. Indeed,
I suspect one of the reasons for the formalism of the Kalman filter is
to incorporate one's a-priori knowledge of the initial states into the
solution.
What are you trying to do, and is a biased estimate a problem or not?
You are, in general, going to see one of three conditions:
1. The filter will be stable and the estimate will converge fast enough
that you won't care about the asymptotically decreasing bias. Think of
a paper plant where you have to throw out the first 100 feet of paper anyway.
In nearly all of the problems covered by this case you may not see a
"Kalman" filter at all -- the problem may only require a Wiener filter,
or it may not need anything close to that level of formalism.
2. The filter will be stable and the estimate will converge slowly
enough that the bias _will_ cause you problems. Think of an inverted
pendulum that will fall over if you don't control it correctly from the
very start.
3. The filter is unstable or metastable and the estimate _won't_
converge _ever_. Think of inertial nav systems without help from GPS or
other outside references.
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Reply by jionglong●August 8, 2005
Hi,
Thanks for your help.
My problem is that in Kalman Filtering with Real-Time Applications by
Chui and Chen,
x_{k/k} is an unbiased estimate of x_{k} ONLY IF x_{0/0} = E[x_{0}]
In its proof, E{ x_{k/k} - x_{k} } is a product of known matrices and
E{ x_{0} - x_{0/0} }. Only by setting E{ x_{0/0} } = E{ x_{0} } do we
get
E{ x_{k/k} - x_{k} } = 0 i.e. unbiased
------------------------------------------------------------------------------------------------------------------------------------
I am indeed surprised by the stringent requirement here!
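[Editor's note: the "product of known matrices" in the proof can be checked numerically. This scalar sketch is mine (names and numbers are arbitrary): since the gains K_i don't depend on the measurements, two filters fed identical data differ by exactly prod_i (1 - K_i*c)*a times the difference in their initial estimates.]

```python
def run_gains(a, c, q, r, p0, steps):
    """Kalman gain sequence for scalar model
    x_k = a*x_{k-1} + w (var q), z_k = c*x_k + v (var r)."""
    p, gains = p0, []
    for _ in range(steps):
        p = a * p * a + q           # predict covariance
        k = p * c / (c * p * c + r)  # Kalman gain
        p = (1 - k * c) * p          # update covariance
        gains.append(k)
    return gains

a, c, q, r, p0, steps = 0.9, 1.0, 0.1, 1.0, 1.0, 20
gains = run_gains(a, c, q, r, p0, steps)

# Run two estimates through the same gains and the same measurements;
# the data cancels out of the difference, so zeros suffice here.
xa, xb = 0.0, 5.0                    # xb starts 5 units away from xa
for k, zk in zip(gains, [0.0] * steps):
    xa = a * xa + k * (zk - c * a * xa)
    xb = a * xb + k * (zk - c * a * xb)

# Chui and Chen's "product of known matrices", scalar case:
prod = 5.0
for k in gains:
    prod *= (1 - k * c) * a

print(xb - xa, prod)   # these agree: the bias is exactly that product
```

For a stable filter the product contracts, so the bias decays; but for any finite k it is nonzero unless the initial difference was zero, which is exactly the textbook's "only if" condition.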
Reply by Tim Wescott●August 8, 2005
jionglong wrote:
> Hi,
>
> I understand that
>
> The Kalman filter is a linear, recursive estimator that
> produces the minimum variance estimate in a least squares
> sense under the assumption of white, Gaussian noise processes.
>
> Now, some books, e.g. Kalman Filtering with Real-Time Applications by
> Chui and Chen, have added the requirement that
>
> the updated estimate x_{k/k} is an unbiased estimate of x_{k} by
> choosing an appropriate initial estimate, i.e. x_{0/0} = E[x_{0}]
>
> Note that in the textbook by Chui and Chen, normality is not assumed
> for the variables.
>
> Q1. Well, if normality is assumed, then do we still need to require
> x_{0/0} = E[x_{0}] ?
>
> Q2. Suppose we do not know E[x_{0}], and we set x_{0/0} to zero
> (E[x_{0}] <> 0, but we do not know this fact) with a large initial
> variance. Assuming no round-off error, does that mean that our solution
> is always going to be sub-optimal, since we guessed x_{0/0} wrongly?
>
> Thanks and best regards.
>
I assume by "normality" you mean "Gaussian". If you don't assume that
your noise is Gaussian then your optimal signal processing is not, in
general, linear -- but the best _linear_ signal processor will be a
Kalman filter (Van Trees, "Detection, Estimation and Modulation Theory").
If you know the expected value of x at time 0 and its variance then
failing to use it will degrade the optimality of your Kalman filter, of
course.
If you _don't_ have the vaguest notion then your technique of setting x
to zero at zero time with very high initial variance is a more-or-less
correct reflection in math of the physical situation. Your
"sub-optimal" solution will converge on the actual solution fairly
rapidly, and if you know your way around the Kalman filter equations you
can see where any error from your starting values will get lost in the
noise.
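[Editor's note: a tiny sketch (my numbers, not Tim's) of why the zero-guess-with-huge-variance trick works: with P0 much larger than the measurement variance R, the first Kalman gain is nearly 1, so the first estimate is essentially the first measurement and the wrong prior is immediately discarded.]

```python
def first_update(x0, p0, z, q=0.0, r=1.0):
    """One predict-update cycle of a scalar Kalman filter
    (random-walk model), returning (estimate, gain)."""
    p = p0 + q           # predicted covariance
    k = p / (p + r)      # Kalman gain
    return x0 + k * (z - x0), k

est, k = first_update(x0=0.0, p0=1e6, z=7.3)
print(k, est)   # gain ~ 1, estimate ~ 7.3 despite the x0 = 0 guess
```

The larger the stated initial variance, the less weight the filter gives the arbitrary starting value, which is the "more-or-less correct reflection in math" Tim describes.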
If you _do_ know x at time zero and it's important that you have a good
estimate shortly after the whole thing starts going then of course you
should initialize things correctly -- you don't want that Apollo rocket
with the three guys on top to start out by trying to go horizontal,
after all.
--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Reply by jionglong●August 7, 2005
Hi,
I understand that
The Kalman filter is a linear, recursive estimator that
produces the minimum variance estimate in a least squares
sense under the assumption of white, Gaussian noise processes.
Now, some books, e.g. Kalman Filtering with Real-Time Applications by
Chui and Chen, have added the requirement that
the updated estimate x_{k/k} is an unbiased estimate of x_{k} by
choosing an appropriate initial estimate, i.e. x_{0/0} = E[x_{0}]
Note that in the textbook by Chui and Chen, normality is not assumed
for the variables.
Q1. Well, if normality is assumed, then do we still need to require
x_{0/0} = E[x_{0}] ?
Q2. Suppose we do not know E[x_{0}], and we set x_{0/0} to zero
(E[x_{0}] <> 0, but we do not know this fact) with a large initial
variance. Assuming no round-off error, does that mean that our solution
is always going to be sub-optimal, since we guessed x_{0/0} wrongly?
Thanks and best regards.