DSPRelated.com
Forums

Indirect Kalman filter vs. direct Kalman filter

Started by adelaide July 9, 2012
Hi, 
I have implemented one direct and one indirect Kalman filter to integrate
an INS with a wheel encoder. They both work, but the results of my indirect
KF are more accurate than those of the direct KF. I've searched quite a bit
for why that is, but I couldn't find a sensible reason. I'd appreciate it a
lot if anyone here could help me with that.

Ladan Sahafi
On Mon, 09 Jul 2012 00:59:47 -0500, adelaide wrote:

> Hi,
> I have implemented one direct and one indirect Kalman filters to
> integrate INS with wheel encoder. They work fine. The results of my
> indirect KF is more accurate than those of the direct KF. I've search
> quite a bit about why that is. I couldn't find a sensible reason though.
> I'd appreciate it a lot if anyone here could help me with that.
I should know, but I don't: What's an indirect Kalman filter?

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
>On Mon, 09 Jul 2012 00:59:47 -0500, adelaide wrote:
>
>> Hi,
>> I have implemented one direct and one indirect Kalman filters to
>> integrate INS with wheel encoder. They work fine. The results of my
>> indirect KF is more accurate than those of the direct KF. I've search
>> quite a bit about why that is. I couldn't find a sensible reason though.
>> I'd appreciate it a lot if anyone here could help me with that.
>
>I should know, but I don't:
>
>What's an indirect Kalman filter?
An indirect Kalman filter is one whose state vector contains error states, rather than the whole states themselves.
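To make that one-line definition concrete, here is a toy sketch (all numbers invented) of what the two state vectors hold. The direct filter carries the whole states; the indirect filter carries only corrections to a nominal trajectory that is propagated outside the filter:

```python
import numpy as np

# Direct (whole-state) filter: the state vector IS the quantity of interest.
x_direct = np.array([10.0, 1.5])      # [position, velocity]

# Indirect (error-state) filter: a nominal trajectory is propagated outside
# the filter (e.g. by INS dead reckoning); the filter estimates only its error.
x_nominal = np.array([10.2, 1.6])     # dead-reckoned by the INS
x_error = np.zeros(2)                 # [position error, velocity error]

# The navigation output is the nominal corrected by the estimated error.
x_output = x_nominal - x_error
```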
On Mon, 09 Jul 2012 20:37:32 -0500, adelaide wrote:

>>On Mon, 09 Jul 2012 00:59:47 -0500, adelaide wrote:
>>
>>> Hi,
>>> I have implemented one direct and one indirect Kalman filters to
>>> integrate INS with wheel encoder. They work fine. The results of my
>>> indirect KF is more accurate than those of the direct KF. I've search
>>> quite a bit about why that is. I couldn't find a sensible reason though.
>>> I'd appreciate it a lot if anyone here could help me with that.
>>
>>I should know, but I don't:
>>
>>What's an indirect Kalman filter?
>>
> An indirect Kalman filter is the one that uses error states as opposed
> to the states in the state vector.
It seems like an odd distinction -- I didn't realize there was any other kind.

--
Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
On Monday, July 9, 2012 1:59:47 AM UTC-4, adelaide wrote:
> Hi,
> I have implemented one direct and one indirect Kalman filters to integrate
> INS with wheel encoder. They work fine. The results of my indirect KF is
> more accurate than those of the direct KF. I've search quite a bit about
> why that is. I couldn't find a sensible reason though. I'd appreciate it a
> lot if anyone here could help me with that.
>
> Ladan Sahafi
Superior performance of error states can arise from different reasons in different applications. Not knowing your specific configuration, I can envision two possibilities -- 1) feedforward and 2) linearization. I'll describe the role of each, in that order.

FEEDFORWARD

For simplicity, first consider an elementary example such as Figure 2.2 shown in http://jameslfarrell.com/wp-content/uploads/2012/07/ERRATAA.pdf

The diagram could represent translational motion along a line or rotation about one axis. In either case X1 and X2 are the complete excursion and its rate, respectively, with circumflexes ( ^ ) above to denote estimated values; lower case x1 and x2 are the error-state adjustments. The derivative information (in the figure, supplied by speed data) helps the estimator keep up with its dynamics. Because the product (speed * time increment) is fed forward -- independent of the estimation -- to the summation every updating interval, the Kalman estimator doesn't have to keep up with the dynamics; it needs only enough responsiveness to keep up with the error in its own perception of speed (a far less demanding requirement).

For the case of rotation about one axis, the rate information often comes from a tachometer. In your system it evidently comes from inertial data; the feedforward principle still holds. I could speculate that one of your configurations (sight unseen) capitalizes on the aiding just described while the other doesn't fully exploit it. If that speculation doesn't apply then, in a full 3-dimensional attitude algorithm, inexact linearization could play a role. I'll briefly address that subsequently but, before leaving the aiding topic, its importance is worth stressing. In many mechanizations, the improvement offered by feedforward is dramatic. GPS/inertial integration, for example, can have loose coupling, tight coupling, or ultra-tight coupling.
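The feedforward point can be illustrated with a minimal 1-D sketch (this is not Farrell's Figure 2.2; all sensor parameters here are invented). The dead-reckoned position is integrated outside the filter from biased speed data, while a two-element error-state filter [position error, speed error] absorbs a periodic position fix. The filter only has to track the slowly varying error, not the vehicle's motion:

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                # integration step [s]
n = 5000                 # 50 s of data
speed_bias = 0.2         # unknown speed-sensor bias [m/s] (invented value)
speed_noise = 0.05       # speed-sensor noise std [m/s]
fix_noise = 0.5          # position-fix noise std [m]

# Error-state filter: x = [position error, speed error].
F = np.array([[1.0, dt],
              [0.0, 1.0]])            # error-state transition
Q = np.diag([1e-6, 1e-6])             # small process noise on the errors
H = np.array([[1.0, 0.0]])            # a position fix observes the position error
R = np.array([[fix_noise ** 2]])

x = np.zeros(2)
P = np.eye(2)
pos_true = pos_hat = 0.0
true_speed = 1.0

for k in range(n):
    pos_true += true_speed * dt
    meas_speed = true_speed + speed_bias + speed_noise * rng.standard_normal()
    pos_hat += meas_speed * dt        # feedforward: speed integrated outside the filter

    x = F @ x                         # the filter propagates only the error estimate
    P = F @ P @ F.T + Q

    if k % 100 == 99:                 # one position fix per second
        z = pos_true + fix_noise * rng.standard_normal()
        y = (pos_hat - z) - H @ x     # observed error minus predicted error
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

corrected = pos_hat - x[0]            # feed the error estimate back
raw_err = abs(pos_hat - pos_true)     # raw dead reckoning drifts ~ bias * time
kf_err = abs(corrected - pos_true)    # error-state correction removes most of it
```

Running this should show the raw dead-reckoned position drifting by roughly (bias * elapsed time), with the error-state correction removing nearly all of that drift -- the filter never had to model the vehicle's actual motion.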
Another possibility, deep integration with FFTs (http://jameslfarrell.com/gps-gnss/gpsfft), is even more sophisticated, but the immediate discussion will concern only the difference between tight and ultra-tight. GPS measurements aid the INS in both cases but, in the latter, the short-term performance of the INS aids the receiver track loops. That allows those loops to be narrowband, thereby averaging the noise over much longer durations. The improvement in signal-to-noise ratio can exceed 20 dB.

LINEARIZATION

Your system, with an INS, includes a 3-D attitude computation. Volumes have been written on those algorithms -- truncation, noncommutativity, coning, pseudoconing, etc. Books noted on the site cited above (http://jameslfarrell.com/published-books-gnss-aided-navigation-and-tracking) are among those volumes; whatever is in your company's library can clarify the ramifications. Without attempting to reproduce that depth here, this much can be said: that 3-D computation inevitably contains linearization approximations. They work successfully -- and they are most accurate when the angular increments being processed are smallest. Immediately that favors error states. I've experienced that benefit in trackers also; it is described in Chapter 9 of the 2007 book just cited.
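The linearization point can be seen numerically with a standalone sketch (an illustrative example, not taken from the books cited above): compare the exact rotation matrix from Rodrigues' formula against the first-order approximation I + [theta]x that error-state propagation relies on. The discrepancy grows roughly with the square of the angle, so the small increments handled by an error-state filter are linearized far more accurately than whole-state excursions:

```python
import numpy as np

def skew(v):
    # Cross-product (skew-symmetric) matrix of a 3-vector
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rot_exact(theta):
    # Exact rotation matrix via Rodrigues' formula for rotation vector theta
    a = np.linalg.norm(theta)
    if a < 1e-12:
        return np.eye(3)
    K = skew(theta / a)
    return np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * K @ K

def rot_linear(theta):
    # First-order (linearized) approximation used in error-state propagation
    return np.eye(3) + skew(theta)

def lin_error(theta):
    # Frobenius-norm discrepancy between exact and linearized rotation
    return np.linalg.norm(rot_exact(theta) - rot_linear(theta))

axis = np.array([0.3, 0.5, 0.8])
axis /= np.linalg.norm(axis)

small = lin_error(1e-3 * axis)   # a typical error-state angular increment [rad]
large = lin_error(0.5 * axis)    # a whole-state excursion [rad]
```

With the 1 mrad increment the linearization error is negligible; at half a radian it is several orders of magnitude larger, which is exactly why processing small error-state increments keeps the attitude linearization accurate.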