Kalman filter - Python implementation

Started by grzegorz g. June 8, 2010
On 6/9/2010 11:24 AM, Michael wrote:
> Jerry,
>
> I did not insult the poster or the fact that he is using Python as his
> programming language of choice - "brickbats" is harsh to say the least.
> I am, however, keenly aware of the tone of his last message.
>
> It is okay to be frustrated with a problem you are trying to solve, but
> carrying that frustration into a public forum where you are asking
> individuals for their time and their experience is another matter.
>
> Michael.
> --
> Finding Nemo, Sharkbait - "He's probably American!"
Michael,

Yours was neither the first nor the most offensive letter critical of his choice of language. Assume that his annoyance arose from the accumulation, not from your response alone. We tend to observe certain standards here, communicating in English and Matlab chief among them. Nevertheless, we should all recognize that this is an international forum and that there are many legitimate programming languages.

All that said, nobody here is under any obligation to read a long program or routine. When a question begins with a few pages of code in any language, I usually ignore it.

Jerry
--
Engineering is the art of making what you want from things you can get.
Tim Wescott wrote:

> Matlab is _expensive_.
Octave is a cheap alternative.
> While it delivers considerable value, there are
> free alternatives that do as well or better than Matlab. From a
> standpoint of how much you gain vs. how much you pay, Matlab is _horrible_.
I agree. For prototyping in a high-level language I usually use OCaml or Haskell, but I would never expect anyone not familiar with those languages to look at my code.

Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
On 06/09/2010 03:34 PM, Erik de Castro Lopo wrote:
> Tim Wescott wrote:
>
>> Matlab is _expensive_.
>
> Octave is a cheap alternative.
>
>> While it delivers considerable value, there are
>> free alternatives that do as well or better than Matlab. From a
>> standpoint of how much you gain vs. how much you pay, Matlab is _horrible_.
>
> I agree.
I started using Scilab when the Octave project was in a shambles. Now the Scilab project is behind (still no debugger...), but I'm used to it.

--
Tim Wescott
Control system and signal processing consulting

grzegorz (Original Poster),

With respect to your Python code, I suggest you have a look at the line "IS = R + dot(H, dot(P, H.T))" in your "update" function definition.  This goes back to a point raised earlier in this thread.

As suggested by Michael (Plante), taking time to "Describe the problem (inputs/outputs/model) of the filter..." would help us understand the framework of your problem.

Your code fails, I believe, because the "IS" matrix becomes ill-conditioned as the observations are processed.  From what I can tell, it stops at the 5th iteration.  I think the root problem lies in the respective sizes of the principal process/model vectors and matrices you are using.

The "dot(H, dot(P, H.T))" portion of the above equation determines the innovation (or residual) covariance update.  Looking at the sizes of the measurement/observation model vector (H) and the predicted (a priori) covariance estimate (P) points to the likely underlying issue.

"H", as you've defined it in your code, is 1-by-3 and "P" is 3-by-3.  With reference to "dot(H, dot(P, H.T))", the size of the innovation covariance update is then:

1-by-3 times 3-by-3 times 3-by-1, yielding a 1-by-1 (scalar) result.  Should it not be a 3-by-3 matrix result, in light of the 3-by-3 measurement covariance matrix "R"?
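This is easy to check directly in NumPy (a quick sketch; the shapes below are taken from the posted model, but the variable names are mine):

```python
import numpy as np

H = np.array([[1.0, 0.0, 0.0]])   # 1-by-3 measurement model, as posted
P = 0.01 * np.eye(3)              # 3-by-3 covariance estimate

HPHt = np.dot(H, np.dot(P, H.T))  # (1,3) x (3,3) x (3,1)
print(HPHt.shape)                 # -> (1, 1), i.e. effectively a scalar
```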

I am not 100% sure how NumPy handles the dimensions of the "array" and "mat" (matrix) types.  In the event the measurement model vector's dimensions are reversed, then you have:

3-by-1 times 3-by-3, which is an invalid matrix product, since the inner dimensions (1 and 3) do not match.

With the scalar result, the same value is added to all elements of the measurement covariance matrix "R".  With continued processing, your innovation covariance grows, resulting in an ill-conditioned matrix "IS" that cannot be inverted.
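NumPy's broadcasting makes this kind of mismatch easy to miss: adding a 1-by-1 array to the 3-by-3 "R" silently adds the scalar to every element instead of raising an error. A small sketch (the 0.5 below is an illustrative value, not one from the original run):

```python
import numpy as np

R = np.eye(3)              # 3-by-3 measurement covariance, as posted
HPHt = np.array([[0.5]])   # 1-by-1 innovation term from H P H^T

IS = R + HPHt              # broadcasting: 0.5 is added to EVERY entry
print(IS)
# the off-diagonal entries are now 0.5, not 0, so "IS" no longer has
# the structure of a measurement covariance and degrades as P grows
```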

Perhaps you can flesh out the vectors and matrices involved with your process and measurement models, as suggested by Michael (Plante).  From what I can see, you have:


State Transition Matrix (A):  
[[ 1.   1.   0.5]
 [ 0.   1.   1. ]
 [ 0.   0.   1. ]]

Process Noise Covariance Matrix (Q):  
[[ 1.0  0.   0. ]
 [ 0.   1.0  0. ]
 [ 0.   0.  1.0]]

Measurement/Observation Model (H):  
[1 0 0]

Measurement Noise Covariance Model (R):  
[[ 1.0  0.   0. ]
 [ 0.   1.0  0. ]
 [ 0.   0.   1.0]]

Initial Covariance Estimate:  
[[ 0.01  0.  0.]
 [ 0.  0.01  0.]
 [ 0.  0.  0.01]]

Initial State Estimate:
[0 0 0]
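If "H" really is 1-by-3 (a scalar position measurement), then "R" should be 1-by-1 rather than 3-by-3. Here is a minimal, dimensionally consistent predict/update sketch built from the model above. The variable names, the scalar R, and the measurement values are my own assumptions for illustration, not a fix to the original poster's code:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.5],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])    # state transition, as posted
Q = np.eye(3)                      # process noise covariance, as posted
H = np.array([[1.0, 0.0, 0.0]])    # 1-by-3: we observe position only
R = np.array([[1.0]])              # 1-by-1 to match H (assumed, not as posted)

x = np.zeros((3, 1))               # initial state estimate
P = 0.01 * np.eye(3)               # initial covariance estimate

def predict(x, P):
    x = np.dot(A, x)
    P = np.dot(A, np.dot(P, A.T)) + Q
    return x, P

def update(x, P, z):
    IS = R + np.dot(H, np.dot(P, H.T))       # 1-by-1 innovation covariance
    K = np.dot(P, H.T) / IS[0, 0]            # 3-by-1 Kalman gain
    x = x + K * (z - np.dot(H, x)[0, 0])     # state update
    P = np.dot(np.eye(3) - np.dot(K, H), P)  # covariance update
    return x, P

for z in [0.9, 2.1, 3.0, 4.2, 4.9]:          # made-up position measurements
    x, P = predict(x, P)
    x, P = update(x, P, z)
```

Because "IS" is 1-by-1 here, the inverse reduces to a scalar division, which also sidesteps the conditioning problem entirely.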


Application details would be helpful...  There are also sections of your code that you may want to clarify.  For example:

    # temporary freq. and modulation factor - not important if H = [1 0 0]
    for n in range(1, n_iter):  # for n = 0 -> omega, alfa = 0
    for n in range(n_iter):
        # X[n][1] = omega_list[n]
        # X[n][2] = alfa_list[n]

This is used to generate the vector "X" later on in your code to be used as your measurement or observations vector.

I hope all or part of this helps...


Thanks for the help. The problem was the state matrix (A); after transposition it works. (A stupid error, but difficult to find... the first few good predictions made me think the matrices were OK.)