Dear all,

I thought to post this message here because I have asked everybody I could ask, looked in all the books I could find, and ... because I just want to talk about it. :) If you have encountered a similar problem, I would really appreciate any discussion.

I'm working on implementing a Kalman filter in numerical code (Fortran). I have a very large linear system (thousands of equations). I programmed a steady-state Kalman filter that works more or less OK. I have not worked on model reduction yet, but I'm planning to do it very soon.

The question I want to ask is about the choice of time step. To get good noise reduction in my calculations I need to set a very small time step (~1 microsecond); the smaller the time step, the better the results. I would like to get the same good results with a time step of 100 microseconds. If I succeed, my approach can be used in real experiments.

My idea is to try tuning the plant noise (input noise) covariance matrix. Because I work with the full-size model, I know the input noise covariance matrix exactly; it's a 1000x1000 matrix. However, a large time step introduces some numerical error that may also need to be accounted for in the plant covariance matrix. I'm thinking in this direction at the moment.

Well, I could talk about this a lot, but I think I have given enough information for an expert to tell me whether I'm on the right track or not. Maybe I need to focus on model reduction first and then play with the time step choice? I do not know.

Thanks,
OKH

PS: English is my second language. If I was not clear in my writing, please let me know and I'll try to explain the problem better.
Very small time step in Discrete Kalman filter
Started by ●June 24, 2006
Reply by ●June 27, 2006
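[Editor's sketch, not from the thread.] OKH's covariance-rescaling idea can be illustrated with a hypothetical scalar stand-in for the 1000x1000 plant-noise matrix: for a random-walk state driven by continuous white noise of spectral density qc, the exact discrete process-noise variance is Qd = qc*dt, so a covariance tuned at a 1 us step cannot be reused unchanged at 100 us.

```python
# Hedged sketch: a scalar stand-in for a large plant-noise covariance.
# For a random walk x' = w with noise spectral density qc, the exact
# discrete process-noise variance over a step dt is Qd = qc * dt, so
# enlarging dt from 1 us to 100 us must enlarge Qd by a factor of 100.

def discrete_process_noise(qc, dt):
    """Discrete plant-noise variance for a random-walk state."""
    return qc * dt

qd_fast = discrete_process_noise(qc=2.0, dt=1e-6)    # 1 microsecond step
qd_slow = discrete_process_noise(qc=2.0, dt=100e-6)  # 100 microsecond step
ratio = qd_slow / qd_fast
print(ratio)  # the required covariance grows by the step ratio
```

For the full matrix case with nontrivial dynamics, the linear-in-dt rule above is only the leading term; Van Loan's matrix-exponential method gives the exact discrete Qd for any step size.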
I would look at the noise model first. I would guess that you are using a Gaussian noise model. As you run the KF faster, the noise it sees appears more Gaussian (that is probably what the ADC noise looks like). What happens is that most systems will have some sort of bandlimited noise, and as you sample faster, you need many more data points to see the true noise structure. Most KF experts will tell you that if the noise model is correct (and generally non-Gaussian), the KF should run better at a slower sampling rate, and performance will degrade as you increase the sampling rate (since the noise will start to look Gaussian).

OKH wrote:
> [original message snipped]
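[Editor's sketch, with assumed numbers not taken from the thread.] The reply's point about bandlimited noise can be made concrete: for first-order Gauss-Markov noise with correlation time tau, consecutive samples have correlation exp(-dt/tau). At a 1 us step the samples a standard KF treats as independent are almost perfectly correlated, while at 100 us they are nearly white.

```python
import math

# Hypothetical bandlimited-noise model: first-order Gauss-Markov noise
# with an assumed 50 us correlation time. The lag-1 correlation between
# consecutive noise samples is rho(dt) = exp(-dt / tau), so fast sampling
# sees strongly correlated noise, violating the white-noise assumption
# behind the standard Kalman filter update.
tau = 50e-6  # assumed correlation time, 50 microseconds

rho_fast = math.exp(-1e-6 / tau)    # 1 us step
rho_slow = math.exp(-100e-6 / tau)  # 100 us step
print(f"lag-1 correlation at 1 us:   {rho_fast:.3f}")   # ~0.980
print(f"lag-1 correlation at 100 us: {rho_slow:.3f}")   # ~0.135
```

Under this assumed model, either sampling slowly enough that the noise decorrelates, or augmenting the state to model the colored noise explicitly, keeps the filter's assumptions honest.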