Simple Kalman filter
In a sense, a Kalman filter can predict the near future from the past and the current condition. (Maybe this is a bit too much to say; of course it works only to some extent and with limitations.) Let's write it as an equation.

x_{new} = x_{old} + f(x_{current})

This means that the future is, to some degree, an extension of the previous state with a modification, and the modification is hidden in the function f. OK, I am not an expert on the Kalman filter, so I will write down only a simple one that I can handle.
In a simple Kalman filter, x_{new} is predicted from the past x_i. The past data span a subspace, and the new value is the best projection onto that past subspace. That is how I understand it. (This might be wrong.) In Strang's book, an interesting computation method is explained, but only as a sketch. Here I will write an overview of the idea in Strang's book.
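To make the projection picture a bit more concrete (this small illustration is mine, not from Strang's text): fitting one constant c to the data x_1, ..., x_n by least squares means projecting the vector x = (x_1, ..., x_n)^T onto the line spanned by the all-ones vector a = (1, ..., 1)^T. The normal equation a^T a \hat{c} = a^T x gives

\hat{c} = \frac{a^T x}{a^T a} = \frac{1}{n} \sum_{i=1}^{n} x_i

so the best projection is exactly the average we met in the previous two entries.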
First, I need a preparation about 1/k:

\frac{1}{k} - \frac{1}{k-1} = \frac{(k-1) - k}{k(k-1)} = -\frac{1}{k(k-1)}

Therefore,

\frac{1}{k} = \frac{1}{k-1} - \frac{1}{k(k-1)}

We saw in the last two blog entries that the best estimate of a sequence of data, in the least squares sense, is the average. Using the above equation, I will rewrite the average equation:

\frac{1}{n} \sum_{i=1}^{n} x_i = \frac{1}{n} \left( \sum_{i=1}^{n-1} x_i + x_n \right) = \frac{1}{n-1} \sum_{i=1}^{n-1} x_i + \frac{1}{n} \left( x_n - \frac{1}{n-1} \sum_{i=1}^{n-1} x_i \right)

You may ask why rewrite it like this. I will explain the reason shortly.
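A quick numeric check of the rewrite (my own example): take x_1 = 1, x_2 = 2, x_3 = 3. The average of the first two is 3/2, and the right hand side with n = 3 gives

\frac{3}{2} + \frac{1}{3} \left( 3 - \frac{3}{2} \right) = \frac{3}{2} + \frac{1}{2} = 2

which is indeed the average of all three values.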
Now I have enough material to explain what I want to do. We have a sequence of data from i = 1 to n, and more data x_{n+1}, x_{n+2}, ... will come later. If we need to process the whole history of the data to predict the near future, that is not so good: as time passes, we need linearly more computation. What we want is a summary of the history plus the current state, so that from these two alone we can compute the best next state (best in the least squares sense). If we can do that, the computation cost per step is always the same, which is nice for a realtime system. This brings us back to x_{new} = x_{old} + f(x_{current}): we want to know the near future from the old and current states. That is why we made an equation that separates n-1 and n (please look at the equation again: it has the sum over i = 1 to n-1, plus the term x_n). Here, \frac{1}{n-1} \sum_{i=1}^{n-1} x_i is the average of the past, so I rewrite it as x_{old} and write x_n as x_{current}:

x_{new} = x_{old} + \frac{1}{n} \left( x_{current} - x_{old} \right)
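Here is a tiny Python sketch (my own check, not code from Strang's book) that runs this O(1)-per-step update and verifies that it reproduces the batch average:

```python
# A minimal sketch (my own, not from Strang's book) of the constant-cost
# update x_new = x_old + (1/n) * (x_current - x_old), checked against
# the batch average computed from the whole history.
import random

data = [random.gauss(10.0, 2.0) for _ in range(1000)]

x_old = 0.0
for n, x_current in enumerate(data, start=1):
    # only the summary x_old and the current sample are needed
    x_old = x_old + (x_current - x_old) / n

batch_average = sum(data) / len(data)
print(x_old, batch_average)  # the two values agree up to rounding
assert abs(x_old - batch_average) < 1e-9
```

Note that the loop body touches only two numbers, x_old and x_current, no matter how long the history grows; that is exactly the realtime-friendly property described above.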
Now you see this is the best prediction in the least squares sense, using only the summarized history of the last step and the current state. This is wow.
In these three articles, we saw least squares from two points of view: calculus (analysis) and linear algebra. They are the same. We also saw one of its applications, the Kalman filter.