Implementing Kalman filter for prediction

I have data of people moving a cursor towards a target on a screen (I actually have a lot of data). For example:

[Plot: recorded cursor position data]

The movement is always ~9 cm. So in the case above the individual overshot the target and corrected for it at the end, which tends to happen.

What I want to do is use a Kalman filter to predict the position of the cursor not at t+1, but t+15 (so 15 time steps ahead). Moreover, I think there is an error in my current implementation, this is what I'm getting for the above case:

[Plot: Kalman filter output]

You can see it really gets messed up towards the end, which makes me doubt my implementation. So here are my questions:

1) Can someone look at my code and see if I made an error? Any suggestions for improvements?

2) How would I amend my code so that I can predict many steps ahead (e.g., 15 steps) instead of just one step? On a given iteration I essentially want to predict x(t+15) instead of x(t+1).

I'd really appreciate any help here, I've been stuck on this for a while.

Here's the data: http://s000.tinyupload.com/index.php?file_id=16556755793392871980

Note that you need to divide by 1000 to get it in cm.

Here's my code:

xp=position_data(:,1);
yp=position_data(:,2);

N=length(xp);

Q=eye(4);          % process noise covariance
motion=zeros(4,1); % no control input

% Measurement model: we observe position only
H=[1 0 0 0
    0 1 0 0];
% Constant-velocity transition for the state [x y vx vy]
F=[1 0 1 0
    0 1 0 1
    0 0 1 0
    0 0 0 1];

x=zeros(4,1);
P = eye(4)*1000; %initial uncertainty

observed_x=xp+0.05*rand(N,1).*xp;
observed_y=yp+0.05*rand(N,1).*yp;

% Measurement noise covariance must be 2x2 to match H*P*H'.
% Using the scalar R=0.01^2 adds it to every element of S, not just
% the diagonal, which is a bug.
R=0.01^2*eye(2);

pos=[observed_x,observed_y];

I = eye(4);    % identity matrix
jj=zeros(N,2); %%jj will be the final result

for k=1:N
    % update with the current measurement
    measurement=pos(k,:);
    y = measurement' - H * x;
    S = H * P * H' + R; % residual covariance
    K = P * H' / S;     % Kalman gain (mrdivide avoids explicit inv)
    x = x + K*y;
    P = (I - K*H)*P;

    % predict x and P one step ahead
    x = F*x + motion;
    P = F*P*F' + Q;
    jj(k,:)=x(1:2);
end
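Regarding question 2: since the model has no control input, an n-step-ahead prediction is just the one-step transition applied n times, i.e. x(t+15|t) = F^15 * x(t|t), with the covariance propagated through each step. Here is a minimal sketch of that idea in Python/NumPy (the state layout and matrices mirror the MATLAB code above; in MATLAB it would be F^15*x after the measurement update):

```python
import numpy as np

def predict_ahead(x, P, F, Q, steps=15):
    """Propagate state and covariance `steps` steps ahead (no control input)."""
    for _ in range(steps):
        x = F @ x
        P = F @ P @ F.T + Q
    return x, P

# Same constant-velocity model as above: state is [px, py, vx, vy]
F = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = np.eye(4)

x = np.array([0.0, 0.0, 0.1, 0.2])  # example filtered state
P = np.eye(4) * 1000.0

x15, P15 = predict_ahead(x, P, F, Q, steps=15)
# With constant velocity, position advances 15 * velocity:
# x15[:2] -> [1.5, 3.0]
```

You would call this after the measurement update on each iteration and store x15[:2] instead of the one-step prediction; note P15 (and hence the honest uncertainty of the prediction) grows with every extra step.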

I haven't checked the code of your Kalman filter but, assuming it is correct, what you call residual covariance, and the resulting gain, do not depend on the observations, but only on the original settings and how long it has been running. I like to think of the Kalman filter as keeping track of an unknown quantity by updating its mean and variance. The mean depends on the observations, but the variance does not.
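This is easy to verify: the covariance/gain recursion never touches a measurement, so you can run it on its own. A short Python/NumPy sketch using the same model matrices (R taken as 0.01^2 on the diagonal, as the question intended):

```python
import numpy as np

# The gain recursion uses only the model, never the observations
F = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4)
R = 0.01**2 * np.eye(2)
P = np.eye(4) * 1000.0

gains = []
for _ in range(50):
    S = H @ P @ H.T + R                    # residual covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    P = (np.eye(4) - K @ H) @ P            # measurement update
    P = F @ P @ F.T + Q                    # time update
    gains.append(K)

# K converges to a steady-state value, regardless of what the cursor did
print(np.allclose(gains[-1], gains[-2], atol=1e-6))  # True
```

So after a short transient the filter applies a fixed gain, which is exactly why it cannot adapt when the cursor's behaviour suddenly changes.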

What is happening in real life is that the user's cursor is fairly predictable until it hits the end of the screen, then it changes drastically, and then it becomes predictable again. No single setting for the gain / observation variance will work well in all three regions. I would try to detect periods when the error suddenly increases and then drastically increase the uncertainty, so that the filter does not assume that, just because it has tracked the cursor smoothly all the way along the screen, the residual uncertainty is very small and the motion will continue past the edge of the screen. For that matter, predictions that continue past the edge of the screen are a pretty good tip-off that they are not going to be accurate and that you should increase the uncertainty.

You could also try using all the data you have collected to find the initial uncertainty settings that lead to the smallest error - or, if you reset the uncertainty when the predictions start to go off the end of the screen, the initial and reset uncertainty settings that lead to the smallest errors.
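That search can be done directly: sweep a noise/uncertainty parameter, run the filter on recorded trajectories, and keep the setting with the lowest 15-step-ahead error. A minimal Python/NumPy sketch, assuming a toy exponential-approach trajectory as a stand-in for the real cursor data (the recorded data would replace it):

```python
import numpy as np

# Toy stand-in for a recorded ~9 cm cursor reach (hypothetical data)
rng = np.random.default_rng(0)
t = np.arange(200)
true_pos = np.column_stack([9.0 * (1 - np.exp(-t / 40.0)), np.zeros(len(t))])
observed = true_pos + 0.05 * rng.standard_normal(true_pos.shape)

F = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
R = 0.05**2 * np.eye(2)
F15 = np.linalg.matrix_power(F, 15)

def fifteen_step_error(q_scale):
    """Mean 15-step-ahead position error for a given process-noise scale."""
    Q = q_scale * np.eye(4)
    x, P = np.zeros(4), np.eye(4) * 1000.0
    errs = []
    for k in range(len(observed)):
        # measurement update
        y = observed[k] - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        # score the 15-step-ahead prediction against the future truth
        if k + 15 < len(observed):
            errs.append(np.linalg.norm((F15 @ x)[:2] - true_pos[k + 15]))
        # one-step predict for the next iteration
        x = F @ x
        P = F @ P @ F.T + Q
    return float(np.mean(errs))

scales = [1e-4, 1e-3, 1e-2, 1e-1, 1.0]
best = min(scales, key=fifteen_step_error)
```

The same loop generalises to sweeping the initial P, R, or a reset value for P: with lots of recorded reaches, averaging the error over all of them gives a much more stable choice than tuning on one trial.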
