Particle filter probability density not as expected

Playing around a bit with particle filters, I am wondering why the probability density does not look the way I expect:

I am trying to implement a very simple model where $x_k = x_{k-1} + \text{noise}$ and the measurement is $z = x_k + \text{noise}$, with the measured value always toggling between 0 and 1.

My expectations:

  • mean = 0.5 (works as expected)
  • a probability density function with (normally distributed) peaks at 0 and 1 and roughly zero everywhere else (does not work at all)

The resulting probability density is just a normal distribution around 0.5: [plot: estimated density with a single peak near 0.5]

So is that distribution actually correct, or is there a bug in my code?
What do I need to change in the code to get the desired bimodal distribution?

#!/usr/bin/python3

import math
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

xMin = -1.15
xMax =  2.15
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], frameon=True, xlim=( xMin, xMax ), ylim=( -0.1, 0.5 ) )
color = 'k'
ims = []

stdModel = 0.05
stdMeasure = 0.15

# Number of particles
N = 1000
x_Particles = np.random.uniform( xMin, xMax, size=N )
x_weightsLn = np.ones(N) * math.log(1/N)

for i in range( 100 ):
    measure = i%2 # toggle between 0 and 1

    # predict:
    # Stationary model: x_k = x_k-1 + noise
    x_Particles[:] += np.random.randn(N) * stdModel

    ### calculate and display the probability density at this point
    ### (approximated as the particle weight mass falling into each of 200 equal-width bins)
    x_ParticlesSortIndices = np.argsort( x_Particles )
    x_ParticlesSort = x_Particles[x_ParticlesSortIndices]
    x_weightsSort = np.exp( x_weightsLn[x_ParticlesSortIndices] )
    x_weightsSortCumSum = np.cumsum( x_weightsSort )
    samplePos = np.linspace( xMin, xMax, 201 )
    sampleValIndices = np.minimum( np.searchsorted( x_ParticlesSort, samplePos ), N-1 )
    sampleVal = x_weightsSortCumSum[sampleValIndices]
    sampleVal = sampleVal[1:] - sampleVal[:-1]
    samplePos = samplePos[1:]
    sampleVal /= sum( sampleVal )
    thisplot = ax.plot(
        samplePos,sampleVal,'-'+color+'',
        x_Particles,np.random.uniform( -0.09, -0.01, size=N),'k.',
        [measure], 0, 'bx'
    )
    ims.append( thisplot )
    ###

    # measure:
    # direct measurement: z = x_k + noise
    z_Particles = x_Particles + np.random.randn(N) * stdMeasure
    # Normal Gaussian (note: stdMeasure is used directly as the variance term in the exponent):
    #x_weights *= (1/math.sqrt(2*math.pi*stdMeasure)) * np.exp( -(measure-z_Particles)**2/(2*stdMeasure) )
    # Logarithmic version, ignoring the prefactor as normalisation gets rid of it anyway
    x_weightsLn += -(measure-z_Particles)**2/(2*stdMeasure)
    x_weightsLn -= np.log(sum(np.exp(x_weightsLn))) # normalize

    # resample when the effective sample size N_eff = 1/sum(w_i**2) drops below N/2:
    doResample = (1. / np.sum(np.exp(2*x_weightsLn))) < N/2
    if doResample:
        # stratified resampling: one uniform draw from each of N equal-width strata
        positions = (np.random.random(N) + np.arange(N)) / N
        indexes = np.zeros(N, 'i')
        cumulative_sum = np.cumsum(np.exp(x_weightsLn))
        i, j = 0, 0
        while i < N:
            if positions[i] < cumulative_sum[j]:
                indexes[i] = j
                i += 1
            else:
                j += 1
        x_Particles[:] = x_Particles[indexes]
        x_weightsLn.fill(math.log(1.0 / N))
        # toggle the plot colour so resampling steps are visible in the animation
        color = 'r' if 'k' == color else 'k'

im_ani = animation.ArtistAnimation(fig, ims, interval=50, blit=True )
plt.show()

Your expectation is wrong. Just work out (by hand) what happens to particles at 0.0, 0.5 and 1.0 after two iterations, assuming they do not move in between; a sketch of that calculation follows.
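To make this concrete, here is a minimal sketch of that hand calculation (reusing the question's stdMeasure = 0.15 and its convention of putting stdMeasure directly into the exponent, and ignoring the measurement noise on z_Particles):

import numpy as np

stdMeasure = 0.15
particles = np.array([0.0, 0.5, 1.0])

# Unnormalised weight after seeing z = 0 and then z = 1 with no motion in
# between, using the question's likelihood exp(-(z - x)**2 / (2*stdMeasure)):
w = np.exp(-(0 - particles)**2 / (2*stdMeasure)) \
  * np.exp(-(1 - particles)**2 / (2*stdMeasure))
print(w / w.sum())  # roughly [0.14, 0.73, 0.14]

The particle at 0.5 ends up dominating: it is moderately wrong for both measurements, whereas the particles at 0 and 1 are each very wrong for one of the two, and a Gaussian likelihood penalises one large error far more than two small ones. Hence the single peak at 0.5.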

To get the effect you want, try a measurement function along these lines:

x_weightsLn += -np.minimum( (0-z_Particles)**2, (1-z_Particles)**2 )/(2*stdMeasure)

Over time this will increase the weight of particles that are close to either 0 or 1. If the particles are badly distributed to begin with, however, you may end up with only one peak, or with two peaks of noticeably different sizes.
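For reference, a minimal sketch of how this slots into the measurement step of the question's loop (np.minimum replaces the built-in min, which cannot compare two numpy arrays elementwise; everything else is unchanged):

    # measure:
    # bimodal measurement model: score each particle against whichever of the
    # two expected values (0 or 1) it happens to be closer to
    z_Particles = x_Particles + np.random.randn(N) * stdMeasure
    x_weightsLn += -np.minimum( (0-z_Particles)**2, (1-z_Particles)**2 )/(2*stdMeasure)
    x_weightsLn -= np.log(sum(np.exp(x_weightsLn))) # normalize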
