
Particle filter probability density not as expected

Playing around a bit with particle filters, I am wondering why the probability density does not look like what I would expect:

I tried to implement a very simple model where $x_k = x_{k-1} + \text{noise}$ and the measurement is $z = x_k + \text{noise}$, with the measurement constantly toggling between 0 and 1.

My expectations:

  • mean = 0.5 --- works as expected
  • a probability density function with (normally distributed) peaks at 0 and 1 that is roughly zero everywhere else --- does not work at all

The resulting probability density is just a normal distribution around 0.5: [plot of the resulting density]

So is that distribution actually correct, or is there a bug in my code?
What do I need to change in the code to get the desired bimodal distribution?

#!/usr/bin/python3

import math
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

xMin = -1.15
xMax =  2.15
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], frameon=True, xlim=( xMin, xMax ), ylim=( -0.1, 0.5 ) )
color = 'k'
ims = []

stdModel = 0.05
stdMeasure = 0.15

# Number of particles
N = 1000
x_Particles = np.random.uniform( xMin, xMax, size=N )
x_weightsLn = np.ones(N) * math.log(1/N)

for i in range( 100 ):
    measure = i%2 # toggle between 0 and 1

    # predict:
    # Stationary model: x_k = x_k-1 + noise
    x_Particles[:] += np.random.randn(N) * stdModel

    ### calculate and display probability density at this point
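    # Estimate the density by differencing the cumulative particle weight
    # between evenly spaced sample positions (a crude weighted histogram).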
    x_ParticlesSortIndices = np.argsort( x_Particles )
    x_ParticlesSort = x_Particles[x_ParticlesSortIndices]
    x_weightsSort = np.exp( x_weightsLn[x_ParticlesSortIndices] )
    x_weightsSortCumSum = np.cumsum( x_weightsSort )
    samplePos = np.linspace( xMin, xMax, 201 )
    sampleValIndices = np.minimum( np.searchsorted( x_ParticlesSort, samplePos ), N-1 )
    sampleVal = x_weightsSortCumSum[sampleValIndices]
    sampleVal = sampleVal[1:] - sampleVal[:-1]
    samplePos = samplePos[1:]
    sampleVal /= sum( sampleVal )
    thisplot = ax.plot(
        samplePos, sampleVal, '-'+color,
        x_Particles, np.random.uniform( -0.09, -0.01, size=N ), 'k.',
        [measure], 0, 'bx'
    )
    ims.append( thisplot )
    ###

    # measure:
    # direct measurement: z = x + noise
    z_Particles = x_Particles + np.random.randn(N) * stdMeasure
    # Normal Gauss (note the variance stdMeasure**2 in the denominator, not stdMeasure):
    #x_weights *= (1/math.sqrt(2*math.pi*stdMeasure**2)) * np.exp( -(measure-z_Particles)**2/(2*stdMeasure**2) )
    # Logarithmic version, ignoring the prefactor as normalisation will get rid of it anyway
    x_weightsLn += -(measure-z_Particles)**2/(2*stdMeasure**2)
    x_weightsLn -= np.log(np.sum(np.exp(x_weightsLn))) # normalize

    # resample:
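    # Resample when the effective sample size N_eff = 1 / sum(w_i**2) drops below N/2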
    doResample = (1. / np.sum(np.exp(2*x_weightsLn))) < N/2
    if doResample:
        # stratified resampling
        positions = (np.random.random(N) + np.arange(N)) / N
        indexes = np.zeros(N, 'i')
        cumulative_sum = np.cumsum(np.exp(x_weightsLn))
        i, j = 0, 0
        while i < N:
            if positions[i] < cumulative_sum[j]:
                indexes[i] = j
                i += 1
            else:
                j += 1
        x_Particles[:] = x_Particles[indexes]
        x_weightsLn.fill(math.log(1.0 / N))
        # toggle the plot colour so resampling steps are visible in the animation
        color = 'r' if color == 'k' else 'k'

im_ani = animation.ArtistAnimation(fig, ims, interval=50, blit=True )
plt.show()

Your expectation is wrong. Just work out (by hand) what happens to the weights of particles at 0.0, 0.5 and 1.0 after two iterations (assuming they do not move in between).
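To make that hand calculation concrete, here is a minimal sketch (assuming stationary particles and the stdMeasure = 0.15 from the question) that accumulates the log-weights of three test particles over the two alternating measurements z = 0 and z = 1, using the same quadratic measurement term as the code above:

import numpy as np

stdMeasure = 0.15
x = np.array([0.0, 0.5, 1.0])  # stationary test particles
logw = np.zeros(3)

# two filter iterations with the toggling measurement, no process noise
for z in (0, 1):
    logw += -(z - x) ** 2 / (2 * stdMeasure ** 2)

for xi, wi in zip(x, logw):
    print(f"x = {xi}: accumulated log-weight = {wi:.2f}")

# x = 0.0 and x = 1.0 each pay the full (z - x)**2 = 1 penalty once,
# while x = 0.5 pays only 0.25 twice, so the middle particle ends up
# with the LARGEST weight -- which is exactly why the density peaks at 0.5.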

To get the desired effect, try a measurement function like this instead:

x_weightsLn += -np.minimum((0-z_Particles)**2, (1-z_Particles)**2)/(2*stdMeasure**2)

Over time this increases the weight of particles that are close to 0 or close to 1. However, if the particles do not happen to be well distributed initially, you may end up with only a single peak, or with two peaks of noticeably different sizes.
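Repeating the same two-iteration sketch with this measurement function shows why it favours the endpoints (again assuming stationary particles and stdMeasure = 0.15):

import numpy as np

stdMeasure = 0.15
x = np.array([0.0, 0.5, 1.0])  # stationary test particles
logw = np.zeros(3)

for _ in range(2):  # two filter iterations
    # penalty is the distance to the NEAREST of the two expected values 0 and 1
    logw += -np.minimum((0 - x) ** 2, (1 - x) ** 2) / (2 * stdMeasure ** 2)

for xi, wi in zip(x, logw):
    print(f"x = {xi}: accumulated log-weight = {wi:.2f}")

# x = 0.0 and x = 1.0 keep log-weight 0 (full weight), while x = 0.5
# decays by 0.25/(2*stdMeasure**2) per step -- both endpoints win.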
