
How can I make a discrete state Markov model with pymc?

I am trying to figure out how to properly make a discrete state Markov chain model with pymc.

As an example (view in nbviewer), let's make a chain of length T=10 where the Markov state is binary, the initial state distribution is [0.2, 0.8], and the probability of switching states is 0.01 in state 1 and 0.5 in state 2:

import numpy as np
import pymc as pm
T = 10
prior0 = [0.2, 0.8]
transMat = [[0.99, 0.01], [0.5, 0.5]]

To build the model, I make an array of state variables and an array of transition probabilities that depend on the state variables (using the pymc.Index function):

states = np.empty(T, dtype=object)
states[0] = pm.Categorical('state_0', prior0)
transPs = np.empty(T, dtype=object)
transPs[0] = pm.Index('trans_0', transMat, states[0])

for i in range(1, T):
    states[i] = pm.Categorical('state_%i' % i, transPs[i-1])
    transPs[i] = pm.Index('trans_%i' % i, transMat, states[i])

Sampling the model shows that the state marginals are what they should be (compared with a model built with Kevin Murphy's BNT package in Matlab):

model = pm.MCMC([states, transPs])
model.sample(10000, 5000)
[np.mean(model.trace('state_%i' % i)[:]) for i in range(T)]

prints out:

[-----------------100%-----------------] 10000 of 10000 complete in 7.5 sec

[0.80020000000000002,
 0.39839999999999998,
 0.20319999999999999,
 0.1118,
 0.064199999999999993,
 0.044600000000000001,
 0.033000000000000002,
 0.026200000000000001,
 0.024199999999999999,
 0.023800000000000002]
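As a sanity check (not part of the original post), these marginals can also be computed exactly with a forward recursion over the transition matrix, which the MCMC estimates above should approximate:

```python
import numpy as np

# Exact marginals P(state_t = 2) via the forward recursion p_{t+1} = p_t @ P.
# Variable names mirror the question's setup; this is an illustrative check.
prior0 = np.array([0.2, 0.8])
transMat = np.array([[0.99, 0.01], [0.5, 0.5]])
T = 10

marginals = np.empty(T)
p = prior0
for t in range(T):
    marginals[t] = p[1]   # probability of being in state 2 (index 1)
    p = p @ transMat      # propagate the distribution one step

print(np.round(marginals, 4))
# → [0.8    0.402  0.207  0.1114 0.0646 0.0417 0.0305 0.025  0.0223 0.0209]
```

The first few values (0.8, 0.402, 0.207, ...) agree with the sampled means above to within Monte Carlo error.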

My question is: this does not seem like the most elegant way to build a Markov chain with pymc. Is there a cleaner way that does not require the array of deterministic functions?

My goal is to write a pymc-based package for more general dynamic Bayesian networks.

As far as I know, you have to encode the distribution of each time step as a deterministic function of the previous time step, because that's what it is: there's no randomness involved in the parameters, since you defined them in the problem set-up. However, I think your question may have been more about finding a more intuitive way to represent the model. One alternative is to directly encode the time step transitions as a function of the previous time step:

import numpy as np
from pymc import Bernoulli, MCMC

def generate_timesteps(N, p_init, p_trans):
    timesteps = np.empty(N, dtype=object)
    # A success denotes being in state 2, a failure being in state 1
    timesteps[0] = Bernoulli('T0', p_init)
    for i in range(1, N):
        # probability of being in state 2 at time step `i` given time step `i-1`
        p_i = p_trans[1]*timesteps[i-1] + p_trans[0]*(1-timesteps[i-1])
        timesteps[i] = Bernoulli('T%d' % i, p_i)
    return timesteps

timesteps = generate_timesteps(10, 0.8, [0.01, 0.5])
model = MCMC(timesteps)
model.sample(10000) # no burn-in necessary since we're sampling directly from the distribution
[np.mean(model.trace(t).gettrace()) for t in timesteps]
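The same chain can be cross-checked without pymc by vectorised forward sampling in numpy (an illustrative sketch using the question's parameters; state 1 is coded as 0 and state 2 as 1, and `p_trans[j]` is the probability of landing in state 2 given previous state `j`, i.e. the second column of the question's transition matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p_init, p_trans = 10, 0.8, [0.01, 0.5]  # question's parameters
n_samples = 100_000

# states[k, i] = 1 if sample path k is in state 2 at time step i
states = np.empty((n_samples, N))
states[:, 0] = rng.random(n_samples) < p_init
for i in range(1, N):
    prev = states[:, i - 1]
    # probability of state 2 at step i given step i-1 (same formula as above)
    p_i = p_trans[1] * prev + p_trans[0] * (1 - prev)
    states[:, i] = rng.random(n_samples) < p_i

print(states.mean(axis=0))  # empirical P(state 2) at each time step
```

With 100,000 paths the empirical means should match the exact marginals (0.8, 0.402, 0.207, ...) to two or three decimal places.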

In case you want to look at the long-run behaviour of your Markov chain, the discreteMarkovChain package may be useful. The examples show some ideas for building up a discrete state Markov chain by defining a transition function that tells you, for each state, the reachable states in the next step and their probabilities. You could use that same function to simulate the process.
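For a chain as small as this one, the long-run (stationary) distribution can also be sketched directly in numpy with power iteration, without any extra package (illustrative; for large state spaces a dedicated solver such as those in discreteMarkovChain is more appropriate):

```python
import numpy as np

P = np.array([[0.99, 0.01], [0.5, 0.5]])  # transition matrix from the question

# Power iteration: repeatedly propagate a distribution until pi = pi @ P.
pi = np.array([0.5, 0.5])
for _ in range(1000):
    pi = pi @ P

print(pi)  # converges to [50/51, 1/51] ≈ [0.9804, 0.0196]
```

This matches the tail of the marginals above, which decay toward roughly 0.02 as the chain forgets its initial distribution.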
