Calculate maximum likelihood using PyMC3
There are cases when I'm not actually interested in the full posterior of a Bayesian inference, but simply the maximum likelihood (or maximum a posteriori for suitably chosen priors), and possibly its Hessian. PyMC3 has functions to do that, but find_MAP seems to return the model parameters in transformed form, depending on the prior distribution placed on them. Is there an easy way to get the untransformed values from these? The output of find_hessian is even less clear to me, but it is most likely in the transformed space too.
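For context, PyMC3 stores a bounded variable on an unconstrained scale: a Uniform(0, 1) variable is kept under an "interval" transform, which for the (0, 1) interval is the logit, and find_MAP then reports the value under a transformed key (typically something like 'p_interval__', though the exact name is version-dependent and assumed here). A minimal NumPy sketch of inverting that transform by hand:

```python
import numpy as np

def interval_to_unit(x):
    """Invert the (0, 1) interval transform: logit scale -> probability."""
    return 1.0 / (1.0 + np.exp(-x))

# round-trip check: applying the forward (logit) transform to 0.25
# and then inverting it recovers the original value
p = 0.25
x = np.log(p / (1 - p))     # forward interval (logit) transform
p_back = interval_to_unit(x)
print(p_back)               # -> 0.25
```

This only covers the (0, 1) interval case; other priors use other transforms (e.g. log for positive-only variables), so inverting by hand does not generalize well, which is what the answers below address.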
Maybe the simpler solution is to pass the argument transform=None, to avoid PyMC3 doing the transformation in the first place, and then use find_MAP. I leave you an example for a simple model.
import numpy as np
import pymc3 as pm

data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
    p = pm.Uniform('p', 0, 1, transform=None)
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    mean_q = pm.find_MAP()
    std_q = ((1 / pm.find_hessian(mean_q)) ** 0.5)[0]
print(mean_q['p'], std_q)
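For this particular model the MAP and the quadratic-approximation standard deviation can also be worked out analytically, which makes a handy sanity check on the PyMC3 output (a sketch using only NumPy; the closed forms are the standard results for a binomial likelihood under a flat prior):

```python
import numpy as np

# same data as above: 3 zeros and 6 ones
data = np.repeat((0, 1), (3, 6))
n, k = len(data), data.sum()

# with a flat Uniform(0, 1) prior, the MAP coincides with the MLE k/n
p_map = k / n

# curvature of the negative log-posterior at the MAP gives the
# quadratic (Laplace) approximation: std = 1 / sqrt(observed information)
info = k / p_map**2 + (n - k) / (1 - p_map)**2
std = 1 / np.sqrt(info)

print(p_map, std)  # roughly 0.667 and 0.157
```

The values printed by the PyMC3 example above should agree with these to within optimizer tolerance.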
I came across this once more and found a way to get the untransformed values from the transformed ones, just in case somebody else needs this as well. The gist of it is that the untransformed values are essentially Theano expressions that can be evaluated given the transformed values. PyMC3 helps here a little by providing the Model.fn() function, which creates such an evaluation function accepting values by name. Now you only need to supply the untransformed variables of interest to the outs argument. A complete example:
import numpy as np
import pymc3 as pm

data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
    p = pm.Uniform('p', 0, 1)
    w = pm.Binomial('w', n=len(data), p=p, observed=data.sum())
    map_estimate = pm.find_MAP()

# create a function that evaluates p, given the transformed values
evalfun = normal_approximation.fn(outs=p)
# create name:value mappings for the free variables (i.e. the transformed values)
inp = {v.name: map_estimate[v.name] for v in normal_approximation.free_RVs}
# now use that input mapping to evaluate p
p_estimate = evalfun(inp)
outs can also receive a list of variables; evalfun will then output the values of the corresponding variables in the same order.