
Python using Kalman Filter to improve simulation but getting worse results

I have a question about the behavior I am seeing when applying a Kalman filter (KF) to the following forecast problem. I have included a simple code sample.

Goal: I would like to know whether a KF is suitable for improving a day-ahead forecast/simulation result (at t+24 hours), using the measurement obtained now (at t). The goal is to get the forecast as close to the measurement as possible.

Assumption: We assume the measurement is perfect (i.e. if we can get the forecast to match the measurement exactly, we are happy).

We have a single measurement variable (z, real wind speed) and a single simulated variable (x, predicted wind speed).

The simulated wind speed x is produced by an NWP (numerical weather prediction) package using a variety of meteorological data (a black box to me). A simulation file is produced daily, containing data at half-hour intervals.

I attempt to correct the t+24 hr forecast using the measurement obtained now and the forecast data valid now (generated 24 hr ago), with a scalar Kalman filter. For reference, I used: http://www.swarthmore.edu/NatSci/echeeve1/Ref/Kalman/ScalarKalman.html

Code:

#! /usr/bin/python

import numpy as np
import pylab

import os


def main():

    # x = 336 data points of simulated wind speed for 7 days * 24 hours * 2 (every half an hour)
    # Imagine that at time t we receive the forecast value x for t+48 steps, i.e. 24 hours later.
    x = load_x()

    # this is a list that will contain 336 data points of our corrected data
    x_sample_predict_list = []

    # z = 336 data points for 7 days * 24 hour * 2 of actual measured wind speed (every half an hour)
    z = load_z()

    # Here is the setup of the scalar kalman filter
    # reference: http://www.swarthmore.edu/NatSci/echeeve1/Ref/Kalman/ScalarKalman.html
    # state transition matrix (we simply have a scalar)
    # what you need to multiply the last time's state to get the newest state
    # we get the x_t+1 = A * x_t, since we get the x_t+1 directly for simulation
    # we will have a = 1
    a = 1.0

    # observation matrix
    # what you need to multiply to the state, convert it to the same form as incoming measurement 
    # both state and measurements are wind speed, so set h = 1
    h = 1.0

    Q = 16.0    # expected process variance of predicted Wind Speed
    R = 9.0 # expected measurement variance of Wind Speed

    p_j = Q # process covariance is equal to the initial process covariance estimate

    # Kalman gain: k = h * p-_j / (h^2 * p-_j + R).  With a perfect
    # measurement (R = 0), k reduces to k = 1/h, which is 1 here
    k = 1.0

    # one week data
    # original R2 = 0.183
    # with delay = 6, R2 = 0.295
    # with delay = 12, R2 = 0.147   
    # with delay = 48, R2 = 0.075
    delay = 6 

    # Kalman loop
    for t, x_sample in enumerate(x):

        if t <= delay:
            # for the first `delay` steps we don't yet have a
            # corrected forecast and a matching measurement
            # to form a residual, so pass the forecast through
            x_sample_predict = x_sample
        else:  # t > delay
            # for the a priori estimate we take x_sample as is:
            # x_sample = x^-_j = a * x^-_j-1 + b * u_j
            # (inside the NWP (numerical weather prediction) model,
            # x_sample is assumed to be built on x_sample_j-1)

            x_sample_predict_prior = a * x_sample

            # we use the measurement from t-delay (ie. could be a day ago)
            # and forecast data from t-delay, to produce a leading residual that can be used to
            # correct the forecast.
            residual = z[t-delay] - h * x_sample_predict_list[t-delay]


            p_j_prior = a**2 * p_j + Q

            k = h * p_j_prior / (h**2 * p_j_prior + R)

            # we update our prediction based on the residual
            x_sample_predict = x_sample_predict_prior + k * residual

            p_j = p_j_prior * (1 - h * k)

            #print k
            #print p_j_prior
            #print p_j
            #raw_input()

        x_sample_predict_list.append(x_sample_predict)

    # initial goodness of fit
    R2_val_initial = calculate_regression(x, z)
    R2_string_initial = "R2 initial: {0:10.3f}, ".format(R2_val_initial)
    print(R2_string_initial)    # R2_val_initial = 0.183

    # final goodness of fit
    R2_val_final = calculate_regression(x_sample_predict_list, z)
    R2_string_final = "R2 final: {0:10.3f}, ".format(R2_val_final)
    print(R2_string_final)      # R2_val_final = 0.117, which is worse


    timesteps = range(len(x))
    pylab.plot(timesteps,x,'r-', timesteps,z,'b:', timesteps,x_sample_predict_list,'g--')
    pylab.xlabel('Time')
    pylab.ylabel('Wind Speed')
    pylab.title('Simulated Wind Speed vs Actual Wind Speed')
    pylab.legend(('predicted','measured','kalman'))
    pylab.show()


def calculate_regression(x, y):
    A = np.array([x, np.ones(len(x))])
    model, resid = np.linalg.lstsq(A.T, y, rcond=None)[:2]
    R2_val = 1 - resid[0] / (y.size * y.var())
    return R2_val

def load_x():
    return np.array([2, 3, 3, 5, 4, 4, 4, 5, 5, 6, 5, 7, 7, 7, 8, 8, 8, 9, 9, 10, 10, 10, 11, 11,
     11, 10, 8, 8, 8, 8, 6, 3, 4, 5, 5, 5, 6, 5, 5, 5, 6, 5, 5, 6, 6, 7, 6, 8, 9, 10,
     12, 11, 10, 10, 10, 11, 11, 10, 8, 8, 9, 8, 9, 9, 9, 9, 8, 9, 8, 11, 11, 11, 12,
     12, 13, 13, 13, 13, 13, 13, 13, 14, 13, 13, 12, 13, 13, 12, 12, 13, 13, 12, 12, 
     11, 12, 12, 19, 18, 17, 15, 13, 14, 14, 14, 13, 12, 12, 12, 12, 11, 10, 10, 10, 
     10, 9, 9, 8, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 6, 6, 6, 7, 7, 8, 8, 8, 6, 5, 5, 
     5, 5, 5, 5, 6, 4, 4, 4, 6, 7, 8, 7, 7, 9, 10, 10, 9, 9, 8, 7, 5, 5, 5, 5, 5, 5, 
     5, 5, 6, 5, 5, 5, 4, 4, 6, 6, 7, 7, 7, 7, 6, 6, 5, 5, 4, 2, 2, 2, 1, 1, 1, 2, 3,
     13, 13, 12, 11, 10, 9, 10, 10, 8, 9, 8, 7, 5, 3, 2, 2, 2, 3, 3, 4, 4, 5, 6, 6,
     7, 7, 7, 6, 6, 6, 7, 6, 6, 5, 4, 4, 3, 3, 3, 2, 2, 1, 5, 5, 3, 2, 1, 2, 6, 7, 
     7, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 9, 9, 9, 9, 9, 8, 8, 8, 8, 7, 7, 
     7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7, 11, 11, 11, 11, 10, 10, 9, 10, 10, 10, 2, 2,
     2, 3, 1, 1, 3, 4, 5, 8, 9, 9, 9, 9, 8, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7,
     7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 7, 5, 5, 5, 5, 5, 6, 5])

def load_z():
    return np.array([3, 2, 1, 1, 1, 1, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 2, 1, 1, 2, 2, 2,
     2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 4, 4, 4, 5, 4, 4, 5, 5, 5, 6, 6,
     6, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7, 8, 8, 8, 8, 8, 8, 9, 10, 9, 9, 10, 10, 9,
     9, 10, 9, 9, 10, 9, 8, 9, 9, 7, 7, 6, 7, 6, 6, 7, 7, 8, 8, 8, 8, 8, 8, 7, 6, 7,
     8, 8, 7, 8, 9, 9, 9, 9, 10, 9, 9, 9, 8, 8, 10, 9, 10, 10, 9, 9, 9, 10, 9, 8, 7, 
     7, 7, 7, 8, 7, 6, 5, 4, 3, 5, 3, 5, 4, 4, 4, 2, 4, 3, 2, 1, 1, 2, 1, 2, 1, 4, 4,
     4, 4, 4, 3, 3, 3, 1, 1, 1, 1, 2, 3, 3, 2, 3, 3, 3, 2, 2, 5, 4, 2, 5, 4, 1, 1, 1, 
     1, 1, 1, 1, 2, 2, 1, 1, 3, 3, 3, 3, 3, 4, 3, 4, 3, 4, 4, 4, 4, 3, 3, 4, 4, 4, 4,
     4, 4, 5, 5, 5, 4, 3, 3, 3, 3, 3, 3, 3, 3, 1, 2, 2, 3, 3, 1, 2, 1, 1, 2, 4, 3, 1,
     1, 2, 0, 0, 0, 2, 1, 0, 0, 2, 3, 2, 4, 4, 3, 3, 4, 5, 5, 5, 4, 5, 4, 4, 4, 5, 5, 
     4, 3, 3, 4, 4, 4, 3, 3, 3, 4, 4, 4, 5, 5, 5, 4, 5, 5, 5, 5, 6, 5, 5, 8, 9, 8, 9,
     9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 9, 10, 9, 8, 8, 9, 8, 9, 9, 10, 9, 9, 9,
     7, 7, 9, 8, 7, 6, 6, 5, 5, 5, 5, 3, 3, 3, 4, 6, 5, 5, 6, 5])

if __name__ == '__main__': main()  # avoids executing main when the module is imported


Observations:

1) If yesterday's forecast was over-predicting (positive bias), then today I make a correction by subtracting that bias. In practice, if today I happen to be under-predicting, subtracting the positive bias leads to an even worse prediction. And indeed I observe wider swings in the data with a poorer overall fit. What is wrong with my example?

2) Most Kalman filter resources indicate that the Kalman filter minimizes the a posteriori covariance p_j = E{(x_j - x^_j)^2}, and prove that choosing K this way minimizes p_j. But can someone explain how minimizing the a posteriori covariance actually minimizes the effect of the process white noise w? In a real-time case, let's say the actual wind speed and the measured wind speed are both 5 m/s, and the predicted wind speed is 6 m/s, i.e. there was noise of w = 1 m/s. The residual is 5 - 6 = -1 m/s. You correct by taking the 1 m/s off your prediction to get back to 5 m/s. Is that how the effect of the process noise is minimized?
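One way to see the effect is to look at the steady-state gain of the same scalar recursion used in the code above: the gain k settles at a value that weighs the residual by how trustworthy the measurement is relative to the process noise. A minimal sketch (the Q/R values here are illustrative assumptions, not taken from the question):

```python
def scalar_kf_gain(Q, R, a=1.0, h=1.0, n_steps=100):
    """Iterate the scalar Riccati recursion; return the (near) steady-state
    gain k and posterior variance p."""
    p = Q  # initial error-variance estimate
    k = 0.0
    for _ in range(n_steps):
        p_prior = a**2 * p + Q                   # predict: process noise inflates variance
        k = h * p_prior / (h**2 * p_prior + R)   # gain trades prior against measurement
        p = p_prior * (1 - h * k)                # update: posterior variance shrinks
    return k, p

# Noisy measurements (large R): small gain, residual corrections are damped.
k_noisy, _ = scalar_kf_gain(Q=1.0, R=100.0)
# Accurate measurements (small R): gain near 1, residual applied almost fully.
k_clean, _ = scalar_kf_gain(Q=1.0, R=0.01)
print(k_noisy, k_clean)
```

So minimizing p_j does not remove the process noise w; it only sets k so that, on average, the correction k * residual cancels as much of the combined (process plus measurement) error as the noise statistics allow.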

3) Here is a paper that mentions applying a KF to smooth weather forecasts: http://hal.archives-ouvertes.fr/docs/00/50/59/93/PDF/Louka_etal_jweia2008.pdf . The interesting point is eq. (7) on p. 9: "as soon as the new observation value y_t is known, the estimate of x at time t becomes x_t = x_t/t-1 + K(y_t - H_t * x_t/t-1)". If I paraphrase it in terms of actual time: "as soon as the new observation value is known now, the estimate now becomes x_t ...". I get how a KF can bring your data close to the measurement in real time. But if you are correcting the forecast data at t = now with measurement data from t = now, how is that a forecast anymore?

Thanks!

UPDATE 1:

4) I added a delay to the code to investigate how far ahead of the current measurement the corrected forecast can be, if we want the R2 between the Kalman-processed series and the measured series to improve over the R2 between the unprocessed series and the measured series. In this example, if the measurement is used to improve the forecast 6 time steps ahead (3 hours from now), it is still useful (R2 goes from 0.183 to 0.295). But if the measurement is used to improve the forecast 1 day from now, it destroys the correlation (R2 goes down to 0.075).
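The delay effect can be reproduced with synthetic data: when the forecast bias drifts slowly, a residual observed a few steps ago still cancels most of today's error, but a day-old residual does not. A sketch with a made-up random-walk bias (the series and noise levels are illustrative assumptions, not the wind data above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 336
truth = 5 + 3 * np.sin(np.arange(n) * 2 * np.pi / 48)   # stand-in "measured" series
bias = np.cumsum(rng.normal(0.0, 0.2, n))               # slowly drifting forecast bias
forecast = truth + bias + rng.normal(0.0, 0.5, n)       # stand-in "simulated" series

def rmse_for_delay(delay):
    """RMSE after subtracting the residual observed `delay` steps ago
    (a unit-gain correction, as in the perfect-measurement case)."""
    corrected = forecast.copy()
    corrected[delay:] = forecast[delay:] - (forecast[:-delay] - truth[:-delay])
    return float(np.sqrt(np.mean((corrected - truth) ** 2)))

for delay in (6, 12, 48):
    print(delay, rmse_for_delay(delay))
```

As the delay grows, the bias decorrelates from its lagged value and the correction injects roughly as much error as it removes, which matches the R2 degradation observed above.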

I updated my test scalar implementation without the perfect-measurement assumption (R = 0), which was what reduced the Kalman gain to a constant value of 1. Now I see an improvement in the time series, with a reduced RMSE error.

#! /usr/bin/python

import numpy as np
import pylab

import os

# RMSE improved
def main():

    # x = 336 data points of simulated wind speed for 7 days * 24 hours * 2 (every half an hour)
    # Imagine that at time t we receive the forecast value x for t+48 steps, i.e. 24 hours later.
    x = load_x()

    # this is a list that will contain 336 data points of our corrected data
    x_sample_predict_list = []

    # z = 336 data points for 7 days * 24 hour * 2 of actual measured wind speed (every half an hour)
    z = load_z()

    # Here is the setup of the scalar kalman filter
    # reference: http://www.swarthmore.edu/NatSci/echeeve1/Ref/Kalman/ScalarKalman.html
    # state transition matrix (we simply have a scalar)
    # what you need to multiply the last time's state to get the newest state
    # we get the x_t+1 = A * x_t, since we get the x_t+1 directly for simulation
    # we will have a = 1
    a = 1.0

    # observation matrix
    # what you need to multiply to the state, convert it to the same form as incoming measurement 
    # both state and measurements are wind speed, so set h = 1
    h = 1.0

    Q = 1.0     # expected process noise of predicted Wind Speed    
    R = 1.0     # expected measurement noise of Wind Speed

    p_j = Q # process covariance is equal to the initial process covariance estimate

    # Kalman gain: k = h * p-_j / (h^2 * p-_j + R).  With a perfect
    # measurement (R = 0), k reduces to k = 1/h, which is 1 here
    k = 1.0

    # one week data
    # original R2 = 0.183
    # with delay = 6, R2 = 0.295
    # with delay = 12, R2 = 0.147   
    # with delay = 48, R2 = 0.075
    delay = 6 

    # Kalman loop
    for t, x_sample in enumerate(x):

        if t <= delay:
            # for the first `delay` steps we don't yet have a
            # corrected forecast and a matching measurement
            # to form a residual, so pass the forecast through
            x_sample_predict = x_sample
        else:  # t > delay
            # for the a priori estimate we take x_sample as is:
            # x_sample = x^-_j = a * x^-_j-1 + b * u_j
            # (inside the NWP (numerical weather prediction) model,
            # x_sample is assumed to be built on x_sample_j-1)

            x_sample_predict_prior = a * x_sample

            # we use the measurement from t-delay (ie. could be a day ago)
            # and forecast data from t-delay, to produce a leading residual that can be used to
            # correct the forecast.
            residual = z[t-delay] - h * x_sample_predict_list[t-delay]

            p_j_prior = a**2 * p_j + Q

            k = h * p_j_prior / (h**2 * p_j_prior + R)

            # we update our prediction based on the residual
            x_sample_predict = x_sample_predict_prior + k * residual

            p_j = p_j_prior * (1 - h * k)

            #print k
            #print p_j_prior
            #print p_j
            #raw_input()

        x_sample_predict_list.append(x_sample_predict)

    # initial goodness of fit
    R2_val_initial = calculate_regression(x, z)
    R2_string_initial = "R2 original: {0:10.3f}, ".format(R2_val_initial)
    print(R2_string_initial)    # R2_val_original = 0.183

    original_RMSE = (((x - z)**2).mean())**0.5
    print("original_RMSE")
    print(original_RMSE)
    print("\n")

    # final goodness of fit
    R2_val_final = calculate_regression(x_sample_predict_list, z)
    R2_string_final = "R2 final: {0:10.3f}, ".format(R2_val_final)
    print(R2_string_final)      # R2_val_final = 0.267, which is better

    # x_sample_predict_list is a plain Python list; convert before the arithmetic
    final_RMSE = (((np.array(x_sample_predict_list) - z)**2).mean())**0.5
    print("final_RMSE")
    print(final_RMSE)
    print("\n")


    timesteps = range(len(x))
    pylab.plot(timesteps,x,'r-', timesteps,z,'b:', timesteps,x_sample_predict_list,'g--')
    pylab.xlabel('Time')
    pylab.ylabel('Wind Speed')
    pylab.title('Simulated Wind Speed vs Actual Wind Speed')
    pylab.legend(('predicted','measured','kalman'))
    pylab.show()


def calculate_regression(x, y):
    A = np.array([x, np.ones(len(x))])
    model, resid = np.linalg.lstsq(A.T, y, rcond=None)[:2]
    R2_val = 1 - resid[0] / (y.size * y.var())
    return R2_val

def load_x():
    return np.array([2, 3, 3, 5, 4, 4, 4, 5, 5, 6, 5, 7, 7, 7, 8, 8, 8, 9, 9, 10, 10, 10, 11, 11,
     11, 10, 8, 8, 8, 8, 6, 3, 4, 5, 5, 5, 6, 5, 5, 5, 6, 5, 5, 6, 6, 7, 6, 8, 9, 10,
     12, 11, 10, 10, 10, 11, 11, 10, 8, 8, 9, 8, 9, 9, 9, 9, 8, 9, 8, 11, 11, 11, 12,
     12, 13, 13, 13, 13, 13, 13, 13, 14, 13, 13, 12, 13, 13, 12, 12, 13, 13, 12, 12, 
     11, 12, 12, 19, 18, 17, 15, 13, 14, 14, 14, 13, 12, 12, 12, 12, 11, 10, 10, 10, 
     10, 9, 9, 8, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 6, 6, 6, 7, 7, 8, 8, 8, 6, 5, 5, 
     5, 5, 5, 5, 6, 4, 4, 4, 6, 7, 8, 7, 7, 9, 10, 10, 9, 9, 8, 7, 5, 5, 5, 5, 5, 5, 
     5, 5, 6, 5, 5, 5, 4, 4, 6, 6, 7, 7, 7, 7, 6, 6, 5, 5, 4, 2, 2, 2, 1, 1, 1, 2, 3,
     13, 13, 12, 11, 10, 9, 10, 10, 8, 9, 8, 7, 5, 3, 2, 2, 2, 3, 3, 4, 4, 5, 6, 6,
     7, 7, 7, 6, 6, 6, 7, 6, 6, 5, 4, 4, 3, 3, 3, 2, 2, 1, 5, 5, 3, 2, 1, 2, 6, 7, 
     7, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 9, 9, 9, 9, 9, 8, 8, 8, 8, 7, 7, 
     7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7, 11, 11, 11, 11, 10, 10, 9, 10, 10, 10, 2, 2,
     2, 3, 1, 1, 3, 4, 5, 8, 9, 9, 9, 9, 8, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7,
     7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 7, 5, 5, 5, 5, 5, 6, 5])

def load_z():
    return np.array([3, 2, 1, 1, 1, 1, 3, 3, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 3, 2, 1, 1, 2, 2, 2,
     2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 4, 4, 4, 4, 5, 4, 4, 5, 5, 5, 6, 6,
     6, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 7, 8, 8, 8, 8, 8, 8, 9, 10, 9, 9, 10, 10, 9,
     9, 10, 9, 9, 10, 9, 8, 9, 9, 7, 7, 6, 7, 6, 6, 7, 7, 8, 8, 8, 8, 8, 8, 7, 6, 7,
     8, 8, 7, 8, 9, 9, 9, 9, 10, 9, 9, 9, 8, 8, 10, 9, 10, 10, 9, 9, 9, 10, 9, 8, 7, 
     7, 7, 7, 8, 7, 6, 5, 4, 3, 5, 3, 5, 4, 4, 4, 2, 4, 3, 2, 1, 1, 2, 1, 2, 1, 4, 4,
     4, 4, 4, 3, 3, 3, 1, 1, 1, 1, 2, 3, 3, 2, 3, 3, 3, 2, 2, 5, 4, 2, 5, 4, 1, 1, 1, 
     1, 1, 1, 1, 2, 2, 1, 1, 3, 3, 3, 3, 3, 4, 3, 4, 3, 4, 4, 4, 4, 3, 3, 4, 4, 4, 4,
     4, 4, 5, 5, 5, 4, 3, 3, 3, 3, 3, 3, 3, 3, 1, 2, 2, 3, 3, 1, 2, 1, 1, 2, 4, 3, 1,
     1, 2, 0, 0, 0, 2, 1, 0, 0, 2, 3, 2, 4, 4, 3, 3, 4, 5, 5, 5, 4, 5, 4, 4, 4, 5, 5, 
     4, 3, 3, 4, 4, 4, 3, 3, 3, 4, 4, 4, 5, 5, 5, 4, 5, 5, 5, 5, 6, 5, 5, 8, 9, 8, 9,
     9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 9, 10, 9, 8, 8, 9, 8, 9, 9, 10, 9, 9, 9,
     7, 7, 9, 8, 7, 6, 6, 5, 5, 5, 5, 3, 3, 3, 4, 6, 5, 5, 6, 5])

if __name__ == '__main__': main()  # avoids executing main when the module is imported

This line does not respect the scalar Kalman filter:

residual = z[t-delay] - h * x_sample_predict_list[t-delay]

In my opinion you should have done:

 residual = z[t - delay] - h * x_sample_predict_prior
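For reference, the update the answer is proposing is the standard scalar KF measurement update, where the innovation is formed against the current prior estimate. A sketch (note this assumes the measurement z_t for the current step is already available, which is exactly what the delayed-forecast setting above does not have; the Q/R defaults are illustrative):

```python
def kf_step(x_prior, z_t, p, a=1.0, h=1.0, Q=1.0, R=1.0):
    """One scalar Kalman step: the innovation uses the prior itself."""
    p_prior = a**2 * p + Q
    k = h * p_prior / (h**2 * p_prior + R)
    residual = z_t - h * x_prior            # innovation against the *current* prior
    x_post = x_prior + k * residual         # posterior lies between prior and z_t
    p_post = p_prior * (1 - h * k)
    return x_post, p_post

x_post, p_post = kf_step(x_prior=6.0, z_t=5.0, p=1.0)
print(x_post, p_post)
```

With the question's delayed residual `z[t-delay] - h * x_sample_predict_list[t-delay]`, the filter is no longer the textbook KF; it is effectively a lag-`delay` bias correction, which is why its benefit depends on how slowly the forecast bias drifts.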
