

Conditional sampling from multivariate kernel density estimate in Python

One can create a multivariate kernel density estimate (KDE) with scikit-learn ( https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KernelDensity.html#sklearn.neighbors.KernelDensity ) and scipy ( https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.gaussian_kde.html ).

Both allow random sampling from the estimated distribution. Is there a way to do conditional sampling in either of the two libraries (or any other library)? In the 2-variable (x, y) case this would mean sampling from P(x|y) (or P(y|x)), i.e. from a cross-section of the probability density function, with that cross-section rescaled to unit area under its curve.

import numpy as np
from scipy import stats

x = np.random.random(100)
y = np.random.random(100)
kde = stats.gaussian_kde([x, y])
# sampling from the whole pdf:
kde.resample()

I am looking for something like

# sampling y, conditional on x
kde.sample_conditional(x=1.5) #does not exist
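
In other words, the goal is to evaluate the joint density along the slice x = x0, rescale it to integrate to one, and sample from the result. A minimal grid-based sketch of that idea with gaussian_kde (x0, y_grid and the grid/sample sizes are just illustrative names, not an existing API):

import numpy as np
from scipy import stats

x = np.random.random(100)
y = np.random.random(100)
kde = stats.gaussian_kde([x, y])

x0 = 0.5                                                    # conditioning value
y_grid = np.linspace(y.min(), y.max(), 10000)               # grid over the free variable
joint = kde(np.vstack([np.full_like(y_grid, x0), y_grid]))  # p(x0, y) on the grid
weights = joint / joint.sum()                               # rescale the cross-section to sum to 1
y_samples = np.random.choice(y_grid, size=1000, p=weights)  # sample y | x = x0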

The following function conditionally samples one column, given values for all the other columns.

import numpy as np

def conditional_sample(kde, training_data, columns, values, samples=100):
    """Sample the one remaining column of `training_data`, conditional on
    `columns` being fixed at `values`. Expects `kde` with a score_samples
    method (e.g. sklearn.neighbors.KernelDensity) and a pandas DataFrame."""
    if len(values) != len(columns):
        raise ValueError("length of columns and values should be equal")
    if training_data.shape[1] - len(columns) != 1:
        raise ValueError(f'Expected {training_data.shape[1] - 1} columns/values '
                         f'but {len(columns)} have been given')
    cols_values_dict = dict(zip(columns, values))

    # create the array to sample from: a fine grid over every column's range
    steps = 10000
    X_test = np.zeros((steps, training_data.shape[1]))
    for i in range(training_data.shape[1]):
        col_data = training_data.iloc[:, i]
        X_test[:, i] = np.linspace(col_data.min(), col_data.max(), steps)
    # overwrite the conditioned columns with their fixed values
    for col in columns:
        idx_col_training_data = list(training_data.columns).index(col)
        idx_in_values_list = list(cols_values_dict.keys()).index(col)
        X_test[:, idx_col_training_data] = values[idx_in_values_list]

    # compute the normalized probability of each grid point
    log_dens = kde.score_samples(X_test)
    prob_dist = np.exp(log_dens) / np.exp(log_dens).sum()

    # sample grid points in proportion to their probability
    return X_test[np.random.choice(range(len(prob_dist)), p=prob_dist, size=samples)]
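
For reference, a hedged usage sketch (not from the original answer): the function assumes a pandas DataFrame for training_data and a fitted sklearn.neighbors.KernelDensity for kde; the bandwidth, column names and conditioning value below are illustrative.

import numpy as np
import pandas as pd
from sklearn.neighbors import KernelDensity

training_data = pd.DataFrame({'x': np.random.random(1000),
                              'y': np.random.random(1000)})
kde = KernelDensity(bandwidth=0.1).fit(training_data.values)

# sample y conditional on x = 0.3
cond_samples = conditional_sample(kde, training_data, columns=['x'], values=[0.3], samples=100)
print(cond_samples.shape)  # (100, 2): column 0 fixed at 0.3, column 1 holds the sampled y values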

This function works for use cases with only one-dimensional conditions. It is built so that the condition does not have to be met exactly, but only within some tolerance.

from scipy import stats
import numpy as np

x = np.random.random(1000)
y = np.random.random(1000)
kde = stats.gaussian_kde([x,y])

def conditional_sample(kde, condition, precision=0.5, sample_size=1000):
    # Draw a large block so that roughly `sample_size` points survive the cut:
    # `precision` is the half-width of the accepted window in percentile points,
    # so about 2 * precision percent of the block is kept.
    block_size = int(sample_size / (2 * precision / 100))
    sample = kde.resample(block_size)

    # Keep only the points whose last coordinate lies within +/- `precision`
    # percentile points of the condition value.
    lower_limit = np.percentile(sample[-1], stats.percentileofscore(sample[-1], condition) - precision)
    upper_limit = np.percentile(sample[-1], stats.percentileofscore(sample[-1], condition) + precision)

    accepted = sample[:, (sample[-1] > lower_limit) & (sample[-1] < upper_limit)]

    # Return all coordinates except the conditioned (last) one.
    return accepted[0:-1]

conditional_sample(kde=kde, condition=0.6)
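
With the default arguments this draws a block of 100,000 points from the joint KDE, keeps roughly the 1,000 of them whose y value falls within ±0.5 percentile points of the percentile of condition = 0.6, and returns their x values as a (1, n) array. A smaller precision gives a tighter condition at the cost of a larger block to resample.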

Not a universal solution but works well for my use case.
