
Univariate linear regression tests for feature selection?

I have been reading about the f_regression function available in scikit-learn's feature_selection package. According to the documentation, which I quote:

Linear model for testing the individual effect of each of many regressors. This is a scoring function to be used in a feature selection procedure, not a free-standing feature selection procedure.

This is done in 2 steps:

  • The correlation between each regressor and the target is computed, that is, ((X[:, i] - mean(X[:, i])) * (y - mean_y)) / (std(X[:, i]) * std(y)).
  • It is converted to an F score then to a p-value.

So in the first part I suppose they are calculating the correlation coefficients, but I cannot find how the conversion from those correlation coefficients to an F score and then to p-values is done. Could anybody provide a worked example of that process?

Thanks

If we use this example:

import numpy as np
import pandas as pd 
import matplotlib.pyplot as plt
from sklearn.feature_selection import f_regression
from scipy import stats

np.random.seed(0)
X = np.random.rand(1000, 3)
y = X[:, 0] + np.sin(6 * np.pi * X[:, 1]) + 0.1 * np.random.randn(1000)

y = x1 + sin(6 * pi * x2) + 0.1 * N(0, 1), that is, the third feature is completely irrelevant.

f_test, p_values = f_regression(X, y)
f_test_norm = f_test/np.max(f_test)

plt.figure(figsize=(25, 5))
for i in range(3):
    plt.subplot(1, 3, i + 1)
    plt.scatter(X[:, i], y, edgecolor='black', s=20)
    plt.xlabel("$x_{}$".format(i + 1), fontsize=14)
    if i == 0:
        plt.ylabel("$y$", fontsize=14)
    plt.title("Normalized F-test={:.2f}, F-test={:.2f}, p-value={:.2f}".format(f_test_norm[i], f_test[i], p_values[i]),
              fontsize=16)
plt.show()

[Scatter plots of y against each feature, titled with the normalized F-test, F-test, and p-value]

The values for F-test and p-value are as follows:

>>> f_test, p_values
(array([187.42118421,  52.52357392,   0.47268298]),
 array([3.19286906e-39, 8.50243215e-13, 4.91915197e-01]))
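As a cross-check (a sketch, assuming the same seed and data as above): for a single regressor, the F-statistic is the square of the t-statistic of the Pearson correlation coefficient, so scipy.stats.pearsonr reproduces the same p-values:

```python
import numpy as np
from scipy import stats
from sklearn.feature_selection import f_regression

np.random.seed(0)
X = np.random.rand(1000, 3)
y = X[:, 0] + np.sin(6 * np.pi * X[:, 1]) + 0.1 * np.random.randn(1000)

f_test, p_values = f_regression(X, y)

# For a single regressor, F equals the square of the t-statistic of
# the Pearson correlation, and the two-sided t-test p-value matches
# the F-test p-value.
for i in range(X.shape[1]):
    r, p = stats.pearsonr(X[:, i], y)
    t = r * np.sqrt((y.size - 2) / (1 - r ** 2))
    assert np.isclose(t ** 2, f_test[i])
    assert np.isclose(p, p_values[i])
```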

Let's first compute correlation:

df = pd.DataFrame(X)
df['y'] = y

>>> df
            0         1         2         y
0    0.548814  0.715189  0.602763  1.004714
1    0.544883  0.423655  0.645894  0.900226
2    0.437587  0.891773  0.963663 -0.919160
...

>>> df.corr()['y']
0    0.397624
1   -0.223601
2    0.021758
y    1.000000

corr = df.corr()['y'][:3]

Then, according to the f_regression source code, they calculate degrees_of_freedom as len(y) - 2 if the center parameter is true and len(y) - 1 otherwise:

degrees_of_freedom = y.size - (2 if center else 1)

The F-statistic is calculated as

F = corr ** 2 / (1 - corr ** 2) * degrees_of_freedom

which in our case gives:

0    187.421184
1     52.523574
2      0.472683
Name: y, dtype: float64

p-values are then calculated using the survival function of the F-distribution:

pv = stats.f.sf(F, 1, degrees_of_freedom)
>>> pv
array([3.19286906e-39, 8.50243215e-13, 4.91915197e-01])
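Putting the pieces together, a minimal end-to-end sketch (recomputing the same data as above) confirms that the manual correlation → F → p-value pipeline reproduces f_regression exactly:

```python
import numpy as np
from scipy import stats
from sklearn.feature_selection import f_regression

np.random.seed(0)
X = np.random.rand(1000, 3)
y = X[:, 0] + np.sin(6 * np.pi * X[:, 1]) + 0.1 * np.random.randn(1000)

# Step 1: Pearson correlation of each column of X with y.
X_c = X - X.mean(axis=0)
y_c = y - y.mean()
corr = (X_c * y_c[:, None]).sum(axis=0) / (
    np.sqrt((X_c ** 2).sum(axis=0)) * np.sqrt((y_c ** 2).sum())
)

# Step 2: convert correlation to an F score, then to a p-value.
dof = y.size - 2                              # center=True default
F = corr ** 2 / (1 - corr ** 2) * dof
pv = stats.f.sf(F, 1, dof)

f_test, p_values = f_regression(X, y)
assert np.allclose(F, f_test)
assert np.allclose(pv, p_values)
```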
