
scikit feature importance selection experiences

Scikit-learn has a mechanism to rank features (classification) using extremely randomized trees (Extra-Trees):

from sklearn.ensemble import ExtraTreesClassifier

# Note: the compute_importances flag was deprecated and later removed from
# scikit-learn; feature_importances_ is always available after fitting.
forest = ExtraTreesClassifier(n_estimators=250,
                              random_state=0)

I have a question whether this method does a "univariate" or a "multivariate" feature ranking. The univariate case is where individual features are compared to each other. I would appreciate some clarification here. Are there any other parameters I should try to fiddle with? Any experiences and pitfalls with this ranking method are also appreciated.

The output of this ranking identifies feature numbers (5, 20, 7, ...). I would like to check whether a feature number really corresponds to the row in the feature matrix, that is, whether feature number 5 corresponds to the sixth row in the feature matrix (counting from 0).
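For context, those numbers come out of something like the following (my sketch; taking an argsort over the importances is an assumption about how such a ranking is typically produced):

import numpy as np

# After forest.fit(X, y); one importance score per feature.
importances = forest.feature_importances_
ranked = np.argsort(importances)[::-1]   # e.g. array([ 5, 20,  7, ...])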

I'm not an expert, but this is not univariate. In fact, the total feature importance is computed from the feature importance of each tree (taking the mean value, I think).
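You can check this directly against a fitted forest (a minimal sketch; it assumes the forest averages the per-tree importances and re-normalizes, which matches my understanding of scikit-learn's behavior):

import numpy as np

# Impurity-based importances of each individual tree, averaged across
# the forest and re-normalized to sum to 1.
per_tree = np.array([tree.feature_importances_ for tree in forest.estimators_])
mean_imp = per_tree.mean(axis=0)
print(np.allclose(mean_imp / mean_imp.sum(), forest.feature_importances_))  # expect True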

For each tree, the importances are computed from the impurity of the splits.
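Here is a minimal sketch of that idea (my reconstruction of mean decrease in impurity on a single tree, using the iris dataset purely for illustration; not scikit-learn's actual code):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
clf = DecisionTreeClassifier(random_state=0).fit(data.data, data.target)
t = clf.tree_

# Credit each split's weighted impurity decrease to the feature it splits on.
imp = np.zeros(data.data.shape[1])
for node in range(t.node_count):
    left, right = t.children_left[node], t.children_right[node]
    if left == -1:  # leaf: no split, nothing to credit
        continue
    imp[t.feature[node]] += (
        t.weighted_n_node_samples[node] * t.impurity[node]
        - t.weighted_n_node_samples[left] * t.impurity[left]
        - t.weighted_n_node_samples[right] * t.impurity[right]
    )

imp /= imp.sum()  # normalize to sum to 1
print(np.allclose(imp, clf.feature_importances_))  # expect True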

I used this method and it seems to give good results; from my point of view, better than the univariate method. But I don't know of any technique to test the results except knowledge of the dataset.
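One generic sanity check you could try (my sketch, not something from the original answer): cross-validate a simple model on the top-k ranked features and see whether the score holds up as k shrinks. The iris dataset and logistic regression here are just placeholders:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
forest = ExtraTreesClassifier(n_estimators=250, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]

for k in (1, 2, 4):
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            X[:, ranking[:k]], y, cv=5).mean()
    print("top %d features: CV accuracy %.3f" % (k, score))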

To order the features correctly, you should follow this example and modify it a bit, like so, to use pandas.DataFrame and its proper column names:

import numpy as np
import pandas
import matplotlib.pyplot as plt

from sklearn.ensemble import ExtraTreesClassifier

X = pandas.DataFrame(...)  # feature matrix, one named column per feature
y = pandas.Series(...)     # target labels

# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
                              random_state=0)
forest.fit(X, y)

# Scale importances relative to the strongest feature and sort descending
feature_importance = forest.feature_importances_
feature_importance = 100.0 * (feature_importance / feature_importance.max())
sorted_idx = np.argsort(feature_importance)[::-1]

print("Feature importance:")
for i, (f, w) in enumerate(zip(X.columns[sorted_idx],
                               feature_importance[sorted_idx]), start=1):
    print("%d) %s : %.2f" % (i, f, w))

# Plot the most important features as a horizontal bar chart
pos = np.arange(sorted_idx.shape[0]) + .5
nb_to_display = 30
plt.barh(pos[:nb_to_display], feature_importance[sorted_idx][:nb_to_display], align='center')
plt.yticks(pos[:nb_to_display], X.columns[sorted_idx][:nb_to_display])
plt.xlabel('Relative Importance')
plt.title('Variable Importance')
plt.show()
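As for the indexing question: forest.feature_importances_[i] refers to the i-th column of X (features are columns in the usual (n_samples, n_features) layout), so feature number 5 in the question is indeed the sixth feature counting from 0; it indexes a column of the feature matrix, not a row.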
