
How to combine multiple feature selection methods in Python's Scikit-Learn

I have a dataset with 100,000+ rows and 1,000 columns/features, plus one output column (0 or 1). I want to select the best features/columns for my model. I was thinking of combining multiple feature selection methods in scikit-learn, but I don't know whether this is the right procedure or the right way to do it. Also, you will see in the code below that when I use PCA it says column f1 is the most important feature, yet at the end it says I should use column 2 (feature f2). Why does that happen, and is it good/correct/normal? See the code below, where I used dummy data:

import pandas as pd

from sklearn.feature_selection import RFE, SelectFromModel
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC


df = pd.DataFrame({'f1':[1,5,3,4,5,16,3,1,0],
                   'f2':[0.1,0.5,0.3,0.4,0.5,1.6,0.3,0.1,1],
                   'f3':[12,41,53,13,53,13,65,24,21],
                   'f4':[1,6,3,4,4,18,5,2,5],
                   'f5':[10,15,32,41,51,168,27,13,2],
                   'result':[1,0,1,0,0,0,1,1,0]})

print(df)

x = df.iloc[:,:-1]
y = df.iloc[:,-1]

# Printing the shape of my data before PCA
print(x.shape)

# Doing PCA to reduce number of features
pca = PCA()
fit = pca.fit(x)

pca_result = list(fit.explained_variance_ratio_)
print(pca_result)

#I see that 'f1', 'f2' and 'f3' are the most important values
#so now, my x is:
x = df[['f1', 'f2', 'f3']]
print(x.shape) #new shape of x

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3, random_state = 0)

classifiers = [['Linear SVM', SVC(kernel = 'linear', gamma = 'scale')],
               ['Decision tree', DecisionTreeClassifier()],
               ['Random Forest', RandomForestClassifier(n_estimators = 100)]]


# Now I use 'SelectFromModel' to get the optimal number of features/columns
my_acc = 0
for c in classifiers:

    clf = c[1].fit(x_train, y_train)

    model = SelectFromModel(clf, prefit=True)
    model_score = clf.score(x_test, y_test)
    column_res = model.transform(x_train).shape
    print(model_score, column_res)
    if model_score > my_acc:

        my_acc = model_score
        number_of_columns = column_res[1]
        my_cls = c[0]

# classifier with the best accuracy and its number of columns:
print(my_cls)
print('Number of columns',number_of_columns)


# Can I call 'RFE' now? Is it the correct / good / right thing to do?
# I want to find the best columns for this
my_acc = 0
for c in classifiers:

    model = c[1]
    rfe = RFE(model, n_features_to_select=number_of_columns)
    fit = rfe.fit(x_train, y_train)
    acc = fit.score(x_test, y_test)

    if acc > my_acc:
        my_acc = acc
        list_of_results = fit.support_

        final_model_name = c[0]
        final_model = c[1]

        print()

print(final_model_name)
print(my_acc)
print(list_of_results)

# I got a result that says I should use the second column, while the PCA said the first column is the most important
# Is this good / normal / correct?

Is this the right approach, or am I doing something wrong?

Explaining your code:

pca = PCA()
fit = pca.fit(x)

pca will keep all of your features here, per the PCA docstring: "Number of components to keep. If n_components is not set all components are kept."
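You can verify this on the dummy data above (a minimal check; n_components_ is the attribute PCA sets after fitting):

# with n_components unset, PCA keeps min(n_samples, n_features) components,
# i.e. one per input column here
print(fit.n_components_)  # 5, same as x.shape[1]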

As for the command:

pca_result = list(fit.explained_variance_ratio_)

This post explains it well: Python scikit learn pca.explained_variance_ratio_ cutoff

You should use:

fit.explained_variance_ratio_.cumsum()

because the output is the variance, in %, retained as you keep more and more components. Using PCA for feature importance is wrong: each principal component is a linear combination of all input features, so the explained variance of a component does not tell you which original column matters most.
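For example, a minimal sketch of choosing the number of components from the cumulative ratio (the 0.95 threshold is an arbitrary illustration, not a recommendation):

import numpy as np

cumulative = fit.explained_variance_ratio_.cumsum()
# smallest number of components whose combined explained variance reaches 95%
n_keep = int(np.argmax(cumulative >= 0.95)) + 1
print(cumulative, n_keep)

scikit-learn can also do this selection for you by passing a float, e.g. PCA(n_components=0.95, svd_solver='full').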

Only the part with SelectFromModel is doing feature selection. You could run SelectFromModel as a first step and afterwards use PCA to further reduce the dimensionality, but if you have enough memory to run it, there is no need to reduce the dimensionality at all.
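A minimal sketch of that two-step idea, chaining SelectFromModel, PCA and a classifier in a scikit-learn Pipeline (the random forest, the default importance threshold and the 0.95 variance target are illustrative choices, not tuned values):

from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

pipe = Pipeline([
    # step 1: keep only the columns the forest considers important
    ('select', SelectFromModel(RandomForestClassifier(n_estimators=100))),
    # step 2: optionally compress the surviving columns further
    ('pca', PCA(n_components=0.95, svd_solver='full')),
    # final classifier trained on the reduced representation
    ('clf', RandomForestClassifier(n_estimators=100)),
])
pipe.fit(x_train, y_train)
print(pipe.score(x_test, y_test))

The PCA step is optional: drop it if the selected columns already fit in memory.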
