Python Mlens Ensemble: KeyError: "None of [Int64Index([... dtype='int64', length=105)] are in the [columns]"

Below is a small version of the code from which I am getting this error: KeyError: "None of [Int64Index([...], dtype='int64')] are in the [columns]"

The '...' is a series of numbers that appears to match the index of my X and y DataFrames.
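
For context on where such an index array can come from: when an array of row positions (for example, the rows of one cross-validation fold) is used to index a DataFrame directly rather than through `.iloc`, pandas interprets the integers as column labels and raises exactly this kind of KeyError. The sketch below is only an illustration of the error message with a made-up DataFrame, not a claim about mlens internals:

```
import numpy as np
import pandas as pd

# Toy DataFrame; the column names and values are made up purely for illustration.
df = pd.DataFrame({"sepallength": [5.1, 4.9, 4.7, 4.6],
                   "sepalwidth": [3.5, 3.0, 3.2, 3.1]})
row_positions = np.array([0, 2, 3])   # e.g. row indices produced by a CV split

print(df.iloc[row_positions])         # positional row selection: works
try:
    df[row_positions]                 # pandas looks these up as column labels
except KeyError as err:
    print(err)                        # "None of [Int64Index([0, 2, 3], ...)] are in the [columns]"
```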

I am using the Mlens package with a SuperLearner to model a very large dataset (so scalability is important). My goal is to work with DataFrame structures rather than NumPy arrays, which would solve problems downstream.

So far I have explored this post and other related ones, but the solutions do not seem to apply here.

The dataset is the iris dataset found here, saved as a.csv: https://datahub.io/machine-learning/iris#data/

Note that the custom random forest function works fine on its own; the error comes from mlens/SuperLearner.

```
from sklearn.base import BaseEstimator
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from mlens.ensemble.super_learner import SuperLearner
import numpy as np
import pandas as pd

df = pd.read_csv("/home/marktest/iris_csv.csv")
type(df)

N_FOLDS = 5
RF_ESTIMATORS = 100
RANDOM_STATE = 42


class RFBasedFeatureSelector(BaseEstimator):
    """Keeps the features whose random-forest importance exceeds a threshold."""

    def __init__(self, n_estimators):
        self.n_estimators = n_estimators
        self.selector = None

    def fit(self, X, y):
        clf = RandomForestClassifier(n_estimators=self.n_estimators,
                                     random_state=RANDOM_STATE,
                                     class_weight='balanced')
        clf = clf.fit(X, y)
        self.selector = SelectFromModel(clf, prefit=True, threshold=0.01)

    def transform(self, X):
        if self.selector is None:
            raise AttributeError('The selector attribute has not been assigned. '
                                 'You cannot call transform before first calling fit or fit_transform.')
        return self.selector.transform(X)

    def fit_transform(self, X, y):
        self.fit(X, y)
        return self.transform(X)


df.head()
X = df.iloc[:, 0:4]                                  # split off the four feature columns into a new dataframe
y = df.iloc[:, 4]                                    # split off the outcome column into a new dataframe

X, X_val, y, y_val = train_test_split(X, y, test_size=.3, random_state=RANDOM_STATE, stratify=y)

from mlens.metrics import make_scorer
from sklearn.metrics import roc_auc_score, balanced_accuracy_score
accuracy_scorer = make_scorer(roc_auc_score, average='micro', greater_is_better=True)

# The custom random forest pipeline works fine on its own.
clf = RandomForestClassifier(RF_ESTIMATORS, random_state=RANDOM_STATE, class_weight='balanced')
scaler = StandardScaler()
feature_selector = RFBasedFeatureSelector(RF_ESTIMATORS)
clf.fit(feature_selector.fit_transform(scaler.fit_transform(X), y), y)
accuracy_score(y_val, clf.predict(feature_selector.transform(scaler.transform(X_val))))

# The mlens SuperLearner ensemble raises the KeyError on fit.
ensemble = SuperLearner(folds=N_FOLDS, shuffle=True, random_state=RANDOM_STATE,
                        scorer=balanced_accuracy_score, backend="threading")

preprocessing = {'pipeline-1': [StandardScaler(), RFBasedFeatureSelector(RF_ESTIMATORS)]}

estimators = {'pipeline-1': [RandomForestClassifier(RF_ESTIMATORS, random_state=RANDOM_STATE,
                                                    class_weight='balanced')]}

ensemble.add(estimators, preprocessing)
ensemble.add_meta(LogisticRegression(solver='liblinear', class_weight='balanced'))
ensemble.fit(X, y)
```

I think the problem is shuffle=True. I had a similar problem, and after setting shuffle=False it no longer gave the error message.
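
For reference, a minimal sketch of the suggested change: the ensemble is constructed exactly as in the question, except that shuffle is switched off (the constants are copied from the question's code).

```
from mlens.ensemble.super_learner import SuperLearner
from sklearn.metrics import balanced_accuracy_score

N_FOLDS = 5          # same values as in the question
RANDOM_STATE = 42

# Identical to the original construction except shuffle=False, so the fold
# index arrays are used in their original (unshuffled) order.
ensemble = SuperLearner(folds=N_FOLDS,
                        shuffle=False,
                        random_state=RANDOM_STATE,
                        scorer=balanced_accuracy_score,
                        backend="threading")
```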
