Nested cross validation with stratified folds

I am trying to implement a random forest regressor using scikit-learn pipelines and nested cross-validation. The dataset is about housing prices, with several features (some numeric, others categorical) and a continuous target variable (median_house_value).

Data columns (total 10 columns):
 #   Column              Non-Null Count  Dtype  
---  ------              --------------  -----  
 0   longitude           20640 non-null  float64
 1   latitude            20640 non-null  float64
 2   housing_median_age  20640 non-null  float64
 3   total_rooms         20640 non-null  float64
 4   total_bedrooms      20433 non-null  float64
 5   population          20640 non-null  float64
 6   households          20640 non-null  float64
 7   median_income       20640 non-null  float64
 8   median_house_value  20640 non-null  float64
 9   ocean_proximity     20640 non-null  object 

I decided to manually create two stratified 5-fold splits (inner and outer loops for nested CV). The stratification is based on a modified version of the median_income feature:

import numpy as np
import pandas as pd
df.insert(9, "income_cat",
          pd.cut(df["median_income"], bins=[0., 1.5, 3.0, 4.5, 6., np.inf], labels=[1, 2, 3, 4, 5]))

This is the code for creating the folds:

from sklearn.model_selection import StratifiedShuffleSplit

cv1_5 = StratifiedShuffleSplit(n_splits = 5, test_size = .2, random_state = 42)
cv1_splits = []

# create the first 5 stratified fold indices
for train_index, test_index in cv1_5.split(df, df["income_cat"]):
    cv1_splits.append((train_index, test_index))

cv2_5 = StratifiedShuffleSplit(n_splits = 5, test_size = .2, random_state = 43)
cv2_splits = []

# create the second 5 stratified fold indices
for train_index, test_index in cv2_5.split(df, df["income_cat"]):
    cv2_splits.append((train_index, test_index))

# set initial dataset
X = df.drop("median_house_value", axis=1)
y = df["median_house_value"].copy()
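As a quick sanity check (an illustrative sketch, not part of the original code), one can verify that a stratified test fold preserves the income_cat proportions of the full dataset:

# Illustrative: the proportions in the full data and in the first
# stratified test fold should be nearly identical.
train_idx, test_idx = cv1_splits[0]
print(df["income_cat"].value_counts(normalize=True).sort_index())
print(df["income_cat"].iloc[test_idx].value_counts(normalize=True).sort_index())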

This is the preprocessing pipeline:

from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# create preprocessing pipeline
preprocess_pipe = Pipeline(
    [
        ("ctransformer", ColumnTransformer([
                (
                    "num_pipe",
                    Pipeline([
                        ("imputer", SimpleImputer(strategy="median")),
                        ("scaler", StandardScaler())
                    ]),
                    list(X.select_dtypes(include=[np.number]))
                ),
                (
                    "cat_pipe",
                    Pipeline([
                        ("encoder", OneHotEncoder()),
                    ]),
                    ["ocean_proximity"]
                )
            ])
        ),
    ]
)
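As a hedged aside (not in the original post), fitting the preprocessing pipeline on its own is a cheap way to confirm it runs and to inspect the transformed feature count:

# Illustrative: expect one column per numeric feature plus one per
# ocean_proximity category; income_cat is silently dropped by the
# ColumnTransformer's default remainder="drop".
Xt = preprocess_pipe.fit_transform(X)
print(Xt.shape)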

And this is the final pipeline (including the preprocessing one):

from sklearn.ensemble import RandomForestRegressor

pipe = Pipeline([
    ("preprocess", preprocess_pipe),
    ("model", RandomForestRegressor())
])

I am using nested cross-validation to tune the hyperparameters of the final pipeline and to estimate the generalization error.

Here is the parameter grid:

param_grid = [
    {
        "preprocess__ctransformer__num_pipe__imputer__strategy": ["mean", "median"],
        "model__n_estimators": [3, 10, 30, 50, 100, 150, 300],
        "model__max_features": [2, 4, 6, 8]
    }
]

This is the final step:

from sklearn.model_selection import GridSearchCV, cross_val_score

grid_search = GridSearchCV(pipe, param_grid, cv = cv1_splits,
    scoring = "neg_mean_squared_error",
    return_train_score = True)

clf = grid_search.fit(X, y)

generalization_error = cross_val_score(clf.best_estimator_, X = X, y = y, cv = cv2_splits)
generalization_error

Now, here comes the glitch (bottom two lines of the preceding code snippet):

If I follow the scikit-learn instructions (link), I should write:

generalization_error = cross_val_score(clf, X = X, y = y, cv = cv2_splits, scoring = "neg_mean_squared_error")
generalization_error

Unfortunately, calling cross_val_score(clf, X = X...) gives me an error (indices are out of bounds for the train/test splits), and the generalization error array contains only NaNs.

On the other hand, if I write it like so:

generalization_error = cross_val_score(clf.best_estimator_, X = X, y = y, cv = cv2_splits, scoring = "neg_mean_squared_error")
generalization_error

The script runs flawlessly and I am able to see the generalization error array filled with scores. Can I stick with this last way of doing things, or is there something wrong in the whole process?

For me, the problem here lies in the use of cv1_splits and cv2_splits rather than cv1_5 and cv2_5 (in particular, it is the use of cv1_splits that causes the issue).

In general, cross_val_score() calls fit() on a clone of the clf estimator; in this case the clone is a GridSearchCV estimator that gets fitted on several X_inner_train sets (subsets of X selected according to cv2_splits, each with fewer rows than X; see here for notation). Since cv1_splits was built from the full X, it contains indices that are valid with respect to the dimension of X, but which might not be valid with respect to the dimension of X_inner_train.
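A minimal sketch (with hypothetical numbers) of why those absolute indices break:

import numpy as np

# cross_val_score hands each GridSearchCV clone only the outer training
# rows (80% of X here), but cv1_splits still holds row positions up to
# len(X) - 1, so indexing the smaller array goes out of bounds.
n_full = 20640                                # rows in the full X
X_inner_train = np.arange(int(0.8 * n_full))  # what a clone receives
stale_index = n_full - 1                      # a valid position in X
print(stale_index < len(X_inner_train))       # False -> out of bounds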

Instead, by passing cv1_5 itself to the GridSearchCV estimator, the estimator takes care of splitting whatever training set it receives coherently (see here for reference).
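A minimal sketch of that fix follows. One caveat (my assumption, not stated in the answer): GridSearchCV forwards the continuous y to the splitter it is given, and StratifiedShuffleSplit cannot stratify on a continuous target, so this sketch swaps in plain shuffled KFold objects; stratifying the inner folds on income_cat would need extra handling.

from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

# Pass splitter objects, not precomputed index lists, so each clone
# re-splits whatever data it actually receives.
inner_cv = KFold(n_splits = 5, shuffle = True, random_state = 42)
outer_cv = KFold(n_splits = 5, shuffle = True, random_state = 43)

grid_search = GridSearchCV(pipe, param_grid, cv = inner_cv,
    scoring = "neg_mean_squared_error",
    return_train_score = True)

generalization_error = cross_val_score(grid_search, X = X, y = y,
    cv = outer_cv, scoring = "neg_mean_squared_error")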
