
Pass parameters to lower-level XGBoost estimators in multilabel classification

I have a multi-label classification problem in which I want to train an XGBoost model for each label (4 in total); I then combine the four XGBoost estimators with sklearn.multioutput.MultiOutputClassifier.

Also, I would like to perform a random search over XGBoost's hyper-parameters with RandomizedSearchCV.

Below is some reproducible code that shows what I am trying to do.

import xgboost as xgb
from sklearn.model_selection import train_test_split, RandomizedSearchCV
from sklearn.multioutput import MultiOutputClassifier
from sklearn.datasets import make_multilabel_classification

# create dataset
X, y = make_multilabel_classification(n_samples=3000, n_features=50, n_classes=4, n_labels=1,
                                      allow_unlabeled=False, random_state=42)

# Split dataset into training and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=123)

# hyper-parameters space for the random search
random_grid = {
    'n_estimators': [200, 300, 400], 
    'learning_rate': [0.05, 0.1, 0.2],
    'max_depth': [3, 4, 5],
    'min_child_weight': [1, 3]
}

xgb_estimator = xgb.XGBClassifier(objective='binary:logistic')
xgb_model = MultiOutputClassifier(xgb_estimator)

# random search instance
xgb_random_search = RandomizedSearchCV(
    estimator=xgb_model, param_distributions=random_grid,
    scoring=['accuracy'], refit='accuracy', n_iter=2, cv=3, verbose=True, random_state=1234, n_jobs=2
)

# fit the random search
xgb_random_search.fit(X_train, y_train)

However, this code raises the following (summarised) error:

ValueError: Invalid parameter n_estimators for estimator MultiOutputClassifier.
    Check the list of available parameters with `estimator.get_params().keys()`

In fact, after running the line of code suggested by the error message, I realised that I was passing the hyper-parameters in random_grid to the MultiOutputClassifier (xgb_model) rather than to the "lower-level" XGBoost estimator (xgb_estimator), which is contained within xgb_model.

The question is: how can I pass the hyper-parameters in random_grid to the "lower-level" XGBoost estimators? I feel like it should be possible with some **kwargs operation, but after several attempts I haven't found a way to do it.

If you run xgb_model.get_params(), you'll find that the parameter names of the wrapped estimator are all prefixed with estimator__ (double underscore). So your parameter space should look like:

random_grid = {
    'estimator__n_estimators': [200, 300, 400], 
    'estimator__learning_rate': [0.05, 0.1, 0.2],
    'estimator__max_depth': [3, 4, 5],
    'estimator__min_child_weight': [1, 3]
}

This is consistent with other sklearn meta-estimators that nest models, such as Pipeline and ColumnTransformer.
