
How to calculate loss with KerasClassifier

I'm using KerasClassifier, the scikit-learn-style wrapper for Keras models, in order to perform K-fold cross-validation.

model = KerasClassifier(build_fn=create_model, epochs=20, batch_size=8, verbose=1)
kfold = KFold(n_splits=10)
scoring = ['accuracy', 'precision', 'recall', 'f1']
results = cross_validate(estimator=model,
                         X=x_train,
                         y=y_train,
                         cv=kfold,
                         scoring=scoring,
                         return_train_score=True,
                         return_estimator=True)

Then I choose the best model among the 10 estimators returned by the function, according to the metrics:

best_model = results['estimator'][2]  # for example, the third fold's model
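If you'd rather not hard-code the fold index, one option (a minimal sketch, assuming the scoring list above) is to pick the estimator from the fold with the highest cross-validated accuracy:

import numpy as np

# cross_validate stores the per-fold validation scores under 'test_<metric>',
# so the best fold can be selected by its accuracy
best_idx = int(np.argmax(results['test_accuracy']))
best_model = results['estimator'][best_idx]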

Now I want to run a prediction on x_test and get the accuracy and loss metrics. How can I do it? I tried model.evaluate(x_test, y_test), but the model is a KerasClassifier, so I get an error.

The point is that your KerasClassifier instance mimics standard scikit-learn classifiers. In other words, it behaves like a scikit-learn estimator and, as such, does not provide an .evaluate() method.

Therefore, you can simply call best_model.score(X_test, y_test), which returns the accuracy, as standard sklearn classifiers do. On the other hand, you can access the loss values recorded during training via the history_ attribute of your KerasClassifier instance.

Here's an example:

!pip install scikeras    

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_validate, KFold
import tensorflow as tf
import tensorflow.keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from scikeras.wrappers import KerasClassifier

X, y = make_classification(n_samples=100, n_features=20, n_informative=5, random_state=42)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

def build_nn():
    ann = Sequential()
    ann.add(Dense(20, input_dim=X_train.shape[1], activation='relu', name="Hidden_Layer_1"))
    ann.add(Dense(1, activation='sigmoid', name='Output_Layer'))
    ann.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return ann

keras_clf = KerasClassifier(model = build_nn, optimizer="adam", optimizer__learning_rate=0.001, epochs=100, verbose=0)

kfold = KFold(n_splits=10)
scoring = ['accuracy', 'precision', 'recall', 'f1']
results = cross_validate(estimator=keras_clf, X=X_train, y=y_train, scoring=scoring, cv=kfold, return_train_score=True, return_estimator=True)

best_model = results['estimator'][2]

# accuracy
best_model.score(X_test, y_test)

# loss values
best_model.history_['loss']
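If you also need a loss value on the held-out test set (history_ only stores the training-time losses), one way is to compute it from the wrapper's predicted probabilities with scikit-learn. A minimal sketch, assuming the binary cross-entropy loss used in build_nn:

from sklearn.metrics import log_loss

# log_loss on the predicted class probabilities is the same quantity as
# Keras' binary_crossentropy, evaluated here on the test split
test_loss = log_loss(y_test, best_model.predict_proba(X_test))
print(test_loss)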

Finally, note that, when in doubt, you can call dir(object) to get the list of all attributes and methods of a given object (dir(best_model) in your case).
