Why is keras only doing 10 epochs when I set it to 300?

I'm using a combination of sklearn and Keras running with Theano as its back-end. I'm using the following code:

import numpy as np
import pandas as pd
from pandas import Series, DataFrame
import keras
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.constraints import maxnorm
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import SGD
from keras.wrappers.scikit_learn import KerasClassifier
from keras.constraints import maxnorm
from keras.utils.np_utils import to_categorical
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from datetime import datetime
import time
from datetime import timedelta
from __future__ import division

seed = 7
np.random.seed(seed)

Y = data['Genre']
del data['Genre']
X = data

encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)

X = X.as_matrix().astype("float")

calls=[EarlyStopping(monitor='acc', patience=10), ModelCheckpoint('C:/Users/1383921/Documents/NNs/model', monitor='acc', save_best_only=True, mode='auto', period=1)]

def create_baseline(): 
    # create model
    model = Sequential()
    model.add(Dense(18, input_dim=9, init='normal', activation='relu'))
    model.add(Dense(9, init='normal', activation='relu'))
    model.add(Dense(12, init='normal', activation='softmax'))
    # Compile model
    sgd = SGD(lr=0.01, momentum=0.8, decay=0.0, nesterov=False)
    model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
    return model

np.random.seed(seed)
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasClassifier(build_fn=create_baseline, nb_epoch=300, batch_size=16, verbose=2)))
pipeline = Pipeline(estimators)
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(pipeline, X, encoded_Y, cv=kfold, fit_params={'mlp__callbacks':calls})
print("Baseline: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))

The result when I start running this last part is:

Epoch 1/10
...
Epoch 2/10

etc.

It's supposed to be Epoch 1/300, and it works just fine when I run it on a different notebook.

What do you guys think is happening? nb_epoch=300 ...

What Keras version is this? If it's greater than 2.0, then nb_epoch was changed to just epochs, so a leftover nb_epoch argument is not picked up and the number of epochs falls back to the default of 10.
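A quick way to confirm which behaviour you are on (a minimal sketch, assuming keras is importable in the same environment as the notebook):

import keras
print(keras.__version__)  # 2.x or later: the fit argument is `epochs`, not `nb_epoch`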

In Keras 2.0 the nb_epoch parameter was renamed to epochs, so when you set epochs=300 it runs 300 epochs. If you use nb_epoch=300 it will default to 10 instead.
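For example, renaming the keyword in the wrapper from the question is enough (a minimal sketch, assuming Keras 2.x and the same create_baseline and Pipeline setup; KerasClassifier forwards the keyword to model.fit):

estimators = []
estimators.append(('standardize', StandardScaler()))
# `epochs` replaces the old `nb_epoch` keyword and is passed on to model.fit()
estimators.append(('mlp', KerasClassifier(build_fn=create_baseline, epochs=300, batch_size=16, verbose=2)))
pipeline = Pipeline(estimators)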

Another solution to your problem: forget about nb_epoch (it doesn't work). Pass epochs inside fit_params:

results = cross_val_score(pipeline, X, encoded_Y, cv=kfold,
          fit_params={'mlp__epochs': 300, 'mlp__callbacks': calls})

And that would work. fit_params is forwarded straight into the fit method, so it will get the right number of epochs (note the mlp__ prefix, which routes the parameter to the mlp step inside the Pipeline, just like the callbacks entry).

The parameter name in your function should be epochs instead of nb_epochs. Be very careful though: for example, I trained my ANN the old-fashioned way of declaring the parameter (nb_epochs = number) and it worked (the iPython console only showed me some warnings), but when I plugged the same parameter name into the cross_val_score function, it did not work.

I think that what sklearn calls "Epoch" is one step of your cross-validation. So it does 300 epochs of training 10 times :-) Is that possible? Try with verbose=1.
