
How to replace loss function during training tensorflow.keras

I want to replace the loss function of my neural network during training. This is the network:

import tensorflow

# input_shape and output_classes are defined earlier in my script
model = tensorflow.keras.models.Sequential()
model.add(tensorflow.keras.layers.Conv2D(32, kernel_size=(3, 3), activation="relu", input_shape=input_shape))
model.add(tensorflow.keras.layers.Conv2D(64, (3, 3), activation="relu"))
model.add(tensorflow.keras.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(tensorflow.keras.layers.Dropout(0.25))
model.add(tensorflow.keras.layers.Flatten())
model.add(tensorflow.keras.layers.Dense(128, activation="relu"))
model.add(tensorflow.keras.layers.Dropout(0.5))
model.add(tensorflow.keras.layers.Dense(output_classes, activation="softmax"))
model.compile(loss=tensorflow.keras.losses.categorical_crossentropy, optimizer=tensorflow.keras.optimizers.Adam(0.001), metrics=['accuracy'])
history = model.fit(x_train, y_train, batch_size=128, epochs=5, validation_data=(x_test, y_test))

Now I want to replace tensorflow.keras.losses.categorical_crossentropy with another loss, so I did this:

model.compile(loss=tensorflow.keras.losses.mse, optimizer=tensorflow.keras.optimizers.Adam(0.001), metrics=['accuracy'])
history = model.fit(x_improve, y_improve, epochs=1, validation_data=(x_test, y_test)) #FIXME bug during training

but I get this error:

ValueError: No gradients provided for any variable: ['conv2d/kernel:0', 'conv2d/bias:0', 'conv2d_1/kernel:0', 'conv2d_1/bias:0', 'dense/kernel:0', 'dense/bias:0', 'dense_1/kernel:0', 'dense_1/bias:0'].

Why? How can I fix it? Is there another way to change the loss function?

Thanks

So, a straightforward answer I would give is: switch to PyTorch if you want to play this kind of game. Since in PyTorch you define your own training and evaluation loops, it takes just an if statement to switch from one loss function to another (see the sketch below).
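To make that concrete, here is a minimal PyTorch sketch of the if-statement switch; the toy data, model, and hand-rolled MSLE are placeholders of mine, not from the question:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# toy regression data and model, purely to illustrate the switch
X, y = torch.rand(100, 13), torch.rand(100, 1)
loader = DataLoader(TensorDataset(X, y), batch_size=25)
model = nn.Linear(13, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

mse = nn.MSELoss()

def msle(pred, target):
    # hand-rolled mean squared logarithmic error (torch.nn has no built-in one)
    return torch.mean((torch.log1p(pred.clamp(min=0)) - torch.log1p(target)) ** 2)

for epoch in range(4):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss_fn = mse if epoch < 2 else msle  # the loss switch: one if statement
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()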

Also, I see in your code that you want to switch from cross_entropy to mean_square_error; the former is suitable for classification, the latter for regression, so this is not really something you can do. In the code that follows I switch from mean squared error to mean squared logarithmic error, which are both losses suitable for regression.

Although other answers offer solutions to your question (see change-loss-function-dynamically-during-training), it is not clear whether you can trust the results. Some people found that even with a customised loss function Keras sometimes keeps training with the first loss.
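For reference, the idea behind those answers boils down to one custom loss that blends two losses through a non-trainable tf.Variable, flipped from a callback so that compile() is called only once. This is a hedged sketch of that idea; the variable, callback, and blending scheme are my own illustration, and it assumes a model and the trainX/trainy defined in the code below:

import tensorflow as tf

# 0.0 -> pure MSE, 1.0 -> pure MSLE; the callback flips it at a chosen epoch
loss_switch = tf.Variable(0.0, trainable=False)

def switchable_loss(y_true, y_pred):
    mse = tf.keras.losses.mean_squared_error(y_true, y_pred)
    msle = tf.keras.losses.mean_squared_logarithmic_error(y_true, y_pred)
    return (1.0 - loss_switch) * mse + loss_switch * msle

class SwitchLoss(tf.keras.callbacks.Callback):
    def __init__(self, switch_epoch):
        super().__init__()
        self.switch_epoch = switch_epoch
    def on_epoch_begin(self, epoch, logs=None):
        if epoch >= self.switch_epoch:
            loss_switch.assign(1.0)

# assuming a model and trainX/trainy like the ones below
model.compile(loss=switchable_loss, optimizer='adam')
model.fit(trainX, trainy, epochs=4, callbacks=[SwitchLoss(switch_epoch=2)])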

Solution:

My solution is based on train_on_batch, which allows us to train a model in a for loop and therefore stop training whenever we prefer, in order to recompile the model with a new loss function. Please note that recompiling the model does not reset the weights (see: Does recompiling a model re-initialize the weights?).

The dataset can be found here: Boston housing dataset

# Regression Example With Boston Dataset: Standardized and Larger
from pandas import read_csv
from keras.models import Sequential
from keras.layers import Dense
from sklearn.model_selection import train_test_split
from keras.losses import mean_squared_error, mean_squared_logarithmic_error
from matplotlib import pyplot

# load dataset
dataframe = read_csv("housing.csv", delim_whitespace=True, header=None)
dataset = dataframe.values

# split into input (X) and output (Y) variables
X = dataset[:,0:13]
y = dataset[:,13]

trainX, testX, trainy, testy = train_test_split(X, y, test_size=0.33, random_state=42)

# create model
model = Sequential()
model.add(Dense(13, input_dim=13, kernel_initializer='normal', activation='relu'))
model.add(Dense(6, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))

batch_size = 25

# have to define manually a dict to store all epochs scores 
history = {}
history['history'] = {}
history['history']['loss'] = []
history['history']['mean_squared_error'] = []
history['history']['mean_squared_logarithmic_error'] = []
history['history']['val_loss'] = []
history['history']['val_mean_squared_error'] = []
history['history']['val_mean_squared_logarithmic_error'] = []

# first compiling with mse
model.compile(loss='mean_squared_error', optimizer='adam', metrics=[mean_squared_error, mean_squared_logarithmic_error])

# define number of iterations in training and test
train_iter = round(trainX.shape[0]/batch_size)
test_iter = round(testX.shape[0]/batch_size)

for epoch in range(2):
    
    # train iterations 
    loss, mse, msle = 0, 0, 0
    for i in range(train_iter):
        
        start = i*batch_size
        end = i*batch_size + batch_size
        batchX = trainX[start:end,]
        batchy = trainy[start:end,]
        
        loss_, mse_, msle_ = model.train_on_batch(batchX,batchy)
                
        loss += loss_
        mse += mse_
        msle += msle_
    
    history['history']['loss'].append(loss/train_iter)
    history['history']['mean_squared_error'].append(mse/train_iter)
    history['history']['mean_squared_logarithmic_error'].append(msle/train_iter)
    
    # test iterations 
    val_loss, val_mse, val_msle = 0, 0, 0
    for i in range(test_iter):
        
        start = i*batch_size
        end = i*batch_size + batch_size
        batchX = testX[start:end,]
        batchy = testy[start:end,]
        
        val_loss_, val_mse_, val_msle_ = model.test_on_batch(batchX,batchy)
        
        val_loss += val_loss_
        val_mse += val_mse_
        val_msle += val_msle_
        
    history['history']['val_loss'].append(val_loss/test_iter)
    history['history']['val_mean_squared_error'].append(val_mse/test_iter)
    history['history']['val_mean_squared_logarithmic_error'].append(val_msle/test_iter)
    
# recompiling the model with new loss
model.compile(loss='mean_squared_logarithmic_error', optimizer='adam', metrics=[mean_squared_error, mean_squared_logarithmic_error])

for epoch in range(2):
    
    # train iterations 
    loss, mse, msle = 0, 0, 0
    for i in range(train_iter):
        
        start = i*batch_size
        end = i*batch_size + batch_size
        batchX = trainX[start:end,]
        batchy = trainy[start:end,]
    
        loss_, mse_, msle_ = model.train_on_batch(batchX,batchy)
        
        loss += loss_
        mse += mse_
        msle += msle_
        
    history['history']['loss'].append(loss/train_iter)
    history['history']['mean_squared_error'].append(mse/train_iter)
    history['history']['mean_squared_logarithmic_error'].append(msle/train_iter)
     
    # test iterations 
    val_loss, val_mse, val_msle = 0, 0, 0
    for i in range(test_iter):
        
        start = i*batch_size
        end = i*batch_size + batch_size
        batchX = testX[start:end,]
        batchy = testy[start:end,]
        
        val_loss_, val_mse_, val_msle_ = model.test_on_batch(batchX,batchy)
        
        val_loss += val_loss_
        val_mse += val_mse_
        val_msle += val_msle_
        
    history['history']['val_loss'].append(val_loss/test_iter)
    history['history']['val_mean_squared_error'].append(val_mse/test_iter)
    history['history']['val_mean_squared_logarithmic_error'].append(val_msle/test_iter)
    
# Some plots to check what is going on   
# loss function 
pyplot.subplot(311)
pyplot.title('Loss')
pyplot.plot(history['history']['loss'], label='train')
pyplot.plot(history['history']['val_loss'], label='test')
pyplot.legend()

# Only mean squared error 
pyplot.subplot(312)
pyplot.title('Mean Squared Error')
pyplot.plot(history['history']['mean_squared_error'], label='train')
pyplot.plot(history['history']['val_mean_squared_error'], label='test')
pyplot.legend()

# Only mean squared logarithmic error 
pyplot.subplot(313)
pyplot.title('Mean Squared Logarithmic Error')
pyplot.plot(history['history']['mean_squared_logarithmic_error'], label='train')
pyplot.plot(history['history']['val_mean_squared_logarithmic_error'], label='test')
pyplot.legend()
pyplot.tight_layout()
pyplot.show()

The resulting plot confirms that the loss function changes after the second epoch:

[Plot: train and test curves for Loss, Mean Squared Error, and Mean Squared Logarithmic Error over the four epochs]

The drop in the loss function is due to the model switching from plain mean squared error to the logarithmic one, which has much lower values. Printing the scores also proves that the loss in use truly changed:

print(history['history']['loss'])
[599.5209197998047, 570.4041115897043, 3.8622902120862688, 2.1578191178185597]
print(history['history']['mean_squared_error'])
[599.5209197998047, 570.4041115897043, 510.29034205845426, 425.32058388846264]
print(history['history']['mean_squared_logarithmic_error'])
[8.624503476279122, 6.346359729766846, 3.8622902120862688, 2.1578191178185597]

In the first two epochs the values of loss are equal to those of mean_squared_error, and during the third and fourth epochs the values become equal to those of mean_squared_logarithmic_error, which is the new loss that was set. So it seems that using train_on_batch allows changing the loss function; nevertheless, I want to stress again that this is basically what one would do in PyTorch to achieve the same result, with the difference that the behaviour of PyTorch (in this scenario, and in my opinion) is more reliable.

I'm currently working on Google Colab with Tensorflow and Keras, and I was not able to recompile a model while maintaining the weights; every time I recompile a model like this:

with strategy.scope():
  model = hd_unet_model(INPUT_SIZE)
  model.compile(optimizer=Adam(lr=0.01), 
                loss=tf.keras.losses.MeanSquaredError() ,
                metrics=[tf.keras.metrics.MeanSquaredError()]) 

the weights get reset. So I found another solution; all you need to do is:

  1. Get the model with the weights you want (load it, or anything else).
  2. Get the weights of the model like this:
weights = model.get_weights()
  3. Recompile the model (to change the loss function).
  4. Set the weights of the recompiled model again like this:
model.set_weights(weights)
  5. Launch the training.

I tested this method and it seems to work.

So, to change the loss mid-training, you can (a condensed sketch follows the list):

  1. Compile with the first loss.
  2. Train on the first loss.
  3. Save the weights.
  4. Recompile with the second loss.
  5. Load the weights.
  6. Train on the second loss.
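Here is a condensed sketch of those six steps; the model, data, and losses are placeholders:

import tensorflow as tf

# placeholder model and data, just to show the recipe end to end
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(13,)),
    tf.keras.layers.Dense(1),
])
x, y = tf.random.uniform((100, 13)), tf.random.uniform((100, 1))

model.compile(optimizer='adam', loss='mean_squared_error')  # 1. compile with the first loss
model.fit(x, y, epochs=2, verbose=0)                        # 2. train on the first loss

weights = model.get_weights()                               # 3. save the weights

model.compile(optimizer='adam', loss='mean_squared_logarithmic_error')  # 4. recompile with the second loss
model.set_weights(weights)                                  # 5. load the weights back

model.fit(x, y, epochs=2, verbose=0)                        # 6. train on the second loss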
