
Output the loss/cost function in Keras

I am trying to find the cost function in Keras. I am running an LSTM with the loss function categorical_crossentropy, and I added a regularizer. How do I output what the cost function looks like after adding my regularizer, for my own analysis?

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense, Activation
from keras import regularizers
from keras.optimizers import RMSprop

model = Sequential()
model.add(LSTM(
    NUM_HIDDEN_UNITS,
    return_sequences=True,
    input_shape=(PHRASE_LEN, SYMBOL_DIM),
    kernel_regularizer=regularizers.l2(0.01)
    ))
model.add(Dropout(0.3))
model.add(LSTM(NUM_HIDDEN_UNITS, return_sequences=False))
model.add(Dropout(0.3))
model.add(Dense(SYMBOL_DIM))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
    optimizer=RMSprop(lr=1e-03, rho=0.9, epsilon=1e-08))


Surely you can achieve this by obtaining the output of the layer you want to inspect ( yourlayer.output ) and printing it (see here). However, there are better ways to visualize these things.
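For instance, one way to look at a layer's output is to wrap the model's input and the target layer's output in a second Model and call predict on it. A minimal sketch (the toy Dense model and its sizes here are made up, standing in for the LSTM above):

```python
import numpy as np
from keras.models import Sequential, Model
from keras.layers import Dense

# Toy stand-in for the question's network (hypothetical sizes)
model = Sequential()
model.add(Dense(4, input_shape=(3,), activation='relu'))
model.add(Dense(2, activation='softmax'))

# A second model that maps the same input to the first layer's output
hidden_model = Model(inputs=model.input, outputs=model.layers[0].output)

x = np.random.rand(5, 3).astype('float32')
hidden = hidden_model.predict(x, verbose=0)
print(hidden.shape)  # one activation vector of size 4 per input row
```

This prints raw numbers, though, which quickly becomes unreadable for large layers — hence the suggestion below.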

Meet TensorBoard.

This is a powerful visualization tool that lets you track and visualize your metrics, outputs, architecture, kernel initializations, and more. The good news is that Keras already ships a TensorBoard callback you can use for this purpose; you just have to import it. To use it, pass an instance of the callback to your fit method, something like this:

from keras.callbacks import TensorBoard
#indicate folder to save, plus other options
tensorboard = TensorBoard(log_dir='./logs/run1', histogram_freq=1,
    write_graph=True, write_images=False)  

#save it in your callback list
callbacks_list = [tensorboard]
#then pass to fit as callback, remember to use validation_data also
model.fit(X, Y, callbacks=callbacks_list, epochs=64, 
    validation_data=(X_test, Y_test), shuffle=True)

After that, start your TensorBoard server (it runs locally on your PC) by executing:

tensorboard --logdir=logs/run1

For example, this is what my kernels look like on two different models I tested (to compare them you have to save separate runs and then start TensorBoard on the parent directory instead). This is on the Histograms tab, for my second layer:

(image: TensorBoard kernel histograms for the two models)

The model on the left I initialized with kernel_initializer='random_uniform', so its shape is that of a uniform distribution. The model on the right I initialized with kernel_initializer='normal', which is why it appears as a Gaussian distribution throughout my epochs (about 30).
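You can see the same difference without TensorBoard by sampling the initial kernels directly; a quick sketch (layer sizes invented for illustration):

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Two identical layers, differing only in their kernel initializer
uni = Sequential([Dense(64, input_shape=(32,), kernel_initializer='random_uniform')])
norm = Sequential([Dense(64, input_shape=(32,), kernel_initializer='normal')])

w_uni = uni.layers[0].get_weights()[0].ravel()
w_norm = norm.layers[0].get_weights()[0].ravel()

# 'random_uniform' draws from a bounded interval (by default [-0.05, 0.05]),
# while 'normal' draws from a Gaussian concentrated around 0
print(w_uni.min(), w_uni.max())
print(w_norm.mean(), w_norm.std())
```

TensorBoard's advantage is that it shows how these distributions evolve across epochs, not just at initialization.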

This way you can visualize what your kernels and layers look like, in a more interactive and understandable way than printing outputs. This is just one of TensorBoard's great features, and it can help you develop your deep learning models faster and better.

Of course there are more options for the TensorBoard callback, and for TensorBoard in general, so I suggest you thoroughly read the links provided if you decide to attempt this. For more information you can check this and also this question.

Edit: So, you comment that you want to know what your regularized loss looks like analytically. Let's remember that by adding a regularizer to a loss function we are basically extending the loss function to include some penalty or preference in it. So, if you are using categorical cross-entropy as your loss function and adding an l2 regularizer (the squared Euclidean norm) with a weight of 0.01, your whole loss function would look something like:

loss = -Σᵢ yᵢ · log(ŷᵢ) + 0.01 · Σⱼ wⱼ²

where the first term is the categorical cross-entropy between the true labels y and the predictions ŷ, and the second term is the l2 penalty summed over the regularized kernel weights w.
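To check that formula numerically, you can recompute both terms yourself and compare against what model.evaluate reports (which includes the regularization penalty). A sketch on a tiny made-up Dense model, assuming standard Keras behavior:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras import regularizers

model = Sequential()
model.add(Dense(4, input_shape=(3,), activation='softmax',
                kernel_regularizer=regularizers.l2(0.01)))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

x = np.random.rand(8, 3).astype('float32')
y = np.eye(4, dtype='float32')[np.random.randint(0, 4, size=8)]

# First term: mean categorical cross-entropy over the batch
y_pred = model.predict(x, verbose=0)
cross_entropy = float(-np.sum(y * np.log(y_pred + 1e-7), axis=1).mean())

# Second term: 0.01 * sum of squared kernel weights (the l2 penalty)
kernel = model.layers[0].get_weights()[0]
penalty = 0.01 * float(np.sum(kernel ** 2))

total = cross_entropy + penalty
reported = model.evaluate(x, y, verbose=0)
print(total, reported)  # should agree up to numerical noise
```

The same decomposition works for the LSTM model in the question; model.losses also exposes the penalty terms directly if you prefer not to recompute them by hand.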
