My loss with fit_generator is 0.0000e+00 (using Keras)

I am trying to use Keras on a "large" dataset, utilizing my GPU. To do so, I make use of fit_generator; the problem is that my loss is 0.0000e+00 every time.

My print callback class and generator function:

from keras import callbacks

class printbatch(callbacks.Callback):
    # Print a marker every 10 batches and the logs dict at epoch boundaries.
    def on_batch_end(self, batch, logs={}):
        if batch % 10 == 0:
            print "Batch " + str(batch) + " ends"
    def on_epoch_begin(self, epoch, logs={}):
        print(logs)
    def on_epoch_end(self, epoch, logs={}):
        print(logs)

def simpleGenerator():
    # Reads batches straight from the open h5py file 'f' (opened below).
    X_train = f.get('X_train')
    y_train = f.get('y_train')
    total_examples = len(X_train)
    examples_at_a_time = 6
    range_examples = int(total_examples / examples_at_a_time)

    while 1:  # loop forever, as fit_generator expects
        for i in range(range_examples):  # yield one slice of 6 samples at a time
            yield (X_train[i*examples_at_a_time:(i+1)*examples_at_a_time],
                   y_train[i*examples_at_a_time:(i+1)*examples_at_a_time])

This is how I use them:

import h5py

f = h5py.File(cache_file, 'r')

pb = printbatch()
sg = simpleGenerator()

class_weighting = [0.2595, 0.1826, 4.5640, 0.1417, 0.5051, 0.3826,
                   9.6446, 1.8418, 6.6823, 6.2478, 3.0, 7.3614]

history = autoencoder.fit_generator(sg, samples_per_epoch=366, nb_epoch=10,
                                    verbose=2, show_accuracy=True,
                                    callbacks=[pb], validation_data=None,
                                    class_weight=class_weighting)
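
As a side note, one quick check worth doing here is pulling a single batch from the generator and confirming the labels are not all zero, since all-zero one-hot targets would also make the categorical cross-entropy exactly zero. A minimal sketch, reusing the f and simpleGenerator defined above:

import numpy as np

# Pull one batch and inspect shapes, dtypes and label mass before training.
Xb, yb = next(simpleGenerator())
print(Xb.shape, Xb.dtype)  # expected: (6, 3, 360, 480)
print(yb.shape, yb.dtype)  # expected: (6, 172800, 12)
print(np.abs(yb).sum())    # should be > 0 if the labels were written correctly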

This is (a part of) my output:

{}
Epoch 1/10
Batch 0 ends
Batch 10 ends
Batch 20 ends
Batch 30 ends
Batch 40 ends
Batch 50 ends
Batch 60 ends
{'loss': 0.0}
120s - loss: 0.0000e+00
[…]
{}
Epoch 9/10
Batch 0 ends
Batch 10 ends
Batch 20 ends
Batch 30 ends
Batch 40 ends
Batch 50 ends
Batch 60 ends
{'loss': 0.0}
124s - loss: 0.0000e+00
{}
Epoch 10/10
Batch 0 ends
Batch 10 ends
Batch 20 ends
Batch 30 ends
Batch 40 ends
Batch 50 ends
Batch 60 ends
{'loss': 0.0}
127s - loss: 0.0000e+00
Training completed in 1263.76883411 seconds

X_train and y_train shapes are:

X_train.shape
Out[5]: (366, 3, 360, 480)
y_train.shape
Out[6]: (366, 172800, 12)
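
The label tensor lines up with the image size: 360 × 480 = 172800 pixels per image, one-hot encoded over 12 classes. A quick consistency check (a sketch, reading only a small slice so the whole "large" dataset is not pulled into memory):

import numpy as np

x_sample = f['X_train'][:4]
y_sample = f['y_train'][:4]
assert x_sample.shape[2] * x_sample.shape[3] == y_sample.shape[1]  # 360 * 480 == 172800
assert np.all(y_sample.sum(axis=-1) == 1)  # every pixel carries exactly one class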

So my question is: how can I solve the 'loss: 0.0000e+00' issue?

Thank you for your time.

Edit: here is the model. The original comes from pradyu1993.github.io/2016/03/08/segnet-post.html by Pradyumna.

class UnPooling2D(Layer):
    """A 2D Repeat layer"""
    def __init__(self, poolsize=(2, 2)):
        super(UnPooling2D, self).__init__()
        self.poolsize = poolsize

    @property
    def output_shape(self):
        input_shape = self.input_shape
        return (input_shape[0], input_shape[1],
                self.poolsize[0] * input_shape[2],
                self.poolsize[1] * input_shape[3])

    def get_output(self, train):
        X = self.get_input(train)
        s1 = self.poolsize[0]
        s2 = self.poolsize[1]
        output = X.repeat(s1, axis=2).repeat(s2, axis=3)
        return output

    def get_config(self):
        return {"name": self.__class__.__name__,
                "poolsize": self.poolsize}

def create_encoding_layers():
    kernel = 3
    filter_size = 64
    pad = 1
    pool_size = 2
    return [
    ZeroPadding2D(padding=(pad,pad)),
    Convolution2D(filter_size, kernel, kernel, border_mode='valid'),
    BatchNormalization(),
    Activation('relu'),
    MaxPooling2D(pool_size=(pool_size, pool_size)),

    ZeroPadding2D(padding=(pad,pad)),
    Convolution2D(128, kernel, kernel, border_mode='valid'),
    BatchNormalization(),
    Activation('relu'),
    MaxPooling2D(pool_size=(pool_size, pool_size)),

    ZeroPadding2D(padding=(pad,pad)),
    Convolution2D(256, kernel, kernel, border_mode='valid'),
    BatchNormalization(),
    Activation('relu'),
    MaxPooling2D(pool_size=(pool_size, pool_size)),

    ZeroPadding2D(padding=(pad,pad)),
    Convolution2D(512, kernel, kernel, border_mode='valid'),
    BatchNormalization(),
    Activation('relu'),
]


def create_decoding_layers():
    kernel = 3
    filter_size = 64
    pad = 1
    pool_size = 2
    return [
    ZeroPadding2D(padding=(pad,pad)),
    Convolution2D(512, kernel, kernel, border_mode='valid'),
    BatchNormalization(),

    UpSampling2D(size=(pool_size,pool_size)),
    ZeroPadding2D(padding=(pad,pad)),
    Convolution2D(256, kernel, kernel, border_mode='valid'),
    BatchNormalization(),

    UpSampling2D(size=(pool_size,pool_size)),
    ZeroPadding2D(padding=(pad,pad)),
    Convolution2D(128, kernel, kernel, border_mode='valid'),
    BatchNormalization(),

    UpSampling2D(size=(pool_size,pool_size)),
    ZeroPadding2D(padding=(pad,pad)),
    Convolution2D(filter_size, kernel, kernel, border_mode='valid'),
    BatchNormalization(),
]

And:

autoencoder = models.Sequential()
autoencoder.add(Layer(input_shape=(3, img_rows, img_cols)))
autoencoder.encoding_layers = create_encoding_layers()
autoencoder.decoding_layers = create_decoding_layers()
for l in autoencoder.encoding_layers:
    autoencoder.add(l)
for l in autoencoder.decoding_layers:
    autoencoder.add(l)

autoencoder.add(Convolution2D(12, 1, 1, border_mode='valid'))
autoencoder.add(Reshape((12,img_rows*img_cols), input_shape=(12,img_rows,img_cols)))
autoencoder.add(Permute((2, 1)))
autoencoder.add(Activation('softmax'))
autoencoder.compile(loss="categorical_crossentropy", optimizer='adadelta')

I solved this issue. The problem was that my '.theanorc' had floatX = float16: that is not enough precision, so I changed it to float64 and now it works.
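
The underflow is easy to reproduce: float16 has so few mantissa bits that a predicted probability close to 1 rounds to exactly 1.0, and -log(1.0) is exactly zero. A minimal NumPy sketch of the rounding involved (not the actual Theano graph):

import numpy as np

p = 0.9999                      # a very confident (correct) prediction
print(np.float16(p))            # 1.0  -> the nearest float16 to 0.9999
print(-np.log(np.float16(p)))   # -0.0 -> the per-pixel loss term vanishes
print(-np.log(np.float64(p)))   # ~1.00005e-04 -> survives in float64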

This is my '.theanorc' at the moment:

[global]
device = gpu
floatX = float64
optimizer_including=cudnn

[lib]
cnmem=0.90

[blas]
ldflags = -L/usr/local/lib -lopenblas

[nvcc]
fastmath = True

[cuda]
root = /usr/local/cuda/
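
After editing '.theanorc', the active settings can be verified from Python before training (assuming Theano imports cleanly):

import theano

print(theano.config.floatX)  # should now print 'float64'
print(theano.config.device)  # e.g. 'gpu'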
