
Keras WGAN Critic and Generator Accuracy Stuck at 0

I am trying to implement a WGAN in Keras, using David Foster's Generative Deep Learning book and this code as references. I wrote down this simple code. However, whenever I start training the model, the accuracy is always 0 and the losses for both the critic and the generator hover around 0.

They stay stuck at these numbers no matter how many epochs the models train for. I have tried various network configurations and different hyperparameters, but the results do not seem to change. Google did not help much either. I cannot pin down the source of this behavior.

This is the code I wrote.


from os.path import expanduser
import os
import struct as st

import numpy as np
import matplotlib.pyplot as plt

from keras.datasets import mnist
from keras.layers import Input, Dense, Reshape, Flatten, Dropout
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import RMSprop
import keras.backend as K

def wasserstein_loss(y_true, y_pred):
    return K.mean(y_true * y_pred)

class WGAN:

    def __init__(self):

        # Data Params
        self.genInput=100
        self.imChannels=1
        self.imShape = (28,28,1)

        # Build Models
        self.onBuildDiscriminator()
        self.onBuildGenerator()
        self.onBuildGAN()

        pass

    def onBuildGAN(self):

        if self.mGenerator is None or self.mDiscriminator is None: raise Exception('Generator Or Discriminator Uninitialized.')

        self.mDiscriminator.trainable=False

        self.mGAN=Sequential()
        self.mGAN.add(self.mGenerator)
        self.mGAN.add(self.mDiscriminator)

        ganOptimizer=RMSprop(lr=0.00005)
        self.mGAN.compile(loss=wasserstein_loss, optimizer=ganOptimizer, metrics=['accuracy'])

        print('GAN Model')
        self.mGAN.summary()
        pass

    def onBuildGenerator(self):

        self.mGenerator=Sequential()

        self.mGenerator.add(Dense(128 * 7 * 7, activation="relu", input_dim=self.genInput))
        self.mGenerator.add(Reshape((7, 7, 128)))
        self.mGenerator.add(UpSampling2D())
        self.mGenerator.add(Conv2D(128, kernel_size=4, padding="same"))
        self.mGenerator.add(BatchNormalization(momentum=0.8))
        self.mGenerator.add(Activation("relu"))
        self.mGenerator.add(UpSampling2D())
        self.mGenerator.add(Conv2D(64, kernel_size=4, padding="same"))
        self.mGenerator.add(BatchNormalization(momentum=0.8))
        self.mGenerator.add(Activation("relu"))
        self.mGenerator.add(Conv2D(self.imChannels, kernel_size=4, padding="same"))
        self.mGenerator.add(Activation("tanh"))

        print('Generator Model')
        self.mGenerator.summary()
        pass

    def onBuildDiscriminator(self):

        self.mDiscriminator = Sequential()

        self.mDiscriminator.add(Conv2D(16, kernel_size=3, strides=2, input_shape=self.imShape, padding="same"))
        self.mDiscriminator.add(LeakyReLU(alpha=0.2))
        self.mDiscriminator.add(Dropout(0.25))
        self.mDiscriminator.add(Conv2D(32, kernel_size=3, strides=2, padding="same"))
        self.mDiscriminator.add(ZeroPadding2D(padding=((0,1),(0,1))))
        self.mDiscriminator.add(BatchNormalization(momentum=0.8))
        self.mDiscriminator.add(LeakyReLU(alpha=0.2))
        self.mDiscriminator.add(Dropout(0.25))
        self.mDiscriminator.add(Conv2D(64, kernel_size=3, strides=2, padding="same"))
        self.mDiscriminator.add(BatchNormalization(momentum=0.8))
        self.mDiscriminator.add(LeakyReLU(alpha=0.2))
        self.mDiscriminator.add(Dropout(0.25))
        self.mDiscriminator.add(Conv2D(128, kernel_size=3, strides=1, padding="same"))
        self.mDiscriminator.add(BatchNormalization(momentum=0.8))
        self.mDiscriminator.add(LeakyReLU(alpha=0.2))
        self.mDiscriminator.add(Dropout(0.25))
        self.mDiscriminator.add(Flatten())
        self.mDiscriminator.add(Dense(1))

        disOptimizer=RMSprop(lr=0.00005)
        self.mDiscriminator.compile(loss=wasserstein_loss, optimizer=disOptimizer, metrics=['accuracy'])

        print('Discriminator Model')
        self.mDiscriminator.summary()

        pass

    def fit(self, trainData, nEpochs=1000, batchSize=64):

        lblForReal = -np.ones((batchSize, 1))
        lblForGene = np.ones((batchSize, 1))

        for ep in range(1, nEpochs+1):

            for __ in range(5):

                # Get Valid Images
                validImages = trainData[ np.random.randint(0, trainData.shape[0], batchSize) ]

                # Get Generated Images
                noiseForGene=np.random.normal(0, 1, size=(batchSize, self.genInput))
                geneImages=self.mGenerator.predict(noiseForGene)

                # Train Critic On Valid And Generated Images With Labels -1 And 1 Respectively
                disValidLoss=self.mDiscriminator.train_on_batch(validImages, lblForReal)
                disGeneLoss=self.mDiscriminator.train_on_batch(geneImages, lblForGene)

                # Perform Critic Weight Clipping
                for l in self.mDiscriminator.layers:
                    weights = l.get_weights()
                    weights = [np.clip(w, -0.01, 0.01) for w in weights]
                    l.set_weights(weights)

            # Train Generator Using Combined Model
            geneLoss=self.mGAN.train_on_batch(noiseForGene, lblForReal)

            print(' Epoch', ep, 'Critic Valid Loss,Acc', disValidLoss, 'Critic Generated Loss,Acc', disGeneLoss, 'Generator Loss,Acc', geneLoss)
        pass

    pass

if __name__ == '__main__':
    (trainData, __), (__, __) = mnist.load_data()
    trainData = (trainData.astype(np.float32)/127.5) - 1
    trainData = np.expand_dims(trainData, axis=3)

    WGan = WGAN()
    WGan.fit(trainData)

I get output very similar to the following for every configuration that I try.


 Epoch 1 Critic Valid Loss,Acc [-0.00016362152, 0.0] Critic Generated Loss,Acc [0.0003417502, 0.0] Generator Loss,Acc [-0.00016735379, 0.0]
 Epoch 2 Critic Valid Loss,Acc [-0.0001719332, 0.0] Critic Generated Loss,Acc [0.0003365979, 0.0] Generator Loss,Acc [-0.00017250411, 0.0]
 Epoch 3 Critic Valid Loss,Acc [-0.00017473527, 0.0] Critic Generated Loss,Acc [0.00032945914, 0.0] Generator Loss,Acc [-0.00017612436, 0.0]
 Epoch 4 Critic Valid Loss,Acc [-0.00017181305, 0.0] Critic Generated Loss,Acc [0.0003266656, 0.0] Generator Loss,Acc [-0.00016987178, 0.0]
 Epoch 5 Critic Valid Loss,Acc [-0.0001683443, 0.0] Critic Generated Loss,Acc [0.00032702673, 0.0] Generator Loss,Acc [-0.00016638976, 0.0]
 Epoch 6 Critic Valid Loss,Acc [-0.00017005506, 0.0] Critic Generated Loss,Acc [0.00032805002, 0.0] Generator Loss,Acc [-0.00017040147, 0.0]
 Epoch 7 Critic Valid Loss,Acc [-0.00017353195, 0.0] Critic Generated Loss,Acc [0.00033711304, 0.0] Generator Loss,Acc [-0.00017537423, 0.0]
 Epoch 8 Critic Valid Loss,Acc [-0.00017059325, 0.0] Critic Generated Loss,Acc [0.0003263024, 0.0] Generator Loss,Acc [-0.00016974319, 0.0]
 Epoch 9 Critic Valid Loss,Acc [-0.00017530039, 0.0] Critic Generated Loss,Acc [0.00032463064, 0.0] Generator Loss,Acc [-0.00017845634, 0.0]
 Epoch 10 Critic Valid Loss,Acc [-0.00017530067, 0.0] Critic Generated Loss,Acc [0.00033131015, 0.0] Generator Loss,Acc [-0.00017526663, 0.0]

I ran into a similar problem. The issue with WGAN is that the weight-clipping method really cripples the model's ability to learn, and learning can saturate very quickly. Weights are updated via backprop after every batch, but then they are clipped. I would suggest experimenting with more extreme clipping values: try [-1, 1] and [-0.0001, 0.0001]. You will surely see a change. An example of saturation: (figure: WGAN critic loss over 100,000 epochs)

As you can see, the loss value went to 0.999975 in the first few hundred iterations and then did not move at all for 100,000 iterations. I tried experimenting with different clipping values; the loss values were different, but the behavior was the same. With [-0.005, 0.005] the loss saturated at around 1; with [-0.02, 0.02], at around 0.8.
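To make those experiments easier, the hard-coded ±0.01 in your clipping loop can be pulled out into a parameter. A minimal sketch of that step, using a stand-in class instead of a real Keras layer so it runs on its own (only `get_weights`/`set_weights` are assumed, which real Keras layers also provide):

```python
import numpy as np

class FakeLayer:
    """Stand-in for a Keras layer, used only to illustrate the clipping step."""
    def __init__(self, weights):
        self._weights = weights
    def get_weights(self):
        return self._weights
    def set_weights(self, weights):
        self._weights = weights

def clip_critic_weights(layers, clip_value):
    # Clip every weight tensor of every layer to [-clip_value, clip_value].
    for layer in layers:
        weights = [np.clip(w, -clip_value, clip_value) for w in layer.get_weights()]
        layer.set_weights(weights)

layer = FakeLayer([np.array([-0.5, 0.003, 0.8])])
clip_critic_weights([layer], clip_value=0.01)
print(layer.get_weights()[0])  # all entries now lie in [-0.01, 0.01]
```

With the same function you can sweep `clip_value` over 1, 0.01, 0.0001, etc., and compare how quickly the critic loss saturates in each run.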

Your implementation looks correct, but sometimes in GANs there is only so much you can do, so I suggest you try WGAN with gradient penalty (WGAN-GP). It has a nicer way of enforcing K-Lipschitz continuity: instead of clipping, it penalizes the critic whenever the L2 norm of its gradient with respect to interpolated images deviates from 1 (check out the paper). For evaluation in WGAN-GP, ideally you should see the critic's loss start at some large negative number and then converge toward 0.
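The penalty term itself is easy to sketch numerically. Below is a toy example with a linear "critic" D(x) = w · x, whose gradient with respect to x is just w, so the penalty value can be checked by hand (in actual Keras code you would compute this gradient symbolically, e.g. with `K.gradients`, on each interpolated batch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear critic D(x) = w . x; its gradient w.r.t. x is the constant w.
w = np.array([3.0, 4.0])           # ||w||_2 = 5

def critic_grad(x):
    return w                       # gradient of D at any input x

# Interpolate between a "real" and a "fake" sample, as WGAN-GP does.
real, fake = np.array([1.0, 0.0]), np.array([0.0, 1.0])
eps = rng.uniform()
x_hat = eps * real + (1 - eps) * fake

# Gradient penalty: lambda * (||grad_x D(x_hat)||_2 - 1)^2
lam = 10.0                         # lambda = 10 is the value used in the paper
grad_norm = np.linalg.norm(critic_grad(x_hat))
penalty = lam * (grad_norm - 1.0) ** 2
print(penalty)                     # 10 * (5 - 1)^2 = 160.0
```

This term is added to the critic's Wasserstein loss, and the weight-clipping loop is removed entirely.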

