
How to make a single prediction at a time from my Keras' Neural Net

I have made a GAN in Tensorflow 2.2.0 and have been making progress on my reduced dataset (48 samples). To overcome some of the problems I'm now seeing in the discriminator, I've decided it's time to start using my full dataset of 1400 samples. Each sample is 4000 time steps of 3 features, i.e. shape (4000, 3).

After a lot of struggle and searching SO, I'm finally starting to grasp the difference between batch_size and input_shape. What really helped was rewriting the code below from batch_size to shape and seeing that it worked the same.

# Assumed imports for the snippets below (tf.keras); Conv1DTranspose is
# available in tf.keras.layers in recent TF releases
import tensorflow as tf
from tensorflow.keras.layers import (Input, Dense, BatchNormalization, LeakyReLU,
                                     Reshape, Conv1DTranspose, Activation)
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import SGD

def build_generator():
    """
    Input is assumed to be uniform random noise in the shape of (training_data.shape[0], 750,)
    """
    generator_input = Input(shape=(750,), name='generator_input')

    x = generator_input

    x = Dense(750, use_bias=True)(x)
    x = BatchNormalization(momentum=0.9)(x)
    x = LeakyReLU()(x)

    x = Reshape( (250,3) )(x)

    x = Conv1DTranspose(128, 3, strides=4, padding="same")(x)
    x = BatchNormalization()(x)
    x = LeakyReLU()(x)

    x = Conv1DTranspose(64, 3, strides=2, padding="same")(x)
    x = BatchNormalization()(x)
    x = LeakyReLU()(x)
  
    x = Conv1DTranspose(32, 3, strides=2, padding="same")(x)
    x = BatchNormalization()(x)
    x = LeakyReLU()(x)

    x = Conv1DTranspose(3, 3, strides=1, padding="same")(x)

    x = Activation('sigmoid')(x)

    generator_output = x

    return Model(generator_input, generator_output)

d = build_discriminator()
g = build_generator()

d.compile(optimizer=SGD(learning_rate=0.0006), loss="binary_crossentropy", metrics=['accuracy'])

model_input = Input(shape=(750,), name='model_input')
model_output = d(g(model_input))
GAN = Model(model_input, model_output)

GAN.compile(optimizer=SGD(learning_rate=0.0005), loss="binary_crossentropy", metrics=['accuracy'])
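
As a minimal sketch of the batch_size / shape distinction I mention above (illustrative only, not part of my actual model code): shape describes a single sample and leaves the batch dimension flexible, while batch_size pins the batch dimension to a fixed value.

# Illustrative only: `shape` describes one sample, so the batch axis stays None
flexible = Input(shape=(750,))               # symbolic shape: (None, 750)

# `batch_size` fixes the batch axis, so the model only accepts that many samples at once
fixed = Input(shape=(750,), batch_size=48)   # symbolic shape: (48, 750)

print(flexible.shape)  # (None, 750)
print(fixed.shape)     # (48, 750)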

However, I must still be missing a piece of how batch_size and input_shape work together in Tensorflow models. At the moment I'm only able to predict synthetic data if I pass a random seed array that is the same size as my reduced training dataset. I was under the impression that once I trained the GAN I would be able to use the generator to make predictions of any batch size. This issue of scale matters because it isn't practical to be limited to making predictions 1400 samples at a time. I've had a good look at the Model page in the Tensorflow docs and nothing stands out to me as to how this is done in a straightforward manner.

# Reduced dataset is 48 samples long
seed = tf.random.uniform(
    (48,750,), minval=-1, maxval=1, dtype=tf.dtypes.float32
)

new_samples = g.predict(seed)
new_samples.shape

# returns estimates of the correct shape
(48, 4000, 3)

Seeding the generator with a single random sample returns all sorts of errors relating to the expected dimensions of the data. The generator was constructed to take inputs of shape (None, 750) but is fed (None, 1), hence the errors.

g.predict(seed[0])

# Returns the stack trace with the following relevant info
WARNING:tensorflow:Model was constructed with shape (None, 750) for input Tensor("generator_input:0", shape=(None, 750), dtype=float32), but it was called on an input with incompatible shape (None, 1).

ValueError: Input 0 of layer dense_1 is incompatible with the layer: expected axis -1 of input shape to have value 750 but received input with shape [None, 1]

I'm guessing there is still a gap in my knowledge of how batch_size and input_shape relate to each other. Any examples or suggestions as to how this is done would be appreciated.

seed[0] is a tensor of shape (n_features,). You need to pass the generator a tensor of shape (batch_dim, n_features), which in the case of a single sample is (1, n_features).

seed = tf.random.uniform(
    (48,750,), minval=-1, maxval=1, dtype=tf.dtypes.float32
)

g.predict(seed[0][None,:]) # seed[0][None,:].shape is (1,750)
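
To see that the batch dimension is flexible, here is a short sketch (assuming g is the generator built above): any number of 750-feature seed vectors works, not just the 48 used during training.

import tensorflow as tf

# Any leading batch dimension is accepted, as long as each seed has 750 features
one_seed   = tf.random.uniform((1, 750),    minval=-1, maxval=1)
ten_seeds  = tf.random.uniform((10, 750),   minval=-1, maxval=1)
many_seeds = tf.random.uniform((5000, 750), minval=-1, maxval=1)

print(g.predict(one_seed).shape)    # (1, 4000, 3)
print(g.predict(ten_seeds).shape)   # (10, 4000, 3)
print(g.predict(many_seeds).shape)  # (5000, 4000, 3)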
