
WARNING:tensorflow:Model was constructed with shape (20, 37, 42) for input Tensor("input_5:0", shape=(20, 37, 42), dtype=float32), but

WARNING:tensorflow:Model was constructed with shape (20, 37, 42) for input Tensor("input_5:0", shape=(20, 37, 42), dtype=float32), but it was called on an input with incompatible shape (None, 37).

Hi! Deep learning newbie here... I'm having trouble with an LSTM layer. The input is a float array of length 37, made up of 2 floats plus a one-hot array of length 35 cast to floats. The output is an array of length 19 containing 0s and 1s. As the title suggests, I'm having trouble reshaping my input data to fit the model, and I'm not even sure which input dimensions would be considered "compatible".

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers


import random
inputs, outputs = [], []
for x in range(10000):
    tempi, tempo = [], []
    tempi.append(random.random() - 0.5)
    tempi.append(random.random() - 0.5)
    for x2 in range(35):
        if random.random() > 0.5:
            tempi.append(1.)
        else:
            tempi.append(0.)
    for x2 in range(19):
        if random.random() > 0.5:
            tempo.append(1.)
        else:
            tempo.append(0.)
    inputs.append(tempi)
    outputs.append(tempo)

batch = 20
timesteps = 42
training_units = 0.85

cutting_point_i = int(len(inputs)*training_units)
cutting_point_o = int(len(outputs)*training_units)
x_train, x_test = np.asarray(inputs[:cutting_point_i]), np.asarray(inputs[cutting_point_i:])
y_train, y_test = np.asarray(outputs[:cutting_point_o]), np.asarray(outputs[cutting_point_o:])

input_layer = keras.Input(shape=(37,timesteps),batch_size=batch)
dense = layers.LSTM(150, activation="sigmoid", return_sequences=True)
x = dense(input_layer)
hidden_layer_2 = layers.LSTM(150, activation="sigmoid", return_sequences=True)(x)
output_layer = layers.Dense(10, activation="softmax")(hidden_layer_2)
model = keras.Model(inputs=input_layer, outputs=output_layer, name="my_model")

There are a few problems here.

  • Your input has no timesteps; an LSTM needs input of shape (n, timesteps, features) (one way to add that axis is sketched after the lists below)
  • In input_shape the timesteps dimension comes first, not last
  • Your last LSTM layer returns sequences, so its output can't be compared against plain 0s and 1s

What I did:

  • I added timesteps (7) to your data
  • I permuted the dimensions of input_shape
  • I set the final return_sequences=False
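As an aside, here is a minimal sketch (separate from the fixed example below) of how a flat (samples, 37) array like yours could be given a timesteps axis, assuming you simply treat all 37 values as the features of a single timestep:

import numpy as np

# Sketch only: x_flat stands in for the question's `inputs` list -- 10000
# samples of 37 floats with no timesteps axis.
x_flat = np.random.rand(10000, 37).astype("float32")           # (samples, features)

# Treat the 37 values as the features of one timestep so the array matches
# the (samples, timesteps, features) layout an LSTM expects.
x_seq = x_flat.reshape((x_flat.shape[0], 1, x_flat.shape[1]))  # (10000, 1, 37)
print(x_seq.shape)                                             # (10000, 1, 37)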

Fully working example with generated data:

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

batch = 20
n_samples = 1000
timesteps = 7
features = 10

x_train = np.random.rand(n_samples, timesteps, features)
y_train = keras.utils.to_categorical(np.random.randint(0, 10, n_samples))

input_layer = keras.Input(shape=(timesteps, features),batch_size=batch)
dense = layers.LSTM(16, activation="sigmoid", return_sequences=True)(input_layer)
hidden_layer_2 = layers.LSTM(16, activation="sigmoid", return_sequences=False)(dense)
output_layer = layers.Dense(10, activation="softmax")(hidden_layer_2)
model = keras.Model(inputs=input_layer, outputs=output_layer, name="my_model")

model.compile(loss='categorical_crossentropy', optimizer='adam')

history = model.fit(x_train, y_train)
Train on 1000 samples
  20/1000 [..............................] - ETA: 2:50 - loss: 2.5145
 200/1000 [=====>........................] - ETA: 14s - loss: 2.3934 
 380/1000 [==========>...................] - ETA: 5s - loss: 2.3647 
 560/1000 [===============>..............] - ETA: 2s - loss: 2.3549
 740/1000 [=====================>........] - ETA: 1s - loss: 2.3395
 900/1000 [==========================>...] - ETA: 0s - loss: 2.3363
1000/1000 [==============================] - 4s 4ms/sample - loss: 2.3353
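As a quick sanity check (a sketch assuming the model and x_train generated above), you can confirm that with return_sequences=False the model now emits one 10-way softmax vector per sample rather than a sequence:

# Sketch: predict on one batch; the fixed batch size (20) from keras.Input
# means inference must also be fed batches of exactly that size.
preds = model.predict(x_train[:batch], batch_size=batch)
print(preds.shape)      # (20, 10) -> one class distribution per sample
print(preds[0].sum())   # ~1.0, because the last layer is a softmax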

The correct input shape for this model is (20, 37, 42). Note: the 20 here is the batch_size you explicitly specified.

Code:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

batch = 20
timesteps = 42
training_units = 0.85

x1 = tf.constant(np.random.randint(50, size=(1000, 37, 42)), dtype=tf.float32)
y1 = tf.constant(np.random.randint(10, size=(1000,)), dtype=tf.int32)

input_layer = keras.Input(shape=(37,timesteps),batch_size=batch)
dense = layers.LSTM(150, activation="sigmoid", return_sequences=True)
x = dense(input_layer)
hidden_layer_2 = layers.LSTM(150, activation="sigmoid", return_sequences=True)(x)
hidden_layer_3 = layers.Flatten()(hidden_layer_2)
output_layer = layers.Dense(10, activation="softmax")(hidden_layer_3)
model = keras.Model(inputs=input_layer, outputs=output_layer, name="my_model")

model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
tf.keras.utils.plot_model(model, 'my_first_model.png', show_shapes=True)

Model architecture:

(model architecture diagram from plot_model: my_first_model.png)

You can clearly see the input shape there.

Code to run:

model.fit(x = x1, y = y1, batch_size = batch, epochs = 10)

Note: whatever batch_size you specify in the Input layer, you must pass the same batch_size to model.fit(). (The same holds at inference time; see the check after the training output below.)

Output:

Epoch 1/10
50/50 [==============================] - 4s 89ms/step - loss: 2.3288 - accuracy: 0.0920
Epoch 2/10
50/50 [==============================] - 5s 91ms/step - loss: 2.3154 - accuracy: 0.1050
Epoch 3/10
50/50 [==============================] - 5s 101ms/step - loss: 2.3114 - accuracy: 0.0900
Epoch 4/10
50/50 [==============================] - 5s 101ms/step - loss: 2.3036 - accuracy: 0.1060
Epoch 5/10
50/50 [==============================] - 5s 99ms/step - loss: 2.2998 - accuracy: 0.1000
Epoch 6/10
50/50 [==============================] - 4s 89ms/step - loss: 2.2986 - accuracy: 0.1170
Epoch 7/10
50/50 [==============================] - 4s 84ms/step - loss: 2.2981 - accuracy: 0.1300
Epoch 8/10
50/50 [==============================] - 5s 103ms/step - loss: 2.2950 - accuracy: 0.1290
Epoch 9/10
50/50 [==============================] - 5s 106ms/step - loss: 2.2960 - accuracy: 0.1210
Epoch 10/10
50/50 [==============================] - 5s 97ms/step - loss: 2.2874 - accuracy: 0.1210
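And the same batch-size constraint applies at inference time; a quick check (a sketch assuming the model and x1 defined above):

# Sketch: because batch_size=20 is baked into keras.Input, predict() must
# also be fed batches of exactly 20 samples.
probs = model.predict(x1[:batch], batch_size=batch)
print(probs.shape)   # (20, 10) -> one softmax distribution per sample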

