The error reads:

Input 0 of layer lstm_28 is incompatible with the layer: expected ndim=3, found ndim=4. Full shape received: [None, None, 15, 12]

In the LSTM layer, the input `tf.nn.embedding_lookup(embedding, neighbor)` should have shape (15, 12), with one `None` for the batch size, so how does the shape end up as [None, None, 15, 12]? How do I deal with this error? Below is the dummy model I created.
```python
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, LSTMCell, Dense

def create_model(embedding, embedding_dim, samp_size):
    node = Input(shape=(None,), dtype=tf.int64)
    neighbor = Input(shape=(None, samp_size), dtype=tf.int64)
    label = Input(shape=(None,), dtype=tf.int64)
    cell = LSTMCell(embedding_dim)
    _, h, c = LSTM(embedding_dim, return_sequences=True, return_state=True)(
        tf.nn.embedding_lookup(embedding, neighbor))
    predict_info = tf.squeeze(Dense(1, activation='relu')(h))
    return h

node_size = 1000
embedding_dim = 12
sampling_size = 15
embedding = tf.random.uniform([node_size, embedding_dim])
model = create_model(embedding, embedding_dim, sampling_size)
```
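For reference, a minimal sketch (assuming TensorFlow 2.x, with concrete tensors in place of the symbolic inputs) reproducing how the extra axis appears:

```python
import tensorflow as tf

# An index tensor shaped (batch, extra axis, samp_size) -- here 2 x 3 x 15 --
# gains the embedding dimension after the lookup, giving four axes.
embedding = tf.random.uniform([1000, 12])           # (node_size, embedding_dim)
ids = tf.zeros([2, 3, 15], dtype=tf.int64)          # (batch, extra axis, samp_size)
out = tf.nn.embedding_lookup(embedding, ids)
print(out.shape)  # (2, 3, 15, 12) -> ndim=4, but LSTM expects ndim=3
```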
When using the Keras functional API, do not include `None` for the batch dimension; `Input(shape=...)` adds the batch axis automatically. In your model, `Input(shape=(None, samp_size))` declares a per-sample shape of `(None, 15)`, so with the implicit batch axis the tensor is `(None, None, 15)`, and `tf.nn.embedding_lookup` then appends the embedding dimension, giving `(None, None, 15, 12)`: that is the ndim=4 the LSTM rejects. For example, if your input has dimensions (batch_size, image_w, image_h, image_channels), declare it like this:

```python
inp = tf.keras.Input(shape=(IMG_W, IMG_H, IMG_CH))
```
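Applied to the model in the question, dropping the extra `None` from the `neighbor` input keeps the sequence tensor 3-D, which the LSTM accepts. A runnable sketch (assuming TensorFlow 2.x; an `Embedding` layer stands in for the external `tf.nn.embedding_lookup` table, which is an assumption, not your exact setup):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Embedding, LSTM, Dense

def create_model(node_size, embedding_dim, samp_size):
    # shape=(samp_size,) gives a per-sample shape of (15,); with the
    # implicit batch axis the embedded tensor is (None, 15, 12), ndim=3.
    neighbor = Input(shape=(samp_size,), dtype="int64")
    # Embedding layer replaces the external lookup table (an assumption,
    # not the asker's exact embedding source).
    embedded = Embedding(input_dim=node_size, output_dim=embedding_dim)(neighbor)
    _, h, _ = LSTM(embedding_dim, return_sequences=True, return_state=True)(embedded)
    predict_info = Dense(1, activation="relu")(h)
    return Model(inputs=neighbor, outputs=predict_info)

model = create_model(node_size=1000, embedding_dim=12, samp_size=15)
out = model(np.zeros((2, 15), dtype="int64"))
print(out.shape)  # (2, 1)
```

If you really need a leading time axis of variable length in addition to `samp_size`, the LSTM input would have to be reshaped down to three axes first, since `LSTM` only accepts `(batch, timesteps, features)`.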