
WARNING:tensorflow:Model was constructed with shape for input Tensor(), but it was called on an input with incompatible shape

I'm training a model with a generator and I'm getting this warning from TensorFlow. Although I can train the model without errors, I want to fix it, or at least understand why it happens.

The data from my generator have these shapes:

for x, y in model_generator(): # x[0] and x[1] are the inputs, y is the output
    print(x[0].shape, x[1].shape, y.shape)

# (20,) (20,) (20, 17772)
# 17772 --> Number of unique words in my dataset
# 20 --> Number of words per example (per sentence)

This is my model:

Model: "functional_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 20)]         0                                            
__________________________________________________________________________________________________
input_2 (InputLayer)            [(None, 20)]         0                                            
__________________________________________________________________________________________________
embedding (Embedding)           (None, 20, 50)       890850      input_1[0][0]                    
__________________________________________________________________________________________________
embedding_1 (Embedding)         (None, 20, 50)       890850      input_2[0][0]                    
__________________________________________________________________________________________________
lstm (LSTM)                     [(None, 64), (None,  29440       embedding[0][0]                  
__________________________________________________________________________________________________
lstm_1 (LSTM)                   (None, 20, 64)       29440       embedding_1[0][0]                
                                                                 lstm[0][1]                       
                                                                 lstm[0][2]                       
__________________________________________________________________________________________________
time_distributed (TimeDistribut (None, 20, 17772)    1155180     lstm_1[0][0]                     
==================================================================================================
Total params: 2,995,760
Trainable params: 1,214,060
Non-trainable params: 1,781,700
__________________________________________________________________________________________________
None
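
For reference, the summary corresponds to a standard encoder-decoder: two length-20 word-index inputs, two (frozen) embeddings, an encoder LSTM whose final states initialise a decoder LSTM, and a TimeDistributed softmax over the vocabulary. A minimal sketch that reproduces the same layer graph (the exact embedding vocabulary size and the frozen embeddings are assumptions read off the parameter counts):

from tensorflow.keras import layers, Model

vocab_size = 17772      # output vocabulary size (from the TimeDistributed layer);
                        # the embedding input_dim is an assumption here
seq_len, embed_dim, units = 20, 50, 64

encoder_in = layers.Input(shape=(seq_len,))                                    # input_1
decoder_in = layers.Input(shape=(seq_len,))                                    # input_2
enc_emb = layers.Embedding(vocab_size, embed_dim, trainable=False)(encoder_in)
dec_emb = layers.Embedding(vocab_size, embed_dim, trainable=False)(decoder_in)
_, state_h, state_c = layers.LSTM(units, return_state=True)(enc_emb)           # lstm
dec_seq = layers.LSTM(units, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])                                 # lstm_1
out = layers.TimeDistributed(layers.Dense(vocab_size, activation="softmax"))(dec_seq)

model = Model([encoder_in, decoder_in], out)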

And these are the warnings I get when running the model:

WARNING:tensorflow:Model was constructed with shape (None, 20) for input Tensor("input_1:0", shape=(None, 20), dtype=float32), but it was called on an input with incompatible shape (None, 1).
WARNING:tensorflow:Model was constructed with shape (None, 20) for input Tensor("input_2:0", shape=(None, 20), dtype=float32), but it was called on an input with incompatible shape (None, 1).
WARNING:tensorflow:Model was constructed with shape (None, 20) for input Tensor("input_1:0", shape=(None, 20), dtype=float32), but it was called on an input with incompatible shape (None, 1).
WARNING:tensorflow:Model was constructed with shape (None, 20) for input Tensor("input_2:0", shape=(None, 20), dtype=float32), but it was called on an input with incompatible shape (None, 1).

I don't understand why I get this: the shape of the input is (20,), so it should be correct. Any suggestions?

EDIT

Generator:

def model_generator():
    for index, output in enumerate(training_decoder_output):
        for i in range(size):
            yield ([training_encoder_input[size*index+i], training_decoder_input[size*index+i]], output[i])

# Generator: returns the inputs and the output one example at a time when called
# (I saved the outputs in chunks on disk, so that's why I iterate over them this way)

Call to model.fit():

model.fit(model_generator(), epochs=5)
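
For reference, when a plain Python generator is passed to fit(), Keras only discovers the array shapes at run time. A sketch of one common way to make the expected per-example shapes (and a real batch dimension) explicit is to wrap the generator in tf.data; the dtypes, the small tuple-yielding wrapper and the batch size of 32 here are assumptions for illustration:

import tensorflow as tf

def tuple_generator():
    # re-yield the same examples as nested tuples so they match the signature below
    for (enc, dec), out in model_generator():
        yield (enc, dec), out

dataset = tf.data.Dataset.from_generator(
    tuple_generator,
    output_signature=(
        (tf.TensorSpec(shape=(20,), dtype=tf.float32),
         tf.TensorSpec(shape=(20,), dtype=tf.float32)),
        tf.TensorSpec(shape=(20, 17772), dtype=tf.float32),
    ),
).batch(32)

model.fit(dataset, epochs=5)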

Sample of training_encoder_input :

print(training_encoder_input[:5])

[[   3 1516   10 3355 2798    1 9105    1 9106    4  162    1  411    1
  9107 3356  612    1 9108    1]
 [   0    0    0    0    0    0    0    0    0    0    0    2 9109 2799
  5632   29 1187    2  157  275]
 [   0   54 5633 5634    1  412 4199   12 9110 5633 5634   27  443  134
  1516    7    6 4200 1280    1]
 [  23 9112  816   11 9113   33  184 9114  816    1 9115   42    3    2
    57    5 2120    3  185    1]
 [   0    0    0    0    0    0   15  301 9116    3 3357    1 9117    1
    67 5635    4  110 5635    1]]

The shape of your input should be:

x[0].shape => (1, 20)  # where 1 is the batch size

In the model summary, None is the batch size, so this dimension must also appear in your x data. You need to update your generator as:

import numpy as np

def model_generator():
    for index, output in enumerate(training_decoder_output):
        for i in range(size):
            yield ([np.expand_dims(training_encoder_input[size*index+i], axis=0),
                    np.expand_dims(training_decoder_input[size*index+i], axis=0)],
                   np.expand_dims(output[i], axis=0))

If you want a batch size larger than one, build arrays of shape (bs, 20) for the inputs and (bs, 20, 17772) for the output, where bs is the batch size (see the sketch below).
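
A minimal sketch of one way to do that, grouping the single-example yields from the fixed generator above into batches (the batch size of 32 is arbitrary and any leftover partial batch is simply dropped):

import numpy as np

def batched_generator(bs=32):
    xs0, xs1, ys = [], [], []
    for (x0, x1), y in model_generator():
        xs0.append(x0); xs1.append(x1); ys.append(y)
        if len(xs0) == bs:
            # each element already has a leading axis of 1, so concatenating
            # along axis 0 gives (bs, 20), (bs, 20) and (bs, 20, 17772)
            yield ([np.concatenate(xs0), np.concatenate(xs1)], np.concatenate(ys))
            xs0, xs1, ys = [], [], []

model.fit(batched_generator(), epochs=5)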
