Tensorflow Keras Embedding Layer Error: Layer weight shape not compatible

Can anyone recommend the best way to fix this type of error? I can't figure out what I've done wrong with my dimensions. I have a pretrained embedding that originates in a gensim Word2Vec model, which I want to use to initialize the CNN. Sorry for the relatively simple question, but I'm very new to both Keras and TensorFlow.
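For reference, the question never shows how vocab_size and embedding_matrix were built. Based on the shapes printed below, they were presumably derived from the gensim model roughly like this (a hedged reconstruction using the gensim 3.x API; the model path is hypothetical):

import numpy as np
from gensim.models import Word2Vec

w2v = Word2Vec.load('w2v.model')               # hypothetical path; not shown in the question
vocab_size = len(w2v.wv.vocab)                 # 32186, per the printout below
embedding_matrix = np.asarray(w2v.wv.vectors)  # one 100-dim row per vocabulary word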

#CNN architecture (imports added for completeness)
import pandas as pd
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Embedding, Conv1D, MaxPooling1D,
                                     GlobalMaxPooling1D, Dropout, Dense)
from tensorflow.keras import regularizers
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping

num_classes = num_labels

#Training params
batch_size = 8 
num_epochs = 25

#Model parameters
num_filters = 64  
weight_decay = 1e-4
kernel_size = 7 #size of the convolution window; set to match the Word2Vec window size, though unsure if that's needed

print("training CNN ...")

model = Sequential()

#------------------------
FIXED_LENGTH=embedding_matrix.shape[1]
#------------------------

print('Vocab size:', vocab_size)
print('Output_Dim size:', w2v.vector_size)
print('Weights:', pd.Series([embedding_matrix]).shape)
print('Weights underlying shape:', embedding_matrix.shape)
print("Input Length:", FIXED_LENGTH)

#Add the pretrained word2vec embedding to the model

model.add(Embedding(vocab_size+1, 
                      output_dim=w2v.vector_size, 
                      weights=[embedding_matrix], 
                      input_length=FIXED_LENGTH, 
                      trainable=False))
model.add(Conv1D(num_filters, kernel_size=kernel_size, activation='relu', padding='same'))
model.add(MaxPooling1D(2))
model.add(Conv1D(num_filters, 7, activation='relu', padding='same'))
model.add(GlobalMaxPooling1D())
model.add(Dropout(0.5))
model.add(Dense(32, activation='relu', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Dense(num_classes, activation='softmax'))  #multi-label (k-hot encoding)

adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
model.compile(loss='sparse_categorical_crossentropy', optimizer=adam, metrics=['accuracy'])
model.summary()

#define callbacks
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0.01, patience=4, verbose=1)
callbacks_list = [early_stopping]

print('Batch size:', batch_size)
print('Num of Epochs:', num_epochs)
print('X Train Size:', x_train_pad.shape)
print('Y Train Size:', y_train.shape)

hist = model.fit(x_train_pad, 
                 y_train, 
                 batch_size=batch_size, 
                 epochs=num_epochs, 
                 callbacks=callbacks_list, 
                 validation_split=0.1, 
                 shuffle=True, 
                 verbose=2)

Output is:

training CNN ...
Vocab size: 32186
Output_Dim size: 100
Weights: (1,)
Weights underlying shape: (32186, 100)
Input Length: 100
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-326-36db7b551866> in <module>()
     31                       weights=[embedding_matrix],
     32                       input_length=FIXED_LENGTH,
---> 33                       trainable=False))
     34 model.add(Conv1D(num_filters, kernel_size=kernel_size, activation='relu', padding='same'))
     35 model.add(MaxPooling1D(2))

c:\users\tt\anaconda3b\lib\site-packages\tensorflow_core\python\training\tracking\base.py in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

c:\users\tt\anaconda3b\lib\site-packages\tensorflow_core\python\keras\engine\sequential.py in add(self, layer)
    176           # and create the node connecting the current layer
    177           # to the input layer we just created.
--> 178           layer(x)
    179           set_inputs = True
    180 

c:\users\tt\anaconda3b\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py in __call__(self, inputs, *args, **kwargs)
    815           # Build layer if applicable (if the `build` method has been
    816           # overridden).
--> 817           self._maybe_build(inputs)
    818           cast_inputs = self._maybe_cast_inputs(inputs)
    819 

c:\users\tt\anaconda3b\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py in _maybe_build(self, inputs)
   2146     # Optionally load weight values specified at layer instantiation.
   2147     if getattr(self, '_initial_weights', None) is not None:
-> 2148       self.set_weights(self._initial_weights)
   2149       self._initial_weights = None
   2150 

c:\users\tt\anaconda3b\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py in set_weights(self, weights)
   1334         raise ValueError('Layer weight shape ' + str(ref_shape) +
   1335                          ' not compatible with '
-> 1336                          'provided weight shape ' + str(w.shape))
   1337       weight_value_tuples.append((p, w))
   1338     backend.batch_set_value(weight_value_tuples)

ValueError: Layer weight shape (32187, 100) not compatible with provided weight shape (32186, 100)

The answer was that the encoded sentences contained index values higher than any encoded during the lexicon-build stage. There should be an index in your lexicon for every token value in your training and test sets; if not, you have to clean the sentences before sending them to the CNN.
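One quick way to verify this (a minimal sketch, assuming x_train_pad and embedding_matrix are the NumPy arrays defined in the question) is to compare the largest token index in the padded sequences against the number of rows the Embedding layer actually has:

import numpy as np

# Rows 0 .. embedding_matrix.shape[0] - 1 are the only valid lookup indices
max_valid_index = embedding_matrix.shape[0] - 1
print('Max index in data:', x_train_pad.max(), '| max valid index:', max_valid_index)

# Crude fix: remap any out-of-lexicon index to 0 (the padding/OOV row);
# re-encoding the sentences against the full lexicon is the cleaner fix
x_train_pad = np.where(x_train_pad > max_valid_index, 0, x_train_pad)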

Can you change the vocab_size+1 argument in the Embedding layer to vocab_size? I think it's the +1 that is causing the problem.
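Put differently, input_dim must match the number of rows in the weight matrix exactly: Keras builds an (input_dim, output_dim) weight for the layer, and set_weights rejects the (32186, 100) matrix because vocab_size+1 made the reference shape (32187, 100). Deriving input_dim straight from the matrix (a minimal sketch, reusing the question's variable names) keeps the two from ever disagreeing:

model.add(Embedding(input_dim=embedding_matrix.shape[0],  # 32186, exactly the rows in the weights
                    output_dim=w2v.vector_size,
                    weights=[embedding_matrix],
                    input_length=FIXED_LENGTH,
                    trainable=False))

The alternative, if index 0 is reserved for padding by pad_sequences, is to build embedding_matrix with vocab_size+1 rows so that the padding index gets its own slot.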
