
How to make sure Conv1D's output_shape matches the input_shape of a time series in a Keras autoencoder?

The Conv1D output shape in my Keras autoencoder model is wrong, which causes an error when fitting the autoencoder.

I'm trying to use a Keras autoencoder model to compress and decompress my time-series data. But when I switched the layers to Conv1D, the output shape came out wrong.

I have time-series data of shape (4000, 689), i.e. 4000 samples with 689 features each. I want to compress the data with Conv1D, but the output shape of the last UpSampling1D and Conv1D layers, (?, 688, 1), does not match the input shape (?, 689, 1).

How should I set the parameters of these layers? Thanks in advance.

x_train = data[0:4000].values
x_test = data[4000:].values
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)

x_train shape: (4000, 689)
x_test shape: (202, 689)

I reshaped x_train and x_test to 3 dimensions as follows.

x_tr = x_train.reshape(4000,689,1)
x_te = x_test.reshape(202,689,1)
print('x_tr shape:', x_tr.shape)
print('x_te shape:', x_te.shape)

x_tr shape: (4000, 689, 1)
x_te shape: (202, 689, 1)

from keras.layers import Input, Conv1D, MaxPooling1D, UpSampling1D
from keras.models import Model

input_img = Input(shape=(689,1))

x = Conv1D(16, 3, activation='relu', padding='same')(input_img)
print(x)
x = MaxPooling1D(2, padding='same')(x)
print(x)
x = Conv1D(8, 3, activation='relu', padding='same')(x)
print(x)
x = MaxPooling1D(2, padding='same')(x)
print(x)
x = Conv1D(8, 3, activation='relu', padding='same')(x)
print(x)
encoded = MaxPooling1D(2)(x)
print(encoded)
print('--------------')
    
    
x = Conv1D(8, 3, activation='relu', padding='same')(encoded)
print(x)
x = UpSampling1D(2)(x)
print(x)
x = Conv1D(8, 3, activation='relu', padding='same')(x)
print(x)
x = UpSampling1D(2)(x)
print(x)
x = Conv1D(16, 3, activation='relu', padding='same')(x)
print(x)
x = UpSampling1D(2)(x)
print(x)
decoded = Conv1D(1, 3, activation='sigmoid', padding='same')(x)
print(decoded)

autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='mse')

When I import these modules and run the cells above in Jupyter, it seems fine. Maybe. But I get an error in the next cell when running autoencoder.fit.

autoencoder.fit(x_tr, x_tr, epochs=50, batch_size=128, shuffle=True, validation_data=(x_te, x_te)) 

So I printed every layer.

Below is the printed result for each layer.

Tensor("conv1d_166/Relu:0", shape=(?, 689, 16), dtype=float32)
Tensor("max_pooling1d_71/Squeeze:0", shape=(?, 345, 16), dtype=float32)
Tensor("conv1d_167/Relu:0", shape=(?, 345, 8), dtype=float32)
Tensor("max_pooling1d_72/Squeeze:0", shape=(?, 173, 8), dtype=float32)
Tensor("conv1d_168/Relu:0", shape=(?, 173, 8), dtype=float32)
Tensor("max_pooling1d_73/Squeeze:0", shape=(?, 86, 8), dtype=float32)

Tensor("conv1d_169/Relu:0", shape=(?, 86, 8), dtype=float32)
Tensor("up_sampling1d_67/concat:0", shape=(?, 172, 8), dtype=float32)
Tensor("conv1d_170/Relu:0", shape=(?, 172, 8), dtype=float32)
Tensor("up_sampling1d_68/concat:0", shape=(?, 344, 8), dtype=float32)
Tensor("conv1d_171/Relu:0", shape=(?, 344, 16), dtype=float32)
Tensor("up_sampling1d_69/concat:0", shape=(?, 688, 16), dtype=float32)
Tensor("conv1d_172/Sigmoid:0", shape=(?, 688, 1), dtype=float32) 
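The length arithmetic behind these printed shapes can be checked by hand. This small sketch (plain Python, no Keras required) traces how each pooling step shrinks the 689-step sequence — ceiling division for padding='same', floor for the default 'valid' — and how the three UpSampling1D(2) layers double it back:

```python
import math

def pool_len(n, pool=2, padding="same"):
    # MaxPooling1D with pool size 2: 'same' rounds up, the default 'valid' rounds down
    return math.ceil(n / pool) if padding == "same" else n // pool

n = 689
n = pool_len(n, padding="same")   # 345
n = pool_len(n, padding="same")   # 173
n = pool_len(n, padding="valid")  # 86 -- the encoded MaxPooling1D(2) defaults to 'valid'
for _ in range(3):
    n *= 2                        # each UpSampling1D(2) doubles the length
print(n)                          # 688, one step short of the original 689
```

Note that even with padding='same' on the last pooling, 173 would pool to 87 and upsample back to 696, not 689: no choice of padding alone can recover an odd input length from halve-then-double layers.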

The ValueError is below:

ValueError                                Traceback (most recent call last)
<ipython-input-74-56836006a800> in <module>
      3                 batch_size=128,
      4                 shuffle=True,
----> 5                 validation_data=(x_te, x_te)
      6                 )

~/anaconda3/envs/keras/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
    950             sample_weight=sample_weight,
    951             class_weight=class_weight,
--> 952             batch_size=batch_size)
    953         # Prepare validation data.
    954         do_validation = False

~/anaconda3/envs/keras/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
    787                 feed_output_shapes,
    788                 check_batch_axis=False,  # Don't enforce the batch size.
--> 789                 exception_prefix='target')
    790 
    791             # Generate sample-wise weight values given the `sample_weight` and

~/anaconda3/envs/keras/lib/python3.6/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
    136                             ': expected ' + names[i] + ' to have shape ' +
    137                             str(shape) + ' but got array with shape ' +
--> 138                             str(data_shape))
    139     return data
    140 

ValueError: Error when checking target: expected conv1d_172 to have shape (688, 1) but got array with shape (689, 1)

Is the floor function making this happen?
How do I fix the error properly so that autoencoder.fit works?
Thanks in advance.

With convolutional layers, you need to infer your output size from the input size, kernel size, and the other parameters. The easiest way is to feed a data sample through the network and look at the final tensor size after the last convolutional layer. You can then define further layers based on that size.

Here is an example from my PyTorch project:

import numpy as np
import torch

def _infer_flat_size(self):
    # Run a dummy batch through the encoder to discover its output shape
    encoder_output = self.encoder(torch.ones(1, *self.input_size))
    return int(np.prod(encoder_output.size()[1:])), encoder_output.size()[1:]
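For the Keras model in the question, the simplest fix along those lines is to make the sequence length divisible by 2**3 = 8 (one factor of 2 per pool/upsample pair), so every MaxPooling1D(2) halves it exactly and every UpSampling1D(2) doubles it back. A minimal sketch, assuming the shapes from the post (np.random stands in for the real data, and the variable names mirror the question's):

```python
import numpy as np

# Hypothetical stand-in for the question's (4000, 689) training data
x_train = np.random.rand(4000, 689).astype("float32")

# Pad 689 -> 696, the next multiple of 2**3 = 8, with trailing zeros
pad = (-x_train.shape[1]) % 2**3          # 7 extra timesteps
x_tr = np.pad(x_train, ((0, 0), (0, pad)))[..., np.newaxis]
print(x_tr.shape)                         # (4000, 696, 1)
```

The model input then becomes Input(shape=(696, 1)) and the decoder output matches it exactly. Padding inside the model with a ZeroPadding1D layer (or trimming the decoder output with Cropping1D) would achieve the same effect without touching the data arrays.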

