
Autoencoder/Keras/Tensorflow: Dense layer is incompatible with the layer: expected axis -1 of input shape to have value 64

I have just started with ML (autoencoders in particular) and I am running into a problem when running my code.

I have built an input vector 'x' as 'artificial data', and I am trying to use an autoencoder to reduce the dimensionality of this 'artificial data'.

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras import layers
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Lambda
import tensorflow.keras.backend as K
from keras.models import Input, Model, load_model
from keras.layers import Dense
from sklearn.model_selection import train_test_split 

N=64

z1=tf.linspace(0,1,N)
z2=tf.linspace(0,2,N)
z3=tf.linspace(0,3,N)
z4=tf.linspace(0,4,N)
z5=tf.linspace(0,5,N)


y1=np.sin(z1)
y2=np.sin(z2)
y3=np.sin(z3)
y4=np.sin(z4)
y5=np.sin(z5)


x=tf.concat([y1,y2,y3,y4,y5,z1,z2,z3,z4,z5],0)
x=np.matrix(x).T


main_input = layers.Input(shape=(N,), name='main_input')
encoded = Dense(32, activation='tanh')(main_input) 
decoded = Dense(N, activation='tanh')(encoded)

ae = Model(inputs=main_input, outputs=decoded)

print('Full autoencoder') 
print(ae.summary())
print('\n Encoder portion of autoencoder') # print(encoder.summary())


ae.compile(optimizer='adam', loss='mse', metrics=['mse'])
batch_size = 2
epochs = 100

x_train, x_test,  _, _ = train_test_split(x, x, test_size=0.33, random_state=42)



results = ae.fit(x_train,x_train,
                  batch_size = batch_size,
                  epochs = epochs,
                  validation_data = (x_train,x_train))

I get the following error:

 ValueError: Exception encountered when calling layer "model" (type Functional).
    
    Input 0 of layer "dense" is incompatible with the layer: expected axis -1 of input shape to have value 64, but received input with shape (2, 1)
    
    Call arguments received:
      • inputs=tf.Tensor(shape=(2, 1), dtype=float32)
      • training=True
      • mask=None
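
A quick shape check (a minimal sketch reusing the variable names from the code above, with np.linspace in place of tf.linspace for brevity) shows where the mismatch comes from: x ends up as a (640, 1) column vector, so every sample carries a single feature, while Input(shape=(N,)) expects 64 features per sample and batch_size=2 produces the reported (2, 1) batches.

import numpy as np

N = 64
z = [np.linspace(0, i, N) for i in range(1, 6)]   # z1..z5 from the question
y = [np.sin(zi) for zi in z]                      # y1..y5

x = np.concatenate(y + z)    # shape (640,)
x = np.matrix(x).T           # shape (640, 1): 640 samples with 1 feature each

print(x.shape)               # (640, 1), but the model expects (None, 64)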

Thanks a lot!

A dense network is quite accurate at pattern-matching sin:

import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import Sequential
from tensorflow.keras import layers
from tensorflow.keras.layers import Dense
from sklearn.model_selection import train_test_split

N=64
z1=np.linspace(0,1,N)
z2=np.linspace(0,2,N)
z3=np.linspace(0,3,N)
z4=np.linspace(0,4,N)
z5=np.linspace(0,5,N)

y1=np.sin(z1)**2
y2=np.sin(z2)**3
y3=np.sin(z3)
y4=np.sin(z4)
y5=np.sin(z5)


X=np.concatenate((z1,z2,z3,z4,z5))
y=np.concatenate((y1,y2,y3,y4,y5,))

#y=np.matrix(y).T
#plt.plot(X,y)
X_train, X_test, y_train, y_test= train_test_split(X,y,test_size=0.3)

model=Sequential()
model.add(layers.Input(shape=(1,), name='main_input'))
model.add(Dense(200, activation='tanh')) 
model.add(Dense(100, activation='tanh')) 
model.add(Dense(32, activation='tanh')) 
model.add(Dense(1))

model.compile(optimizer='adam', loss='mse', metrics=['mse'])

history=model.fit(X_train, y_train,  epochs=1000, verbose=0)

predictionResults=model.predict(X_test)

index=0
results=predictionResults.flatten()
for value in X_test:
    plt.scatter(value,results[index])
    index+=1
plt.plot(X,y)
plt.show()

plt.plot(history.history['loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
plt.show()
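
If the goal is still to compress each 64-point curve with the autoencoder from the question, a minimal sketch is to stack the ten 64-point vectors as rows so the last axis matches Input(shape=(N,)). Treating each vector as one training sample is an interpretation of the intent, not something stated in the question.

import numpy as np
from tensorflow.keras import layers, Model

N = 64
z = [np.linspace(0, i, N) for i in range(1, 6)]
y = [np.sin(zi) for zi in z]

# 10 samples with 64 features each, instead of 640 samples with 1 feature
x = np.stack(y + z).astype('float32')   # shape (10, 64)

main_input = layers.Input(shape=(N,), name='main_input')
encoded = layers.Dense(32, activation='tanh')(main_input)   # 64 -> 32 bottleneck
decoded = layers.Dense(N, activation='tanh')(encoded)       # 32 -> 64 reconstruction

ae = Model(inputs=main_input, outputs=decoded)
ae.compile(optimizer='adam', loss='mse')
ae.fit(x, x, batch_size=2, epochs=100, verbose=0)

print(ae.predict(x).shape)   # (10, 64)

Note that the tanh activation on the output layer saturates at 1, so the z ramps (which reach values up to 5) cannot be reconstructed well; a linear output activation is worth trying for this data.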

