I'm attempting to build an autoencoder using LocallyConnected1D layers (instead of Dense), but I'm having trouble understanding how the two layer types differ, especially when it comes to output dimensionality.
model = Sequential()
model.add(Reshape(input_shape=(input_size,), target_shape=(input_size, 1)))
model.add(LocallyConnected1D(encoded_size, kernel_size))
model.add(LocallyConnected1D(input_size, kernel_size_2, name="decoded_layer"))
This model compiles just fine, but when I go to train it...
model.fit(x_train, x_train,
          epochs=epochs,
          batch_size=batch_size,
          shuffle=True,
          validation_data=(x_test, x_test))
Here x_train and x_test are NumPy arrays of shape (60000, 784) and (10000, 784), respectively. I get the following error on this line:
ValueError: Error when checking target: expected decoded_layer to have 3 dimensions, but got array with shape (60000, 784)
Shouldn't the shape of the tensor going into decoded_layer be (60000, encoded_size, 1)?
First, you do not have to put None as the first dimension of your input_shape; Keras automatically adds a leading dimension for the number of samples.
Second, LocallyConnected1D requires a 3D input. This means your input_shape should be of the form (int, int), with Keras inferring a full shape of (None, int, int).
An example:
model = Sequential()
model.add(LocallyConnected1D(64, 3, input_shape=(10, 10)))  # takes a 10x10 array for each sample
model.add(LocallyConnected1D(32, 3))
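One point worth noting for shape bookkeeping: LocallyConnected1D uses 'valid' padding, so with the default stride of 1 each layer shortens the sequence to input_length - kernel_size + 1. A quick sketch of the example above with the shapes Keras reports (assuming tf.keras):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LocallyConnected1D

model = Sequential()
# 'valid' padding: output length = 10 - 3 + 1 = 8
model.add(LocallyConnected1D(64, 3, input_shape=(10, 10)))  # -> (None, 8, 64)
model.add(LocallyConnected1D(32, 3))                        # -> (None, 6, 32)
print(model.output_shape)  # (None, 6, 32)
```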
If your data isn't in the right shape, you can always use a Reshape() layer. Let's say your input has shape (batch_size, 50), so each sample is a 1D vector of 50 elements:
model = Sequential()
model.add(Reshape(input_shape=(50,), target_shape=(50, 1)))  # makes the array 3D
model.add(LocallyConnected1D(64, 3))
model.add(LocallyConnected1D(32, 3))
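Finally, the ValueError in the question is about the target, not the input: the model's output is 3D (samples, length, filters) while x_train is 2D (60000, 784). One way to reconcile them is to Reshape the output back to 2D so it matches the target. A minimal sketch, using kernel_size=1 purely so the 'valid' padding doesn't shrink the sequence length (the layer sizes here are illustrative assumptions, not the original architecture):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Reshape, LocallyConnected1D

input_size = 784

model = Sequential()
model.add(Reshape((input_size, 1), input_shape=(input_size,)))  # 2D samples -> 3D input
model.add(LocallyConnected1D(4, 1))  # (None, 784, 4); kernel_size=1 keeps the length at 784
model.add(LocallyConnected1D(1, 1))  # (None, 784, 1)
model.add(Reshape((input_size,)))    # (None, 784) now matches the 2D x_train target
model.compile(optimizer='adam', loss='mse')
print(model.output_shape)  # (None, 784)
```

With larger kernels the sequence shrinks, so you would instead need to pad, crop the target, or project back to input_size some other way.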