
How to connect a Dense layer to Conv2D in Keras

I want to map an input of 61 intensity values to an image of size 64×64. The code I am using is given below.

Network input: 61×1 (intensity values)

Output: 64×64 (image)

from tensorflow.keras import Input, regularizers
from tensorflow.keras.layers import Dense, Conv2D

input_img = Input(shape=(61,))

x = Dense(250, activation='relu')(input_img)
x = Dense(500, activation='relu')(x)
x = Dense(1000, activation='relu')(x)
x = Dense(4096, activation='relu')(x)

x = Conv2D(16, (3, 3), padding='same',
           kernel_regularizer=regularizers.l2(0.001),
           kernel_initializer='glorot_uniform')(x)

x = Conv2D(1, (3, 3), padding='same',
           kernel_regularizer=regularizers.l2(0.001),
           kernel_initializer='glorot_uniform')(x)


The dimensions are giving me problems. How can I fix the shapes in this code so that the network correctly maps the input to a 64×64 image at the output?

The code raises: ValueError: Input 0 is incompatible with layer conv2d_14: expected ndim=4, found ndim=2

Thank you

The problem is probably the shape of the tensor reaching Conv2D.

Conv2D expects its input to have 3 dimensions (height, width, channels); Keras then adds the batch dimension internally, making it 4 (ndim=4).

Since input_img is a 1-dimensional vector and is only passed through Dense layers, Keras adds just the batch dimension, so Conv2D receives a 2-D tensor (ndim=2), hence the error.

You should correct the shape so that the tensor entering the first Conv2D layer has 3 dimensions.
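Concretely, since Dense(4096) produces exactly 64×64 values, one way to get a 3-D tensor into Conv2D is to insert a Reshape layer after the last Dense layer. A minimal sketch (assuming TensorFlow 2.x Keras; the Reshape target (64, 64, 1) is my assumption about the intended single-channel image):

```python
from tensorflow.keras import Input, Model, regularizers
from tensorflow.keras.layers import Dense, Reshape, Conv2D

input_img = Input(shape=(61,))

x = Dense(250, activation='relu')(input_img)
x = Dense(500, activation='relu')(x)
x = Dense(1000, activation='relu')(x)
x = Dense(4096, activation='relu')(x)   # 4096 = 64 * 64

# Turn the flat 4096-vector into a single-channel 64x64 feature map,
# giving Conv2D the (batch, height, width, channels) input it expects.
x = Reshape((64, 64, 1))(x)

x = Conv2D(16, (3, 3), padding='same',
           kernel_regularizer=regularizers.l2(0.001),
           kernel_initializer='glorot_uniform')(x)
x = Conv2D(1, (3, 3), padding='same',
           kernel_regularizer=regularizers.l2(0.001),
           kernel_initializer='glorot_uniform')(x)

model = Model(input_img, x)
```

With padding='same', both Conv2D layers preserve the 64×64 spatial size, so model.output_shape is (None, 64, 64, 1).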

