
Can a Principal Component Analysis network be used instead of a pooling layer in a CNN?

Is it possible to replace the pooling layer in a CNN with a Principal Component Analysis network? Please elaborate. I tried the code below:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Conv2D
from sklearn.decomposition import PCA

input_shape = keras.Input(shape=(224, 224, 1))

tower_1 = Conv2D(16, (3, 3), padding='same', activation='relu')(input_shape)

# Flatten the spatial dimensions: (224*224, 16)
reshape_tower1 = tf.reshape(tower_1, [224 * 224, 16])

# Transpose to (16, 224*224) so each channel is one PCA sample
Trans_tower1 = tf.transpose(reshape_tower1)

pca_tower1 = PCA(n_components=10)
pca_tower1.fit(Trans_tower1)
result = pca_tower1.transform(Trans_tower1)

Error:

You are passing KerasTensor(type_spec=TensorSpec(shape=(16, 50176), 
dtype=tf.float32, name=None), name='tf.compat.v1.transpose_1/transpose:0', 
description="created by layer 'tf.compat.v1.transpose_1'"), an intermediate 
Keras symbolic input/output, to a TF API that does not allow registering 
custom dispatchers, such as `tf.cond`, `tf.function`, gradient tapes, or 
`tf.map_fn`. Keras Functional model construction only supports TF API calls 
that *do* support dispatching, such as `tf.math.add` or `tf.reshape`. Other 
APIs cannot be called directly on symbolic Keras inputs/outputs. You can work 
around this limitation by putting the operation in a custom Keras layer `call` 
and calling that layer on this symbolic input/output.

You seem to be using sklearn's PCA, which only works with NumPy arrays. You could convert your tensor to a NumPy array before the PCA, but you would lose the gradients at that point, so you would not be able to train the parameters of the first Conv layer. Moreover, even if you find a PCA implementation that accepts TF tensors, you will face a bigger problem: PCA is not differentiable, so you cannot train parameters through it in any case.
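To illustrate, here is a minimal sketch (my own assumption of what the workaround looks like, not the poster's final code) that runs the Conv2D layer eagerly on a dummy image and converts the result to NumPy before applying sklearn's PCA. It runs without the symbolic-tensor error, but the `.numpy()` call is exactly where gradient flow stops:

```python
import numpy as np
import tensorflow as tf
from sklearn.decomposition import PCA

conv = tf.keras.layers.Conv2D(16, (3, 3), padding='same', activation='relu')

# One dummy grayscale image, batch of 1: shape (1, 224, 224, 1)
x = np.random.rand(1, 224, 224, 1).astype('float32')

features = conv(x)                            # eager tensor, shape (1, 224, 224, 16)
flat = tf.reshape(features, [224 * 224, 16])  # flatten spatial dims
flat_np = tf.transpose(flat).numpy()          # (16, 50176) NumPy array; gradients are lost here

pca = PCA(n_components=10)
reduced = pca.fit_transform(flat_np)          # shape (16, 10)
```

If a trainable, differentiable dimensionality reduction is what you actually need inside the model, a common substitute is a learnable linear projection (e.g. a 1x1 Conv2D or Dense layer) in place of the PCA step, wrapped in the graph as a normal Keras layer.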


 
粤ICP备18138465号  © 2020-2024 STACKOOM.COM