
Input for 2d convolutional layer, given spatial data with labels

So I'm working on a project where I want to train a neural network to assign labels to points in 3D space.

My inputs are the alpha-carbon traces of amino acids, and my labels are their secondary-structure labels, e.g. 3 classes.

My data looks exactly like this: 2945 training examples, each of length 748 (corresponding to 748 consecutive carbons), and each carbon has 3 features, namely its x, y, z coordinates. So X has shape (2945, 748, 3) and Y has shape (2945, 748), since there are 2945 examples with 748 labels each, one per carbon in order.

I specifically want to use convolutional layers, since several papers I've read report that they capture spatial dependencies well and do well on problems like this; I just can't get the dimensions to work out.

I've already expanded the dims: X_train = np.expand_dims(X_train, 1) gives (None, 1, 748, 3), which is (I think) (batch, height, width, channels), or am I completely missing the point here?
The batch is specified later, height is 1, the width of an example is 748, and channels is 3 for the x, y, z coordinates?
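The reshaping described above can be sketched with dummy data in place of the real carbon traces (the array contents are random placeholders; only the shapes matter):

```python
import numpy as np

# Dummy stand-ins for the real data: 2945 examples, 748 carbons, 3 coords.
X = np.random.random((2945, 748, 3))

# Insert a height axis of size 1 so the array matches Conv2D's
# expected NHWC layout: (batch, height, width, channels).
X_nhwc = np.expand_dims(X, 1)

print(X_nhwc.shape)  # (2945, 1, 748, 3)
```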

import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input, Conv2D, Dense

input_shape = (1, 748, 3)
model = Sequential(
[
    Input(shape=input_shape),
    Conv2D(filters=16, kernel_size=9, padding='same',
           activation=tf.nn.relu),
    Dense(3, activation='softmax')
])

model.summary()

Model summary:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 1, 748, 16)        64        
_________________________________________________________________
dense (Dense)                (None, 1, 748, 3)         51        
=================================================================
Total params: 115
Trainable params: 115
Non-trainable params: 0

And of course the error: ValueError: Shapes (None, 1, 748) and (None, 1, 748, 3) are incompatible

I know it would work with a 1-unit Dense layer, but would I still get a classification over 3 states if that dimension is 1? It should be possible.

Am I thinking about this the right way, or is there a misunderstanding?

I'd be very grateful for any suggestions. Thanks in advance.
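For context on the error above: categorical cross-entropy compares the model's per-carbon probability vector (last axis of size 3) with a one-hot target of the same size, so integer labels of shape (1, 748) cannot be matched against predictions of shape (1, 748, 3). A minimal numpy sketch of the mismatch and the one-hot fix (dummy values, 3 classes assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Integer class labels for one example: shape (1, 748), values in {0, 1, 2}.
y_int = rng.integers(0, 3, size=(1, 748))

# Model "predictions": a probability vector per carbon, shape (1, 748, 3).
logits = rng.random((1, 748, 3)) + 0.1   # +0.1 keeps probabilities nonzero
probs = logits / logits.sum(axis=-1, keepdims=True)

# Integer labels and predictions differ in rank, hence the shape error.
assert y_int.shape != probs.shape

# One-hot encoding adds the missing class axis, so the shapes line up.
y_onehot = np.eye(3)[y_int]              # shape (1, 748, 3)
assert y_onehot.shape == probs.shape

# Categorical cross-entropy then reduces over the class axis.
loss = -np.mean(np.sum(y_onehot * np.log(probs), axis=-1))
```

Alternatively, Keras's tf.keras.losses.SparseCategoricalCrossentropy accepts the integer labels directly, without one-hot encoding.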

OK, I one-hot encoded my labels and expanded dimension one with:

np.expand_dims(Y, 1)

The resulting shapes:

Y train shape: (2356, 1, 748, 4)
Y test shape: (589, 1, 748, 4)

With 4 units in the Dense layer, everything matches the dimensions.
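That label preparation can be sketched in plain numpy (the integer labels here are dummy values; tf.keras.utils.to_categorical would work equally well for the one-hot step):

```python
import numpy as np

# Dummy integer labels: 2356 examples, 748 carbons, 4 classes.
Y = np.random.randint(0, 4, size=(2356, 748))

# One-hot encode along a new trailing class axis: (2356, 748, 4).
Y_onehot = np.eye(4, dtype=np.float32)[Y]

# Add the height axis to mirror the input layout: (2356, 1, 748, 4).
Y_train = np.expand_dims(Y_onehot, 1)

print(Y_train.shape)  # (2356, 1, 748, 4)
```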

Working code snippet:

import tensorflow as tf
import numpy as np
import tensorflow.keras as keras

# Dummy data with the right shapes (random values, so accuracy stays ~25%).
X_train = np.random.random((2356, 1, 748, 3))
y_train = np.random.random((2356, 1, 748, 4))

# Optional tf.data input pipeline; it could be passed to model.fit
# in place of the raw arrays used below.
dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train))
train_data = dataset.shuffle(len(X_train)).batch(32)
train_data = train_data.prefetch(
        buffer_size=tf.data.experimental.AUTOTUNE)

input_shape = (1, 748, 3)
model = tf.keras.Sequential(
[
    keras.Input(shape=input_shape),
    keras.layers.Conv2D(filters=16, kernel_size=9, padding='same',
                        activation=tf.nn.relu),
    keras.layers.Dense(4, activation='softmax')
])

model.summary()

model.compile(optimizer='adam',
              loss=tf.keras.losses.CategoricalCrossentropy(),
              metrics=['accuracy'])

model.fit(X_train, y_train, epochs=5, batch_size=5, verbose=1)

Output

Model: "sequential_10"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_10 (Conv2D)          (None, 1, 748, 16)        3904      
                                                                 
 dense_10 (Dense)            (None, 1, 748, 4)         68        
                                                                 
=================================================================
Total params: 3,972
Trainable params: 3,972
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
472/472 [==============================] - 4s 5ms/step - loss: 3.4528 - accuracy: 0.2508
Epoch 2/5
472/472 [==============================] - 2s 4ms/step - loss: 3.8109 - accuracy: 0.2506
Epoch 3/5
472/472 [==============================] - 2s 5ms/step - loss: 3.8099 - accuracy: 0.2507
Epoch 4/5
472/472 [==============================] - 2s 5ms/step - loss: 3.8021 - accuracy: 0.2506
Epoch 5/5
472/472 [==============================] - 3s 5ms/step - loss: 3.7919 - accuracy: 0.2507
<keras.callbacks.History at 0x7f55c00dc550>
