
Implementation of multitask "nested" neural network

I am trying to implement the multitask neural network used in a paper, but I am quite unsure how I should code the multitask part, because the authors did not provide code for that portion.

The network architecture looks like this: [figure: network architecture]

To keep things simpler, the architecture can be summarized as follows (for the demonstration I replaced their more complex operation on individual embedding pairs with plain concatenation): [figure: simpler version]
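For concreteness, the concatenation-based pairwise head I have in mind looks roughly like the minimal sketch below (the class name PairwiseMLP, the layer sizes, and the single-logit output are my own placeholders, not the paper's actual operation):

import torch
from torch import nn

class PairwiseMLP(nn.Module):
   """Score a pair of embeddings by concatenating them along the feature axis."""
   def __init__(self, emb_dim=128, hidden_dim=64):
      super().__init__()
      self.net = nn.Sequential(
         nn.Linear(2 * emb_dim, hidden_dim),
         nn.ReLU(),
         nn.Linear(hidden_dim, 1),
      )

   def forward(self, emb_a, emb_b):
      # concatenate along the last (feature) dimension; Linear layers
      # broadcast over any leading dimensions, so this also works on
      # higher-rank inputs such as [batch, n_a, n_b, features]
      return self.net(torch.cat([emb_a, emb_b], dim=-1))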

The authors sum the losses of the individual tasks and the pairwise task and use the total loss to optimize the parameters of all three networks (encoder, MLP-1, MLP-2) on every batch, but I am rather lost as to how to combine the different types of data within one batch so they can be fed into two different networks that share an initial encoder. I tried to find other networks with a similar structure but could not find any source. Any ideas would be appreciated!

This is actually a fairly common pattern. It can be handled with code along the following lines.

import torch
from torch import nn

class Network(nn.Module):
   def __init__(self, ...):
      super().__init__()  # required so the submodules below are registered
      self.encoder = DrugTargetInteractiongNetwork()
      self.mlp1 = ClassificationMLP()
      self.mlp2 = PairwiseMLP()

   def forward(self, data_a, data_b):
      a_encoded = self.encoder(data_a)
      b_encoded = self.encoder(data_b)

      a_classified = self.mlp1(a_encoded)
      b_classified = self.mlp1(b_encoded)

      # let me assume data_a and data_b are of shape
      # [batch_size, n_molecules, n_features], and that those
      # n_molecules (call them N1 and N2) are not necessarily equal.
      # This can be generalized to more dimensions.
      # a_encoded: [B, N1, F] -> [B, N1, 1, F]
      # b_encoded: [B, N2, F] -> [B, 1, N2, F]
      # after broadcasting, both tensors have shape [B, N1, N2, F]
      a_broadcast, b_broadcast = torch.broadcast_tensors(
         a_encoded[:, :, None, :],
         b_encoded[:, None, :, :],
      )

      # this will work if your mlp2 accepts an arbitrary number of
      # leading dimensions and just broadcasts over them. That's true,
      # for example, if it uses just Linear and pointwise
      # operations, but it may fail if mlp2 makes specific assumptions
      # about the number of dimensions of its inputs
      pairwise_classified = self.mlp2(a_broadcast, b_broadcast)

      # if that is a problem, you have to reshape things so that it
      # works. Most torch models accept at least a leading batch dimension
      # for vectorization, so we can "fold" the pairwise dimensions
      # into the batch dimension, presenting the inputs as
      # [batch*n_mol_1*n_mol_2, n_features]
      # to mlp2 and then recovering the shape afterwards
      B, N1, N2, N_feat = a_broadcast.shape  # b_broadcast has the same shape
      a_batched = a_broadcast.reshape(B*N1*N2, N_feat)
      b_batched = b_broadcast.reshape(B*N1*N2, N_feat)
      # above, -1 would suffice instead of B*N1*N2, just being explicit
      batch_output = self.mlp2(a_batched, b_batched)

      # this should be exactly the same as `pairwise_classified`
      alternative_classified = batch_output.reshape(B, N1, N2, -1)

      return a_classified, b_classified, pairwise_classified
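As for summing the losses: since encoder, mlp1, and mlp2 are all registered as submodules of the single Network module, one optimizer over model.parameters() updates all three of them from the total loss. Below is a minimal sketch of a training step; it assumes both the per-item and the pairwise tasks are binary classification (hence BCEWithLogitsLoss) and that your dataloader yields labels in the shapes noted in the comments. The criteria, names, and shapes are placeholders of mine, not the paper's actual setup.

import torch
from torch import nn

# assumed available:
#   model      : an instance of the Network module above
#   dataloader : yields (data_a, data_b, labels_a, labels_b, pair_labels)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# placeholder criteria, assuming binary targets for both tasks
item_criterion = nn.BCEWithLogitsLoss()
pair_criterion = nn.BCEWithLogitsLoss()

for data_a, data_b, labels_a, labels_b, pair_labels in dataloader:
   optimizer.zero_grad()

   a_classified, b_classified, pairwise_classified = model(data_a, data_b)

   # individual-task losses; labels_a / labels_b are assumed to match
   # the shapes of a_classified / b_classified
   loss_a = item_criterion(a_classified, labels_a)
   loss_b = item_criterion(b_classified, labels_b)

   # pairwise-task loss; pair_labels is assumed to have shape
   # [batch, n_mol_a, n_mol_b, 1] to match pairwise_classified
   loss_pair = pair_criterion(pairwise_classified, pair_labels)

   # total loss; backpropagating through it updates the shared encoder,
   # mlp1, and mlp2 in one step
   loss = loss_a + loss_b + loss_pair
   loss.backward()
   optimizer.step()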

