
How to implement a PyTorch NN from a directed graph

I'm new to PyTorch and teaching myself, and I want to create an ANN that takes in a directed graph. I also want to pass predefined weights and biases for each connection into it, but I'm willing to ignore that for now.

My motivation for these conditions is that I'm trying to implement the NEAT algorithm, which essentially uses a genetic algorithm to evolve the network.

For example, let graph = {'1': [[], [4, 7]], '2': [[], [6]], '3': [[], [6]], '4': [[1, 7], []], '5': [[7], []], '6': [[2, 3], [7]], '7': [[1, 6], [4, 5]]} represent the directed graph, where each key is a node ID and its value holds two lists: the node's incoming connections and its outgoing connections.

[image: example graph]
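For reference, the processing order that walk_graph() below tries to discover is just a topological sort of this structure. Here is a minimal standalone sketch (my own, assuming the {node: [incoming, outgoing]} format above) that peels off nodes whose incoming connections are all resolved, the same order the method processes them in:

# Kahn-style topological sort over the example graph from above.
# Node IDs are strings as dict keys but ints inside the connection
# lists, matching the original post.
graph = {'1': [[], [4, 7]], '2': [[], [6]], '3': [[], [6]],
         '4': [[1, 7], []], '5': [[7], []],
         '6': [[2, 3], [7]], '7': [[1, 6], [4, 5]]}

pending = {k: set(v[0]) for k, v in graph.items()}  # unresolved incoming edges
order = []  # list of "generations" that can be evaluated together
while pending:
    ready = [n for n, incoming in pending.items() if not incoming]
    if not ready:
        raise ValueError('graph has a cycle')  # recurrent connections would stall here
    order.append(ready)
    for n in ready:
        del pending[n]
    for incoming in pending.values():
        for n in ready:
            incoming.discard(int(n))

print(order)  # [['1', '2', '3'], ['6'], ['7'], ['4', '5']]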

Here is my code for what I have in mind:

import copy
import torch


class Net(torch.nn.Module):
    def __init__(self, graph):
        super(Net, self).__init__()
        self.graph = graph
        self.walk_graph()

    def walk_graph(self):
        graph_remaining = copy.deepcopy(self.graph)
        done = False  # Has every node/connection been processed?
        while not done:
            processed = []  # list of tuples, of a node and the nodes it outputs to
            for node_id in graph_remaining.keys():
                if len(graph_remaining[node_id][0]) == 0:  # if current node has no incoming connections
                    try:
                        # if current node has been processed, but waited for others to finish
                        if callable(getattr(self, 'layer{}'.format(node_id))):
                            D_in = getattr(self, 'layer{}'.format(node_id)).in_features  # in_features is an int
                            D_out = getattr(self, 'layer{}'.format(node_id)).out_features
                            setattr(self, 'layer{}'.format(node_id), torch.nn.Linear(D_in, D_out))
                        cat_list = [] # list of input tensors
                        for i in self.graph[node_id][0]: # search the entire graph for inputs
                            cat_list.append(globals()['out_{}'.format(i)]) # add incoming tensor to list
                        # concatenate the incoming tensors along the feature dimension
                        # I'm not confident about this
                        globals()['in_{}'.format(node_id)] = torch.cat(cat_list, dim=-1)
                    except AttributeError:  # if the current node hasn't been waiting
                        try:
                            setattr(self, 'layer{}'.format(node_id), torch.nn.Linear(len(self.graph[node_id][0]), len(self.graph[node_id][1])))
                        except ZeroDivisionError:  # Input/Output nodes have zero inputs/outputs in the graph
                            setattr(self, 'layer{}'.format(node_id), torch.nn.Linear(1, 1))
                    globals()['out_{}'.format(node_id)] = getattr(self, 'layer' + node_id)(globals()['in_{}'.format(node_id)])
                    processed.append((node_id, graph_remaining[node_id][1]))

            for node_id, out_list in processed:
                for out_id in out_list:
                    try:
                        graph_remaining[str(out_id)][0].remove(int(node_id))
                    except ValueError:
                        pass
                try:
                    del graph_remaining[node_id]
                except KeyError:
                    pass

            done = True  # done when every remaining node has no unresolved connections
            for node_id in graph_remaining:  # processed nodes were deleted above
                if graph_remaining[node_id][0] or graph_remaining[node_id][1]:
                    done = False
        return None

I'm a little out of my comfort zone on this, but if you have a better idea, or can point out how this is fatally flawed, I'm all ears. I know I'm missing a forward function, and I could use some advice on how to restructure this.

Since you don't plan on doing any actual training of the network, PyTorch might not be your best option in this case.

NEAT is about recombining and mutating neural networks - both their structure and their weights and biases - and thereby achieving better results. PyTorch, on the other hand, is a deep learning framework: you define the structure (or architecture) of your network up front and then use algorithms like stochastic gradient descent to update the weights and biases in order to improve performance. As a consequence, PyTorch works in terms of modules and submodules of neural networks, like fully connected layers, convolutional layers, and so on.
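To make the contrast concrete, this is the typical layer-wise usage PyTorch is built around (a generic illustration, unrelated to the question's graph):

import torch

# The architecture is fixed up front as a stack of layer modules;
# training only adjusts the weights inside them, it never rewires nodes.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 4),
    torch.nn.ReLU(),
    torch.nn.Linear(4, 2),
)
out = model(torch.randn(1, 3))  # forward pass through the fixed structure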

The problem with this discrepancy is that NEAT not only requires you to store much more information about individual nodes (such as their IDs for recombination) than PyTorch supports, it also doesn't fit well with the "layer-wise" approach of deep learning frameworks.

In my opinion, you will be better off implementing the forward pass through the network yourself. If you're unsure how to do that, this video gives a very good explanation.
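For illustration, here is a minimal sketch of such a hand-rolled forward pass over a NEAT-style genome, evaluated in topological order. The genome format (connection tuples and per-node biases) is my own assumption for the example, not something prescribed by NEAT; it reuses the topology of the question's graph:

import math

# Hypothetical genome: connections as (src, dst, weight), plus node biases.
connections = [(1, 7, 0.5), (1, 4, -1.2), (2, 6, 0.8), (3, 6, 0.3),
               (6, 7, 1.1), (7, 4, 0.7), (7, 5, -0.4)]
biases = {4: 0.0, 5: 0.1, 6: -0.2, 7: 0.3}
inputs = {1: 1.0, 2: 0.5, 3: -1.0}           # activations of the input nodes
eval_order = [6, 7, 4, 5]                    # any valid topological order

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

activations = dict(inputs)
for node in eval_order:
    total = biases[node] + sum(w * activations[src]
                               for src, dst, w in connections if dst == node)
    activations[node] = sigmoid(total)

print(activations[4], activations[5])        # outputs of nodes 4 and 5

Because the whole pass is plain Python over dicts, recombining or mutating the genome between evaluations is straightforward, which is exactly what NEAT needs.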
