Updating the capacity in python igraph

I'm currently reviewing a working system to identify areas that can be optimised. I've found that the loop below increases the run time by around 70%:

for t in G.get_edgelist():
    # Look the edge ID back up from its (source, target) tuple.
    eid = G.get_eid(*t)
    origin = G.vs[t[0]]['name']
    destin = G.vs[t[1]]['name']

    # Dependency nodes scale the value of their destination node.
    if fc.cpdict[origin]['node_type'] == 'dependency':
        cp_func[nodes.index(destin)] *= cp_func[nodes.index(origin)]

    # The edge capacity is taken from the origin node's value.
    cap = cp_func[nodes.index(origin)]
    G.es[eid]["capacity"] = cap
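
For orientation, most of the time in that loop goes into the per-edge get_eid() call and the linear nodes.index() scans. A minimal sketch of the same pass with those lookups hoisted out, reusing the names from the snippet above (node_pos is a hypothetical helper, not part of the original code):

# G, G.vs, nodes, fc and cp_func are the same objects as in the loop above.
node_pos = {name: i for i, name in enumerate(nodes)}  # name -> position in cp_func

for e in G.es:
    # Each igraph Edge already exposes its endpoints and ID, so get_eid() is unnecessary.
    origin = G.vs[e.source]['name']
    destin = G.vs[e.target]['name']

    if fc.cpdict[origin]['node_type'] == 'dependency':
        cp_func[node_pos[destin]] *= cp_func[node_pos[origin]]

    e['capacity'] = cp_func[node_pos[origin]]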

The system needs to update the capacity of the edges that have changed since the last iteration of the model time. In why-is-add-edge-function-so-slow-ompared-to-add-edges, an answer states:

The reason is that igraph uses an indexed edge list as its data structure in the C layer. The index makes it possible to query the neighbors of a specific vertex in constant time. This is good if your graph rarely changes, but it becomes a burden when the modification operations are far more frequent than the queries, since whenever you add or remove an edge, you have to update the index.
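
In this particular loop the edge set itself never changes, only the "capacity" values, so the re-indexing cost described above does not apply; what remains is Python-level overhead per edge. For illustration only, a minimal sketch on a toy graph of writing the attribute with one bulk assignment (value_by_vertex is a hypothetical stand-in for the adjusted cp_func values):

import igraph as ig

g = ig.Graph([(0, 1), (1, 2), (2, 0)])
g.vs['name'] = ['a', 'b', 'c']

# Hypothetical per-vertex values; in the real model these would be the
# cp_func entries after the dependency adjustment has been applied.
value_by_vertex = [1.0, 0.5, 2.0]

# get_edgelist() returns (source, target) pairs in edge-ID order, so the list
# below lines up with g.es and can be written in a single bulk assignment.
g.es['capacity'] = [value_by_vertex[src] for src, _ in g.get_edgelist()]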

Is there a better way to do this update?

In case someone else is looking for help, or has a better solution: after reviewing the documentation I went with the following changes.

    def update_capacity(self, components, comp_sample_func):
        # components is an ordered mapping, so comp_index lines up with the
        # ordering used for comp_sample_func (Python 2 iteritems/iterkeys).
        for comp_index, (comp_id, component) in enumerate(components.iteritems()):
            for dest_index, dest_comp_id in enumerate(component.destination_components.iterkeys()):
                # Dependency components scale the sampled value of their destinations.
                if component.node_type == 'dependency':
                    comp_sample_func[dest_index] *= comp_sample_func[comp_index]

                # The edge capacity is taken from the source component's value.
                edge_id = self.comp_graph.get_eid(comp_id, dest_comp_id)
                self.comp_graph.es[edge_id]['capacity'] = comp_sample_func[comp_index]

I created the nodes in the same order as my ordered dictionary and then retrieved the vertices by their indices. This gave a 10-20% improvement.
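
A rough sketch of that idea, with illustrative names rather than the original model: the vertices are added in the same order as the ordered dictionary, so a component's position in the dictionary doubles as its vertex index and no name-based lookup is needed.

from collections import OrderedDict
import igraph as ig

components = OrderedDict([('pump', {}), ('valve', {}), ('tank', {})])

g = ig.Graph(directed=True)
g.add_vertices(len(components))
g.vs['name'] = list(components.keys())

# Insertion order is preserved, so the enumeration index of each component is
# also its vertex index in g.
for comp_index, comp_id in enumerate(components):
    assert g.vs[comp_index]['name'] == comp_id

With the order fixed like this, comp_sample_func can be indexed directly by comp_index instead of searching for each component by name on every edge.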
