
Memory utilization in recursive vs. iterative graph traversal

I have looked at some common tools like heapy to measure how much memory is being utilized by each traversal technique, but I don't know whether they are giving me the right results. Here is some code to give the context.

The code simply counts the number of unique nodes in a graph. Two traversal techniques are provided, viz. count_bfs and count_dfs:

import sys
from guppy import hpy
class Graph:
    def __init__(self, key):
        self.key = key       #unique id for a vertex 
        self.connections = []
        self.visited = False 

def count_bfs(start):
    parents = [start]
    children = []
    count = 0
    while parents:
        for ind in parents:
            if not ind.visited:
                count += 1
                ind.visited = True
                for child in ind.connections:
                    children.append(child)

        parents = children
        children = []
    return count

def count_dfs(start):
    if not start.visited:
        start.visited = True
    else:
        return 0

    n = 1
    for connection in start.connections:
        n += count_dfs(connection)
    return n


def construct(file, s=1): 
    """Constructs a Graph using the adjacency matrix given in the file

    :param file: path to the file with the matrix
    :param s: starting node key. Defaults to 1

    :return start vertex of the graph
    """
    d = {}
    f = open(file,'rU')
    size = int(f.readline())
    for x in xrange(1,size+1):
        d[x] = Graph(x)
    start = d[s]
    for i in xrange(0,size):
        l = map(int, f.readline().split())
        node = l[0]
        for child in l[1:]:
            d[node].connections.append(d[child])
    return start


if __name__ == "__main__":
    s = construct(sys.argv[1])
    #h = hpy()
    print(count_bfs(s))
    #print h.heap()
    s = construct(sys.argv[1])
    #h = hpy()
    print(count_dfs(s))
    #print h.heap()

I want to know by what factor the total memory utilization differs between the two traversal techniques, viz. count_dfs and count_bfs. One might have the intuition that DFS may be more expensive, since a new stack frame is created for every function call. How can the total memory allocated by each traversal technique be measured?
Do the (commented-out) hpy statements give the desired measure?

Sample file with connections:

4
1 2 3
2 1 3
3 4 
4

This being a Python question, it may be more important how much stack space is used than how much total memory. CPython has a low limit of 1000 frames because it shares its call stack with the C call stack, which in turn is limited to the order of one megabyte on most platforms. For this reason you should almost* always prefer iterative solutions to recursive ones when the recursion depth is unbounded.

* Other implementations of Python may not have this restriction; the stackless variants of CPython and PyPy, for example, do not tie Python frames to the C call stack.
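For illustration, here's a rough, untested sketch of how that frame limit shows up on a deep chain; the chain_depth helper and the dict-based nodes are made up purely for the demo:

import sys

def chain_depth(node):
    # One stack frame per node in the chain; blows up past the frame limit.
    if node is None:
        return 0
    return 1 + chain_depth(node['next'])

# Build a chain deeper than the default limit (usually 1000 frames).
head = None
for _ in range(2000):
    head = {'next': head}

print(sys.getrecursionlimit())      # typically 1000
try:
    print(chain_depth(head))
except RuntimeError as e:           # RecursionError in Python 3 is a subclass
    print('recursion limit hit: %s' % e)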

It's tough to measure exactly how much memory is being used, because systems vary in how they implement stack frames. Generally speaking, recursive algorithms use far more memory than iterative algorithms, because each function call adds a stack frame that must store the state of its local variables. Consider the difference between dynamic programming solutions and naive recursive solutions; runtime is also typically far faster for an iterative implementation of an algorithm than for a recursive one.
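As a rough, untested sketch of that point, the recursive count_dfs from the question could be rewritten with an explicit stack, so the bookkeeping lives in one list instead of one Python frame per node (count_dfs_iterative is just a made-up name; it assumes the Graph class from the question):

def count_dfs_iterative(start):
    # Counts reachable nodes like count_dfs, but keeps the pending nodes
    # on an explicit list instead of the call stack.
    stack = [start]
    count = 0
    while stack:
        node = stack.pop()
        if node.visited:
            continue
        node.visited = True
        count += 1
        for child in node.connections:
            if not child.visited:
                stack.append(child)
    return count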

If you really must know how much memory your code uses, load your software in a debugger such as OllyDbg (http://www.ollydbg.de/) and count the bytes. Happy coding!

For your specific problem, I don't know if there's going to be an easy solution. That's because the peak memory usage of a graph traversal depends on the details of the graph itself.

For a depth-first traversal, the greatest usage will come when the algorithm has descended to the deepest level. In your example graph, it will traverse 1->2->3->4, creating a stack frame for each level, so while it is at 4 it has allocated the most memory.

For the breadth-first traversal, the memory used will be proportional to the number of nodes at each depth plus the number of child nodes at the next depth. Those values are stored in lists, which are probably more efficient than stack frames. In the example, since the first node is connected to all the others, the peak happens immediately, during the first step: [1] -> [2, 3, 4].
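To make that concrete, here's a rough sketch that skips the memory profiler entirely and just tracks the peak combined length of the parents and children lists as a proxy for the frontier size (count_bfs_frontier is a made-up name; it assumes the question's Graph class):

def count_bfs_frontier(start):
    # Same traversal as the question's count_bfs, but also reports the
    # largest combined size the parents/children lists ever reach.
    parents = [start]
    children = []
    count = 0
    peak_frontier = 1
    while parents:
        for ind in parents:
            if not ind.visited:
                count += 1
                ind.visited = True
                for child in ind.connections:
                    children.append(child)
            peak_frontier = max(peak_frontier, len(parents) + len(children))
        parents = children
        children = []
    return count, peak_frontier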

I'm sure there are some graphs that will do much better with one search or the other.

For example, imagine a graph that looks like a linked list, with all the vertices in a single long chain. The depth-first traversal will have a very high peak memory usage, since it will recurse all the way down the chain, allocating a stack frame for each level. The breadth-first traversal will use much less memory, since it only has a single vertex with a single child to keep track of at each step.

Now, contrast that with a graph that is a depth-2 tree. That is, there's a single root element that is connected to a great many children, none of which are connected to each other. The depth-first traversal will not use much memory at any given time, as it only needs to traverse two nodes before it has to back up and try another branch. The breadth-first traversal, on the other hand, will be putting all of the child nodes in memory at once, which for a big tree could be problematic.
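If you want to experiment with those two extremes, here's a small untested helper sketch that builds them directly with the question's Graph class (make_chain and make_star are made-up names), instead of going through an adjacency-matrix file:

def make_chain(n):
    # 1 -> 2 -> ... -> n: the worst case for the recursive DFS.
    nodes = [Graph(i) for i in range(1, n + 1)]
    for a, b in zip(nodes, nodes[1:]):
        a.connections.append(b)
    return nodes[0]

def make_star(n):
    # One root connected to n - 1 leaves: the worst case for BFS's child list.
    nodes = [Graph(i) for i in range(1, n + 1)]
    root = nodes[0]
    root.connections.extend(nodes[1:])
    return root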

Your current profiling code won't find the peak memory usage you want, because it only finds the memory used by objects on the heap at the time you call heap(). That's likely to be the same before and after your traversals. Instead, you'll need to insert profiling code into the traversal functions themselves. I can't find a pre-built package of guppy to try it myself, but I think this untested code will work:

from guppy import hpy

def count_bfs(start):
    hp = hpy()
    base_mem = hp.heap().size          # heap size before the traversal starts
    max_mem = 0
    parents = [start]
    children = []
    count = 0
    while parents:
        for ind in parents:
            if not ind.visited:
                count += 1
                ind.visited = True
                for child in ind.connections:
                    children.append(child)
        # Sample the heap once per level and remember the peak.
        mem = hp.heap().size - base_mem
        if mem > max_mem:
            max_mem = mem
        parents = children
        children = []
    return count, max_mem

def count_dfs(start, hp=hpy(), base_mem=None):
    if base_mem is None:
        base_mem = hp.heap().size      # heap size at the root call

    # Sample the heap at the current recursion depth.
    max_mem = hp.heap().size - base_mem

    if not start.visited:
        start.visited = True
    else:
        return 0, max_mem

    n = 1
    for connection in start.connections:
        c, mem = count_dfs(connection, hp, base_mem)
        if mem > max_mem:
            max_mem = mem
        n += c
    return n, max_mem

Both traversal functions now return a (count, max-memory-used) tuple. You can try them out on a variety of graphs to see what the differences are.
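For example, the main block from the question could then be adapted along these lines (untested; the output formatting is just for illustration):

if __name__ == "__main__":
    s = construct(sys.argv[1])
    count, peak = count_bfs(s)
    print("BFS: %d nodes, ~%d bytes of extra heap at peak" % (count, peak))

    s = construct(sys.argv[1])   # rebuild the graph so the visited flags reset
    count, peak = count_dfs(s)
    print("DFS: %d nodes, ~%d bytes of extra heap at peak" % (count, peak))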

Of the two, depth-first uses less memory if most traversals end up hitting most of the graph.

Breadth-first can be better when the target is near the starting node, or when the number of nodes doesn't grow very quickly, so the parents/children lists in your code stay small (e.g. another answer mentioned a linked-list-shaped graph as the worst case for DFS).

If the graph you're searching is spatial data, or has what's known as an "admissible heuristic," A* is another algorithm that's pretty good: http://en.wikipedia.org/wiki/A_star

However, premature optimization is the root of all evil. Look at the actual data you want to use; if it fits in a reasonable amount of memory, and the search runs in a reasonable time, it doesn't matter which algorithm you use. NB, what's "reasonable" depends on the application you're using it for and the amount of resources on the hardware that will be running it.

For either search order implemented iteratively with the standard data structure describing it (a queue for BFS, a stack for DFS), I can trivially construct a graph that uses O(n) memory. For BFS it's an n-star, and for DFS it's an n-chain. I don't believe either of them can be implemented for the general case to do better than that, which also gives an Omega(n) lower bound on maximum memory usage. So, with efficient implementations of each, it should generally be a wash.

Now, if your input graphs have some characteristics that bias them more toward one of those extremes or the other, that might inform your decision on which to use in practice.
