
Is the complexity of Dijkstra's correct?

I have a question regarding the runtime complexity of Dijkstra's algorithm (see the pseudocode from CLRS, 3rd edition):

DIJKSTRA(G, w, s)
1 INITIALIZE-SINGLE-SOURCE(G, s)
2 S ← ∅ 
3 Q ← V[G]
4 while Q != ∅ 
5   do u ← EXTRACT-MIN(Q)
6   S ← S ∪ {u} 
7   for each vertex v ∈ Adj[u]
8     do RELAX(u, v, w)

I understand that line 3 is O(V); line 5 is O(V log V) in total; line 7 is O(E) in total; and line 8 implies decrease_key(), so O(log V) per RELAX() operation. But inside RELAX(), after checking d[v] > d[u] + w(u, v) and deciding to relax, shouldn't we first look up the position of v in the queue Q before we can call decrease_key(Q, pos, d[v]) to replace the key at pos with d[v]? Note that this lookup itself costs O(V), so each RELAX() should cost O(V), not O(log V), right?
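
To make the concern concrete, here is a rough sketch of the lookup I have in mind (my own C++, not from CLRS; Vertex and find_position are just illustrative names):

#include <vector>

struct Vertex {
    int id;        // vertex number
    double dist;   // tentative distance, used as the heap key
};

// Linear scan over the heap's backing array to locate vertex v.
// This is the O(V) lookup the question is about.
int find_position(const std::vector<Vertex>& heap, int v) {
    for (int i = 0; i < static_cast<int>(heap.size()); ++i)
        if (heap[i].id == v) return i;
    return -1;  // v is no longer in the queue (already extracted)
}

// inside RELAX(u, v, w):
//   if (dist[u] + w < dist[v]) {
//       dist[v] = dist[u] + w;
//       int pos = find_position(heap, v);   // O(V) lookup
//       decrease_key(heap, pos, dist[v]);   // O(log V) sift-up
//   }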

A related question regarding space complexity: to compare the vertices in the queue Q, I designed a struct/class vertex with the distance as one member and implemented operator< to order vertices by comparing their distances. But it seems I still have to define a separate array dist[] in order to do dist[v] = dist[u] + weight inside RELAX(). If I do not define that separate array, I have to look up the positions of v and u in the queue Q and then read and compare their distances there. Is it supposed to work this way, or is my implementation just not good?
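
For reference, this is roughly what my setup looks like (heavily simplified, with illustrative names; the commented-out decrease_key is the call I am unsure about):

#include <vector>

struct Vertex {
    int id;
    double dist;   // distance, used as the sort key
    bool operator<(const Vertex& o) const { return dist < o.dist; }
};

std::vector<double> dist;   // the duplicate array in question, indexed by vertex id

// RELAX with the duplicate dist[] array: distances are read and written
// here in O(1); only the queue still needs to learn about the new key.
void relax(int u, int v, double w /*, the queue Q */) {
    if (dist[v] > dist[u] + w) {
        dist[v] = dist[u] + w;
        // decrease_key(Q, position_of_v, dist[v]);   // needs v's position in Q
    }
}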

Dijkstra's algorithm (as you wrote it) does not have a runtime complexity until you specify the data structures. You are somewhat right in saying that "line 7" accounts for O(E) operations, but let's go through the lines (fortunately, Dijkstra is "easy" to analyze).

  1. Initializing means giving all vertices an infinite distance, except for the source, which gets distance 0. Pretty easy; this can be done in O(V).

  2. What is the set S good for? You use it "write only".

  3. You put all elements into a queue. Here be dragons. What is a (priority!) queue? A data structure with the operations add, optionally decreaseKey (needed for Dijkstra), remove (not needed for Dijkstra), and extractMin. Depending on the implementation, these operations have certain runtimes. For example, you can build a dumb PQ that is just a (marking) set: then adding and decreasing a key are constant time, but to extract the minimum you have to search. The canonical solution for Dijkstra is to use a queue (like a heap) that implements all relevant operations in O(log n). Let's analyze that case, although technically speaking a Fibonacci heap would be better. Don't implement the queue on your own; it's amazing how much you can save by using a real PQ implementation (a concrete sketch follows after this list).

  4. You go through the loop n = |V| times.

  5. Every time, you extract the minimum, which takes O(n log n) in total (over all iterations).

  6. What is the set S good for?

  7. You go through the edges of each vertex at most once, i.e. you touch each edge at most twice, so in total you do whatever happens inside the loop O(E) times.

  8. Relaxing means checking whether you have to decrease a key and, if so, doing it. We already know that each such operation can cost O(log V) in the queue (if it is a heap), and we have to do it O(E) times, so that is O(E log V), which dominates the total runtime.
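
To make those numbers concrete, here is a minimal sketch in C++ (my own illustration, not from CLRS). std::priority_queue has no decreaseKey, so instead of decreasing a key it simply pushes a fresh entry and skips stale ones when they surface (often called lazy deletion); the bound is still O((V + E) log V) because the heap never holds more than O(E) entries:

#include <queue>
#include <vector>
#include <limits>
#include <utility>

using Edge  = std::pair<int, double>;           // (neighbour, weight)
using Graph = std::vector<std::vector<Edge>>;   // adjacency list

std::vector<double> dijkstra(const Graph& g, int s) {
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<double> dist(g.size(), INF);    // line 1: initialization, O(V)
    dist[s] = 0.0;
    using Item = std::pair<double, int>;        // (distance, vertex)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;  // min-heap
    pq.push({0.0, s});                          // line 3
    while (!pq.empty()) {                       // line 4
        auto [d, u] = pq.top();                 // line 5: extractMin, O(log V) each
        pq.pop();
        if (d > dist[u]) continue;              // stale entry, skip it (lazy deletion)
        for (auto [v, w] : g[u]) {              // line 7: each edge handled O(1) times
            if (dist[u] + w < dist[v]) {        // line 8: RELAX
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});          // substitute for decreaseKey, O(log V)
            }
        }
    }
    return dist;                                // total: O((V + E) log V)
}

If you want a genuine decreaseKey, as in the analysis above, you need a heap that remembers where each vertex currently sits (an indexed binary heap, or a mutable heap from an existing library), which is exactly what avoids the O(V) lookup from the question.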

If you take a Fibonacci heap, you can go down to O(V log V + E), but that's academic. Real implementations tune heaps. If you want to know your implementation's performance, analyze the PQ operations. But as I said, it's better to use existing implementations if you don't know exactly what you're doing. Your idea of "looking up a position before calling decreaseKey" tells me you should dig deeper into that topic before you end up with an implementation that effectively takes O(V) per insert (by re-sorting every time decreaseKey is called) or O(V) per extractMin (by finding the minimum on demand).
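
For illustration, with an existing mutable heap the "position" is a handle you get back from push, so there is nothing to look up; a sketch assuming Boost.Heap's fibonacci_heap (Node, relax and the handle array are illustrative names):

#include <boost/heap/fibonacci_heap.hpp>
#include <vector>

struct Node {
    int v;
    double d;
    // Boost heaps are max-heaps, so invert the comparison to get a min-heap.
    bool operator<(const Node& other) const { return d > other.d; }
};
using Heap = boost::heap::fibonacci_heap<Node>;

// Relax step with per-vertex handles instead of a position search.
// handle[v] is the handle that pq.push(...) returned when v was inserted.
void relax(int u, int v, double w,
           std::vector<double>& dist,
           std::vector<Heap::handle_type>& handle,
           Heap& pq) {
    if (dist[u] + w < dist[v]) {
        dist[v] = dist[u] + w;
        pq.update(handle[v], Node{v, dist[v]});   // key updated via handle, no O(V) search
    }
}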
