Is the complexity of Dijkstra's correct?

I have a question regarding the runtime complexity of Dijkstra's algorithm (see the pseudocode in CLRS, 3rd edition):

DIJKSTRA(G, w, s)
1 INITIALIZE-SINGLE-SOURCE(G, s)
2 S ← ∅ 
3 Q ← V[G]
4 while Q ≠ ∅
5     do u ← EXTRACT-MIN(Q)
6        S ← S ∪ {u}
7        for each vertex v ∈ Adj[u]
8            do RELAX(u, v, w)

I understand that line 3 is O(V) and line 5 is O(V log V) in total; line 7 is O(E) in total, and line 8 implies decrease_key(), so each Relax() operation costs O(log V). But inside Relax(), once d[v] > d[u] + weight and we decide to relax, shouldn't we first look up the position of v in the queue Q before we can call decrease_key(Q, pos, d[v]) to replace the key at pos with d[v]? Note that this lookup itself costs O(V), so shouldn't each Relax() cost O(V) rather than O(log V)?

A question regarding space complexity: to compare the vertices in the queue Q, I designed a struct/class Vertex with the distance as one member, and I implemented operator< to order vertices by comparing their distances. But it seems I also have to define a duplicate array dist[] in order to do dist[v] = dist[u] + weight in Relax(). If I do not define the duplicate array, I have to look up the positions of v and u in the queue Q and then fetch and compare their distances. Is it supposed to work this way, or is my implementation just not good?
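
A rough sketch of what I mean (the names Vertex and dist are my own, not taken from CLRS, and the details are only illustrative):

#include <vector>

// vertex record kept in the priority queue Q
struct Vertex {
    int id;
    int distance;                             // copy of the key the queue orders by
    bool operator<(const Vertex& other) const {
        return distance < other.distance;     // compare vertices by their distance
    }
};

// the seemingly duplicate distance array that Relax() updates:
//   if (dist[u] + weight < dist[v]) dist[v] = dist[u] + weight;
std::vector<int> dist;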

Dijkstra's algorithm (as you wrote it) does not have a runtime complexity unless you specify the data structures. You are somewhat right in saying that line 7 accounts for O(E) operations, but let's go through the lines (fortunately, Dijkstra is "easy" to analyze).

  1. Initializing means giving all vertices an infinite distance, except for the source, which gets distance 0. Pretty easy; this can be done in O(V).

  2. What is the set S good for? You only ever write to it.

  3. You put all elements into a queue. Here be dragons: what is a (priority!) queue? A data structure with the operations add, optionally decreaseKey (needed for Dijkstra), remove (not needed for Dijkstra), and extractMin. Depending on the implementation, these operations have certain runtimes. For example, you can build a dumb PQ that is just a (marking) set: then adding and decreasing a key take constant time, but to extract the minimum you have to search. The canonical choice for Dijkstra is a queue (like a binary heap) that implements all relevant operations in O(log n); let's analyze that case, although technically speaking a Fibonacci heap would be better. Don't implement the queue on your own. It's amazing how much you can save by using a real PQ implementation.

  4. You go through the loop n times.

  5. Every time, you extract the minimum; each extraction is O(log n), so this is O(n log n) in total over all iterations.

  6. What is the set S good for?

  7. You go through the edges of each vertex at most once, i.e. you touch each edge at most twice, so in total whatever happens inside the loop is executed O(E) times.

  8. Relaxing means checking whether you have to decrease a key and doing so if necessary. We already know that each such operation can add O(log V) in the queue (if it is a heap), and we have to do it O(E) times, so that is O(E log V), which dominates the total runtime. (A sketch of how the heap avoids any linear position search follows below.)
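
To make point 8 concrete, here is a minimal sketch (C++, with names of my own choosing, not a drop-in library) of how a binary heap avoids the O(V) position search you were worried about: keep a pos[] array that maps each vertex to its current index in the heap array and update it on every swap, so decrease_key finds the vertex in O(1) and then sifts it up in O(log V).

#include <utility>
#include <vector>

struct IndexedMinHeap {
    std::vector<std::pair<int, int>> heap;  // (key, vertex)
    std::vector<int> pos;                   // pos[vertex] = index in heap, -1 if not present

    explicit IndexedMinHeap(int n) : pos(n, -1) {}

    void swap_nodes(int i, int j) {
        std::swap(heap[i], heap[j]);
        pos[heap[i].second] = i;            // keep the position map in sync on every swap
        pos[heap[j].second] = j;
    }

    void sift_up(int i) {
        while (i > 0 && heap[(i - 1) / 2].first > heap[i].first) {
            swap_nodes(i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }

    void sift_down(int i) {
        int n = static_cast<int>(heap.size());
        for (;;) {
            int smallest = i, l = 2 * i + 1, r = 2 * i + 2;
            if (l < n && heap[l].first < heap[smallest].first) smallest = l;
            if (r < n && heap[r].first < heap[smallest].first) smallest = r;
            if (smallest == i) break;
            swap_nodes(i, smallest);
            i = smallest;
        }
    }

    void push(int key, int v) {             // O(log V)
        heap.push_back({key, v});
        pos[v] = static_cast<int>(heap.size()) - 1;
        sift_up(pos[v]);
    }

    int extract_min() {                     // O(log V); returns the vertex with the smallest key
        int v = heap.front().second;
        swap_nodes(0, static_cast<int>(heap.size()) - 1);
        heap.pop_back();
        pos[v] = -1;
        if (!heap.empty()) sift_down(0);
        return v;
    }

    void decrease_key(int v, int new_key) {
        int i = pos[v];                     // O(1) lookup instead of an O(V) search
        heap[i].first = new_key;
        sift_up(i);                         // O(log V)
    }
};

With that bookkeeping, every Relax() really is O(log V); there is no linear scan anywhere.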

If you take a Fibonacci heap, you can go down to O(V log V + E), but that's academic; real implementations tune their heaps. If you want to know your implementation's performance, analyze the PQ operations. But as I said, it's better to use an existing implementation if you don't know exactly what you are doing. Your idea of "looking up a position before calling decreaseKey" tells me you should dig deeper into that topic before you end up with an implementation that effectively takes O(V) per insert (by sorting every time decreaseKey is called) or O(V) per extractMin (by finding the minimum on demand).
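
As a concrete example of that advice, here is a sketch of the usual workaround when your priority queue offers no decrease_key at all (for example std::priority_queue): keep dist[] outside the queue, exactly as you describe in the question, and push a fresh (distance, vertex) pair on every successful relaxation, skipping stale entries when they are popped. The queue can grow to O(E) entries, but the total running time is still O(E log E) = O(E log V). The adjacency-list type and names are illustrative, not the only way to do it.

#include <functional>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

// adj[u] is a list of (neighbor, weight) pairs; s is the source vertex
std::vector<int> dijkstra(const std::vector<std::vector<std::pair<int, int>>>& adj, int s) {
    const int INF = std::numeric_limits<int>::max();
    std::vector<int> dist(adj.size(), INF);
    dist[s] = 0;

    using Item = std::pair<int, int>;       // (distance, vertex), ordered as a min-heap
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> pq;
    pq.push({0, s});

    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d != dist[u]) continue;         // stale copy: u was already settled with a smaller key
        for (auto [v, w] : adj[u]) {
            if (dist[u] + w < dist[v]) {    // Relax(u, v, w)
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});      // push a new copy instead of decrease_key
            }
        }
    }
    return dist;
}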
