
Why are operations in an adjacency list O(|E|/|V|)?

I'm studying for an exam I have soon. A chart provided to me summarizes the following algorithmic complexities for an adjacency list, for a graph with N nodes and E edges:

  • Find edge - O(E/N)

  • Insert edge - O(E/N)

  • Delete edge - O(E/N)

  • Enumerate edges for node - O(E/N)

I understand what an adjacency list is - we store the vertices adjacent to each vertex by using an array of lists. But why are these operations O(E/N)? It seems to me that if we took a graph in which every possible edge is drawn (e.g., we have N(N - 1)/2 edges if the graph is undirected), then each list in the array would have N - 1 entries, one for every other node.
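For example, here is a minimal Python sketch of that complete-graph case (the names are just mine, for illustration):

    # Minimal sketch: adjacency list for a complete undirected graph on N nodes.
    N = 5
    adj = [[] for _ in range(N)]           # one list of neighbours per vertex

    for u in range(N):
        for v in range(u + 1, N):
            adj[u].append(v)               # each undirected edge appears in both lists
            adj[v].append(u)

    E = sum(len(lst) for lst in adj) // 2  # N*(N - 1)/2 edges in total
    print(E)                               # 10
    print([len(lst) for lst in adj])       # [4, 4, 4, 4, 4] -> every list has N - 1 entries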

This, in my mind, would be the "worst case," wouldn't it? I don't understand how the ratio of edges to nodes is being obtained.

Can someone please explain?

I believe this question is very similar to this other question here on stackoverflow; please refer to it, since it may already answer your question. For completeness' sake I'll try to summarize what I understand about the topic too, but I'm no authority on the subject, so feel free to correct me if I'm saying anything wrong:

From what I can understand, you are questioning why the chart says an operation is O(E/N) when it is well known that the worst case is O(N). Well, there are two issues here:

  1. You're assuming that big O means "worst case input" but, by definition, we can't assume this.
  2. The chart says only O(E/N) and, as @domen commented, the text should be clearer and indicate what input case it is considering.

A quick answer here is that big O can be used to "talk" about both cases. It will be O(E/N) when we are talking about the average input, and it will be O(N) when we are talking about the worst input. (The ratio E/N comes from the adjacency list itself: the N lists hold E entries in total, or 2E if the graph is undirected, so the average list length is on the order of E/N.)
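To make the two cases concrete, here is a rough Python sketch (the graph and the helper find_edge are my own made-up example, not from your chart): finding an edge means scanning one vertex's list, so the cost is that list's length, which is about E/N on average but can reach N - 1 for a single unlucky vertex.

    # Rough sketch: the cost of "find edge (u, v)" is the length of u's list,
    # because we scan that single list, not the whole structure.
    def find_edge(adj, u, v):
        return v in adj[u]  # linear scan of one adjacency list

    # A small sparse directed graph with N = 6 nodes and E = 7 edges (made-up data).
    adj = {
        0: [1, 2],
        1: [3],
        2: [3],
        3: [4, 5],
        4: [],
        5: [0],
    }
    N = len(adj)
    E = sum(len(neighbours) for neighbours in adj.values())

    print(find_edge(adj, 0, 2))                 # True
    print(E / N)                                # average list length: 7/6, about 1.17
    print(max(len(nb) for nb in adj.values()))  # longest single list: 2 (up to N - 1 in general)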

Now let's see a longer answer addressing each of the enumerated issues. According to the book "Introduction to Algorithms", we can define big O as:

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 <= f(n) <= cg(n) for all n >= n0}

Note that the definition doesn't say anything about the worst case; it just says that if we have a function f(n) and we can provide constants c and n0 such that 0 <= f(n) <= c*g(n) for every n >= n0, then f(n) is in O(g(n)). So forget about the worst case here: if we can provide a function f(n), a constant c and an n0 that don't violate the above definition, then f(n) is in O(g(n)).
Here we are just talking about the upper bound for that one input case, which could be the worst input, the average input or any other input case.
If an algorithm has worst-case cost w(n) and average-case cost a(n), where there exist c', n'0 such that 0 <= w(n) <= c'*n for every n >= n'0, and there exist c'', n''0 such that 0 <= a(n) <= c''*(e/n) for every n >= n''0, then we can say that the algorithm is O(n) in the worst case and O(e/n) in the average case.
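As a concrete instance of those two bounds for the adjacency list (my own made-up numbers, not from your chart): take the cost of "find edge (u, v)" to be the length of u's list. In the worst case u is connected to every other vertex, so w(n) = n - 1, and with c' = 1 and n'0 = 1 we have 0 <= n - 1 <= 1*n for all n >= 1, i.e. O(n). On average, the n lists share 2e entries in an undirected graph, so a(n) = 2e/n, and with c'' = 2 and n''0 = 1 we have 0 <= 2e/n <= 2*(e/n), i.e. O(e/n).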

If the chart doesn't specify which f(n) it is considering (the worst case or the average case, for example), then we cannot affirm anything; the chart must be more specific.
The common behavior is to assume that the text is referring to the worst-case input, and that's probably why we relate big O to the worst case. Most of the time this assumption is right, but sometimes (as with the chart you mentioned) it's wrong.
