
O(n^2) complexity

Which one of the following has O(n^2) complexity?

public boolean findDuplicates(int[] inputData) {
    for (int i = 0; i < inputData.length; i++) {
        for (int j = 0; j < inputData.length; j++) {
            // Compare every pair of positions, skipping the case where
            // an element is compared with itself.
            if (inputData[i] == inputData[j] && i != j) {
                return true;
            }
        }
    }
    return false;
}

vs

public boolean findDuplicates(int[] inputData) {
    for (int i = 0; i < inputData.length; i++) {
        for (int j = 0; j < inputData.length; j++) {
            System.out.println("...");
        }
    }
    return false;
}

Does the if (inputData[i] == inputData[j] && i != j) { return true; } in the first loop break the O(n^2) complexity? As I see it, I will match only 2 elements if the length of the inputData array is 2.

I'm sorry if this is a noob question, but what I don't understand is whether complexity refers to the total number of elements iterated or the total number of times the condition is satisfied.

And how about this one (assuming we don't have to calculate the exact complexity, and that we ignore any index out of bounds in the inner loop): is this

public boolean findDuplicates(int[] inputData) {
    for (int i = 0; i < inputData.length; i++) {
        for (int j = 1; j < inputData.length; j++) {
            ....
        }
    }
    return false;
}

still O(n^2)?

Both of the methods you posted have O(n^2) complexity. The conditional inside the first one doesn't change the big O.

Big O notation describes the limiting behavior of a function when the argument tends towards a particular value or infinity.
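
To see this concretely, here is a minimal sketch (the class and helper names are mine, not from the posts) that counts how often the inner loop body runs for a duplicate-free array; the count is exactly n * n whether or not the conditional is present, because it never fires:

public class ComplexityDemo {
    // Counts inner-loop executions of the first posted method on an
    // array with no duplicates (the worst case for the early return).
    static long countIterations(int[] inputData) {
        long count = 0;
        for (int i = 0; i < inputData.length; i++) {
            for (int j = 0; j < inputData.length; j++) {
                count++;
                if (inputData[i] == inputData[j] && i != j) {
                    return count; // never taken for distinct elements
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1000}) {
            int[] data = new int[n];
            for (int i = 0; i < n; i++) data[i] = i; // all distinct
            System.out.println(n + " -> " + countIterations(data)); // prints n * n
        }
    }
}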

I think it is fairly clear to you that the second scenario will have O(n^2) time complexity.

In the first case, if you can always ensure that you will find a duplicate within the first k iterations of the outer loop, where k is a constant that does not depend on n, it will have a complexity of O(n) [since O(kn), where k is a constant, however large, but known, is O(n)].

However, if this k depends on n in any manner (say, the first repeating element always sits in the first half of the array), or if a match cannot be guaranteed for every run, then the complexity will be O(n^2) [O(n*n/k) = O(n^2) where k is a constant; here, k is an arbitrary constant describing what fraction of the array you have to go through before finding the first index of a repeating element].
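
To make the constant-k case concrete, here is a minimal sketch (the class name, helper name, and input layout are my own assumptions, not from the question). With the only duplicate pair fixed at indices 3 and 4, the scan does roughly 3n + 5 steps before returning, i.e. linear growth:

public class EarlyReturnDemo {
    // Same pairwise scan as the first posted method, but counting steps.
    static long stepsUntilDuplicate(int[] inputData) {
        long steps = 0;
        for (int i = 0; i < inputData.length; i++) {
            for (int j = 0; j < inputData.length; j++) {
                steps++;
                if (inputData[i] == inputData[j] && i != j) {
                    return steps;
                }
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1000}) {
            int[] data = new int[n];
            for (int i = 0; i < n; i++) data[i] = i;
            data[4] = data[3]; // the only duplicate, at fixed indices 3 and 4
            // Prints roughly 3 * n + 5: linear growth, i.e. O(kn) = O(n).
            System.out.println(n + " -> " + stepsUntilDuplicate(data));
        }
    }
}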

EDIT:

Did not notice your edit earlier. Yes, the third case is also O(n^2). You can also do the following optimization:

public boolean findDuplicates(int[] arr) {
    for (int i = 0; i < arr.length; i++) {
        for (int j = i + 1; j < arr.length; j++) {
            ....
        }
    }
    return false;
}

The above version also has a complexity of O(n^2): the inner loop body runs (n-1) + (n-2) + ... + 1 + 0 times in total, which is n(n-1)/2, which is O(n^2).
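
A quick sketch to sanity-check that sum (the counting harness is my own, not part of the original answer):

public class TriangularCountDemo {
    public static void main(String[] args) {
        for (int n : new int[] {10, 100, 1000}) {
            long count = 0;
            for (int i = 0; i < n; i++) {
                for (int j = i + 1; j < n; j++) {
                    count++; // one pass through the inner loop body
                }
            }
            // Matches n * (n - 1) / 2 exactly; still quadratic growth.
            System.out.println(n + " -> " + count + " vs " + ((long) n * (n - 1) / 2));
        }
    }
}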

All of the above loops are O(n^2).
Algorithms are normally analyzed in best, average and worst case scenarios.

For your loops with the condition:
Worst case: O(n^2).
Best case: constant time, because the best scenario would be inputData[0] == inputData[1].
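
For instance, in a minimal sketch (the class name and input values are my own choice), a duplicate right at the front makes the first posted method stop after just two inner-loop steps, no matter how large the array is:

public class BestCaseDemo {
    public static void main(String[] args) {
        int n = 1_000_000;
        int[] bestCase = new int[n];
        for (int i = 0; i < n; i++) bestCase[i] = i;
        bestCase[1] = bestCase[0]; // duplicate at indices 0 and 1
        long steps = 0;
        for (int i = 0; i < bestCase.length; i++) {
            for (int j = 0; j < bestCase.length; j++) {
                steps++;
                if (bestCase[i] == bestCase[j] && i != j) {
                    // Found at i = 0, j = 1: two steps regardless of n.
                    System.out.println("found after " + steps + " steps");
                    return;
                }
            }
        }
    }
}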

For your loops without the condition:
Now it becomes just a nested array traversal, so both the worst and best cases will be O(n^2).

Overall, worst case performance is used for evaluating algorithms, but some algorithms (e.g. Quicksort) perform much better in the average case than in the worst case.
