
Big O: is this the tightest upper bound for a recursive algorithm?

Is the time complexity of my algorithm below O(|2^(2 + log3(n)) - 1|)?

And is there a more elegant way to write it?

int cantor(int low, int high) {
    // Split [low, high] into thirds and recurse on the two outer
    // thirds, mirroring the Cantor-set construction.
    int gap = (high - low) / 3;

    if (high < low)
        return 0;           // empty range
    else if (high == low)
        return low;         // single point: base case
    else
        return cantor(low, low + gap) + cantor(high - gap, high);
}

Running the Java program below produces the values shown here, including the critical points, where n is the integer input, o is the number of operations (recursive calls), and b is the claimed upper bound (which must satisfy b >= o):

n    o    b 

0     1     1.0   <- critical point
1     3     3.0   <- critical point
2     3     5.194250610520971
3     7     7.0   <- critical point
4     7     8.592185156484856
5     7     10.04233615383682
6     7     11.388501221041942
7     7     12.65392426064557
8     7     13.85409969044663
9     15    15.0  <- critical point
10    15    16.099749365620383
11    15    17.159572545935887
12    15    18.184370312969712
13    15    19.178087273270823
14    15    20.143957171877723
15    15    21.08467230767364
16    15    22.0025040190721
17    15    22.899390537770895
18    15    23.777002442083877
19    15    24.636792344342172
20    15    25.480033236937405
21    15    26.30784852129114
22    15    27.1212358323658
23    15    27.92108616334829
24    15    28.708199380893266
25    15    29.48329693358293
26    15    30.2470323529008
27    31    31.0  <- critical point

Here is the Java code:

public class RecursionTreeTimeComplexity {

    static int calls = 0;   // counts every invocation of cantor()

    static int cantor(int low, int high) {

        calls++;

        int gap = (high - low) / 3;

        if (high < low)
            return 0;
        else if (high == low)
            return low;
        else
            return cantor(low, low + gap) + cantor(high - gap, high);
    }

    public static void main(String[] args) {

        for (int i = 0; i < 1000; i++) {
            calls = 0;
            cantor(0, i);

            // b = |2^(log3(n) + 2) - 1|
            System.out.println(i + "\t" + calls + "\t" + Math.abs(Math.pow(2, (Math.log(i) / Math.log(3)) + 2) - 1));
        }
    }
}

Saying an algorithm is O(f(n)) means that the time is roughly proportional to f(n) (as n gets large enough). [This isn't entirely accurate, because the actual time could vary quite a bit and doesn't have to be monotonically increasing. More accurately, it means there's an upper bound on the time that is roughly proportional to f(n).]

Because of this, adding constants when using O-notation is irrelevant: O(f(n) + k) is the same as O(f(n)), because eventually the f(n) part dominates and the k part becomes negligible. Also, since this is a proportion, multiplying by a constant is irrelevant: O(k*f(n)) is the same as O(f(n)), because both say the time is basically proportional to f(n). This means that the -1 in your original expression is irrelevant, and so is the 2 +, since 2^(2+x) = 4*2^x, which just multiplies by a constant. So your original expression simplifies to O(2^(log3 n)). This seems correct; as I'm sure you noticed, if T(n) is the running time of the algorithm, then basically T(n) = 2T(n/3) [very roughly, but that's good enough for this purpose], which means that if we assume T(1) = 1, then T(3) = 2, T(9) = 4, T(27) = 8, and so on.
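To sanity-check that recurrence against the program's measured call counts: counting calls gives C(n) = 2C(n/3) + 1 with C(0) = 1, whose closed form at the critical points n = 3^k is C(3^k) = 2^(k+2) - 1, exactly the values the table hits. Here is a minimal sketch (my own code, not from the question; the class and method names are invented) that compares the two:

    public class RecurrenceCheck {

        static int calls = 0;

        static int cantor(int low, int high) {
            calls++;
            int gap = (high - low) / 3;
            if (high < low)
                return 0;
            else if (high == low)
                return low;
            else
                return cantor(low, low + gap) + cantor(high - gap, high);
        }

        // Closed form of C(n) = 2*C(n/3) + 1 at n = 3^k: C(3^k) = 2^(k+2) - 1.
        static long predicted(int k) {
            return (1L << (k + 2)) - 1;
        }

        public static void main(String[] args) {
            // Compare measured call counts with the closed form at n = 1, 3, 9, ..., 729.
            for (int k = 0, n = 1; k <= 6; k++, n *= 3) {
                calls = 0;
                cantor(0, n);
                System.out.println("n=3^" + k + "=" + n + "\tmeasured=" + calls + "\tpredicted=" + predicted(k));
            }
        }
    }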

We can simplify further: log3 n = log2 n / log2 3; therefore 2^(log3 n) = 2^(log2 n / log2 3) = (2^(log2 n))^(1 / log2 3) = n^(1 / log2 3) = n^(log3 2). So the running time of the algorithm can be expressed as O(n^(log3 2)), or about O(n^0.6309).
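A quick numeric check of that identity (again just a sketch of mine; the class name is invented) confirms that 2^(log3 n) and n^(log3 2) agree for every n, up to floating-point rounding:

    public class IdentityCheck {

        public static void main(String[] args) {
            double log3of2 = Math.log(2) / Math.log(3);  // log3(2), about 0.6309
            for (int n = 1; n <= 1000; n++) {
                double lhs = Math.pow(2, Math.log(n) / Math.log(3));  // 2^(log3 n)
                double rhs = Math.pow(n, log3of2);                    // n^(log3 2)
                if (Math.abs(lhs - rhs) > 1e-9 * rhs)
                    System.out.println("mismatch at n=" + n);
            }
            System.out.println("log3(2) = " + log3of2);
        }
    }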
