
Why does my processing time drop when running the same function over and over again (with incremented values)?

I was testing a new method to replace my old one and did some speed testing. When I now look at the graph, I see that the time it takes per iteration drops drastically. [graph: processing time per iteration]

Now I'm wondering why that might be. My guess would be that my graphics card takes over the heavy work, but the first function iterates n times while the second one (the blue line) doesn't iterate at all; it just does "heavy" calculation work with doubles.

In case system details are needed:

  - OS: Mac OS X 10.10.4
  - CPU: 2.8 GHz Intel Core i7 (4 cores)
  - GPU: AMD Radeon R9 M370X 2048 MB

If you need the two functions:

New One:

private static int sumOfI(int i) {
    int factor;
    // (i + 1) / 2 is integer division, so factor_ already holds a
    // truncated value; the float intermediate changes nothing.
    float factor_ = (i + 1) / 2;

    factor = (int) factor_;

    // Even i: i * (i/2) + i/2;  odd i: i * ((i+1)/2).
    // Both equal i * (i+1) / 2, the sum 1 + 2 + ... + i.
    return (i % 2 == 0) ? i * factor + i / 2 : i * factor;
}

Old One:

private static int sumOfIOrdinary(int j) {
    // Sums 1 + 2 + ... + j with a simple loop: j additions.
    int result = 0;
    for (int i = 1; i <= j; i++) {
        result += i;
    }
    return result;
}
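For reference, the two methods compute the same triangular sum 1 + 2 + … + n = n(n+1)/2, so their results can be cross-checked directly. A minimal self-contained sketch (the class name is made up; the method bodies mirror the question's code, with the redundant float dropped, which does not change the result):

```java
public class SumCheck {
    // Closed-form version (same logic as sumOfI above).
    static int sumOfI(int i) {
        int factor = (i + 1) / 2; // integer division
        return (i % 2 == 0) ? i * factor + i / 2 : i * factor;
    }

    // Loop version (same logic as sumOfIOrdinary above).
    static int sumOfIOrdinary(int j) {
        int result = 0;
        for (int i = 1; i <= j; i++) {
            result += i;
        }
        return result;
    }

    public static void main(String[] args) {
        // The question's test fed values 0..1000 to each method.
        for (int n = 0; n <= 1000; n++) {
            if (sumOfI(n) != sumOfIOrdinary(n)) {
                throw new AssertionError("Mismatch at n = " + n);
            }
        }
        System.out.println("Both methods agree for n = 0..1000");
    }
}
```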

To clarify my question: Why does the processing time drop that drastically?

Edit: I understand at least a little bit about cost and such. I probably didn't explain my test method well enough. I have a simple for loop which, in this test, counted from 0 to 1000; I fed each value to one method and recorded the time it took (for the whole loop to execute), then I did the same with the other method.

So after the loop reached about 500, the same method took significantly less time to execute.
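The timing loop described above might look roughly like the sketch below (the class and method names are illustrative, not the original benchmark code). Note that timing a single call of a tiny method with nanoTime is dominated by timer overhead and JVM state, so results are rough at best:

```java
public class TimingSketch {
    // Same closed-form method as in the question.
    static int sumOfI(int i) {
        int factor = (i + 1) / 2; // integer division
        return (i % 2 == 0) ? i * factor + i / 2 : i * factor;
    }

    // Time a single call in nanoseconds.
    static long timeOneCall(int n) {
        long start = System.nanoTime();
        sumOfI(n);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int n = 0; n <= 1000; n++) {
            total += timeOneCall(n);
        }
        // Early calls run interpreted; once a method has been called
        // often enough, the JVM optimises it and per-call times drop,
        // which is consistent with the drop the graph shows.
        System.out.println("total for 0..1000: " + total + " ns");
    }
}
```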

Java does not calculate anything on the graphics card (not without help from other frameworks or classes). Also, what you think of as a "heavy" calculation is fairly easy for a CPU these days (even if division is somewhat tricky). So the speed depends on the generated bytecode, on the optimisations the JVM applies while running the program, and mostly on the Big-O complexity.

Your method sumOfI is just a fixed number of statements to execute, so it is O(1): regardless of how large i is, it is always the same handful of statements. But sumOfIOrdinary uses a loop, so it is O(n): it executes some fixed setup plus i loop iterations, depending on the input.

So in theory, and even in the worst case, sumOfI is always faster than sumOfIOrdinary. You can also see this in the bytecode view: sumOfI is only a few load, add, and multiply instructions for the CPU. But for a loop, the bytecode also contains a goto that jumps back to an earlier address and re-executes the loop body, and this costs time.
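The statement-count argument above can be made concrete by counting how many times the loop body actually runs (an illustrative sketch with made-up names, not the original code):

```java
public class OpCount {
    // Count the loop iterations the ordinary method performs
    // for a given input: exactly j iterations.
    static long loopIterations(int j) {
        long count = 0;
        for (int i = 1; i <= j; i++) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // The closed-form method always executes the same few
        // statements, while the loop version's work grows linearly.
        System.out.println(loopIterations(10));      // 10
        System.out.println(loopIterations(500000));  // 500000
    }
}
```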

On my VM, with i = 500000, the first method needs less than 1 millisecond, while the second method, because of the loop, takes 2-4 milliseconds.

Links to explain Big-O-Notation:

  1. Simple Big O Notation
  2. A beginner's guide to Big O notation
