I have this code:
public static void main(String[] args) {
    long f = System.nanoTime();
    int a = 10 + 10;
    long s = System.nanoTime();
    System.out.println(s - f);

    long g = System.nanoTime();
    int b = 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10;
    long h = System.nanoTime();
    System.out.println(h - g);
}
It produces the following outputs across three runs (the first number is the time measured for a, the second for b):
Test 1:
427
300
Test 2:
533
300
Test 3:
431
398
Based on my test scenarios, why does the line int b = 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10;
execute faster than int a = 10 + 10;?
Microbenchmarks are notoriously difficult to get right, especially in "intelligent" languages such as Java, where the compiler and HotSpot can do lots of optimisations. You almost certainly aren't testing what you think you're testing. Have a read of Anatomy of a Flawed Microbenchmark for more details and examples (it's a fairly old article now, but the principles are as valid as ever).
In this particular case, I can see several problems right off the bat:

1. Both expressions are constant-folded at compile time, so the code that actually runs is int a = 20;
and int b = 120;
(no additions are measured at all).
2. The overhead of a call to nanoTime
is quite high on most systems. That, combined with load from the OS, is going to mean your experimental error in measurement is much greater than the magnitude of the result itself. There are probably more potential hazards lurking as well.
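To get a feel for how large that timer overhead is on your own machine, here is a minimal sketch (the class and method names are my own, not from the question) that times back-to-back calls to nanoTime. The smallest observed delta is a rough lower bound on what the timer itself costs, and it is often the same order of magnitude as the "results" above:

```java
public class TimerOverhead {
    // Smallest observable difference between two back-to-back calls
    // to System.nanoTime() over the given number of samples.
    public static long minDelta(int samples) {
        long min = Long.MAX_VALUE;
        for (int i = 0; i < samples; i++) {
            long t0 = System.nanoTime();
            long t1 = System.nanoTime();
            long d = t1 - t0;
            if (d < min) {
                min = d;
            }
        }
        return min;
    }

    public static void main(String[] args) {
        // On many systems this prints tens of nanoseconds.
        System.out.println("min nanoTime delta: " + minDelta(1_000_000) + " ns");
    }
}
```

If the printed delta is comparable to the 300-500 ns readings in the question, the measurements are dominated by timer overhead rather than by the code being timed.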
The moral of the story is to test your code in real-world conditions, to see how it behaves. It is in no way accurate to test small pieces of code in isolation and assume that the overall performance will be the sum of these pieces.
First of all, the Java compiler optimises constant expressions, so at compile time your code is converted to:
int a = 20;
int b = 120;
As a result, the JVM performs the assignments a = 20
and b = 120
in roughly the same amount of time.
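A small sketch (class name is my own) that shows both right-hand sides really are compile-time constants: a constant variable initialised from such an expression may be used as a switch case label, which the compiler only permits for values it has already folded:

```java
public class ConstantFolding {
    public static void main(String[] args) {
        // javac folds both right-hand sides at compile time, so the
        // bytecode simply pushes the literals 20 and 120.
        int a = 10 + 10;
        int b = 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10 + 10;

        // FOLDED is a constant variable; using it as a case label
        // compiles only because the sum was evaluated by the compiler,
        // not at run time.
        final int FOLDED = 10 + 10;
        switch (a) {
            case FOLDED:
                System.out.println("a = " + a + ", b = " + b);
                break;
            default:
                System.out.println("unexpected");
        }
    }
}
```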
The second: you are taking a very short measurement of a big system (I mean the entire computer, including the OS, swapping, other running processes, and so on). So you get a snapshot of an effectively random system over a very small time period. That is why you cannot conclude whether the a
assignment is faster than the b
assignment. To show it, you would have to place the measured code inside a fairly large loop and run it roughly 1,000,000 times. Such heavy repetition smooths the measurement towards its expectation (in the mathematical sense of the word).
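The looping approach can be sketched like this (a hypothetical helper of my own devising, not code from the answer): run the operation many times, time the whole loop once, and divide by the iteration count.

```java
public class LoopedBenchmark {
    // Average cost per call, in nanoseconds, of running `op`
    // `iterations` times inside a single timed loop.
    public static double averageNanos(Runnable op, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            op.run();
        }
        long elapsed = System.nanoTime() - start;
        return (double) elapsed / iterations;
    }

    public static void main(String[] args) {
        // Store into an array so the work is not optimised away entirely.
        int[] sink = new int[1];
        double avg = averageNanos(() -> sink[0] += 20, 1_000_000);
        System.out.println("average per call: " + avg + " ns");
    }
}
```

Timing the loop as a whole amortises the nanoTime overhead over a million calls instead of paying it on every measurement.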
This is not the correct way to measure performance.
First of all, do not measure such a small piece of code on its own. Instead, call it millions of times, as suggested by @NilsH, and get the average time by dividing the elapsed time by the number of calls.
Second, the JVM will likely perform optimizations on your code, so you need to give it a "warm-up" period. Make a few million "dry" runs without measuring the time at all, then begin your measurements.
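Putting both pieces of advice together, a warm-up phase followed by a timed loop might look like the following sketch (class and method names are my own; a real benchmark would use a harness such as JMH instead):

```java
public class WarmedUpBenchmark {
    // The workload under test: a small, deterministic computation.
    public static long work() {
        long sum = 0;
        for (int i = 0; i < 1_000; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        long sink = 0;

        // Warm-up phase: run the code "dry" so the JIT compiler has a
        // chance to compile and optimise it before anything is measured.
        for (int i = 0; i < 200_000; i++) {
            sink += work();
        }

        // Measurement phase, after warm-up: time the whole loop once
        // and divide by the iteration count.
        int iterations = 200_000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += work();
        }
        double avg = (double) (System.nanoTime() - start) / iterations;

        // Print the sink so the JIT cannot discard the work as dead code.
        System.out.println("average after warm-up: " + avg + " ns (sink=" + sink + ")");
    }
}
```

Accumulating into sink and printing it keeps the JIT from eliminating the loop body as dead code, which is one of the optimisations that silently invalidates naive microbenchmarks.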