
How to disable compiler and JVM optimizations?

I have this code that is testing Calendar.getInstance().getTimeInMillis() vs System.currentTimeMillis():

long before = getTimeInMilli();
for (int i = 0; i < TIMES_TO_ITERATE; i++)
{
  long before1 = getTimeInMilli();
  doSomeReallyHardWork();
  long after1 = getTimeInMilli();
}
long after = getTimeInMilli();
System.out.println(getClass().getSimpleName() + " total is " + (after - before));

I want to make sure no JVM or compiler optimization happens, so the test will be valid and will actually show the difference.

How to be sure?

EDIT: I changed the code example so it is clearer. What I am checking here is how much time it takes to call getTimeInMilli() in the different implementations - Calendar vs System.

I think you need to disable the JIT. Add the following option to your run command:

-Djava.compiler=NONE
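
For example (MyBenchmark is just a placeholder for whatever class runs your test):

java -Djava.compiler=NONE MyBenchmark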

You want optimization to happen, because it will in real life - the test wouldn't be valid if the JVM didn't optimize in the same way that it would in the real situation you're interested in.

However, if you want to make sure that the JVM doesn't remove calls that it could potentially consider no-ops otherwise, one option is to use the result - so if you're calling System.currentTimeMillis() repeatedly, you might sum all the return values and then display the sum at the end.
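
A minimal sketch of that idea (assuming TIMES_TO_ITERATE is defined as in the question) - the sum is printed at the end, so the JVM cannot treat the calls as dead code:

long sum = 0;
for (int i = 0; i < TIMES_TO_ITERATE; i++) {
    sum += System.currentTimeMillis();
}
// the value itself is meaningless; printing it just forces every call to be kept
System.out.println("Checksum: " + sum);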

Note that you may still have some bias though - for example, there may be some optimization if the JVM can cheaply determine that only a tiny amount of time has passed since the last call to System.currentTimeMillis(), so it can use a cached value. I'm not saying that's actually the case here, but it's the kind of thing you need to think about. Ultimately, benchmarks can only really test the loads you give them.

One other thing to consider: assuming you want to model a real world situation where the code is run a lot, you should run the code a lot before taking any timing - because the HotSpot JVM will optimize progressively harder, and presumably you care about the heavily-optimized version and don't want to measure the time for JITting and the "slow" versions of the code.

As Stephen mentioned, you should almost certainly take the timing outside the loop... and don't forget to actually use the results...

What you are doing looks like benchmarking; you can read Robust Java benchmarking to get some good background on how to do it right. In a few words, you don't need to turn optimization off, because that is not what happens on a production server. Instead, you need an estimate as close as possible to the 'real' time / performance. Before measuring, you need to 'warm up' your code; it looks like this:

// warm up
for (int j = 0; j < 1000; j++) {
    for (int i = 0; i < TIMES_TO_ITERATE; i++)
    {
        long before1 = getTimeInMilli();
        doSomeReallyHardWork();
        long after1 = getTimeInMilli();
    }
}

// measure time
long before = getTimeInMilli();
for (int j = 0; j < 1000; j++) {
    for (int i = 0; i < TIMES_TO_ITERATE; i++)
    {
        long before1 = getTimeInMilli();
        doSomeReallyHardWork();
        long after1 = getTimeInMilli();
    }
}
long after = getTimeInMilli();

System.out.println( "What to expect? " + (after - before)/1000 ); // average time per run

When we measure the performance of our code we use this approach; it gives us a more or less realistic estimate of the time our code needs. It is even better to measure the code in a separate method:

public void doIt() {
    for (int i = 0; i < TIMES_TO_ITERATE; i++)
    {
        long before1 = getTimeInMilli();
        doSomeReallyHardWork();
        long after1 = getTimeInMilli();
    }
}

// warm up
for (int j = 0; j < 1000; j++) {
    doIt();
}

// measure time
long before = getTimeInMilli();
for (int j = 0; j < 1000; j++) {
    doIt();
}
long after = getTimeInMilli();

System.out.println( "What to expect? " + (after - before)/1000 ); // average time per run

The second approach is more precise, but it also depends on the VM. For example, HotSpot can perform "on-stack replacement": if some part of a method is executed very often, the VM optimizes it and swaps the old code for the optimized version while the method is still executing. Of course this takes extra work on the VM's side. JRockit does not do this; the optimized version of the code is only used the next time the method is invoked, so there is no 'runtime' optimization. In my first code sample the old code would be executed the whole time - except for the internals of doSomeReallyHardWork, which do not belong to that method, so optimization works well for them.

UPDATED: code in question was edited while I was answering ;)

Sorry, but what you are trying to do makes little sense.

If you turn off JIT compilation, then you are only going to measure how long it takes to call that method with JIT compilation turned off. This is not useful information ... because it tells you little if anything about what will happen when JIT compilation is turned on [1].

The times between JIT on and off can be different by a huge factor. You are unlikely to want to run anything in production with JIT turned off.

A better approach would be to do this:

long before1 = getTimeInMilli();
for (int i = 0; i < TIMES_TO_ITERATE; i++) {
    doSomeReallyHardWork();
}
long after1 = getTimeInMilli();

... and / or use the nanosecond clock.
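
For example, a sketch of the same measurement using System.nanoTime(), which measures elapsed time with nanosecond resolution and is better suited to timing intervals than the millisecond clock:

long start = System.nanoTime();
for (int i = 0; i < TIMES_TO_ITERATE; i++) {
    doSomeReallyHardWork();
}
long elapsedNanos = System.nanoTime() - start;
System.out.println("Took " + (elapsedNanos / 1000000) + " ms");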


If you are trying to measure the time taken to call the two versions of getTimeInMillis(), then I don't understand the point of your call to doSomeReallyHardWork(). A more sensible benchmark would be this:

public long test() {
    long before1 = getTimeInMilli();
    long sum = 0;
    for (int i = 0; i < TIMES_TO_ITERATE; i++) {
        sum += getTimeInMilli();
    }
    long after1 = getTimeInMilli();
    System.out.println("Took " + (after - before) + " milliseconds");
    return sum;
}

... and call that a number of times, until the times printed stabilize.
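
A sketch of a driver for that (the number of runs is arbitrary; keeping and printing the returned sums stops the measured work from being optimized away):

long checksum = 0;
for (int run = 0; run < 20; run++) {
    checksum += test();
}
System.out.println("Checksum (ignore the value): " + checksum);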

Either way, my main point still stands: turning off JIT compilation and / or optimization would mean that you were measuring something that is not useful to know, and not what you are really trying to find out. (Unless, that is, you are intending to run your application in production with JIT turned off ... which I find hard to believe ...)


[1] - I note that someone has commented that turning off JIT compilation allowed them to easily demonstrate the difference between O(1), O(N) and O(N^2) algorithms for a class. But I would counter that it is better to learn how to write a correct micro-benchmark. And for serious purposes, you need to learn how to derive the complexity of the algorithms ... mathematically. Even with a perfect benchmark, you can get the wrong answer by trying to "deduce" complexity from performance measurements. (Take the behavior of HashMap for example.)
