
JMH microbenchmarking recursive quicksort

Hi, I am trying to microbenchmark various sorting algorithms, and I ran into a strange problem with jmh when benchmarking quicksort. Perhaps something is wrong with my implementation. I would appreciate it if someone could help me see where the problem is. First of all, I am using Ubuntu 14.04 with JDK 7 and jmh 0.9.1. Here is how I am trying to run the benchmark:

import java.util.Arrays;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

@OutputTimeUnit(TimeUnit.MILLISECONDS)
@BenchmarkMode(Mode.AverageTime)
@Warmup(iterations = 3, time = 1)
@Measurement(iterations = 3, time = 1)
@State(Scope.Thread)
public class SortingBenchmark {

    private int length = 100000;

    private Distribution distribution = Distribution.RANDOM;

    private int[] array;

    int i = 1;

    @Setup(Level.Iteration)
    public void setUp() {
        array = distribution.create(length);
    }

    @Benchmark
    public int timeQuickSort() {
        int[] sorted = Sorter.quickSort(array);
        return sorted[i];
    }

    @Benchmark
    public int timeJDKSort() {
        Arrays.sort(array);
        return array[i];
    }

    public static void main(String[] args) throws RunnerException {
        Options opt = new OptionsBuilder()
                .include(".*" + SortingBenchmark.class.getSimpleName() + ".*")
                .forks(1)
                .build();

        new Runner(opt).run();
    }
}

There are other algorithms as well, but I left them out because they are more or less fine. Now for some reason quicksort is extremely slow. Orders of magnitude slower! And more than that, I have to allocate a larger stack for it to run without a StackOverflowError. It looks like for some reason quicksort is making a huge number of recursive calls. Interestingly, when I run the algorithm from my main class, it runs fine (with the same random distribution and 100000 elements): no stack increase is needed, and a naive nanotime benchmark shows times very close to the other algorithms. In contrast, when tested with jmh, JDK sort is blazingly fast and much more consistent with what the other algorithms show under the naive nanotime benchmark. Am I doing something wrong here, or missing something? Here is my quicksort algorithm:

public static int[] quickSort(int[] data) {
    Sorter.quickSort(data, 0, data.length - 1);
    return data;
}
private static void quickSort(int[] data, int sublistFirstIndex, int sublistLastIndex) {
    if (sublistFirstIndex < sublistLastIndex) {
        // move smaller elements before pivot and larger after
        int pivotIndex = partition(data, sublistFirstIndex, sublistLastIndex);
        // apply recursively to sub lists
        Sorter.quickSort(data, sublistFirstIndex, pivotIndex - 1);
        Sorter.quickSort(data, pivotIndex + 1, sublistLastIndex);
    }
}
private static int partition(int[] data, int sublistFirstIndex, int sublistLastIndex) {
    int pivotElement = data[sublistLastIndex];
    int pivotIndex = sublistFirstIndex - 1;
    for (int i = sublistFirstIndex; i < sublistLastIndex; i++) {
        if (data[i] <= pivotElement) {
            pivotIndex++;
            ArrayUtils.swap(data, pivotIndex, i);
        }
    }
    ArrayUtils.swap(data, pivotIndex + 1, sublistLastIndex);
    return pivotIndex + 1; // return index of pivot element
}

Now I understand that, because of my pivot choice, my algorithm will be very slow (O(n^2)) if I run it on already-sorted data. But I am running it on random data, and even when I tried running it on sorted data from my main method, it was much faster than the jmh version on random data. I am pretty sure I am missing something here. You can find the full project with the other algorithms here: https://github.com/ignl/SortingAlgos/

Okay, since there really should be an answer here (instead of having to dig through the comments below the question), I am putting one here, since I got burned by this myself.

An iteration in JMH is a batch of benchmark method invocations (how many depends on how long the iteration is set to run). So with @Setup(Level.Iteration), the setup runs only once at the beginning of that sequence of invocations. Since the array is sorted after the first invocation, every subsequent invocation runs quicksort on its worst case (an already-sorted array). That is why it takes so long and blows the stack.
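The effect is easy to reproduce outside JMH. The sketch below (a self-contained demo; `QuickSortDepthDemo` and `depthFor` are hypothetical names, but the partition is the same last-element-pivot scheme as the question's `Sorter`) sorts the same array twice while tracking recursion depth: the second call sees already-sorted data and recurses roughly once per element, exactly what happens to every post-setup invocation in the benchmark.

```java
import java.util.Random;

public class QuickSortDepthDemo {
    static int maxDepth;

    static void quickSort(int[] a, int lo, int hi, int depth) {
        maxDepth = Math.max(maxDepth, depth); // track how deep the recursion goes
        if (lo < hi) {
            int p = partition(a, lo, hi);
            quickSort(a, lo, p - 1, depth + 1);
            quickSort(a, p + 1, hi, depth + 1);
        }
    }

    // Lomuto partition with the last element as pivot, as in the question.
    static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int p = lo - 1;
        for (int i = lo; i < hi; i++) {
            if (a[i] <= pivot) {
                p++;
                int t = a[p]; a[p] = a[i]; a[i] = t;
            }
        }
        int t = a[p + 1]; a[p + 1] = a[hi]; a[hi] = t;
        return p + 1;
    }

    static int depthFor(int[] a) {
        maxDepth = 0;
        quickSort(a, 0, a.length - 1, 1);
        return maxDepth;
    }

    public static void main(String[] args) {
        int[] data = new Random(42).ints(2000).toArray();
        int randomDepth = depthFor(data); // random input: depth is O(log n)
        int sortedDepth = depthFor(data); // same array, now sorted: depth is O(n)
        System.out.println("random depth = " + randomDepth
                + ", sorted depth = " + sortedDepth);
    }
}
```

With 100000 elements instead of 2000, the second call needs 100000 stack frames, which is why the benchmark requires a bigger stack.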

So the solution is to use @Setup(Level.Invocation). However, as the Javadoc states:

/**
     * Invocation level: to be executed for each benchmark method execution.
     *
     * <p><b>WARNING: HERE BE DRAGONS! THIS IS A SHARP TOOL.
     * MAKE SURE YOU UNDERSTAND THE REASONING AND THE IMPLICATIONS
     * OF THE WARNINGS BELOW BEFORE EVEN CONSIDERING USING THIS LEVEL.</b></p>
     *
     * <p>This level is only usable for benchmarks taking more than a millisecond
     * per single {@link Benchmark} method invocation. It is a good idea to validate
     * the impact for your case on ad-hoc basis as well.</p>
     *
     * <p>WARNING #1: Since we have to subtract the setup/teardown costs from
     * the benchmark time, on this level, we have to timestamp *each* benchmark
     * invocation. If the benchmarked method is small, then we saturate the
     * system with timestamp requests, which introduce artificial latency,
     * throughput, and scalability bottlenecks.</p>
     *
     * <p>WARNING #2: Since we measure individual invocation timings with this
     * level, we probably set ourselves up for (coordinated) omission. That means
     * the hiccups in measurement can be hidden from timing measurement, and
     * can introduce surprising results. For example, when we use timings to
     * understand the benchmark throughput, the omitted timing measurement will
     * result in lower aggregate time, and fictionally *larger* throughput.</p>
     *
     * <p>WARNING #3: In order to maintain the same sharing behavior as other
     * Levels, we sometimes have to synchronize (arbitrage) the access to
     * {@link State} objects. Other levels do this outside the measurement,
     * but at this level, we have to synchronize on *critical path*, further
     * offsetting the measurement.</p>
     *
     * <p>WARNING #4: Current implementation allows the helper method execution
     * at this Level to overlap with the benchmark invocation itself in order
     * to simplify arbitrage. That matters in multi-threaded benchmarks, when
     * one worker thread executing {@link Benchmark} method may observe other
     * worker thread already calling {@link TearDown} for the same object.</p>
     */ 

So, as Aleksey Shipilev suggested, absorb the array copying cost into each benchmark method instead. Since you are comparing relative performance, it should not affect your results.
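A minimal sketch of that suggestion, outside JMH (`CopyInsideBenchmark` and `sortFreshCopy` are hypothetical names, and `Arrays.sort` stands in for `Sorter.quickSort`, since the point is only where the copy happens): the measured method clones the source array first, so every invocation sorts fresh data and the shared source never ends up pre-sorted.

```java
import java.util.Arrays;
import java.util.Random;

public class CopyInsideBenchmark {

    // What each benchmark method body would look like: copy first, then sort.
    // Every algorithm under test pays the same copy cost, so relative
    // comparisons between them remain valid.
    public static int sortFreshCopy(int[] source, int probe) {
        int[] work = Arrays.copyOf(source, source.length); // per-invocation copy
        Arrays.sort(work); // stand-in for Sorter.quickSort(work)
        return work[probe];
    }

    public static void main(String[] args) {
        int[] data = new Random(1).ints(100).toArray();
        int[] before = data.clone();
        sortFreshCopy(data, 0);
        // The source array is untouched, so the next invocation still sees
        // unsorted input instead of quicksort's worst case.
        System.out.println(Arrays.equals(data, before)); // prints "true"
    }
}
```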
