
Performance of put() in LinkedBlockingQueue

I am working on a stock trading application whose key requirement is to accept data from another system as fast as possible (without blocking). My application will then process the data at a later time.

So what I thought was to let the messages queue up in a LinkedBlockingQueue/LinkedTransferQueue and then periodically drain the queue and process the data in a background thread.

So something along the lines of:

    private final LinkedTransferQueue<Data> queue = new LinkedTransferQueue<Data>();

    public void store( int index, long time, String[] data ) throws InterruptedException{
        // wrap the raw fields in an immutable Data object and enqueue it
        Data entry = new Data( index, time, data );
        queue.put( entry );
    }

    private class BackgroundProcessor implements Runnable{

        private final List<Data> entryList = new LinkedList<Data>( );

        @Override
        public void run(){
            try {
                while ( keepProcessing ){

                    queue.drainTo( entryList );

                    for ( Data data : entryList ){
                        // process data
                    }

                    // drainTo() appends to entryList, so clear the batch before
                    // the next pass or the same entries will be re-processed
                    entryList.clear();
                }
            } catch( Exception e ){
                logger.error("Exception while processing data.", e);
            }
        }
    }
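For completeness, the drain loop has to be started on its own thread. A minimal sketch, assuming keepProcessing is a volatile boolean field of the enclosing class and a single-threaded executor is used:

    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private volatile boolean keepProcessing = true;

    public void start(){
        // run the drain loop on a dedicated background thread
        executor.submit( new BackgroundProcessor() );
    }

    public void stop(){
        keepProcessing = false;
        executor.shutdown();
    }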

I then wanted to test the performance of this approach:

    public void testStore( String[] dataArray ) throws InterruptedException{

        int size = 100 * 1000;

        long iTime = System.nanoTime();
        for ( int i = 0; i < size; i++ ){
            store( i, System.nanoTime(), dataArray );
        }
        long fTime = System.nanoTime();

        System.err.println("Average Time (nanos): " + (fTime - iTime)/size);

        float avgTimeInMicros = ((float) (fTime - iTime))/(size * 1000);
        System.err.println("Average Time (micros): " + avgTimeInMicros);
    }

I see that in my testStore(), if size = 100,000, I can create the Data object (which is immutable) and enqueue it in about 0.8 microseconds. However, if I decrease the size to 50, it takes as much as 20 microseconds.

I am assuming the JVM optimizes my code after a while. However, in my application, getting 50 data messages in a burst is more realistic. Is there a way to tune the JVM (or my code) so that enqueuing takes 1-2 microseconds regardless of the burst size?

P.S. I tried this test on JDK 1.6 with -Xms and -Xmx both set to 512m.

Process 10,000 messages first and then test bursts of 50 once the JVM has warmed up. For a trading system, you would ensure the JVM is warmed up before you start trading.
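A rough sketch of that kind of test (illustrative only; the warm-up count of 10,000 and the burst size of 50 follow the numbers above):

    public void testBurstAfterWarmup( String[] dataArray ) throws InterruptedException{

        // warm up: give the JIT a chance to compile store()/put() before measuring
        for ( int i = 0; i < 10 * 1000; i++ ){
            store( i, System.nanoTime(), dataArray );
        }

        // now measure a realistic burst of 50 messages
        int burst = 50;
        long iTime = System.nanoTime();
        for ( int i = 0; i < burst; i++ ){
            store( i, System.nanoTime(), dataArray );
        }
        long fTime = System.nanoTime();

        System.err.println("Average Time (nanos): " + (fTime - iTime)/burst);
    }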

If you want your trading system to be consistently fast, you could consider how it can be done without discarding any objects, i.e. without creating garbage on the critical path.
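One way to avoid discarding objects (not from the original answer, just a sketch of the idea) is to recycle mutable holders through a pool once the consumer is done with them. MutableData here is a hypothetical mutable counterpart of Data:

    private final LinkedTransferQueue<MutableData> queue = new LinkedTransferQueue<MutableData>();
    private final ArrayBlockingQueue<MutableData>  pool  = new ArrayBlockingQueue<MutableData>( 1024 );

    public void store( int index, long time, String[] data ) throws InterruptedException{
        MutableData entry = pool.poll();      // reuse a holder returned by the consumer, if any
        if ( entry == null ){
            entry = new MutableData();        // allocate only when the pool is empty
        }
        entry.set( index, time, data );
        queue.put( entry );
    }

    // consumer side, after processing an entry:
    //     pool.offer( entry );               // hand the holder back for reuse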

You might find the Disruptor library interesting. It is designed to handle 5 M messages/second or more. http://code.google.com/p/disruptor/
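For reference, publishing into a Disruptor ring buffer looks roughly like this. This is a minimal sketch against the newer 3.3+ API (the project has since moved to github.com/LMAX-Exchange/disruptor); DataEvent is a hypothetical mutable holder with a set() method:

    Disruptor<DataEvent> disruptor = new Disruptor<DataEvent>(
            new EventFactory<DataEvent>() {
                public DataEvent newInstance(){ return new DataEvent(); }   // slots are pre-allocated once
            }, 1024, DaemonThreadFactory.INSTANCE );

    disruptor.handleEventsWith( new EventHandler<DataEvent>() {
        public void onEvent( DataEvent event, long sequence, boolean endOfBatch ){
            // process event; the slot is reused, so no garbage per message
        }
    } );
    disruptor.start();

    RingBuffer<DataEvent> ringBuffer = disruptor.getRingBuffer();

    // producer: claim a slot, fill it, publish it
    long seq = ringBuffer.next();
    try {
        ringBuffer.get( seq ).set( index, time, data );
    } finally {
        ringBuffer.publish( seq );
    }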
