What's the fastest way to write a very small string to a file in Java?

My code needs to take an integer value between 0 and 255 and write it to a file as a string. It needs to be fast as it may be called repeatedly very quickly, so any optimisation will become noticeable when under heavy load. There are other questions on here dealing with efficient ways to write large amounts of data to file, but how about small amounts of data?

Here's my current approach:

public static void writeInt(final String filename, final int value)
{
    try
    {
        // Convert the int to a string representation in a byte array
        final String string = Integer.toString(value);
        final byte[] bytes = new byte[string.length()];
        for (int i = 0; i < string.length(); i++)
        {
            bytes[i] = (byte)string.charAt(i);
        }

        // Now write the byte array to file
        final FileOutputStream fileOutputStream = new FileOutputStream(filename);
        fileOutputStream.write(bytes, 0, bytes.length);
        fileOutputStream.close();
    }
    catch (IOException exception)
    {
        // Error handling here
    }
}

I don't think a BufferedOutputStream will help here: the overhead of building and then flushing the buffer is probably counter-productive for a 3-character write, isn't it? Are there any other improvements I can make?

I think this is about as efficient as you can get, given the 0-255 range requirement. Using a buffered writer would be less efficient, since it creates temporary structures that you don't need when so few bytes are being written.

static byte[][] cache = new byte[256][];
public static void writeInt(final String filename, final int value)
{
    // time will be spent on integer-to-string conversion, so cache the bytes
    // (a benign race: concurrent threads may fill a slot with identical content)
    byte[] bytesToWrite = cache[value];
    if (bytesToWrite == null) {
        bytesToWrite = cache[value] = String.valueOf(value).getBytes();
    }

    FileOutputStream fileOutputStream = null;
    try {
        // Now write the byte array to file
        fileOutputStream = new FileOutputStream(filename);
        fileOutputStream.write(bytesToWrite);
    } catch (IOException exception) {
        // Error handling here
    } finally {
        if (fileOutputStream != null) {
            try {
                fileOutputStream.close();
            } catch (IOException ignored) {
                // nothing sensible to do if close() itself fails
            }
        }
    }
}

You cannot make it faster, IMO. BufferedOutputStream would be of no help here, if not actively harmful. If we look at the source, we'll see that FileOutputStream.write(byte b[], int off, int len) passes the byte array directly to a native method, while BufferedOutputStream.write(byte b[], int off, int len) is synchronized and copies the array into its internal buffer first; on close it then flushes the buffered bytes to the underlying stream.

Besides, the slowest part in this case is opening/closing the file.
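
If you want to check this for yourself, a rough timing sketch along the following lines would do. This is not a rigorous benchmark (a harness like JMH would be more trustworthy), it assumes Java 8+ for lambdas, and /tmp/tiny.txt is just a placeholder path:

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class TinyWriteTiming
{
    interface StreamFactory { OutputStream open() throws IOException; }

    public static void main(final String[] args) throws IOException
    {
        final byte[] bytes = "255".getBytes();
        time("FileOutputStream", () -> new FileOutputStream("/tmp/tiny.txt"), bytes);
        time("BufferedOutputStream",
                () -> new BufferedOutputStream(new FileOutputStream("/tmp/tiny.txt")), bytes);
    }

    static void time(final String label, final StreamFactory factory, final byte[] bytes)
            throws IOException
    {
        final long start = System.nanoTime();
        for (int i = 0; i < 10000; i++)
        {
            // Open, write three bytes, close -- the pattern from the question
            try (OutputStream out = factory.open())
            {
                out.write(bytes);
            }
        }
        System.out.println(label + ": " + (System.nanoTime() - start) / 1000000 + " ms");
    }
}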

I think the bottleneck here is I/O, and these two improvements could help:

  • Think about the granularity of updates. I.e. if you need no more than 20 updates per second out of your app, then you could optimize the app to write no more than one update per 1/20 second; depending on the environment, this can be very beneficial (see the first sketch below).
  • Java NIO has proved to be much faster for large sizes, so it also makes sense to experiment with small sizes, e.g. writing to a FileChannel instead of an OutputStream (see the second sketch below).
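
For the first point, here is a minimal sketch of what that throttling could look like. All names here are hypothetical, it assumes Java 8+, and the IntConsumer you pass in could be the cached writeInt from the answer above:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntConsumer;

public class ThrottledIntWriter
{
    private static final int NO_UPDATE = -1; // safe sentinel, since values are 0-255

    private final AtomicInteger latest = new AtomicInteger(NO_UPDATE);
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public ThrottledIntWriter(final IntConsumer writer)
    {
        // Flush at most 20 times per second; a burst of updates costs one write
        scheduler.scheduleAtFixedRate(() ->
        {
            final int value = latest.getAndSet(NO_UPDATE);
            if (value != NO_UPDATE)
            {
                writer.accept(value);
            }
        }, 50, 50, TimeUnit.MILLISECONDS);
    }

    // Cheap for callers: no IO happens on their thread
    public void update(final int value)
    {
        latest.set(value);
    }
}

Usage would be something like new ThrottledIntWriter(v -> writeInt("status.txt", v)), where "status.txt" is a placeholder. Note that intermediate values within a 50 ms window are deliberately dropped, which is exactly the point: only the latest value reaches the disk.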
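
For the second point, a minimal sketch of the NIO variant, assuming Java 7+ for FileChannel.open and try-with-resources. It reuses the byte-caching idea from the answer above; the cached buffers are not thread-safe as written, since each write mutates the buffer's position:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class ChannelIntWriter
{
    // One pre-built buffer per possible value, filled lazily
    static final ByteBuffer[] cache = new ByteBuffer[256];

    public static void writeInt(final String filename, final int value) throws IOException
    {
        ByteBuffer buffer = cache[value];
        if (buffer == null)
        {
            buffer = cache[value] = ByteBuffer.wrap(String.valueOf(value).getBytes());
        }
        buffer.rewind(); // the position advances on each write, so reset it

        // TRUNCATE_EXISTING matches the semantics of new FileOutputStream(filename)
        try (FileChannel channel = FileChannel.open(Paths.get(filename),
                StandardOpenOption.WRITE, StandardOpenOption.CREATE,
                StandardOpenOption.TRUNCATE_EXISTING))
        {
            channel.write(buffer);
        }
    }
}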

Sorry for coming so late to the party :)

I think trying to optimise the code is probably not the right approach. If you're writing the same tiny file repeatedly, and you have to write it each time rather than buffering it in your application, then by far the biggest considerations will be the filesystem and the storage hardware.

The point is that if you're actually hitting the hardware every time, then you will stress it severely. If your system is caching the writes, though, then you might be able to avoid hitting the hardware very often at all: the old data will have been overwritten in the cache before it gets there, and only the newest data will actually be written.

But this depends on two things. For one, what does your filesystem do when it gets a new write before it's written the old one? Some filesystems might still end up writing extra entries in a journal, or even writing the old file in one place and then the new file in a different physical location. That would be a killer.

For another, what does your hardware do when asked to overwrite something? If it's a conventional hard drive, it will probably just overwrite the old data. If it's flash memory (as it might well be if this is Android), the wear levelling will kick in, and it'll keep writing to different bits of the drive.

You really need to do whatever you can, in terms of disk caching and filesystem, to ensure that if you send 1000 updates before anything pushes the cache to disk, only the last update gets written.

Since this is Android, you're probably looking at ext2/3/4. Look carefully at the journalling options, and investigate what the effect of the delayed allocation in ext4 would be. Perhaps the best option will be to go with ext4, but turn the journalling off.

A quick Google search turned up a benchmark of different write/read operations on files of various sizes:

http://designingefficientsoftware.wordpress.com/2011/03/03/efficient-file-io-from-csharp/

The author comes to the conclusion that WinFileIO.WriteBlocks performs fastest for writing data to a file, although I/O performance relies heavily on multiple factors, such as operating-system file caching, file indexing, disk fragmentation, the filesystem, etc.
