
What is the optimal file output buffer size?

See the code below for an example. size is 1 MB, and the program certainly runs faster than when size is 1. I think that is because the number of I/O system calls is reduced. Does this mean I will always benefit from a larger buffer size? I hoped so and ran some tests, but there seems to be a limit: a size of 2 runs much faster than a size of 1, but the improvement does not keep scaling that way as size grows.

Could someone explain this? What is the optimal buffer size likely to be, and why don't I benefit much from expanding the buffer indefinitely?

By the way, in this example I write to stdout for simplicity, but I'm also thinking about the case of writing to files on disk.

#include <stdio.h>
#include <stdlib.h>

enum
{
  size = 1 << 20 /* 1 MiB */
};

/* Defined elsewhere; fills the buffer with the data to write. */
void fill_buffer(char (*)[size]);

int main(void)
{
  long n = 100000000; /* total number of bytes to write */
  for (;;)
  {
    char buf[size];
    fill_buffer(&buf);
    if (n <= size)
    {
      if (fwrite(buf, 1, n, stdout) != (size_t)n)
      {
        goto error;
      }
      break;
    }
    if (fwrite(buf, 1, size, stdout) != size)
    {
      goto error;
    }
    n -= size;
  }
  return EXIT_SUCCESS;
error:
  fprintf(stderr, "fwrite failed\n");
  return EXIT_FAILURE;
}
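
For reference, here is a minimal sketch of the kind of timing test described above, assuming a POSIX system. It uses write(2) directly so that stdio's own buffering does not mask the effect; the output path "bench.out" is a placeholder:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

enum { total = 1 << 28 }; /* 256 MiB per run */

static double time_writes(int fd, size_t bufsize)
{
  static char buf[1 << 24]; /* large enough for the biggest size tested */
  memset(buf, 'x', bufsize);
  struct timespec t0, t1;
  clock_gettime(CLOCK_MONOTONIC, &t0);
  long left = total;
  while (left > 0)
  {
    size_t n = (size_t)left < bufsize ? (size_t)left : bufsize;
    if (write(fd, buf, n) != (ssize_t)n)
    {
      perror("write");
      exit(EXIT_FAILURE);
    }
    left -= (long)n;
  }
  clock_gettime(CLOCK_MONOTONIC, &t1);
  return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
  /* "bench.out" is a placeholder path; O_TRUNC clears it at open. */
  int fd = open("bench.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
  if (fd < 0)
  {
    return EXIT_FAILURE;
  }
  for (size_t bufsize = 1 << 9; bufsize <= 1 << 24; bufsize <<= 3)
  {
    lseek(fd, 0, SEEK_SET); /* rewrite from the start each run */
    printf("buffer %8zu bytes: %.3f s\n", bufsize, time_writes(fd, bufsize));
  }
  close(fd);
  return EXIT_SUCCESS;
}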

You usually don't need the best buffer size. Finding it may require querying the OS for system parameters, doing complex estimation, or even benchmarking on the target environment, and the answer can change over time. Luckily, you just need a value that is good enough.

I would say a 4K~16K buffer suits most normal usage. 4K is the magic number here: it is the page size on common machines (x86, ARM) and also a multiple of the usual physical disk sector size (512 B or 4K).
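
If you do want to ask the system for a hint, here is a minimal sketch, assuming a POSIX environment, that queries the filesystem's preferred I/O block size (st_blksize), the page size, and stdio's default buffer size (BUFSIZ):

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
  /* st_blksize is the filesystem's preferred I/O block size. */
  struct stat st;
  if (fstat(fileno(stdout), &st) == 0)
  {
    printf("preferred I/O block size: %ld\n", (long)st.st_blksize);
  }
  /* Virtual memory page size and stdio's default buffer size. */
  printf("page size: %ld\n", (long)sysconf(_SC_PAGESIZE));
  printf("stdio BUFSIZ: %d\n", BUFSIZ);
  return 0;
}

On Linux, st_blksize for a regular file is typically the filesystem block size, often 4K, which matches the rule of thumb above.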

If you are dealing with a huge amount of data (gigabytes), you may find that the simple fwrite model is inadequate because of its blocking nature: the program sits idle while each write completes.
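
As one sketch of what a non-blocking alternative can look like, here is a hedged double-buffering example using POSIX AIO (aio_write), so that filling the next buffer overlaps the in-flight write. The output file name and the fill routine are placeholders, and older glibc may need linking with -lrt:

#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

enum { bufsize = 1 << 20 };

/* Placeholder for the question's fill_buffer. */
static void fill(char *buf)
{
  memset(buf, 'x', bufsize);
}

int main(void)
{
  static char bufs[2][bufsize];
  int fd = open("out.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
  if (fd < 0)
  {
    return EXIT_FAILURE;
  }

  struct aiocb cb;
  memset(&cb, 0, sizeof cb);
  cb.aio_fildes = fd;
  cb.aio_nbytes = bufsize;

  const struct aiocb *list[1] = { &cb };
  int pending = 0;
  int cur = 0;
  off_t off = 0;
  for (int i = 0; i < 100; i++) /* 100 MiB total */
  {
    fill(bufs[cur]); /* produce the next block while the last write runs */
    if (pending)
    {
      aio_suspend(list, 1, NULL); /* wait for the in-flight write */
      if (aio_return(&cb) != (ssize_t)bufsize)
      {
        return EXIT_FAILURE;
      }
    }
    cb.aio_buf = bufs[cur];
    cb.aio_offset = off;
    if (aio_write(&cb) != 0)
    {
      return EXIT_FAILURE;
    }
    pending = 1;
    off += bufsize;
    cur ^= 1;
  }
  aio_suspend(list, 1, NULL); /* drain the final write */
  if (aio_return(&cb) != (ssize_t)bufsize)
  {
    return EXIT_FAILURE;
  }
  close(fd);
  return EXIT_SUCCESS;
}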

On a large partition, the cluster size is often 32 KB. On a large read/write request, if the system sees a series of contiguous clusters, it will combine them into a single I/O; otherwise, it breaks the request up into multiple I/Os. I don't know what the maximum I/O size is. On some old SCSI controllers, it was 64 KB or 1 MB - 8 KB (17 or 255 descriptors in the controller). For IDE/SATA, I've been able to do IOCTLs for 2 MB, confirming with an external bus monitor that it was a single I/O, but I never tested to determine the limit.

For external sorting with a k-way bottom-up merge sort (k > 2), read/write sizes of 10 MB to 100 MB are used to reduce random-access overhead. Each request will be broken up into multiple I/Os, but the reads and writes will be sequential (under ideal circumstances).
