
File system I/O buffer

Consider the following pseudo-code snippet that reads a file from its end:

while (1) {
  lseek(fd, offset, SEEK_END)  // offset is negative, starting at -n
  read(fd, buf, n)

  // process the buffer, break once the start of the file is reached...

  offset -= n
}

Now n can vary between 1 byte and, say, 1 kB.

How big is the impact on the file system for very small values of n? Is this mostly compensated for by file system buffering, or should I always read larger chunks at once?

The answer depends on your operating system. Most modern OSes use a multiple of the system page size for file buffers, so 4 KB (the most common page size) is likely the minimum unit the disk cache holds. The bigger problem is that your code makes a lot of redundant system calls, which are expensive. If you are concerned about performance, either buffer the data in big chunks and reference it directly from your buffer, or call mmap() if your system supports it and access the mapped file directly.
