
Advantages of Double BufferedWriter or BufferedReader

I know that a BufferedWriter or BufferedReader cannot directly communicate with a file. It needs to wrap another Writer object to do it. Like,

BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter("abc.txt"));

Here we are simply wrapping a FileWriter object using a BufferedWriter for IO performance advantages.

But I could also do this,

BufferedWriter bufferedWriter = new BufferedWriter(new BufferedWriter(new FileWriter("abc.txt")));

Here the FileWriter object is wrapped in a BufferedWriter, which in turn is wrapped in another BufferedWriter. Or, a more evil idea would be to chain it even further.

Is there any real advantage to a double BufferedWriter, or to chaining it even further? The same question applies to BufferedReader too.

There's no benefit, no.

First, you have to understand what the buffering is for. When you write to disk, the hard drive needs to physically move the disk head to the right place, then wait for the disk to spin such that it's in the right place, and then start writing bytes as the disk spins under the head. Those first two steps are much slower than the rest of the operation, relatively speaking. This means that there's a lot of fixed overhead: writing 1000 bytes is much faster than writing 1 byte 1000 times.
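You can see that fixed overhead directly with a minimal sketch that compares 1000 single-byte writes against one 1000-byte write on an unbuffered FileOutputStream. The file names and sizes here are arbitrary, and exact timings will vary by OS and disk, but the per-call cost dominates the first version:

import java.io.FileOutputStream;
import java.io.IOException;

public class WriteOverhead {
    public static void main(String[] args) throws IOException {
        byte[] data = new byte[1000];

        // 1000 single-byte writes: each call goes straight to the OS,
        // because FileOutputStream does no buffering of its own.
        long start = System.nanoTime();
        try (FileOutputStream out = new FileOutputStream("single.bin")) {
            for (byte b : data) {
                out.write(b);
            }
        }
        long perByte = System.nanoTime() - start;

        // One 1000-byte write: the same data in a single call.
        start = System.nanoTime();
        try (FileOutputStream out = new FileOutputStream("bulk.bin")) {
            out.write(data);
        }
        long bulk = System.nanoTime() - start;

        System.out.printf("1000 x 1 byte: %d ns, 1 x 1000 bytes: %d ns%n", perByte, bulk);
    }
}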

So, buffering is just a way of having the application write bytes in a way that's easy for the application's logic — one byte at a time, three bytes, 1000 bytes, whatever — while still getting disk performance. Most write operations to the buffer don't actually cause any bytes to go to the underlying output stream; only once you hit a certain limit (say, every 1000 bytes) is everything written, all at once.
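As a rough illustration of that mechanism, here is a toy sketch of what a buffered writer does internally. This is not the real java.io.BufferedWriter implementation, just the core idea: writes accumulate in a char array, and the wrapped Writer only sees one big write when the array fills up or flush() is called.

import java.io.IOException;
import java.io.Writer;

// Illustrative only: the real class is java.io.BufferedWriter.
class ToyBufferedWriter extends Writer {
    private final Writer out;   // the wrapped Writer (e.g. a FileWriter)
    private final char[] buf;   // the in-memory buffer
    private int count = 0;      // how many chars are currently buffered

    ToyBufferedWriter(Writer out, int size) {
        this.out = out;
        this.buf = new char[size];
    }

    @Override
    public void write(char[] cbuf, int off, int len) throws IOException {
        for (int i = 0; i < len; i++) {
            if (count == buf.length) {
                flushBuffer();            // buffer full: hand everything down at once
            }
            buf[count++] = cbuf[off + i]; // most writes end here, purely in memory
        }
    }

    private void flushBuffer() throws IOException {
        out.write(buf, 0, count);         // one big write to the wrapped Writer
        count = 0;
    }

    @Override
    public void flush() throws IOException {
        flushBuffer();
        out.flush();
    }

    @Override
    public void close() throws IOException {
        flush();
        out.close();
    }
}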

And it's the same idea on input.

So, chaining these wouldn't help. With the chain, assuming the two had equal buffer sizes, you would write to the "outer" buffer, and it wouldn't write to the "inner" buffer at all... and then, when it hits its limit, it would flush all of those bytes to the inner buffer. The inner buffer instantly hits its own limit (since the limits are the same) and flushes those bytes straight to the output. You haven't gained any benefit, but you did have to copy the bytes an extra time in memory (into the second buffer).
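You can convince yourself of this with a quick (and deliberately unscientific) comparison. The file names and iteration count below are arbitrary; the two timings should come out roughly the same, with the double-wrapped version paying only for the extra in-memory copy:

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;

public class DoubleBufferDemo {
    static long time(Writer writer) throws IOException {
        long start = System.nanoTime();
        try (Writer out = writer) {
            for (int i = 0; i < 1_000_000; i++) {
                out.write('x');   // one character at a time
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        long single = time(new BufferedWriter(new FileWriter("single.txt")));
        long doubled = time(new BufferedWriter(
                new BufferedWriter(new FileWriter("double.txt"))));
        System.out.printf("single: %d ns, double: %d ns%n", single, doubled);
    }
}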

"Buffered" here is primarily reflecting the semantics of the interface (API). Noting this, composing IO pipelines via chaining of BufferedReader is a possibility. In general, consider that consumption of a single byte at the end of the chain may involve multiple reads at the head and could, in theory and per API, simply be a computation based on data read at the head.

For the general case of block-device buffering (e.g. reading from an IO device with block-sized data transfers, such as filesystems or network endpoints), chaining buffers (which are effectively queues) will certainly increase memory consumption and will immediately add latency to processing, because the total amount of buffered data grows. It typically will significantly increase throughput, with the noted negative impact on latency.
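One practical note: if the motivation for chaining was simply "more buffer", both BufferedReader and BufferedWriter accept an explicit buffer size in their constructors, so a single, larger buffer achieves that without the extra copy. A minimal sketch (the 64 KiB size is an arbitrary choice):

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class SizedBuffer {
    public static void main(String[] args) throws IOException {
        // One 64 KiB buffer instead of two chained default-sized ones:
        // more buffering, no extra in-memory copy.
        try (BufferedWriter writer =
                     new BufferedWriter(new FileWriter("abc.txt"), 64 * 1024)) {
            writer.write("hello");
        }
    }
}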
