
FileOutputStream vs ByteArrayOutputStream

I'm reading somebody else's code. Here's the gist of it.

A class compresses and decompresses files using GZIPInputStream and GZIPOutputStream.

Here's a snippet of what goes on during compression. inputFile and outputFile are instances of the class File.

FileInputStream fis = new FileInputStream(inputFile);
GZIPOutputStream gzos = new GZIPOutputStream(new FileOutputStream(outputFile));

//the following function copies an input stream to an output stream
IOUtils.copy(fis,gzos);

//outputFile is the compressed file
...

Now, here's what's going on during decompression.

GZIPInputStream gzis = new GZIPInputStream(new FileInputStream(inputFile));
ByteArrayOutputStream baos = new ByteArrayOutputStream();

//copies input stream to output stream
IOUtils.copy(gzis,baos);

//this method does as its name suggests
FileUtils.writeByteArrayToFile(outputFile, baos.toByteArray());

//outputFile is the decompressed file
...

What's a possible reason the original programmer chose FileOutputStream during compression and ByteArrayOutputStream during decompression? It confuses me.

Unless there's a good reason, I think I'm changing them to be consistent to avoid future confusion. Is this a good idea?

Heh, sounds like they copied and pasted code from different sources? :-P No, seriously, unless you need to inspect the decompressed data, you can just use a BufferedOutputStream for both compression and decompression.

The ByteArrayOutputStream is more memory hogging since it stores the entire content in Java's memory (in the form of a byte[]). The FileOutputStream writes to disk directly and is hence less memory hogging. I don't see any sensible reason to use ByteArrayOutputStream in this particular case. The code doesn't modify the individual bytes afterwards; they just get written unchanged to the file. It's thus an unnecessary intermediate step.
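For illustration, a minimal sketch of skipping that intermediate step (this is my sketch, not the original author's code; it assumes the same Commons IO IOUtils.copy as in the question, plus java.io.* and java.util.zip.GZIPInputStream):

//Sketch: decompress straight to the output file; no byte[] of the whole content is held in memory.
//Try-with-resources closes both streams even if IOUtils.copy throws.
try (GZIPInputStream gzis = new GZIPInputStream(new FileInputStream(inputFile));
     OutputStream out = new BufferedOutputStream(new FileOutputStream(outputFile))) {
    IOUtils.copy(gzis, out);
}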

The programmer used FileOutputStream during compression and an in-memory buffer when decompressing. I think the reason was that if reading the file fails, nothing bad happens: you just fail and an exception is thrown.

If decompression fails after you have already started writing to the file, the file is left corrupted. So he decided to write to a buffer first and, once decompression is complete, write the buffer to disk. This solution is OK if you are dealing with relatively small files. Otherwise it requires too much memory and could produce an OutOfMemoryError.

I'd decompress directly to a temporary file and then rename the temporary file to its permanent name, as sketched below. A finally block should take care of deleting the temporary file.
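Roughly like this (my sketch, not the original code; the temp-file prefix/suffix are arbitrary, and it assumes the same IOUtils.copy as above):

//Sketch: decompress into a temp file in the same directory, then rename it over the target.
//On any failure, the half-written temp file is deleted instead of corrupting outputFile.
File tempFile = File.createTempFile("decompress", ".tmp", outputFile.getParentFile());
boolean renamed = false;
try {
    try (GZIPInputStream gzis = new GZIPInputStream(new FileInputStream(inputFile));
         OutputStream out = new BufferedOutputStream(new FileOutputStream(tempFile))) {
        IOUtils.copy(gzis, out);
    }
    //Streams are closed at this point, so the rename is safe. Note that renameTo
    //returns false rather than throwing (e.g. if outputFile already exists on some platforms).
    renamed = tempFile.renameTo(outputFile);
} finally {
    if (!renamed) {
        tempFile.delete();
    }
}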

ByteArrayOutputStream would give him/her a nice OutOfMemoryError?

Seriously, they were probably done at different times. If you can, I'd consult the VCS logs.
