Difference between pipe and stream in node.js
Here's my code:
const fs = require('fs');
const src = fs.createReadStream('bigfile3.txt');
const des = fs.createWriteStream('newTest.txt');
I can use either:
src.on('data', (chunk) => {
  des.write(chunk);
});
Or:
src.pipe(des);
Is there any difference between these two ways of handling the file operation? The pipe method gives me an error of

> "size" argument must not be larger than 2147483647

whenever I try it with a large file (~2 GB).

Can anyone explain the working behind pipe and stream? Thanks.
You should use the pipe method because the flow of data will be automatically managed so that the destination Writable stream is not overwhelmed by a faster Readable stream.
If your readable stream is faster than the writable stream, then calling des.write(data) yourself lets unflushed data pile up in the writable stream's internal buffer, so it is better to use src.pipe(des);
If the file size is big then you should use streams; that's the correct way of doing it. I tried an example similar to yours, copying a 3.5 GB file with streams and pipe, and it worked flawlessly in my case. Check your code, you must be doing something wrong.
The example I tried:
'use strict';
const fs = require('fs');
const readStream = fs.createReadStream('./Archive.zip');
const writeStream = fs.createWriteStream('./Archive3.zip');
readStream.pipe(writeStream);
However, if you still need to use des.write(data), you can handle backpressure yourself to keep flow control when readStream is faster. If des.write(data) returns false, the writeStream's buffer is full, so pause the readStream with src.pause().
To continue once the writeStream has drained, handle the drain event on the writeStream and resume in the callback:
des.on("drain", () => src.resume())
To allow a larger writeStream buffer, you can set highWaterMark for the writeStream to a very high value, for example:
const des = fs.createWriteStream('newTest.txt',{
highWaterMark: 1628920128
});
Be careful with an overly large highWaterMark, because it takes up too much memory and defeats the primary advantage of streaming data.
I would definitely still recommend using pipe, as it handles everything for you with less code.
Docs:
https://nodejs.org/api/stream.html#stream_writable_write_chunk_encoding_callback
https://nodejs.org/api/stream.html#stream_readable_pipe_destination_options