
Understanding streams in Node js

I ran into a problem where I need to create and download a zip of multiple files using NodeJs. Things I have tried that failed:

https://github.com/archiverjs/node-archiver/issues/364#issuecomment-573508251

https://github.com/kubernetes/kubernetes/issues/90483#issue-606722433

Unexpected behavior using zip-stream NPM on Google k8s

On top of that, the files are now encrypted as well, so I have to decrypt them on the fly before adding them to the zip.

I solved that as well, but while my solution works perfectly when the server runs on a local machine, it fails on Google Kubernetes Engine.

After more research, I suspect a backpressure problem in the Node.js streams, but as stated in the documentation, backpressure is handled automatically by the pipe method. Is it possible that the browser cannot receive data as fast as my server zips and sends it, and if so, how do I solve this?
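
For context, here is a minimal sketch using only core modules (not the question's archiver setup) of what the documentation means: pipe() pauses the source whenever the destination's internal buffer is full and resumes it on 'drain', so backpressure is automatic; stream.pipeline() does the same while also propagating errors and tearing the chain down, which plain pipe() does not.

const { pipeline } = require("stream");
const fs = require("fs");
const zlib = require("zlib");

// A fast producer piped through a transform into a possibly slow consumer;
// pipeline() manages backpressure and forwards any error to the callback.
pipeline(
  fs.createReadStream("input.bin"),
  zlib.createGzip(),
  fs.createWriteStream("output.gz"),
  (err) => {
    if (err) console.error("pipeline failed:", err);
    else console.log("pipeline finished");
  }
);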

All examples related to the problem are in the links provided above.

In addition, the readable stream is decrypted on the fly by piping it through a decipher:

// Assumed imports, not shown in the original snippet:
const Request = require("request");   // the (deprecated) 'request' HTTP client
const Throttle = require("throttle"); // bytes-per-second throttling stream
const Archiver = require("archiver");

const handleEntries = ({ elem, uniqueFiles, archive, speedLimit }) => {
  return new Promise((resolve, reject) => {
    let fileName = elem.fileName;
    const url = elem.url;
    const decipher = elem.decipher;
    // changing fileName if same filename is already added to zip
    if (uniqueFiles[fileName] || uniqueFiles[fileName] === 0) {
      uniqueFiles[fileName]++;
    } else {
      uniqueFiles[fileName] = 0;
    }
    if (uniqueFiles[fileName]) {
      const lastDotIndex = fileName.lastIndexOf(".");
      const name = fileName.substring(0, lastDotIndex);
      const extension = fileName.substring(lastDotIndex + 1);
      fileName = `${name}(${uniqueFiles[fileName]}).${extension}`;
    }
    let readableStream = Request(url);
    // create a "Throttle" instance that reads at speedLimit bps
    if (speedLimit) {
      const throttle = new Throttle({ bps: Number(speedLimit) });
      readableStream = readableStream.pipe(throttle);
    }
    // if file is encrypted, need to decrypt it before piping to zip
    readableStream = decipher ? readableStream.pipe(decipher) : readableStream;
    archive.append(readableStream, { name: fileName });
    readableStream.on("complete", result => {
      console.log("Request stream event complete : ", fileName);
      resolve("done");
      // readableStream.unpipe();
      // readableStream.destroy();
    });
    readableStream
      .on("error", error => {
        console.log("Request stream event error fileName : ", fileName, " error : ", error);
        // readableStream.unpipe();
        // readableStream.destroy();
        resolve("done");
      })
      .on("pipe", result => {
        console.log("Request stream event pipe : ", fileName);
      })
      .on("request", result => {
        console.log("Request stream event request : ", fileName);
      })
      .on("response", result => {
        console.log("Request stream event response : ", fileName);
      })
      .on("socket", result => {
        result.setKeepAlive(true);
        console.log("Request stream event socket : ", fileName);
      });
  });
};

const useArchiver = async ({ resp, urls, speedLimit }) => {
  // outputFileName is assumed to be defined elsewhere in the original code
  resp.writeHead(200, {
    "Content-Type": "application/zip",
    "Content-Disposition": `attachment; filename="${outputFileName}"`,
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, OPTIONS"
  });
  const uniqueFiles = {};
  const archive = Archiver("zip", { zlib: 0 });
  archive.pipe(resp);
  archive
    .on("close", result => {
      console.log("archive stream event close : ", result);
      // archive.unpipe();
      // archive.destroy();
    })
    .on("drain", result => {
      console.log("archive stream event drain : ", result);
    })
    .on("entry", result => {
      console.log("archive stream event entry : ", result.stats);
    })
    .on("error", error => {
      console.log("archive stream event error : ", error);
      reject("error");
      // archive.unpipe();
      // archive.destroy();
    })
    .on("finish", result => {
      console.log("archive stream event finish : ", result);
      // archive.unpipe();
      // archive.destroy();
    })
    .on("pipe", result => {
      console.log("archive stream event pipe : ");
    })
    .on("progress", async result => {
      console.log("archive stream event progress : ", result.entries);
      if (urls.length === result.entries.total && urls.length === result.entries.processed) {
        await archive.finalize();
        console.log("finalized : ", urls[0]);
      }
    })
    .on("unpipe", result => {
      console.log("archive stream event unpipe : ");
    })
    .on("warning", result => {
      console.log("archive stream event warning : ", result);
    });
  for (const elem of urls) {
    await handleEntries({ elem, uniqueFiles, archive, speedLimit });
  }
};

I tried this code with archiver, and I get archiver's drain event when zipping large files. Doesn't pipe handle backpressure? If it does, why am I getting drain events from archiver?
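
As a side note, here is a self-contained sketch of generic stream behaviour (not archiver internals): 'drain' is the normal event a Writable emits when its internal buffer empties after write() has returned false. Seeing drain events therefore usually means backpressure is being applied as designed, not that it is broken.

const { Writable } = require("stream");

// A deliberately slow sink with a tiny buffer so backpressure kicks in fast.
const slowSink = new Writable({
  highWaterMark: 1024,
  write(chunk, encoding, callback) {
    setTimeout(callback, 10); // simulate a slow consumer
  }
});

if (!slowSink.write(Buffer.alloc(4096))) {
  // Buffer is full: wait for 'drain' before writing more.
  slowSink.once("drain", () => console.log("drain: safe to write again"));
}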

Hey) I've looked into your code and can say that you use a promise function but never wait for it to finish. You need to wrap zipStreamer.entry with await new Promise(). It should be like this:

const crypto = require("crypto");

// algorithm, key, zipStreamer, readableStream and fileName are assumed to be
// defined in the surrounding scope.
async function doSmth() {
  const decipher = crypto.createDecipheriv(
    algorithm,
    Buffer.from(key),
    Buffer.from(key.substring(0, 9))
  );
  await new Promise((resolve, reject) => {
    zipStreamer.entry(readableStream.pipe(decipher), {
      name: fileName
    }, (error, result) => {
      if (!error) {
        resolve("done");
      } else {
        reject(error);
      }
    });
  });
}
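
Called in sequence, that gives the archiver time to consume each entry before the next one is appended. A hypothetical driver (not part of the answer; it assumes doSmth is adapted to take one entry):

async function zipAll(entries) {
  for (const entry of entries) {
    await doSmth(entry); // append entries one at a time
  }
  zipStreamer.finish(); // zip-stream's finish() ends the archive output
}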

It seems I found the solution. I made some changes to the Kubernetes configuration, namely increasing the timeout from 30 seconds to 300 seconds and raising the CPU limit, and tested it several times with files of up to 12-13 GB; it works like a charm. I think raising the CPU from 0.5 to 1 together with the longer timeout did the trick for me.
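
For reference, a sketch of what such changes might look like on GKE (all resource names and values here are placeholders, not taken from the question): the HTTP load balancer's backend timeout defaults to 30 seconds and can be raised with a BackendConfig, and the CPU limit lives in the Deployment's container spec.

# Hypothetical GKE configuration sketch; names are placeholders.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: zip-backend-config
spec:
  timeoutSec: 300   # raise the 30s default backend timeout
---
apiVersion: v1
kind: Service
metadata:
  name: zip-service
  annotations:
    cloud.google.com/backend-config: '{"default": "zip-backend-config"}'
spec:
  selector:
    app: zip-server
  ports:
    - port: 80
      targetPort: 8080

And in the Deployment's container spec:

resources:
  requests:
    cpu: "500m"
  limits:
    cpu: "1"        # raised from 0.5 to 1 CPU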
