
Array files downloader with Node.js axios and async/await pattern in vanilla JavaScript

I have a problem with a script written in vanilla JavaScript that runs in Node.js. What I want to get is:

// Get axios request
  // response stream
  // download file1..........finish
// When file is downloaded 
// go to the next request 

// Get axios request
  // response stream
  // download file2..........finish
// When file is downloaded 
// go to the next request 

I wrote this script in Node with the async/await pattern:

// ARRAY OF BOOKS, e.g.
// [{title: 'awesome book', link: 'http://www.example.com/awesome_book.pdf'},
//  {title: 'new book', link: 'http://www.example.com/new.pdf'}, ...etc]
const Books = require('./Books');

const fs = require('fs');
const path = require('path');
const axios = require('axios');


// 1)
// loop the array of books and create array of links
let booksLinkArray = Books.map( book => book.link);

// 2)
// Function get request with axios
// response is stream
function request (element) {
  try{
    return axios({
      url: element,
      method: "GET",
      responseType: "stream"
    });
  } catch(e) {
    console.log( 'errore: ' + e)
  }
}


// 3)
// Function that downloads the file
async function download(urls) {
  urls.map(async url => {
    try{
      const saveFile = await request(url);
      let file = url.split('/')[3];
      console.log(file + ' :  ' + 'init download');
      let download = fs.createWriteStream(path.join(__dirname, 'books_dir', file));
      saveFile.data.pipe(download);
      console.log(file + ' :  ' + 'downloaded')

    } catch(e) {
      console.log( 'error: ' + e)
    }

  });
}

download(booksLinkArray);

This script works, but the loop of requests runs too fast and the file downloads overlap, like this:

// get request
// response
//download file init
// get request
// response
//download file init
etc...

file 1.......
file 2........
file 1...........
file 2..........
file 2.............. finish
file 1.............. finish
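
As far as I can tell, the overlap happens because Array.prototype.map does not wait for its async callbacks: it starts every request immediately and returns an array of pending promises. A minimal sketch (not part of my script) that shows the same behaviour:

// Sketch only: map() fires every async callback right away.
async function demo() {
  const started = [];
  [1, 2, 3].map(async n => {
    started.push(n);                              // runs immediately for each element
    await new Promise(r => setTimeout(r, 100));   // simulated download
  });
  console.log(started);                           // [1, 2, 3] -- all three already running
}
demo();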

Is there a way to manage the stream so that the next call is only made after all the chunks have been drained from the stream?

Thanks for any replies.

Use a for...of loop and await a Promise that resolves when the write stream closes, so the next request only starts after the previous file has finished downloading:

async function download(urls) {
  for (const url of urls) {
    const saveFile = await request(url);
    const file = url.split('/')[3];
    const download = fs.createWriteStream(path.join(__dirname, 'books_dir', file));
    await new Promise((resolve, reject) => {
      saveFile.data.pipe(download);
      download.on("close", resolve);
      // reject instead of just logging, so a failed download doesn't hang the loop
      download.on("error", reject);
    });
  }
}

In short: async function download(urls) { for (const url of urls) { ... } }
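
For what it's worth, on Node.js 15+ the same sequential flow can also be written with pipeline from the built-in stream/promises module, which resolves once the write stream has finished and rejects on errors. A sketch, assuming the same request() helper and books_dir folder as above:

const fs = require('fs');
const path = require('path');
const { pipeline } = require('stream/promises'); // available since Node.js 15

async function download(urls) {
  for (const url of urls) {
    const response = await request(url);   // same axios request() helper as above
    const file = url.split('/')[3];
    // pipeline() resolves only after the file has been fully written,
    // so the next request starts after the previous download completes.
    await pipeline(
      response.data,
      fs.createWriteStream(path.join(__dirname, 'books_dir', file))
    );
    console.log(file + ' : downloaded');
  }
}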
